Modern processors, whether they are central processing units (CPUs) or graphics processing units (GPUs), contain many parallel computing engines whose tremendous processing power cannot be fully utilized due to the lack of sophisticated programming platforms. To solve this issue and its associated problems, several tech industry giants have teamed up with Stanford University to develop a dedicated parallel computing platform.

It is not a secret that until recently, computers with multiple processors were too expensive for all but specialized uses (e.g. supercomputing) where the high performance of parallel processing was deemed essential. As a consequence, few programmers have learned how to design software that exploits parallelism. The problem has caused serious concern among technology companies and scientists that the progress of computing overall could stall.

“Parallel programming is perhaps the largest problem in computer science today and is the major obstacle to the continued scaling of computing performance that has fueled the computing industry, and several related industries, for the last 40 years,” said Bill Dally, chairman of the computer science department at Stanford.

The Pervasive Parallelism Lab (PPL) pools the efforts of many leading Stanford computer scientists and electrical engineers with support from Advanced Micro Devices, HP, IBM, Intel Corp., Nvidia Corp. and Sun Microsystems in an attempt to develop a sophisticated parallel computing platform by the year 2012. The research will be directed by Kunle Olukotun, a professor of electrical engineering and computer science who has worked for more than a decade on multicore computer architecture.

Mr. Olukotun says he hopes that by working directly with industrial supporters, the work of PPL faculty and students will reach the marketplace where it can have an impact. He emphasized that the lab is open, meaning that other companies can still join the effort and none has exclusive intellectual property rights.

The center, with a budget of $6 million over three years, will research and develop a top-to-bottom parallel computing system, stretching from fundamental hardware to new user-friendly programming languages that will allow developers to exploit parallelism automatically. In other words, game programmers who already understand artificial intelligence, graphics rendering, and physics would be able to implement their algorithms in accessible “domain-specific” languages, while deeper, more fundamental levels of software would do all the work of optimizing their code for parallel processing.
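As a rough illustration of that idea (this is a hypothetical sketch, not PPL's actual system), a domain-specific layer can expose a high-level primitive such as a parallel map: the programmer writes the algorithm sequentially, and the runtime underneath decides how to spread the work across cores. The `par_map` and `advance` names below are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def par_map(fn, data, workers=4):
    """Toy 'domain-specific' primitive: the caller describes WHAT to
    compute; the runtime decides how to distribute it across workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data))

# A physics-flavored kernel written as ordinary sequential code...
def advance(particle):
    position, velocity = particle
    return (position + velocity * 0.1, velocity)

# ...run in parallel without the programmer writing any threading code.
particles = [(float(i), 1.0) for i in range(8)]
updated = par_map(advance, particles)
```

The point of the sketch is the division of labor: the kernel contains no locks, threads, or scheduling logic, which is exactly the burden the PPL wants the lower layers of the stack to absorb.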

To enable the research, the team’s hardware experts will develop a novel testbed called FARM, for Flexible Architecture Research Machine. The system, which Mr. Olukotun said will be ready by the end of the summer, will combine versatility with performance by blending reprogrammable chips with conventional processors.

The head of the lab hopes the effort will pave the way for programmers to easily create powerful new software for applications such as artificial intelligence and robotics, business data analysis, virtual worlds and gaming. Among the PPL faculty are experts in each of these areas, including Pat Hanrahan, a professor of computer science and electrical engineering whose graphics rendering expertise has earned him two Academy Awards.

Research in the PPL will also be able to make use of parallelism technologies that Stanford has developed over years of research on the subject. These include not only Mr. Olukotun’s work on multi-core chips but also his collaboration with computer science and electrical engineering assistant professor Christos Kozyrakis to develop a more efficient way for processors to share memory, called “transactional memory.” Mr. Dally, meanwhile, has developed new ways for the flow, or “streaming,” of software instructions from a compiler to parallel processors to work much more efficiently than in conventional supercomputers.
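To give a flavor of the transactional memory idea (a simplified software sketch under the author's own assumptions, not the Stanford hardware design), a transaction buffers its writes, records the versions of everything it read, and commits only if none of those values changed underneath it; on a conflict it simply retries. The `TVar` and `atomically` names are invented for the illustration.

```python
import threading

class TVar:
    """Toy transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomically(transaction):
    """Run transaction(read, write) optimistically: buffer the writes,
    then commit only if nothing we read was modified concurrently."""
    while True:
        read_log, write_log = {}, {}

        def read(tv):
            if tv in write_log:          # see our own pending write
                return write_log[tv]
            read_log[tv] = tv.version    # remember the version we saw
            return tv.value

        def write(tv, val):
            write_log[tv] = val          # buffered, not yet visible

        result = transaction(read, write)
        with _commit_lock:
            if all(tv.version == v for tv, v in read_log.items()):
                for tv, val in write_log.items():
                    tv.value = val
                    tv.version += 1      # publish atomically
                return result
        # Conflict: another transaction committed first, so retry.

account = TVar(100)

def deposit(read, write):
    write(account, read(account) + 10)

atomically(deposit)
```

Compared with explicit locking, the programmer states only that `deposit` must appear atomic; detecting conflicts and retrying is the runtime's job, which is the programmability gain the PPL researchers were pursuing.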

“We have a history here of trying to close this gap between parallel hardware and software,” Olukotun says. “It’s not enough just to put a bunch of cores on a chip. You also have to make the job of translating software to use that parallelism easier.”

Stanford, however, is not the only university trying to solve the problem. The announcement of the PPL comes less than two months after the University of California at Berkeley and the University of Illinois at Urbana-Champaign each received multimillion-dollar grants from Microsoft and Intel to address the issue.
