

William Dally, chief scientist and senior vice president of research at Nvidia, said in a column that Moore’s Law no longer enabled the scaling of computing performance on microprocessors. Mr. Dally also indicated that central processing units (CPUs) in general could no longer satisfy the demand for high performance.

“[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months. [Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that's been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months,” wrote Bill Dally in a column published by Forbes.
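The doubling arithmetic Dally refers to is straightforward compound growth. As an illustration only (the function name and figures below are our own, not from the column), an 18-month doubling period implies roughly a 100x increase per decade:

```python
# Illustrative arithmetic only: project the growth multiple under Moore's Law,
# assuming the revised doubling period of 18 months cited in the quote above.
def projected_multiple(years, doubling_period_years=1.5):
    """Return the growth multiple after `years`, doubling once per period."""
    return 2 ** (years / doubling_period_years)

# One doubling period yields exactly a factor of 2.
print(projected_multiple(1.5))        # 2.0
# Over a decade, 18-month doubling compounds to roughly 100x.
print(round(projected_multiple(10)))  # 102
```

It is this compounding, Dally argues, that CPU performance has stopped delivering even though transistor counts still grow.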

Performance of CPUs may no longer double every year and a half, but, firstly, those chips are universal and very flexible and, secondly, they can be manufactured in large volumes. Graphics chips, which from time to time outpace Moore’s Law, quite often cannot be manufactured in large volumes because of poor yields. Moreover, although GPUs can provide more horsepower than CPUs, they are not as universal and flexible.

Even though developers of central processing units historically concentrated on increasing the clock-speeds of their chips, about five years ago Advanced Micro Devices and Intel Corp. shifted their focus to more parallel multi-core microprocessors that work at moderate clock-speeds. However, the chief scientist of Nvidia claims that multi-core x86 CPUs will ultimately not solve the problem of insufficient computing performance.

“Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance,” said Mr. Dally.
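Dally's objection can be framed with Amdahl's law, a standard formalization he does not cite explicitly in the column: the speedup from adding cores is capped by the fraction of a program that remains serial. A minimal sketch, with hypothetical numbers chosen to match his "two to 12 conventional CPUs" range:

```python
# A sketch of Amdahl's law: overall speedup is limited by the serial fraction,
# no matter how many cores are added. Numbers here are illustrative only.
def amdahl_speedup(parallel_fraction, cores):
    """Speedup when only `parallel_fraction` of the work runs in parallel."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even if 90% of a program parallelizes, 12 cores give well under 12x.
print(round(amdahl_speedup(0.9, 12), 2))  # 5.71
```

This is why, on Dally's argument, bolting a dozen serial-optimized cores together cannot restore the historic doubling curve.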

It is rather logical that Nvidia calls central processing units obsolete since it neither produces nor develops them. The big question is whether AMD and Intel will give up and let Nvidia actually capture part of the high-performance computing market, where multi-core CPUs rule today.

“Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs. Let's enable the future of computing to fly – not rumble along on trains with wings,” concluded the chief scientist of Nvidia.

Tags: Nvidia, Moore's Law, Semiconductor, Intel, AMD, x86, GPGPU, Geforce, Fermi


Comments currently: 7
Discussion started: 05/03/10 03:24:44 PM
Latest comment: 05/04/10 04:27:37 PM


[Posted by: ariseshellfish  | Date: 05/03/10 03:24:44 PM]

I think that AMD and Intel agree with NVIDIA. AMD will soon be there with Fusion, and I imagine Intel will also get there eventually. Whether that's good or bad for NVIDIA is a good question. It will certainly validate the idea of GPU processing, but NVIDIA will not be able to offer the flexibility of an x86 compatible CPU with compute cores on the same chip.
[Posted by: ET3D  | Date: 05/04/10 04:26:15 AM]

Proof of concept, please? As far as I'm aware there aren't any CPU-less computers out there that will boot and perform at least basic tasks, except those few neural network type machines, but they can't really boot or process Crysis either, tho they might have the ability to play it. CPU stands for "central processing unit", and when GPUs reach their potential in controlling all aspects of modern computing, guess what they're gonna be called. You got it, they're gonna be called CPUs! Highly parallel, true, but we've seen those well before nVidia ever existed (SPARC or any other PA-RISCs, anyone?).

As for Moore's law, it's long been an industry-wide target and never a dogmatic law one's obliged to follow. Guess there's one company that realized they can't follow the trend anymore so they decided to bash it with rather poor understanding of the not-even-so-distant history of computing? And that's their "chief scientist and senior vice president of research"?? LOL! Talk about lame!

I think it's time to "jar your pickles", nVidia! The smell is foul!!!
[Posted by: MyK  | Date: 05/04/10 05:52:35 AM]

Unbelievable! nVidia just keeps on bad-mouthing everyone on the planet. The more I see nVidia constantly criticize their competitors, the more I think they are losers. It's like negative campaign ads during elections!
[Posted by: TrueGamer  | Date: 05/04/10 10:05:50 AM]

What are we going to replace our CPUs with, FERMI? LOL.

I think they're just trying to deflect attention away from their poor outlook and shady business practices.

This is one thing nvidia can't just solve by renaming GPU into CPU.
[Posted by: blzd  | Date: 05/04/10 10:36:11 AM]

well, to sum it up, if you work for a company (nVidia) that might be at some point pushed out of the market by your competitors (Intel and/or AMD) that can produce a certain product (CPUs) you didn't want to/cannot produce, then you'll need to at least scare those people who cannot think for themselves [in terms of science fact] and rely on the advice of someone else (management and/or investors on the stock exchange) ... I would say this is common practice in any industry ...
[Posted by: solearis  | Date: 05/04/10 01:17:03 PM]

I have programmed both multi-core machines and GPGPUs. Multi-core machines are much easier to program, and they provide more flexibility for various applications than GPGPUs do.
[Posted by: peter shi  | Date: 05/04/10 04:27:37 PM]

