William Dally, chief scientist and senior vice president of research at Nvidia, said in a column that Moore’s Law no longer delivers scaling of computing performance on microprocessors. Mr. Dally also argued that central processing units (CPUs) in general can no longer keep up with the demand for high performance.

“[Moore’s Law] predicted the number of transistors on an integrated circuit would double each year (later revised to doubling every 18 months). This prediction laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months. [Moore] also projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance. But in a development that's been largely overlooked, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months,” said Bill Dally in a column published at Forbes.
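Written as a rough formula (this restatement is not from the column itself, it merely spells out the doubling cadence Mr. Dally quotes), a doubling every 18 months means the transistor count N grows from a starting value N_0 as approximately

    N(t) \approx N_0 \cdot 2^{t/1.5}, \qquad \text{e.g. } N(6)/N_0 \approx 2^{4} = 16 \text{ after six years}

His point is that the companion trend, energy per unit of computing falling fast enough to keep power constant, has ended, so the extra transistors no longer translate into the predicted doubling of CPU performance.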

CPU performance may indeed no longer double every eighteen months, but CPUs remain universal and very flexible, and they can be manufactured in large volumes. Graphics chips, which from time to time outpace Moore’s Law, often cannot be produced in large volumes because of poor yields. Moreover, although GPUs can provide more raw horsepower than CPUs, they are nowhere near as universal or flexible.

Historically, developers of central processing units concentrated on increasing clock speeds, but about five years ago Advanced Micro Devices and Intel Corp. shifted to building more parallel multi-core microprocessors that run at moderate clock speeds. Nvidia’s chief scientist claims, however, that multi-core x86 CPUs will ultimately not solve the shortage of computing performance either.

“Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance,” said Mr. Dally.
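To make the serial-versus-throughput distinction concrete, below is a minimal sketch (it is not from Mr. Dally’s column; the function names, array size and scaling factor are purely illustrative) of the same element-wise operation written twice: once as a conventional serial CPU loop, and once as a CUDA kernel of the kind Nvidia’s GPUs run, where the work is spread across thousands of lightweight threads:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Serial model: one heavyweight core walks the whole array in order.
    void scale_cpu(const float* in, float* out, int n, float k) {
        for (int i = 0; i < n; ++i)
            out[i] = k * in[i];
    }

    // Throughput model: thousands of lightweight GPU threads, one element each.
    __global__ void scale_gpu(const float* in, float* out, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = k * in[i];
    }

    int main() {
        const int n = 1 << 20;                      // illustrative size: 1M floats
        const size_t bytes = n * sizeof(float);
        const float k = 2.0f;

        float* h_in  = new float[n];
        float* h_out = new float[n];
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        // The GPU version needs explicit device memory and data transfers.
        float *d_in, *d_out;
        cudaMalloc((void**)&d_in, bytes);
        cudaMalloc((void**)&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale_gpu<<<blocks, threads>>>(d_in, d_out, n, k);
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

        printf("out[0] = %f\n", h_out[0]);          // expect 2.0

        cudaFree(d_in); cudaFree(d_out);
        delete[] h_in; delete[] h_out;
        return 0;
    }

The trade-off is visible even in this toy example: the serial loop is three lines, while the throughput version requires explicit device memory, transfers and launch geometry, which is essentially the ease-of-programming point raised in the discussion below.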

It is rather logical that Nvidia calls central processing units obsolete, since it neither produces nor develops them. The big question is whether AMD and Intel will give up and let Nvidia actually capture part of the high-performance computing market, where multi-core CPUs rule today.

“Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs. Let's enable the future of computing to fly – not rumble along on trains with wings,” concluded the chief scientist of Nvidia.

Tags: Nvidia, Moore's Law, Semiconductor, Intel, AMD, x86, GPGPU, Geforce, Fermi

Discussion

Comments currently: 7
Discussion started: 05/03/10 03:24:44 PM
Latest comment: 05/04/10 04:27:37 PM


1. 
Brilliant!!!
[Posted by: ariseshellfish | Date: 05/03/10 03:24:44 PM]

2. 
I think that AMD and Intel agree with NVIDIA. AMD will soon be there with Fusion, and I imagine Intel will also get there eventually. Whether that's good or bad for NVIDIA is a good question. It will certainly validate the idea of GPU processing, but NVIDIA will not be able to offer the flexibility of an x86 compatible CPU with compute cores on the same chip.
[Posted by: ET3D | Date: 05/04/10 04:26:15 AM]

3. 
Proof of concept, please? As far as I'm aware there aren't any CPU-less computers out there that will boot and perform at least basic tasks, apart from those few neural-network-type machines, but they can't really boot nor run Crysis either, tho they might have the ability to play it. CPU stands for "central processing unit", and when GPUs reach their potential in controlling all aspects of modern computing, guess what they're gonna be called? You got it, they're gonna be called CPUs! Highly parallel, true, but we've seen those well before nVidia ever existed (SPARC or PA-RISC, anyone?).

As for Moore's law, it's long been an industry-wide target and never a dogmatic law one's obliged to follow. Guess there's one company that realized they can't follow the trend anymore so they decided to bash it with rather poor understanding of the not-even-so-distant history of computing? And that's their "chief scientist and senior vice president of research"?? LOL! Talk about lame!

I think it's time to "jar your pickles", nVidia! The smell is foul!!!
[Posted by: MyK | Date: 05/04/10 05:52:35 AM]

4. 
Unbelievable! nVidia just keeps on badmouthing everyone on the planet. The more I see nVidia constantly criticize their competitors, the more I think they are losers. It's like negative campaign ads during elections!
[Posted by: TrueGamer | Date: 05/04/10 10:05:50 AM]

5. 
What are we going to replace our CPUs with, FERMI? LOL.

I think they're just trying to deflect attention away from their poor outlook and shady business practices.

This is one thing nvidia can't just solve by renaming GPU into CPU.
[Posted by: blzd | Date: 05/04/10 10:36:11 AM]

6. 
well, to sum it up: if you work for a company (nVidia) that might at some point be pushed out of the market by competitors (Intel and/or AMD) that can produce a certain product (CPUs) which you cannot or do not want to produce, then you'll need to at least scare the people who cannot think for themselves [in terms of scientific fact] and rely on the advice of someone else (management and/or investors on the stock exchange) ... I would say this is common practice in any industry ...
[Posted by: solearis | Date: 05/04/10 01:17:03 PM]

7. 
I have programmed on both multi-core machines and on GPGPUs. Multi-core machines are much easier to program, and provide more flexibility for various applications than GPGPUs do.
[Posted by: peter shi | Date: 05/04/10 04:27:37 PM]

