

Jen-Hsun Huang, chief executive officer of Nvidia Corp., said during his keynote at the Hot Chips conference that graphics processing units (GPUs) have excellent prospects for further performance growth. He also indicated that it makes no sense to integrate central processors and graphics chips, since discrete processors offer higher performance.

According to Mr. Huang, by 2015 graphics processing units will have computing power 570 times higher than that of today's GPUs. Meanwhile, central processing units (CPUs) will be only three times faster than today's most powerful chips. Considering that modern graphics chips offer about 1TFLOPs of computing power, by 2015 they would offer a whopping 570TFLOPs.
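As a quick sanity check on these figures, assuming steady exponential growth over the six years from 2009 to 2015, the implied annual growth rates can be sketched as follows (a back-of-the-envelope calculation, not taken from the keynote):

```python
# Back-of-the-envelope check of the keynote figures (2009 -> 2015, 6 years).
years = 6
gpu_growth = 570  # claimed total GPU speedup by 2015
cpu_growth = 3    # claimed total CPU speedup by 2015

# Implied compound annual growth rate, solving total = rate ** years
gpu_rate = gpu_growth ** (1 / years)
cpu_rate = cpu_growth ** (1 / years)

print(f"GPU: {gpu_rate:.2f}x per year")  # roughly 2.88x per year
print(f"CPU: {cpu_rate:.2f}x per year")  # roughly 1.20x per year

# Starting from ~1 TFLOPS in 2009, the claimed 2015 figure would be:
print(f"2015 GPU throughput: ~{gpu_growth} TFLOPS")
```

In other words, the 570x claim implies GPUs nearly tripling in performance every year, while the 3x CPU figure implies roughly 20% annual gains.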

Mr. Huang's prediction sharply contradicts that of William Dally, Nvidia's chief scientist, who expects GPUs to reach 20TFLOPs of performance by 2015.

During the question-and-answer session at the end of the speech, professor David Patterson of U.C. Berkeley asked whether, if Mr. Huang had to do it over, he would still partition the CPU and GPU into separate chips. Nvidia's chief exec answered that there are three constituents, the programmers, OEMs/ODMs, and chip designers, and each has differing requirements that make it difficult to bet on integrating new and very rapidly developing architectures into one device. By keeping these functions separate, each can develop at its own pace, and the separation also provides the flexibility to address many market opportunities. Of course, Mr. Huang stressed that the GPU is evolving much faster than any other chip architecture.

The head of Nvidia also enthusiastically painted a picture of a world where the massive threading and computing capability of the GPU can provide many orders of magnitude of performance increase over a multi-core CPU alone.

Tags: Nvidia, Geforce, GPGPU


Comments currently: 5
Discussion started: 08/26/09 02:06:38 PM
Latest comment: 10/23/09 08:25:23 AM


Today's fancy math is 50 * 11.4 = 570X. Also 1.2^6 = 3X. These suit the fancying of performance.

They have no relationship for performance because of inefficient programming practices done by colleges and/or universities.

Using "smart" compilers and debuggers can help programmers create efficient programs. It depends on the programmer: if he or she can write an efficient program, then it makes the above figures look stupid and real-world values look smarter.

I doubt that having the graphics chip be separate is a good thing, because integrated graphics is better for certain environments. Businesses do not care about graphics performance, and for them a high-end video card is a waste. If companies like AMD or Intel were to include a graphics chip in the CPU, certain areas of computing could be improved. The FPU was a co-processor a long time ago, so I think the FPU will eventually be replaced by a GPU in the CPU.

I disagree that GPUs are evolving faster than CPUs. I have not seen any significant changes over the years for either GPUs or CPUs. The only change that I have seen in the GPU industry is the move from a fixed specialized processor to a programmable specialized processor. This was predictable. Today the CPU industry is also predictable, so I have not dropped my jaw to the ground like I did in the 90s.
0 0 [Posted by: jmurbank  | Date: 08/26/09 02:06:38 PM]

"They have no relationship for performance because of inefficient programming practices done by colleges and/or universities."

Hear, hear .... but it's all in the hands of lazy programmers, not some college grads.

And the math is anything but realistic .... and he puts the CPU into this fancy blob.
0 0 [Posted by: OmegaHuman  | Date: 08/27/09 06:37:10 AM]

Is that P.M.P.O. or RMS? XD
1 0 [Posted by: zaratustra06  | Date: 08/26/09 03:17:39 PM]

Nvidia Math at work.... tsk... tsk... tsk... I remember the dumb look on people's faces when the Nvidia Math was used for marketing the original Xbox, same with the PS3's performance... But to give Huang the benefit of the doubt, maybe his comparison chart was misunderstood by the author of the article; maybe he was comparing the projected performance of a GPU (in 2015) to a current CPU (2009). Who knows, maybe the CPU in question is an Intel Atom.
0 0 [Posted by: goury  | Date: 08/27/09 06:44:28 PM]

Well, this is odd. What the article says and what I can interpret from the picture alone say two different things. The article states that GPU performance is expected to increase 570x over the next 6 years, while the picture states that CPU performance will only increase 3x over the next 6 years, and that if the GPU and CPU were used together, overall processing power would increase 570x.

I'm seriously doubting that Mr. Huang implied that their GPUs will be processing at 570TFLOPs in 2015.

Please correct me if I'm misunderstanding.
0 0 [Posted by: Zshazz  | Date: 10/23/09 08:25:23 AM]

