Jen-Hsun Huang, chief executive officer of Nvidia Corp., said during his keynote at the Hot Chips conference that graphics processing units (GPUs) have excellent prospects for further performance growth. He also indicated that it makes no sense to integrate central processors and graphics chips, since discrete chips deliver higher performance.

According to Mr. Huang, by 2015 graphics processing units will have computing power 570 times that of today's GPUs, whereas central processing units (CPUs) will be only three times faster than today's most powerful chips. Given that modern graphics chips offer about 1TFLOPS of computing power, by 2015 they would offer a whopping 570TFLOPS.

Mr. Huang's prediction sharply contradicts that of William Dally, Nvidia's chief scientist, who expects GPUs to reach 20TFLOPS of performance by 2015.
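The two projections imply very different annual growth rates. As a rough check (a minimal sketch; the compound-growth framing and the helper function are our own illustration, not anything Nvidia presented), the annual growth factor implied by each figure over the six years from 2009 to 2015 works out as follows:

```python
# Rough sanity check of the projections quoted in the article.
# The implied_cagr helper is our own illustration, not Nvidia's methodology.

def implied_cagr(multiple, years):
    """Annual growth factor that yields `multiple` after `years` years."""
    return multiple ** (1.0 / years)

YEARS = 6  # 2009 to 2015

# Mr. Huang's claim: GPUs 570x faster, i.e. ~1 TFLOPS today -> 570 TFLOPS.
gpu_huang = implied_cagr(570, YEARS)  # roughly 2.9x per year

# The CPU claim: only 3x faster over the same span.
cpu = implied_cagr(3, YEARS)          # roughly 1.2x per year

# Mr. Dally's more conservative 20 TFLOPS forecast implies ~20x growth.
gpu_dally = implied_cagr(20, YEARS)   # roughly 1.65x per year

print(f"GPU (Huang): {gpu_huang:.2f}x/yr, "
      f"CPU: {cpu:.2f}x/yr, "
      f"GPU (Dally): {gpu_dally:.2f}x/yr")
```

By this reading, the 570x figure requires GPUs to get almost 2.9x faster every single year, while Mr. Dally's 20TFLOPS forecast implies a far more modest ~1.65x per year.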

During the question-and-answer session at the end of the speech, professor David Patterson of U.C. Berkeley asked whether, if Mr. Huang had it to do over, he would still partition the CPU and GPU into separate chips. Nvidia's chief exec answered that there are three constituents (programmers, OEMs/ODMs and chip designers), and each has differing requirements that make it difficult to bet on integrating new and very rapidly developing architectures into one device. By separating these functions, each can develop at its own pace, which also provides the flexibility to address many market opportunities. Of course, Mr. Huang stressed that the GPU is evolving much faster than any other chip architecture.

The head of Nvidia also enthusiastically painted a picture of a world where the massive threading and computing capability of the GPU provides many orders of magnitude higher performance than a multi-core CPU alone.

Tags: Nvidia, Geforce, GPGPU

Discussion

Comments currently: 5
Discussion started: 08/26/09 02:06:38 PM
Latest comment: 10/23/09 08:25:23 AM


1. 
Today's fancy math is 50 * 1?.? = 570X. Also 1.2^6 = 3X. These suit the fancying of performance.

They have no relationship to performance because of inefficient programming practices done by colleges and/or universities.

Using "smart" compilers and debuggers can help programmers create efficient programs. It depends on the programmer if he or she can write an efficient program then it makes the above figures look stupid and real world values look smarter.

I doubt having the graphics chip separate is a good thing, because integrated graphics is better for certain environments. Businesses do not care about graphics performance, and for them a high-end video card is a waste. If companies like AMD or Intel were to include a graphics chip in the CPU, certain areas of computing could be improved. The FPU was a co-processor a long time ago, so I think the FPU will eventually be replaced by a GPU in the CPU.

I disagree that GPUs are evolving faster than CPUs. I have not seen any significant changes over the years in either GPUs or CPUs. The only change that I have seen in the GPU industry is the move from a fixed specialized processor to a programmable specialized processor. This was predictable. Today the CPU industry is also predictable, so I have not dropped my jaw to the ground like I did in the 90s.
0 0 [Posted by: jmurbank  | Date: 08/26/09 02:06:38 PM]
 
"They have no relationship for performance because of inefficient programing practices done by colleges and/or universities."

Hear, hear .... but it's all in the hands of lazy programmers, not some college grads.

And the math is anything but realistic .... and he puts the CPU into this fancy blob.
0 0 [Posted by: OmegaHuman  | Date: 08/27/09 06:37:10 AM]

2. 
Is that a P.M.P.O or RMS XD
1 0 [Posted by: zaratustra06  | Date: 08/26/09 03:17:39 PM]

3. 
Nvidia math at work.... tsk... tsk... tsk... I remember the dumb look on people's faces when Nvidia math was used for marketing the original Xbox, and the same with the PS3's performance... But to give Mr. Huang the benefit of the doubt, maybe his comparison chart was misunderstood by the author of the article; maybe he was comparing the projected performance of a GPU (in 2015) to a current CPU (2009). Who knows, maybe the CPU in question is an Intel Atom.
0 0 [Posted by: goury  | Date: 08/27/09 06:44:28 PM]

4. 
Well, this is odd. What the article says and what I can interpret from the picture are two different things. The article states that GPU performance is expected to increase 570x over the next 6 years, while the picture states that CPU performance will only increase 3x over the next 6 years, and that if GPU and CPU were used together, overall processing power would increase 570x.

I seriously doubt that Mr. Huang implied that their GPUs will be processing at 570TFLOPS in 2015.

Please correct me if I'm misunderstanding.
0 0 [Posted by: Zshazz  | Date: 10/23/09 08:25:23 AM]
