

Graphics processing units (GPUs) of the future will require both higher levels of performance and greater programmability than today's GPUs, according to the chief scientist at Nvidia Corp. But while central processing units (CPUs) and GPUs will move closer to each other, they will not converge into a single universal device; the future belongs to heterogeneous multi-core chips rather than universal processors.

Programmability and Performance - The Future of GPUs

"Future GPUs will be both far more powerful in terms of raw performance and more programmable - in the sense that the range of applications that they can accelerate will be much broader than today. A lot of our architecture research is focused on improving programmability without sacrificing performance," said Bill Dally, the chief scientist of Nvidia, during a public conference on Wednesday.

The progress graphics processors have made in raw performance over the last ten years is colossal, and their evolution in programmability has been just as dramatic. Yet the progress made by video games over the same decade is far less evident, for many reasons: the budget of an advanced video game today can rival that of a movie, and so can its constraints. Additional GPU performance and programmability are therefore needed to enable not only GPU-accelerated consumer applications in general, but games in particular - the workload graphics processors were created for.

"There will always be demand for more performance, but game developers are also increasingly limited by the complexity of content creation. Techniques like ray tracing and stochastic rasterization offer more robust approaches to rendering problems that developers want to work with today. Greater programmability will make these techniques easier," said David Luebke, director of graphics research at Nvidia.
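Ray tracing, which Mr. Luebke mentions, boils down to repeated visibility queries such as ray-sphere intersection - an inherently parallel workload, since every ray is independent. A minimal illustrative sketch of that core test in Python (the function name and scene values are my own, not from the article):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection of a ray with a
    sphere, or None if the ray misses.  Solves the quadratic
    |o + t*d - c|^2 = r^2 for the smallest positive t."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray fired down the z-axis at a unit sphere 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

A renderer evaluates millions of such independent queries per frame, which is exactly the kind of embarrassingly parallel work that benefits from more programmable throughput hardware.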

Right Core for the Right Task

During the conversation Mr. Dally reiterated his earlier statement that the future of computing is heterogeneous systems, with CPUs and GPUs performing the tasks each does best. In fact, Nvidia already ships such a heterogeneous system in the form of Tegra, which combines ARM general-purpose processing cores with an Nvidia GeForce-class graphics core.

"The future is heterogeneous computing in which we use CPUs (which are optimized for single-thread performance) for the latency sensitive portions of jobs, and GPUs (which are optimized for throughput per unit energy and cost) for the parallel portions of jobs. The GPUs can handle both the data parallel and the task parallel portions of jobs better than CPUs because they are more efficient. The CPUs are only needed for the latency sensitive portions of jobs - the serial portions and critical sections," said the chief scientist of the graphics company.
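Dally's division of labor can be illustrated with an Amdahl's-law-style back-of-the-envelope model: the serial fraction stays on the latency-optimized CPU, so it caps the overall speedup no matter how fast the throughput cores are. A hedged Python sketch (the 10% serial share and 50x GPU advantage below are invented numbers for illustration):

```python
def heterogeneous_speedup(serial_fraction, gpu_throughput_ratio):
    """Amdahl-style estimate of the speedup when the parallel fraction
    of a job runs on a throughput-optimized GPU that is
    `gpu_throughput_ratio` times faster than the CPU on parallel work,
    while the serial fraction stays on the latency-optimized CPU."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / gpu_throughput_ratio)

# Even 10% serial code caps the speedup well below the GPU's 50x raw advantage:
print(round(heterogeneous_speedup(0.10, 50.0), 1))  # → 8.5
```

The model also shows why the few latency-optimized cores still matter: shrinking the serial fraction raises the ceiling far more than adding throughput cores does.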

The chief scientist of Nvidia does not expect central processors and graphics processors to completely converge into a device with many similar cores processing both serial and parallel data. He believes that the heterogeneous multi-core approach, such as AMD Fusion or Cell, is a better candidate for the longer-term future.

"I don't see convergence between latency-optimized cores and throughput optimized cores. The techniques used to optimize for latency and throughput are very different and in conflict. We will ultimately have a single chip with many (thousands) of throughput cores and a few latency-optimized cores so we can handle both types of code," said Mr. Dally.

The worst enemy of heterogeneous computing (or accelerated computing, in AMD's terminology) is today's programming models. For software developers it would be convenient if the operating system could assign the right task to the right type of computing device, but Mr. Dally warns that any kind of load-balancing between CPU and GPU (depending on their load at a particular time) may be very costly in terms of performance.

"We expect that the operating system will eventually treat CPU cores and GPU cores as peers - scheduling work for both types of cores. However, moving work from a CPU core to a GPU core or vice versa would be very sub-optimal," stressed the chief scientist of Nvidia.
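The cost Mr. Dally warns about can be captured in a crude cost model: offloading work only pays off when the compute saving exceeds the fixed price of shipping data to the other processor and launching work there. A hypothetical Python sketch (all rates and costs below are made-up illustrative values, not measured figures):

```python
def worth_offloading(work_items, cpu_rate, gpu_rate, transfer_cost):
    """Crude model of the CPU-vs-GPU scheduling decision: moving work
    to the GPU only wins when the compute saving outweighs the fixed
    cost of transferring data and launching a kernel."""
    cpu_time = work_items / cpu_rate                  # keep it local
    gpu_time = transfer_cost + work_items / gpu_rate  # ship it over
    return gpu_time < cpu_time

# A large batch amortizes the transfer cost; a tiny one does not:
print(worth_offloading(1_000_000, cpu_rate=1e6, gpu_rate=2e7, transfer_cost=0.01))  # → True
print(worth_offloading(1_000, cpu_rate=1e6, gpu_rate=2e7, transfer_cost=0.01))      # → False
```

The asymmetry in the second call is the heart of Dally's objection to naive load-balancing: for fine-grained work, the cost of moving it swamps any speedup on the other side.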



Comments: 3


The future is looking bright; the good guys at Nvidia are working their dreams out! I cannot wait for something new in the next 24 months. Let's hope Nvidia brings out a revolution once again, like it did with the GeForce 2, the GeForce 8800/9800 and the current GTX variants.
[Posted by: mike1101 | Date: 09/02/10 06:46:10 AM]

Wait, Nvidia blowing smoke actually works on some people? This article says nothing new, and it is pretty much designed to make worried investors keep trusting that Nvidia's market will remain.

And wow, 24 months is the time frame for the next product you expect to blow you out of the water? You have some very low expectations. Chip designs start a year before the previous one is completed, and since Fermi was delayed a year, its successor should have been ready to come out... (beyond the obvious issue of most contract semiconductor makers canceling 32nm).

The article even says that advances in game graphics have come to a standstill because of complexity and costs. So it is pretty much saying don't look forward, because there is nothing good coming.
[Posted by: cashkennedy | Date: 09/02/10 10:03:00 AM]



