Tim Sweeney, chief executive officer of major game developer Epic Games, said in an interview that he expects dedicated 3D graphics cards used to accelerate rendering of video games, as well as the major application programming interfaces (APIs) that drive them, to vanish into thin air in the coming years. According to Sweeney, software rendering will return, and this may reshape the whole computer graphics industry.

“In the next generation we’ll write 100% of our rendering code in a real programming language – not DirectX, not OpenGL, but a language like C++ or CUDA. A real programming language unconstrained by weird API restrictions. Whether that runs on Nvidia hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently,” said Mr. Sweeney in an interview with the Ars Technica website.

More Interesting Possibilities Available

It is easy to understand why old-school video game creators like John Carmack from id Software or Tim Sweeney from Epic Games demand removal of API limitations. Since both Carmack and Sweeney have extensive experience with software rendering, the shift may give them a huge competitive advantage over rivals who are used to working with common APIs. Moreover, with deep experience in graphics programming and no API limitations, game developers will be able to create more realistic video games. Finally, with software rendering, programmers will not have to worry about the peculiarities of the special-purpose graphics chips on the market.

According to Sweeney, there is little need for APIs now that graphics processing units (GPUs), such as the ATI Radeon HD 4000-series or Nvidia GeForce GTX 200, are considerably more than just sets of special-purpose co-processors, and there will hardly be any substantial need for them once new architectures, such as Intel Larrabee, become available.

“Now that you have completely programmable shaders, the idea that you divide your scene up into triangles rendered in a certain order to a large frame-buffer using fixed-function rasterizer features is really an anachronism. With all that general hardware underneath, why do you want to render scenes that way when you have more interesting possibilities available?” Mr. Sweeney asked.
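
To make the contrast concrete, below is a minimal sketch of the "anachronism" Sweeney describes: triangles rasterized one at a time into a large frame-buffer, written here in plain C++ purely for illustration. The names (Framebuffer, raster_triangle) and the half-space rasterization approach are our assumptions, not code from Epic or from the interview.

// Minimal software rasterizer sketch: fill one 2D triangle into a frame-buffer
// by testing every pixel in the triangle's bounding box against its three edges.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;                        // packed RGBA, one per pixel
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
};

// Signed area of (a, b, p); its sign tells which side of edge a->b the point p lies on.
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

void raster_triangle(Framebuffer& fb,
                     float x0, float y0, float x1, float y1, float x2, float y2,
                     uint32_t color)
{
    // Clamp the triangle's bounding box to the frame-buffer.
    int minx = std::max(0, (int)std::min({x0, x1, x2}));
    int maxx = std::min(fb.width - 1, (int)std::max({x0, x1, x2}));
    int miny = std::max(0, (int)std::min({y0, y1, y2}));
    int maxy = std::min(fb.height - 1, (int)std::max({y0, y1, y2}));

    for (int y = miny; y <= maxy; ++y) {
        for (int x = minx; x <= maxx; ++x) {
            float px = x + 0.5f, py = y + 0.5f;          // sample at the pixel center
            float w0 = edge(x0, y0, x1, y1, px, py);
            float w1 = edge(x1, y1, x2, y2, px, py);
            float w2 = edge(x2, y2, x0, y0, px, py);
            // Inside if the point lies on the same side of all three edges (either winding).
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside)
                fb.pixels[y * fb.width + x] = color;
        }
    }
}

On a GPU the equivalent loop is hidden behind the API and the fixed-function rasterizer; Sweeney's point is that once the hardware is general-purpose, developers could replace a loop like this with whatever scene representation and rendering algorithm they prefer.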

The Ideal Software Layer Is to Have a Vectorizing C++ Compiler for Everything

Modern 3D graphics for video games has gone through three major stages: software rendering in the early nineties; the fixed-function hardware era with 8-bit color precision in the late nineties and early 2000s (DirectX 6 – DirectX 8); and the programmable rendering pipeline (DirectX 9, DirectX 10, DirectX 11). Tim Sweeney believes that going forward there is no place for APIs, since the programmability of future graphics processing units as well as multi-core central processing units (CPUs) will be virtually unlimited.

“The ideal software layer is just to have a vectorizing C++ compiler for every architecture – Nvidia, Intel, AMD, whoever. Let us write code in C++ to run on the GPU, including shaders in C++, and rendering algorithms in C++, where programmers explicitly create threads, hand work off to them, synchronize shared data, and so on. Then use what Nvidia calls ‘pixel pipelines’ and Intel calls ‘vector registers’ by means of a vectorizing compiler that takes loops, unrolls them, and runs it on the wide vector units,” Mr. Sweeney explained.
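
As a rough illustration of that model (a sketch under our own assumptions, not code from Epic), the fragment below writes a per-pixel operation as an ordinary C++ loop that an auto-vectorizing compiler can map to wide vector units, and hands slices of the frame-buffer to explicitly created threads, synchronizing before the result is used.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Pixel { float r, g, b, a; };

// A straight per-pixel loop; a vectorizing compiler is expected to unroll it and
// run it on the wide vector units (what Intel calls vector registers and Nvidia
// calls pixel pipelines, per the quote above).
void shade_scanline(Pixel* row, std::size_t width, float light)
{
    for (std::size_t x = 0; x < width; ++x) {
        row[x].r *= light;
        row[x].g *= light;
        row[x].b *= light;
    }
}

// The programmer explicitly creates threads, hands each one a slice of the
// frame-buffer, and synchronizes (joins) before the frame is used further.
void shade_frame(std::vector<Pixel>& frame, std::size_t width, std::size_t height, float light)
{
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            for (std::size_t y = t; y < height; y += workers)
                shade_scanline(&frame[y * width], width, light);
        });
    }
    for (auto& th : pool)
        th.join();
}

Built with an optimizing compiler and the appropriate vector instruction-set flags, the same source would be mapped by each vendor's compiler to its own SIMD width and core count, which is exactly the hardware independence Sweeney is after.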

The well-known game programmer claims that with software rendering the only difference between processors – whether they are called CPUs or GPUs – will be their performance in real-world applications, something that may greatly simplify the lives of game developers. Still, developers will have to keep in mind that different hardware may behave quite differently, so they will still need to optimize their products.

Performance Will Be King... If Software Rendering Ever Returns

“You’ll see wide variations in performance depending on the application, because some applications will scale really well to wide vectors, some will scale really well to threads, some will do differently on different hardware depending on cache system tradeoffs and memory system tradeoffs and those sorts of things,” said Mr. Sweeney.

But despite all the challenges that software developers already face when creating a high-quality game title, it remains to be seen whether the new approach will actually be more efficient. In fact, even Mr. Sweeney, who has long been an advocate of software rendering, admits that if development costs increase significantly, the new approach may not become popular.

“If it costs $10 million to develop a game for current-gen, and on a next-generation chip it costs $30 million, that likely makes the whole thing uneconomical. So we need easy, simple programming models that scale to multiple threads and cores,” said Mr. Sweeney.

Tags: ATI, AMD, Intel, Nvidia, GeForce, Radeon, Larrabee
