Once upon a time, when PC games were two-dimensional, all graphics processing was done on the CPU. This was also true for early 3D games: neither Wolfenstein 3D nor Doom with its numerous clones listed a graphics accelerator among their system requirements. They couldn’t, since there were no graphics accelerators at the time, and game developers did not rely on them. Moreover, when it came to gaming applications of their products, Intel and AMD focused on enhancing their CPUs with the MMX and 3DNow! instruction sets. Intel promoted MMX as a means to boost the quality, level of detail and speed of gaming graphics, whereas the name of AMD’s technology speaks for itself.
This situation persisted for quite a while. Even relatively late projects from id Software and Epic Games, such as Quake and Unreal, used the CPU as the main tool for processing graphics, notwithstanding their significantly expanded game worlds. Things changed in 1996, when a young and then-obscure firm, 3dfx Interactive, unveiled the world’s first 3D graphics accelerator for the PC that was affordable to ordinary gamers. The product looks primitive by today’s standards, as it could only map and filter textures, but the quality of its filtering was unprecedented at the time. No CPU then in existence, whatever multimedia instruction sets it supported, could deliver such performance even at lower image quality. A game would look completely different running in Glide mode as opposed to software mode.
That was the first revolution in the world of gaming 3D graphics. CPUs lost ground year by year, their influence on performance steadily diminishing. There were other turning points, the next being the Nvidia GeForce 256, the world’s first graphics processor with a hardware TCL (Transformation, Clipping, Lighting) unit, which could transform 3D coordinates into 2D ones, clip polygons and light the scene, offloading the CPU. As is often the case, the new product did not take off immediately. Nvidia’s competitors still relied on the growing computing capacity of CPUs for those tasks, yet hardware TCL had become widespread by the end of 2001 anyway. That same year brought a third revolution, which expanded the capabilities of GPUs even further by making them programmable: the Nvidia NV20 (GeForce 3) came out as the first chip to support DirectX 8.0. The last notable milestone occurred in 2002, when ATI Technologies announced the R300, the first GPU to support DirectX 9.0.
From that moment onwards, GPUs developed in an evolutionary way. New versions of DirectX and OpenGL were implemented, the computing part, originally divided into vertex and pixel processors, became unified, new types of shaders were supported, and there were many other innovations. GPUs quickly surpassed ordinary CPUs in sheer computing power, giving rise to the idea of using them not only to process graphics but also to accelerate complex computations unrelated or only loosely related to 3D applications. Both leading developers, AMD and Nvidia, are working actively in this direction, but that is not the point of this review. Looking at the computing capabilities of today’s GPUs, measured in teraflops (more than the huge supercomputers of earlier times could offer!), one might find modern CPUs to have rather humble specifications.
This raises a natural question: do 3D games need powerful CPUs at all? The answer is not as simple as it seems. First, if some GPU resources are allotted to computing the game AI or physics model, fewer resources are left for graphics processing. And we know only too well that today’s games have very complex visuals that may take all 1600 stream processors of an RV870 chip to render at a decent frame rate. Second, it is not easy to rewrite game code to make maximum use of GPU resources. It seems that a number of computational tasks, including gaming ones, are still performed better on the CPU, which is why premium-class gaming computers from famous manufacturers like Alienware are equipped with extremely fast and expensive CPUs. For example, the quad-core Intel Core i7-975 Extreme Edition cost $999 when announced, and the newest six-core Core i7-980X is going to cost that much today. That is quite a lot, but a top-end graphics card is expensive as well: a Radeon HD 5850 costs about $300, whereas topmost solutions that deliver maximum performance cost $600-800 (a dual-processor Radeon HD 5970) or even $1000 and more (a pair of Nvidia GeForce GTX 480 cards working in SLI mode).