
It’s no secret now that the first generation of mainstream DirectX 10 graphics processors, and of the graphics cards based on them, was far from a success, especially in terms of performance. Notwithstanding their innovative architecture, ATI’s Radeon HD 2600 and Nvidia’s GeForce 8600 series didn’t do well in games, often being outperformed by previous-generation solutions which, though not as technically advanced, were free from such obvious bottlenecks as a small number of execution subunits or insufficiently optimized drivers. Another performance-limiting factor for both solutions was the 128-bit memory bus, which couldn’t feed the GPU enough data in modern games, especially with FSAA enabled and at high display resolutions.

This simplification of the mainstream GPUs can be explained by the necessity of keeping their manufacturing cost within reasonable limits with the tech process available at the time. The unified graphics architecture, even cut down as it was in the Nvidia G84 and ATI RV630, was still very complex to implement: the chips incorporated 289 and 390 million transistors, respectively, which was about the same number of transistors as in the flagship DirectX 9 GPUs with ordinary, non-unified architecture. To remind you, the Nvidia G71, the heart of the GeForce 7900 family, consisted of 278 million transistors, while the ATI R580, employed in the Radeon X1900/X1950 series, was made out of 384 million transistors. A memory bus wider than 128 bits could not be used, either: the thin tech process that reduced the cost of these GPUs also yielded dies too small to accommodate a wider memory interface.

Of course, we remember that the GeForce FX and Radeon 9600 could not offer high performance in DirectX 9, either, yet we hadn’t expected the GeForce 8600 and ATI Radeon HD 2600 to be so poor in comparison with previous-generation graphics cards (Radeon X1950 Pro). The developers obviously overdid it with the cost-reducing measures.

Nvidia was the first to find a way out of the situation, announcing the new G92 graphics core on October 29, 2007. That GPU powered the GeForce 8800 GT series. Manufactured on the 65nm tech process, the new chip proved to be a success: despite the awesome complexity of the core (754 million transistors), its cost, die area, heat dissipation and power consumption were kept within reasonable limits. The G92 was just slightly inferior to the G80 in terms of functional subunits, but made up for that with increased clock rates and a 256-bit memory bus. The new card, priced at only $259, was just slightly slower than the $599 flagship GeForce 8800 GTX across most of our tests (see our article From Extreme to Mainstream: Nvidia GeForce 8800 GT from Leadtek for details). The best representative of the other camp, the ATI Radeon HD 2900 XT, was beaten mercilessly as it delivered lower performance at a much higher level of power consumption. So what did AMD do?

 