
We have already discussed ATI’s strategy in detail: the company decided not to focus on high-performance monolithic GPUs and instead bet on relatively inexpensive mainstream graphics solutions. That bet has fully paid off (for details see our article ATI Radeon HD 4800 Graphics Architecture: Long Anticipated Revenge?). Today we will pay special attention to the completely opposite approach to winning the market taken by Nvidia and find out whether they made the right bet.

Until recently Nvidia dominated the market of high-performance single-card graphics solutions, where its only real competitor was the ATI Radeon HD 3870 X2. Nevertheless, the graphics architecture needed a refresh and the market longed for new graphics processors, especially since Nvidia’s major competitor wasn’t sitting on its hands and was putting the final touches on its “weapon of retribution”. Moreover, some games, such as Crysis or Call of Juarez, demanded more than the industry could offer at the time, and as usual, only next-generation solutions could actually deliver a new level of gaming performance. Besides, a long time - a year and a half - had already passed since the arrival of the Nvidia GeForce 8800 GTX, so the industry was simply crying out for more.

As we have already said, ATI gave up the idea of a high-performance monolithic GPU targeted at the high-end market segment. However, the reason for that decision had nothing to do with resource availability.

The previous GPU generation had already turned out pretty complex: transistor counts had long since passed 500 million, the die measured 484 sq. mm, and power consumption of over 100 W had become the norm. As a result, had they continued increasing GPU potential the usual way, they would have ended up with a far more complex chip of larger size, which would evidently run hotter and consume much more power. Developing and finalizing such “mega-chips” requires significant engineering skill and financial investment and takes a lot of time, so at some point it could become unaffordable even for the recognized leaders of the discrete graphics market. Moreover, performance might no longer scale as dramatically as it had in past years.

As a result, ATI decided to focus on designing the most advanced mainstream graphics processor it could and, by combining a couple of those chips within a single product, successfully obtained a high-end solution. Of course, this strategy has a number of drawbacks. The main ones are certainly the need to optimize drivers for multi-GPU configurations and the significant performance and price gap between single- and dual-chip graphics solutions.

Nvidia, however, remained loyal to its principles and wasn’t afraid to pursue its “GPU performance at any price” philosophy, despite its recent success with the dual-chip Nvidia GeForce 9800 GX2. Although ATI’s approach certainly has its advantages, the traditional way also offers a number of benefits, the major one being freedom from the problems typical of contemporary multi-GPU configurations. Moreover, a monolithic chip gives a little more flexibility when lining up an entire family of solutions: by simply disabling some functional GPU units, the company can create less powerful and less expensive graphics adapter modifications (though it also means that some solutions targeted at specific market segments will be more expensive to manufacture), and by polishing the production technology and raising the core frequency potential, it can create more powerful and more expensive ones.

The new Nvidia chip codenamed G200 was supposed to crown the traditional approach to GPU design: the most complex, the most expensive, the most powerful and, of course, the fastest of them all. Everything was at stake here: the company stopped at nothing in trying to prove to the world who the real king of consumer 3D graphics was.

Nvidia claimed absolute leadership on June 17, 2008, and in today’s article we will try to find out whether that claim was rightful. As usual, we are going to start with a comparative analysis of the new GPU’s features and the functionality of solutions based on it.

 