 


2008 was a tough year for Nvidia. The company suffered defeat after defeat, retreating in every sector of the discrete 3D graphics market before the sudden and energetic onslaught of AMD’s graphics department. Nvidia has only itself to blame for these misfortunes, however. The company chose the wrong development strategy for its GPUs, putting all of its effort into the G200, and it also delayed the transition of its existing cores (including the previous-generation G92) to a thinner manufacturing process.

The high potential of the G200 chip could not be fully tapped on the 65nm tech process. Being highly complex with its 1.4 billion transistors, the G200 simply could not work at high frequencies. The shader domain of the flagship GeForce GTX 280 was limited to 1.3GHz, while the GeForce GTX 260 was clocked at just 1242MHz. For comparison, the shader domain of the original GeForce 8800 GTS (based on the 90nm G80 core) used to be clocked at about the same frequency! And most unpleasantly, the new GPU was not always superior to ATI’s simpler RV770 chip.

That’s why the G200 had to be transitioned to a newer, thinner tech process if it was to remain competitive. Such a transition could increase the chip’s frequency potential while keeping its power consumption and heat dissipation within acceptable limits. It would also pave the way for Nvidia’s counterpart to ATI’s Radeon HD 4870 X2: building a dual-GPU card out of two 65nm G200 chips would have resulted in an unacceptably hot and uneconomical product.

People at Nvidia realized all that better than anyone else. Losing one’s share of the discrete GPU market is easy, but winning it back is a daunting task. The result of Nvidia’s efforts is the 55nm version of the G200, also known under the codenames G200b, GT200b, GT206 and a few others. We will call it G200b. There is nothing new about the architecture of this chip: like the G200, it incorporates 240 unified shader processors, 80 texture-mapping units and 32 raster back-ends. The only difference is that the G200b is manufactured on a 55nm tech process. It is thus supposed to run cooler and consume less power. Or, if its power consumption and heat dissipation are as high as those of the 65nm G200, the G200b should be able to clock higher and thus be faster.

The GeForce GTX 260 graphics card still uses cut-down chips with 216 ALUs, 72 TMUs and 28 raster back-ends. The G200 remains just as complex in its 55nm version, so its manufacturing cost is still quite high. We therefore guess that the cores installed on the GeForce GTX 260 Core 216 are chips that either failed the frequency check or have defective subunits, which prevents Nvidia from using them for the GeForce GTX 285 and 295.
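
As a quick aside, the shader-processor figures quoted above can be sanity-checked programmatically on any CUDA-capable card. The short sketch below is our illustration rather than part of the original review; it assumes the CUDA toolkit is installed and uses the standard runtime call cudaGetDeviceProperties. On compute-capability 1.x parts such as the G200/G200b, each multiprocessor holds 8 stream processors, so a full chip reports 30 multiprocessors (240 SPs) and a GTX 260 Core 216 reports 27 (216 SPs).

// sp_count.cu — minimal sketch (our illustration, not from the review):
// query the first CUDA device and derive its stream-processor count.
// Assumes the CUDA runtime; compile with: nvcc -o sp_count sp_count.cu
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int dev_count = 0;
    if (cudaGetDeviceCount(&dev_count) != cudaSuccess || dev_count == 0) {
        printf("No CUDA-capable device found.\n");
        return 1;
    }

    struct cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);          /* first GPU in the system */

    /* Compute-capability 1.x parts (G80/G92/G200 family) have 8 SPs per
       multiprocessor, so a full G200b shows 30 SMs = 240 SPs and a
       GTX 260 Core 216 shows 27 SMs = 216 SPs. */
    int sp_per_sm = (prop.major == 1) ? 8 : 0;  /* only handle 1.x here */
    printf("%s: %d multiprocessors", prop.name, prop.multiProcessorCount);
    if (sp_per_sm)
        printf(" x %d = %d stream processors", sp_per_sm,
               prop.multiProcessorCount * sp_per_sm);
    printf(", shader clock %.0f MHz\n", prop.clockRate / 1000.0);
    return 0;
}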

EVGA, one of Nvidia’s closest partners, was the first company to introduce graphics cards based on the new version of the G200 GPU. This privilege is a kind of reward for loyalty: EVGA produces graphics cards based exclusively on Nvidia GPUs. Other manufacturers are sure to follow suit soon. But thanks to EVGA we have the opportunity to check out the G200b right now, using the EVGA GeForce GTX 260 Core 216 Superclocked card.

 
