 


PCB Design

Modern top-end graphics cards all look very much alike, and the new Nvidia GeForce GTX 580 is no exception:

 
GeForce GTX 580

 
GeForce GTX 480

There are only a few minor visual differences from the previous flagship. The GeForce GTX 580 lacks the slit in the cooler's casing through which the metallic cap of the heatsink peeped out on the GeForce GTX 480 (that cap used to get very hot under load, by the way). The new card also uses a different cooler design, so there are no heat pipes sticking out of the cooler's casing, and there is no slit in the PCB below the fan, either. Nvidia seems very optimistic about the new cooler, but if you install two GeForce GTX 580 cards in a SLI tandem, the top card's fan is going to struggle for air, being blocked by the bottom card. The "SLI-optimized" shape of the cooler's casing can hardly help much then, as the optimization boils down to a small depression near the fan.

As usual, the most interesting things are hidden under the cooler, so we removed it to take a better look at our GeForce GTX 580.

 
GeForce GTX 580 (left), GeForce GTX 480 (right)

There are many similarities between the PCBs of the GeForce GTX 580 and GeForce GTX 480: the developers copied whole blocks of the older PCB when designing the new one, which is an efficient approach that reduces the time and cost of creating a new PCB from scratch. The empty back part of the PCB, which used to house air vents, suggests that the GeForce GTX 580 could have been made shorter, though there is no urgent reason for that. The power connectors sit on the top edge of the card, so attaching the power cables is no problem even in a short system case. The card won't fit into compact system cases, but it is not really meant for them anyway.

The power subsystem of the GeForce GTX 580 is the same as that of its predecessor, following the 6+2 design. The 6-phase GPU voltage regulator is managed by a CHL8266 controller from CHiL Semiconductor located on the reverse side of the PCB.

Lower down on the PCB there is a seat for another chip of similar size, but its purpose is unclear. Nvidia says the GeForce GTX 580 is equipped with an advanced power monitoring system, but its presentation points to a different spot on the PCB, where we can only find two tiny chips marked A219.

The memory subsystem brings one difference: instead of the popular uP6210 controller, the memory voltage regulator is managed by an APW7088 chip from Anpec Electronics. Its documentation was not easy to find, as it is missing from the manufacturer's official website; we could only locate it through Google. The graphics card has two power connectors, a 6-pin and an 8-pin one. The latter doesn't look redundant even considering the promised improvements in energy efficiency.

Like the GeForce GTX 480, the new card is equipped with GDDR5 memory in K4G10325FE-HC04 chips from Samsung. These 1 Gbit (32Mb x 32) chips are rated for 1250 (5000) MHz. And like the previous flagship, the card doesn't make full use of such fast memory, even though its memory frequency is increased to 1002 (4008) MHz. The card carries 12 memory chips for a total of 1536 MB accessed across a 384-bit bus. The peak memory bandwidth of 192.4 GBps is quite impressive for a single-GPU card; the dual-processor Radeon HD 5970 is the only card with higher memory bandwidth, and only because it has two independent memory banks with 256-bit access. In its two power-saving modes the GeForce GTX 580 lowers its memory frequency to 162 (648) MHz and 68 (270) MHz, respectively. The popular MSI Afterburner tool reports these frequencies as 2004, 324, and 134 MHz.
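As a quick sanity check, the capacity and bandwidth figures quoted above follow directly from the chip count, bus width, and effective data rate. The helper below is our own illustration, not a vendor tool:

```python
def gddr5_bandwidth_gbps(command_clock_mhz, bus_width_bits):
    """GDDR5 moves data at 4x the command clock (quad data rate)."""
    effective_mts = command_clock_mhz * 4            # 1002 MHz -> 4008 MT/s
    bytes_per_transfer = bus_width_bits // 8         # 384-bit bus -> 48 bytes
    return effective_mts * bytes_per_transfer / 1000.0  # MB/s -> GB/s

# Twelve 1-Gbit chips, each 1024 / 8 = 128 MB:
capacity_mb = 12 * (1024 // 8)
bandwidth = gddr5_bandwidth_gbps(1002, 384)

print(capacity_mb)          # 1536
print(round(bandwidth, 1))  # 192.4
```

The same formula applied to the power-saving clocks (162 and 68 MHz) shows why idle memory bandwidth drops by an order of magnitude.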

The GPU of our sample is revision A1. Its marking contains the same frequency code, 375, as the marking of the GF100; we guess this number denotes GPU samples that have passed tougher frequency binning. The GF110 is no different from the GF100 in dimensions, and the two processors even seem to be pin-compatible. Our sample was manufactured in the 38th week of 2010, i.e. between September 19 and 27. Judging by the low revision number, Nvidia didn't have any serious problems manufacturing the GF110.

GPU-Z version 0.4.8 supports the GF110 but, as we've said above, you shouldn't trust everything it reports: parameters such as die size and transistor count are simply written into the utility's database and may differ greatly from the real values. The rest of the GPU parameters are reported correctly. The main domain frequency is indeed 772 MHz, and the shader domain is clocked at twice that, i.e. at 1544 MHz. The numbers of ALUs and raster operators agree with the official specifications: 512 and 48, respectively. GPU-Z doesn't report the number of texture-mapping units, but the texture and pixel fillrates are correct and agree with the official specs. As a reminder, the GF110 has 64 texture-mapping units capable of processing textures in any popular format at full speed, whereas the GF100 could only process FP16 textures at half speed.
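Those fillrate figures are easy to verify by hand: texture fillrate is TMU count times core clock, and pixel fillrate is ROP count times core clock. A minimal sketch with our own variable names:

```python
core_clock_mhz = 772
tmus, rops = 64, 48

# MTexels/s and MPixels/s, converted to G-units:
texture_fillrate_gts = tmus * core_clock_mhz / 1000
pixel_fillrate_gps = rops * core_clock_mhz / 1000

print(round(texture_fillrate_gts, 1))  # 49.4
print(round(pixel_fillrate_gps, 1))    # 37.1
```

Both results match the official GeForce GTX 580 specifications, which is how GPU-Z derives the fillrate fields it displays.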

MSI Afterburner identifies the card's key frequencies correctly but reports the memory frequency as one half, rather than one fourth, of the effective GDDR5 data rate. Version 2.0.0 of this tool cannot control the GPU voltage but does offer monitoring options. These show that the card's core frequencies drop to 51/101 MHz in idle mode and to 405/810 MHz in light tasks such as HD video decoding. Only in 3D applications does the graphics core run at its full speed.
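The seemingly conflicting memory readings are consistent with one another: GDDR5 has a command clock, a write clock at twice that rate (which appears to be what Afterburner shows), and an effective data rate at four times it. A small sketch of the relationship (the function and its names are ours):

```python
def gddr5_clocks(command_clock_mhz):
    """Relate the three common ways a GDDR5 frequency gets quoted."""
    return {
        "command": command_clock_mhz,        # base command clock, e.g. 1002 MHz
        "reported": command_clock_mhz * 2,   # half the effective rate (Afterburner-style)
        "effective": command_clock_mhz * 4,  # quad data rate, the "4008 MHz" figure
    }

print(gddr5_clocks(1002))   # {'command': 1002, 'reported': 2004, 'effective': 4008}
print(gddr5_clocks(162))    # {'command': 162, 'reported': 324, 'effective': 648}
```

The 134 MHz reading in the deepest power-saving mode fits the same pattern, allowing for the rounding of the 68 (270) MHz figure quoted earlier.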

The mounting bracket hasn't changed since the GeForce GTX 480. Its first tier carries a pair of DVI-I ports and a mini-HDMI connector that requires an adapter for a standard HDMI cable; the second tier is a vent grid for exhausting hot air out of the system case. DisplayPort is not supported, so if you want to connect more than two display devices, you have to build a SLI configuration out of two such cards using the pair of standard MIO connectors on the face side of the PCB. This is the typical selection of interfaces for modern Nvidia products, and it looks rather limited next to the connectivity options of the Radeon series.

 