by Tim Tscheblockov
07/27/2004 | 03:23 PM
Contemporary graphics cards surpass the older products across a number of parameters like speed, functionality, complexity and, alas, power consumption and heat generation. For many users the words “fast” and “high-performing” are synonyms of “power-hungry”, “hot” and “needs a hell of a cooling system”.
That’s why many hardware enthusiasts and gamers who are going to purchase a top-end graphics card consider replacing their PSU with a more powerful one, or get ready for a serious test of the old unit. The well-known story about GeForce 6800 Ultra graphics cards, for which NVIDIA recommends a PSU with a wattage of 480W, or at least 350 “high-quality” watts, only adds pessimism to the calculation of the future expenses.
So, power and heat of modern graphics cards are matters of great concern for each overclocker and gamer who’s preparing for an upgrade. In order for such people not to get overwhelmed with doubts I will lay out a method of measuring power consumption – and heat dissipation, too – of modern graphics cards and will also share with you the results of my tests, also with overclocking.
This is the first part of the investigation, and it concerns graphics cards on ATI’s chips only – those on NVIDIA’s GPUs will be discussed later. To escape the righteous anger of overclockers who won’t see our traditional global comparison of a score of graphics cards here, I will examine the power consumption of the cards not only at ordinary overclocking but also at extreme overclocking (with Vcore adjustments!). No one has ever done this before. I promise it is going to be very interesting :).
So, let’s get started.
Linking power consumption and heat dissipation of a graphics card, I follow the law of conservation of energy. Evidently, the graphics card is not a power source for any other PC component, so all the energy it consumes is eventually dissipated as heat. Thus, all the power consumption numbers listed below can also be read as heat dissipation numbers.
Before the arrival of graphics cards based on the RADEON 9700/9500 series chips (and, earlier still, of the Voodoo 5500/6000 cards from 3dfx), all gaming graphics cards received their power from the AGP slot alone. Quite a few of the AGP slot pins are designed to supply power to the graphics card on the 3.3V, 5V and 12V lines. The maximum consumption currents on these lines are 6A, 2A and 1A, respectively, according to the latest specification, AGP 3.0. Knowing the voltages and the maximum currents, you can easily calculate the maximum power a graphics card may draw through the AGP slot – it is about 41.8W.
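For reference, the 41.8W figure follows directly from the per-line limits above; here is a quick sketch of the arithmetic (Python, for illustration only):

```python
# Maximum power deliverable through the AGP slot, per the AGP 3.0 limits
# quoted above: 6 A on the 3.3 V lines, 2 A on 5 V, 1 A on 12 V.
agp_limits = {3.3: 6.0, 5.0: 2.0, 12.0: 1.0}  # rail voltage -> max current, A

agp_max_power = sum(volts * amps for volts, amps in agp_limits.items())
print(f"AGP slot maximum: {agp_max_power:.1f} W")  # 19.8 + 10 + 12 = 41.8 W
```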
This is enough for modern mainstream graphics cards – they don’t have an additional power connector. Faster cards are not satisfied with that. Even if the peak power consumption of a device doesn’t exceed 41.8W but approaches this point, additional power is desirable – constantly working close to the limit never made any computer component live longer.
Today no one is surprised to see an extra power connector on top-end graphics cards. These connectors are analogous to the power connectors of hard disk drives and optical drives (RADEON 9700/9500-based cards use the power connector of the floppy drive), and they supply power on the 5V and 12V rails. Some graphics cards are even equipped with two such connectors. According to this whitepaper from Molex, the maximum currents through these connectors are 6.5A or 10A, depending on the positioning of the connector on the PCB (“right angle” or “vertical”). Thus, the maximum power consumption of graphics cards with one additional connector, not counting the AGP slot, may range from 110.5W to 170W; of graphics cards with two additional connectors, from 221W to 340W. You must agree that this is enough for any modern graphics card as well as for a few generations of them to come.
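The connector figures above can be verified the same way; a minimal sketch, assuming one 5V and one 12V power pin per connector at the per-pin current limits quoted above:

```python
# Maximum power through one auxiliary power connector: its single 5 V pin
# plus its single 12 V pin, each rated at the same per-pin current limit
# (6.5 A for the "right angle" part, 10 A for the "vertical" one).
def connector_max_power(amps_per_pin: float) -> float:
    return (5.0 + 12.0) * amps_per_pin

print(connector_max_power(6.5))       # 110.5 W  (right-angle connector)
print(connector_max_power(10.0))      # 170.0 W  (vertical connector)
print(2 * connector_max_power(10.0))  # 340.0 W  (two vertical connectors)
```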
By the way, the fact that the GeForce 6800 Ultra is the only top-end graphics card today that has two connectors for supplying extra power (I don’t want to talk about the stillborn Volari from XGI here) shouldn’t be taken as the card’s willingness to eat, and to radiate as heat, up to 340 watts of electricity. The two power connectors on the GeForce 6800 Ultra are not a consequence of a crazy appetite, but of the manufacturer’s desire to ensure the stability of power supply by dividing the currents between two connectors. Well, I’ve deviated too much from the topic – power consumption of graphics cards with NVIDIA’s chips is going to be the subject of a separate article.
It’s easy to grasp the idea of measuring a graphics card’s power consumption – just recall the basic physics course, particularly the legacy of the wise Georg Ohm. By measuring the voltage in a power circuit and multiplying it by the current in this circuit, we get the amount of power consumed by the graphics card from this power circuit. Since there are several power circuits, we should then sum up the results for each of them to get the total power consumption of the card.
We can measure the current in a power circuit with the help of a shunt, inserting it into a break in this circuit. The current is the same at every point of a series circuit, so the current that flows through the shunt is exactly the current drawn by the graphics card.
I used 5W resistors with a resistance of 0.12 Ohms as shunts, connecting them in parallel in fours and assembling a simple adapter for an easy connection between the power cable and the graphics card:
You can see in the snapshot that this adapter contains two perfectly identical shunts, attached to the 5V and 12V lines. The two middle wires – the ground wires – go from one connector to the other without breaks:
The resulting resistance of each shunt was 0.03 Ohms – good enough for me, as the voltage drop across the shunts never exceeded 0.15V even in the hardest modes. Overall, the resistance should be low enough for the voltage not to drop too much on the shunt, but high enough for the drop to be measured by an ordinary voltmeter. I used a professional digital multimeter from UNI-T, the UT70D model.
Let’s move on. The voltage drop on the shunt divided by the shunt resistance equals the value of the current that flows through the shunt. By multiplying this value by the voltage that comes to the card after the shunt I get the amount of power consumed by the graphics card.
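The whole measurement procedure boils down to a couple of lines of arithmetic; here is a sketch with hypothetical voltmeter readings (the voltage drops are illustrative values, not measurements from this article):

```python
# Four 0.12-ohm resistors in parallel form one shunt of 0.12/4 = 0.03 ohm.
R_SHUNT = 0.12 / 4

def rail_power(v_drop: float, v_rail: float) -> float:
    """Power drawn on one rail: Ohm's law gives the current through the
    shunt, and current times the post-shunt rail voltage gives the power."""
    current = v_drop / R_SHUNT  # I = U / R
    return current * v_rail     # P = U * I

# Hypothetical readings for illustration:
p_12v = rail_power(v_drop=0.10, v_rail=12.0)  # 3.33 A on the 12 V rail -> 40 W
p_5v = rail_power(v_drop=0.06, v_rail=5.0)    # 2.00 A on the 5 V rail  -> 10 W
print(f"Total through the connector: {p_12v + p_5v:.0f} W")
```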
This should all be clear from the scheme:
So, it’s easy to find the total power consumption of a graphics card from the additional power connector.
Well, this is only a portion of all the consumed power. Besides the 5v and 12v circuits that go through the additional power connector(s), there are also 3.3v, 5v and 12v power circuits going through the AGP slot. It’s more difficult to measure the current flowing in these circuits – you can’t plug an adapter with the shunts into the AGP slot!
What to do? I took it easy and just insulated the appropriate pins of the AGP connector on the graphics card with stripes of adhesive tape (pin A1 – 12v; pins B2, B3 – 5v; pins A9, B9, A16, B16, A25, B25, A28, B28 – 3.3v) and applied those voltages directly to the graphics card, through the prepared shunt, taking the voltages from the PSU.
This gave me more trouble with each of the cards – I had to mess around with scissors and tape searching for any points to attach 3.3v, 5v and 12v lines to and soldering wires to them – but this allowed calculating the total power consumption of the card with more precision. Moreover, some of the graphics cards don’t have an additional power connector, so the powering through the shunt, bypassing the AGP slot, was the only way of measuring their power consumption.
I tested the graphics cards on the following testbed:
I measured the power consumption in two modes: “Idle” and “Burn”. There were no running applications in the Idle mode; the screen was displaying the Windows Desktop with a scattering of standard icons in the 1280x1024x32@75Hz display mode. The Burn mode used one of the scenes from Far Cry in the 1600x1200 resolution with forced 4x full-screen antialiasing and 16x anisotropic filtering. On the Training level, near the hut with the binoculars, I saved the game and used this save with all the graphics cards to create the identical test conditions.
I also ruminated on the idea of using 3DMark 2001, 3DMark03, Unreal Tournament 2004 or “IL-2: Sturmovik” for the tests in the Burn mode, but the scene from Far Cry proved to be the heaviest of all for the graphics cards.
The RADEON X800 XT Platinum Edition graphics processor is currently ATI’s flagship product. It is manufactured on a 0.13-micron tech process with low-k dielectrics and consists of about 160 million transistors. The graphics processor works with GDDR3 SDRAM, which differs for the better from DDR and DDR2 by having lower power consumption and heat dissipation, among other things. The RADEON X800 XT Platinum Edition is represented by a graphics card from HIS. Its full name is HIS Excalibur X800 XT IceQ II:
This product is made in full compliance with ATI’s reference design, but the cooling system is different. There’s an intricate contraption on the face side of the card that consists of: 1) a copper foundation that takes heat from the GPU die and the memory chips, 2) aluminum ribs attached to the base and 3) a plastic casing that directs air along the heatsink’s ribs and exhausts it to the outside. For the system to work normally, it is necessary that the neighboring PCI slot be unoccupied.
The memory chips on the back side of the card are also covered with a passive aluminum heatsink:
The nominal frequencies of the GPU and memory on this card were 525/1150MHz. The card didn’t boast exceptional overclockability despite its advanced cooling system: the maximum stable frequencies were 550/1250MHz.
The following table contains detailed information:
Well, ATI’s claims that the RADEON X800 XT Platinum Edition consumes no more than 70W are fully confirmed. Curiously enough, the biggest portion of the consumption falls on the additional power connector, while the AGP slot only supplies about 10W in the Burn mode.
When overclocking, I increased the GPU frequency by 4.8% and the memory frequency by 8.7%. The power consumption grew only slightly: by 4.6% in the Idle mode and by 4.2% in the Burn mode. Even overclocked, the card never reached a power consumption of 70W.
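All the growth percentages in this article are computed the same way; here is a trivial helper (hypothetical, just to make the arithmetic explicit):

```python
def growth_pct(nominal: float, overclocked: float) -> float:
    """Relative growth of a clock rate (or power figure) in percent."""
    return (overclocked / nominal - 1) * 100

# The HIS card above: 525/1150 MHz nominal, 550/1250 MHz overclocked.
print(f"GPU:    {growth_pct(525, 550):.1f}%")   # ~4.8%
print(f"Memory: {growth_pct(1150, 1250):.1f}%") # ~8.7%
```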
The RADEON X800 Pro’s distinguishing features, compared to the RADEON X800 XT Platinum Edition, are reduced clock rates and availability of only 12 pipelines in the core. The total amount of transistors is the same in both cards, but some of them do not work in the X800 Pro – 4 pixel pipelines are disabled. I took a RADEON X800 Pro card from PowerColor for my tests:
This card doesn’t differ from the reference cards from ATI in anything except the sticker on the casing of the cooling system. Its standard frequencies are 475/900MHz. Overclocking was normal: without any modification of the cooling system or volt-modding, the card was stable at 530/1180MHz at best.
It’s clear from the diagram that the RADEON X800 Pro has only slightly lower power consumption and heat dissipation than the top-end model in the Idle mode, but the gap widens in the Burn mode. Without overclocking, the RADEON X800 Pro consumes about 25% less than the X800 XT Platinum Edition (and 15% less when overclocked). This difference – 25% against 15% – is explained by the fact that the clock rates of the RADEON X800 Pro grew more at overclocking than those of the X800 XT Platinum Edition.
The RADEON X800 Pro and the RADEON X800 XT Platinum Edition have a similar design, so they consume power in a similar way: the RADEON X800 Pro also feeds mostly out of the additional power connector.
The overclocked RADEON X800 Pro experienced a bigger consumption growth than the X800 XT Platinum Edition: 13.4% in the Idle mode and 15.1% in the Burn mode (its core frequency grew by 11.6%, and the memory frequency grew by 31%).
You can refer to my PowerColor RADEON X800 Pro review for details about the modification and extreme overclocking of this graphics card. This time the graphics card had a water-cooling system installed instead of the standard cooler. The diagram below shows the power consumption results as measured at the card’s regular frequencies with the nominal voltages as well as at overclocked frequencies with Vgpu adjustment:
During extreme overclocking, the power consumption of the graphics card grew up by 48.9% in the Idle mode and by 66.9% in the Burn mode (the GPU frequency growth was 32.6%, the memory frequency growth was 31.1%).
Curiously enough, the power consumption of the card at the nominal frequencies turns out to be lower with the water-cooling system than with the standard air cooler. Evidently, the water cooling keeps the GPU temperature lower, which positively affects the power consumption. This is probably due to the reduction of various leakage currents and other effects at a low die temperature – specialists in semiconductors may have something to add on the topic. I can only show you a diagram from the report on extreme overclocking of the RADEON X800 Pro – it contains the die temperature and the ambient temperature:
But back to the power consumption – it grew up to 78 watts at extreme overclocking!
The next diagram shows the dependence of the power consumption on the GPU frequency and the GPU voltage. Note that only the core frequency is increased here, while the memory is left at its nominal clock rate. That’s why the power consumption is lower than at simultaneous overclocking of the core and memory.
Up to a 535MHz core frequency, the GPU voltage remained the same and the card worked with its standard cooling system. The power consumption grew in direct proportion to the core clock rate.
At 550MHz and 560MHz frequencies, the core voltage remained the same, but I installed a water-cooling system. The power consumption then went down somewhat.
To increase the GPU clock rate further, I had to increase the GPU voltage. The power consumption started going up again, and at a higher rate than at ordinary overclocking. However, the growth remained roughly linear: 3-4 watts for each extra 10MHz of core frequency and 0.05V of core voltage.
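That observation suggests a crude linear model of the extra power per overclocking step; a sketch, with the 3.5W-per-step coefficient chosen as an assumed midpoint of the 3-4W range above, not a measured value:

```python
# Each 10 MHz step (taken together with its 0.05 V voltage increase) costs
# roughly 3-4 W; 3.5 W is an assumed midpoint, not a measured coefficient.
WATTS_PER_STEP = 3.5

def extra_power(extra_mhz: float) -> float:
    """Estimated extra power for a given core-frequency increase at
    voltage-modded extreme overclocking."""
    return extra_mhz / 10 * WATTS_PER_STEP

print(extra_power(70))  # +70 MHz of voltage-modded overclocking -> ~24.5 W
```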
So, extreme overclocking with voltage modification is the best way to check your system for reliability. You shouldn’t experiment with volt-modding without having a PSU reserve, a good cooling system on the graphics card and a clear goal before you.
The RADEON 9800 XT is a top-end graphics processor of the last generation, made with 0.15-micron tech process out of 110 million transistors. We’ve got a RADEON 9800 XT card from Sapphire:
Curiously enough, RADEON 9800 XT graphics cards have their memory chips cooled – unlike RADEON X800 devices. The base of the face-side cooler covers the core as well as the memory chips, while the chips at the back side of the PCB are under a copper plate:
The nominal frequencies of the card are 412/730MHz; I made it to 470/840MHz at overclocking. The measurements of the power consumption gave the following results:
It is clear that the developers of the X800 paid serious attention to power-related problems. The X800 Pro consumes a little less than the RADEON 9800 XT, while the X800 XT Platinum Edition, which is incomparably faster, consumes only 3 watts more than the 9800 XT – and only in the Burn mode!
The transition to the thinner 0.13-micron tech process, the reduced core voltage (the X800’s 1.4V against the 9800 XT’s 1.7V), the use of less power-hungry memory and various improved power-saving technologies – all these factors contribute to the excellent result. The difference between the X800 and the RADEON 9800 XT is especially clear in the Idle mode: X800-based graphics cards consume half as much power.
The power consumption of the card grew by 16.3% in the Idle mode and by 12.3% in the Burn mode (the GPU and memory clock rates grew by 14% and 15%, respectively).
Interestingly, the RADEON 9800 XT loads the AGP slot more than the new generation of ATI graphics cards: in the Burn mode, the total consumption on the 3.3v, 5v and 12v lines in the AGP slot reaches about 20W, while the X800 consumes 10W at most through the AGP.
In comparison with the RADEON 9800 XT, the RADEON 9800 Pro has lower clock rates and half the memory (128MB). So, here’s a RADEON 9800 Pro graphics card from Sapphire:
As you see in the snapshot, the RADEON 9800 Pro has a modest GPU cooling system, while the memory chips are not cooled at all.
The regular clock rates of this card were 380/680MHz. It did nothing exceptional at overclocking: 440/760MHz. The power consumption measurements follow:
The RADEON 9800 Pro has a noticeably lower power consumption than the RADEON 9800 XT. At overclocking, its consumption grew by 14.9% in the Idle mode and by 11.8% in the Burn mode (the GPU frequency increased by 15.8%, the memory frequency by 11.8%). More details in the following table:
The RADEON 9800 Pro loads the AGP slot more than the RADEON 9800 XT, consuming a total of 23W in the Burn mode through the 3.3V, 5V and 12V lines. The current on the 3.3V line reaches as high as 5A in the Burn mode, which is very close to the 6A limit.
The RADEON 9600 XT is a middle-range product made by 0.13-micron technology out of 70 million transistors. I took a RADEON 9600 XT card from PowerColor for my tests:
Memory chips are placed on the face as well as back side of the PCB, but only the face chips have some cooling:
It means that the RADEON 9600 XT doesn’t really need any cooling of the memory chips. The big cooler that covers the GPU as well as the memory chips is just a fashionable thing, adding solidity to the product.
The default frequencies of the card are 500/680MHz – the memory frequency turns out to be higher than the standard 600MHz. Overclocking was rewarding as I reached 600/850MHz.
The power consumption characteristics:
More details are in the table:
Well, the RADEON 9600 XT loads the AGP slot noticeably, but never gets close to the peak currents. The overall power consumption of the card is very small compared to the faster products. At overclocking, the power consumption grew by 11.1% in the Idle mode and by 6.8% in the Burn mode (the GPU and memory clock rates grew by 20% and 25%, respectively).
The RADEON 9600 Pro has a lower GPU frequency than the RADEON 9600 XT. Here’s a 9600 Pro card from PowerColor:
The memory on this card is not cooled at all – there’s only a GPU cooler:
The nominal frequencies of the card are 400/600MHz. I reached 450/740MHz at overclocking. Power consumption measurements:
More information in the table:
The RADEON 9600 Pro has a lower power consumption than the RADEON 9600 XT, and there’s nothing surprising about that. The frequencies of the GPU and the graphics memory grew by 12.5% and 23.3% at overclocking, and the power consumption increased by 5.9% and 7.4% in the Idle and Burn modes, respectively.
First of all, I’d like to emphasize a remarkable fact: the new graphics cards from ATI, based on the RADEON X800 Pro and X800 XT Platinum Edition, have practically the same power consumption and heat dissipation in the Burn mode as the top-end products of the previous generation. At the same time, the performance of the new cards is, of course, incomparable with that of the RADEON 9800 XT/Pro.
Thus, if you’ve got a RADEON 9800 XT/Pro graphics card and want to replace it with a RADEON X800 XT/Pro, you will have no power-related problems whatsoever. Moreover, thanks to the lower power consumption and heat dissipation of the new cards in the Idle mode, the thermal environment will even improve when no “heavy” 3D applications are running.
If you’re an owner of a weak graphics card and plan to replace it with a RADEON 9800 or X800, make sure you have a high-quality PSU. Cheap 300W units from obscure manufacturers are not the best choice for such a system, especially if you also use a powerful CPU.
I have no comments about the RADEON 9600 XT/Pro: these graphics cards are no threat even to the least powerful PSUs – their power consumption properties shouldn’t bother you at all.