Power Consumption of Contemporary Graphics Accelerators. Part II: NVIDIA vs. ATI

Today we are going to take a closer look at the power consumption of graphics cards based on NVIDIA chips that are available on today’s market. We will also compare these results with the power consumption of cards based on rival solutions from ATI and see who managed to achieve better results here.

by Tim Tscheblockov
08/22/2004 | 12:11 PM

Power consumption and heat dissipation of modern graphics cards are among the most important considerations for any user planning an upgrade. In my first report on the subject (see our article called Power Consumption of Contemporary Graphics Accelerators. Part I: Graphics Cards on ATI Chips), I tried my best to accurately measure the power-related characteristics of graphics cards on GPUs from ATI Technologies. Today I will do the same with cards on NVIDIA’s GPUs.


My method remains the same: I insert a shunt into each of the power circuits and measure the voltage drop across this shunt. Knowing the value of the voltage drop and the shunt’s resistance, I can calculate the current flowing in each power circuit. Then, from the known current and voltage values, I can easily determine the power consumption.
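In code form the calculation is trivial; the short sketch below uses a hypothetical 0.01-ohm shunt and made-up readings, just to show the arithmetic:

```python
# Shunt arithmetic: Ohm's law for the current, then P = I * U for the power.
# The 0.01-ohm shunt resistance and the sample readings are hypothetical,
# used only to illustrate the calculation.
SHUNT_RESISTANCE_OHM = 0.01

def rail_power(shunt_drop_v: float, rail_voltage_v: float) -> float:
    """Power drawn on one rail, from the voltage drop across the shunt."""
    current_a = shunt_drop_v / SHUNT_RESISTANCE_OHM  # I = U_drop / R_shunt
    return current_a * rail_voltage_v                # P = I * U_rail

print(round(rail_power(0.042, 12.0), 1))  # 42mV drop on the 12v line -> ~4.2A -> ~50.4W
```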

With graphics cards that have additional power connectors, I attached the shunt into those additional power circuits…

…as well as into the power circuits of the AGP slot. I isolated the appropriate contacts of the AGP slot and fed the current to the card directly:

For cards that are only powered through the 3.3v, 5v and 12v lines of the AGP slot, I attached the shunt into these power lines only, of course. For more details about my method of measuring the power consumption of graphics cards, refer to my previous article.

The measurement tools are the same as in the first article: a professional digital multimeter UT70-D from UNI-T…

…and a homemade shunt:

Testbed and Methods

I tested the graphics cards in the following system:

Software:

I measured the power consumption in two modes: “Idle” and “Burn”. There were no applications running in the Idle mode; the screen displayed the Windows Desktop with a scattering of standard icons in the 1280x1024x32@75Hz display mode. The Burn mode used a scene from Far Cry in 1600x1200 resolution with forced 4x full-screen antialiasing and 16x anisotropic filtering (8x on GeForce FX family cards). I saved the game on the Training level and used this save with all the graphics cards to create identical test conditions.

NVIDIA GeForce 6800 Ultra

This is the flagship product of the new GPU series from NVIDIA. GeForce 6800 series chips are manufactured on a 0.13-micron tech process and contain about 220 million transistors. The fastest chip, the GeForce 6800 Ultra, has all 16 pipelines enabled and works at a frequency of 400MHz. I’ve got an Apogee AA6800 Ultra card from Chaintech for my tests:

The card follows the reference design with a standard cooler that covers the GPU as well as the graphics memory chips, with two additional power connectors and with a heatsink that cools the power elements of the GPU and memory voltage regulators. The Apogee AA6800 Ultra features the NVIDIA GeForce 6800 Ultra processor and has 256MB of GDDR3 memory on board. The regular frequencies of the GPU/memory are 400/1100MHz. They grew only slightly at overclocking, to 440/1250MHz.

My power consumption measurements yielded the following results:

Here’s the first surprise! I said back in my first report that the two power connectors of the GeForce 6800 Ultra were no real reason for panic, and that is indeed so. The card does not eat a lot more power than the topmost competitor model, the ATI RADEON X800 XT Platinum Edition. In the Burn mode, the non-overclocked GeForce 6800 Ultra consumes 72.09W against the RADEON X800 XT PE’s 63.23W. This is only 14% more.

In the Idle mode, the power consumption of the GeForce 6800 Ultra is 29.28W against 17.62W of the RADEON X800 XT PE. The percentage difference is bigger here, though: the GeForce consumes about 66% more than the RADEON in this mode.
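Here is the same comparison worked out from the measured figures, both as absolute watts and as percentages:

```python
# Difference between the GeForce 6800 Ultra and the RADEON X800 XT PE,
# using the measured figures quoted above.
def compare(nv_watts: float, ati_watts: float) -> str:
    """Absolute and relative advantage of the ATI card in consumed power."""
    diff = nv_watts - ati_watts
    return f"+{diff:.2f}W (+{diff / ati_watts * 100:.0f}%)"

print(compare(72.09, 63.23))   # Burn mode: +8.86W (+14%)
print(compare(29.28, 17.62))   # Idle mode: +11.66W (+66%)
```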

Curiously, GeForce 6 series GPUs, like the RADEON X800, do not drop their clock frequency or voltage when idle. The GeForce 6800 Ultra is always clocked at 400MHz, and its voltage, according to my measurements on the Chaintech card, is always 1.45v.

So, 66% looks terrible, but it actually amounts to roughly the same difference of about 10 watts as in the Burn mode. A dozen watts either way is a trifle for modern power supply units. But why, then, does the GeForce 6800 Ultra-based graphics card require two additional power connectors?

As you can see, the GeForce 6800 Ultra takes practically no power through the AGP slot: the total consumption of the card on the 3.3v, 5v and 12v lines that go through the AGP is less than 5W. But the card consumes as much as 48.91W in the Burn mode through the 12v line of the additional power connector. The current in this line is 4.22 amperes, which is quite a lot even for the Molex 8981-04 connector.
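As a quick sanity check, dividing that power by the measured current gives the effective voltage on the line; it comes out slightly below the nominal 12v, which is consistent with the rail sagging a bit under load:

```python
# Sanity check of the figures above: power divided by current gives the
# effective voltage at which that power was delivered.
power_w = 48.91     # Burn-mode power through the connector's 12v line
current_a = 4.22    # measured current on the same line

print(round(power_w / current_a, 2))   # ~11.59v, slightly below the nominal 12v
```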

So, it seems that the two power connectors on GeForce 6800 Ultra cards are there to ensure a higher stability of the system at large by reducing the noise and voltage sags caused by the graphics card, which might otherwise affect other devices attached to the same power rails. The reliability of the card itself is also increased, as there is a lower probability of overheating, burnt contacts and other consequences of poor contact in the connectors.

My tests suggest that this graphics card consumes only a little more power than the RADEON X800 XT PE in the Burn mode, so NVIDIA’s recommendation of a 480W PSU is an overstatement. It is probable that the first revisions of the cards did have power-related problems, but now we are dealing with an off-the-shelf product. Still, the card does load the +12v line heavily, so you need a high-quality PSU with a good current rating on the +12v line to build a system with this card and a powerful CPU, for example (CPU voltage regulators are steadily transitioning to the 12v line, too).

NVIDIA GeForce 6800 GT

The NVIDIA GeForce 6800 GT is a slower and cheaper model, but it is based on “healthy” chips with all 16 pixel pipelines enabled and carries GDDR3 memory. I’ve got a GeForce 6800 GT-based card from Leadtek for today’s tests. It is the WinFast A400 GT TDH model:

This graphics card uses the same design as the GeForce 6800 Ultra, but has only one additional power connector. The GeForce 6800 GT GPU and the 256MB of GDDR3 memory are clocked at 350/1000MHz, respectively. This card quite naturally overclocked better than the top-end product: I reached 420/1150MHz, which is higher than the GeForce 6800 Ultra’s regular clock rates.

The results of my measurements follow:

The GeForce 6800 GT consumes much less power than the GeForce 6800 Ultra: 55.39W in the Burn mode and 23.41W in the Idle mode. Two factors contribute to the lower power consumption of the device: 1) the GeForce 6800 GT has lower GPU and memory frequencies than the 6800 Ultra and 2) the GPU voltage is lower here, too. According to my measurements on the Leadtek card, the GPU voltage was 1.35v against 1.45v on the 6800 Ultra card. The effect of the reduced voltage is especially noticeable at overclocking: although the GeForce 6800 GT overclocked above the standard frequencies of the GeForce 6800 Ultra, its power consumption remained at a lower level.
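To illustrate why, here is a rough first-order estimate based on the usual rule that the dynamic power of CMOS logic scales with frequency and the square of voltage. It models the GPU core only and ignores the memory and the rest of the card, so treat the numbers as an illustration rather than a measurement:

```python
# First-order model: dynamic power of CMOS logic scales roughly as f * V^2.
# GPU core only; memory, leakage and the rest of the card are ignored.
def relative_core_power(freq_mhz: float, volts: float,
                        ref_freq_mhz: float = 400.0,
                        ref_volts: float = 1.45) -> float:
    """Estimated core power relative to the GeForce 6800 Ultra (400MHz, 1.45v)."""
    return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

print(round(relative_core_power(350, 1.35), 2))  # 6800 GT at stock clocks: ~0.76
print(round(relative_core_power(420, 1.35), 2))  # 6800 GT overclocked: ~0.91
```

Even when the GT is pushed past the Ultra’s stock frequency, the estimate stays below the Ultra’s level, which agrees with what I measured.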

The specifics of the power consumption remained the same, though: the GeForce 6800 GT puts practically no load on the AGP slot, but takes the bulk of its power from the +12v rail.

The GeForce 6800 GT has a somewhat higher power consumption than its direct market competitor, the ATI RADEON X800 Pro, but the gap between them is smaller than between the two topmost products. The GeForce 6800 GT and the RADEON X800 Pro consume 55.39W and 48.2W, respectively, in the Burn mode, and 23.41W and 15.1W in the Idle mode.

NVIDIA GeForce 6800

This is the slowest model based on the NV40 architecture. The GeForce 6800 uses the same dies as the faster products, but has only 12 out of 16 pixel pipelines enabled. The GeForce 6800 also has lower frequencies compared to the 6800 GT. This GPU is represented by the Galaxy Glacier GeForce 6800 graphics card:

The design of GeForce 6800-based cards differs from the GeForce 6800 GT/Ultra reference design. Instead of the standard cooling system, the device from Galaxy uses a cooler from Arctic Cooling, which resembles the cooler of the HIS Excalibur RADEON X800 Pro from my first report (see that article for details). The card is based on the GeForce 6800 GPU and has 256MB of DDR SDRAM, rather than high-performance GDDR3, on board.

The nominal clock rates of the Galaxy Glacier GeForce 6800 are 350/700MHz. As you see, the core frequency of this card is set 25MHz above the recommended 325MHz right out of the box. The graphics memory overclocked well, while the GPU was less successful: the maximum stable frequencies were 375/900MHz.

The results of the test:

The NVIDIA GeForce 6800 eats less power than the GeForce 6800 GT as it has lower frequencies and a lower GPU voltage (1.22v on the Galaxy card against 1.35v on the GeForce 6800 GT). As a result, without overclocking, the card consumes 16.96W in the Idle mode and 38.88W in the Burn mode.

The tabled results:

The voltage regulators of the GPU and memory on the GeForce 6800 are built differently than in the top-end models. This is clear from the different PCB design as well as from the consumption currents. The GeForce 6800 uses the power provided by the AGP slot even more actively than the GeForce 6800 Ultra, eating a total of 7W in the Burn mode through the 3.3v, 5v and 12v lines of the AGP. However, the main load still rests on the 12v line that goes through the additional power connector: in the Burn mode, the consumption on this line grows to 23.62W at a current of 2.01A.

NVIDIA GeForce FX 5950 Ultra

This was the top-end model of NVIDIA’s last generation of graphics processors. It is made with 0.13-micron technology and consists of about 135 million transistors. MSI supplied us with their graphics card on this GPU, the FX5950 Ultra-VTD256:

The card copies the reference design, although this is hard to see under the massive cooling system, which consists of a cooler that covers the GPU and the face-side memory chips, and another cooler with heatsinks that cover the PCB surface and the memory chips on the back side.

The nominal frequencies of the card are 475/950MHz in the 3D mode and 300/950MHz in 2D. So, although 475/950MHz frequencies are shown in the diagram, the results in the Idle mode correspond to 300/950MHz clock rates. The card (or rather, the driver) controls the frequencies, and there was no sense in my interfering.

Here are the results of my tests:

Here is the second surprise! The GeForce FX 5950 Ultra has the highest power consumption in the Burn mode among all the tested graphics cards, with GPUs from both NVIDIA and ATI!

This result shows what price NVIDIA had to pay for such a high clock rate. Under load, this graphics card eats more power even than the GeForce 6800 Ultra, which delivers an incomparably higher performance. Note also that the GeForce FX 5950 Ultra consumes less power in the Idle mode than the 6800 Ultra: unlike the new-generation chips, it drops the clock rate and the voltage of the GPU when idle.

The GPU voltage regulator on GeForce FX 5950 Ultra graphics cards can adjust the output voltage at the driver’s request: it is raised in 3D and reduced in 2D applications. If the core gets too hot, the voltage and frequency of the graphics processor are reduced to an in-between value, and the so-called Low Power 3D mode is activated.

According to RivaTuner, the version 61.24 Detonator driver sets a 2D voltage of 1.2v and a 3D voltage of 1.5v for the GeForce FX 5950 Ultra. The real voltages on the MSI card were 1.217v and 1.577v, as my measurements showed.
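The switching logic can be summed up in a few lines; the 2D and 3D figures below are the ones just quoted, while the Low Power 3D entry uses made-up in-between values purely for illustration:

```python
# Operating modes of the GeForce FX 5950 Ultra as described above.
# 2D and 3D use the quoted values (300/475MHz core, 1.2/1.5v requested by
# the driver); the Low Power 3D values are illustrative placeholders.
MODES = {
    "2D":           {"core_mhz": 300, "gpu_volts": 1.2},
    "Low Power 3D": {"core_mhz": 400, "gpu_volts": 1.35},  # placeholder values
    "3D":           {"core_mhz": 475, "gpu_volts": 1.5},
}

def select_mode(running_3d_app: bool, core_overheated: bool) -> str:
    """Pick the operating mode the way the driver is described to do it."""
    if not running_3d_app:
        return "2D"
    return "Low Power 3D" if core_overheated else "3D"

print(MODES[select_mode(running_3d_app=True, core_overheated=False)])  # full 3D mode
```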

So, the GeForce FX 5950 Ultra needs more power than the GeForce 6800 Ultra. Why, then, was there no clamor about the PSUs required for such graphics cards? Let’s examine the detailed picture:

Unlike graphics cards with GeForce 6 series chips, the GeForce FX 5950 Ultra distributes the load among its power circuits more evenly and uses the +12v line sparingly, since that line is traditionally overloaded with other system components. In the Burn mode, the currents on the 12v, 5v and 3.3v lines of the AGP slot are 4.92A, 2.29A and 5.59A, respectively. The highest current is on the 3.3v line, which is usually the least loaded of all.

A remarkable fact: the current in the 3.3v line approaches the maximum value allowed by the AGP specification, 6 amperes. At overclocking I even exceeded this limit, hitting 6.05 amperes. However, this shouldn’t worry you too much: I haven’t yet heard of any users reporting problems related to the power limit of the AGP slot.
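Still, putting these currents next to the 6-ampere limit shows just how thin the margin is:

```python
# Headroom of the measured 3.3v currents against the 6A limit of the AGP spec.
AGP_3V3_LIMIT_A = 6.0

for label, current_a in (("stock", 5.59), ("overclocked", 6.05)):
    headroom_pct = (AGP_3V3_LIMIT_A - current_a) / AGP_3V3_LIMIT_A * 100
    print(f"{label}: {current_a}A, {headroom_pct:+.1f}% headroom")
```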

NVIDIA GeForce FX 5900 Ultra

This is the GeForce FX 5950 Ultra’s precursor in the top-end graphics card sector, from the times when such cards were the fastest offerings from NVIDIA, of course. The GeForce FX 5900 Ultra doesn’t differ from the 5950 Ultra architecturally; in fact, the latter GPU is an “officially overclocked” version of the 5900 Ultra.

I took a reference GeForce FX 5900 Ultra board from NVIDIA for my tests:

This graphics card features the GeForce FX 5900 Ultra GPU and comes equipped with 256MB of DDR SDRAM. The nominal clock rates of the card in the 3D mode are 450/850MHz. The overclockability of my sample was laughable: 455/880MHz.

Power consumption results:

The GeForce FX 5900 Ultra quite naturally consumes less power than the GeForce FX 5950 Ultra, because it works at lower frequencies and has a lower GPU voltage in the 3D mode: the driver sets the voltage to 1.2/1.4v for the 2D/3D modes, respectively. The real voltages on the outputs of the GPU voltage regulator on my sample were 1.235/1.449v.

The GeForce FX 5900 Ultra differs from the FX 5950 Ultra in the specifics of its power consumption:

Like GeForce 6 series graphics cards, the GeForce FX 5900 Ultra receives most of its power supply from the +12v line that goes through the additional power connector.

NVIDIA GeForce FX 5700 Ultra

The GeForce FX 5700 Ultra is the fastest GPU originally developed for use in mainstream graphics cards. Now that the GeForce 6 series has arrived, products on the GeForce FX 5900/5950 are beginning to move down into the middle class, pushing the GeForce FX 5700 Ultra lower still. Once the GeForce 6600, the new mainstream graphics processor, appears, the GeForce FX 5700 Ultra will become obsolete.

I took a Tornado GeForce FX 5700 Ultra graphics card from Inno3D for my tests:

The graphics card follows the reference design and only differs from the reference card in the shape and color of the plate that covers the cooler. This product features the NVIDIA GeForce FX 5700 Ultra GPU and has 128MB of DDR2 memory. The nominal frequencies of the GPU and memory in the 3D mode are 475/900MHz. The card was average at overclocking, notching 530/1000MHz.

Here are the results of my measuring the power consumption of this card:

You shouldn’t wonder at the high power consumption of this mainstream graphics card. The greater part of the total falls on the graphics memory rather than the GPU, since this product uses DDR2 memory chips, notorious for their voracity. High power consumption and heat dissipation were among the factors that made DDR2 soon give way to power-frugal GDDR3 chips. For example, the GeForce 6800-based graphics card with GDDR3 memory has a lower power consumption than the GeForce FX 5700 Ultra, both in the Idle and Burn modes, although comparing the performance of the two would be rather pointless.

The detailed picture follows:

There are no oddities here. The +12v power line that goes through the additional power connector takes the biggest load; in the Burn mode, the current in this line is 2.11A.

NVIDIA GeForce FX 5700

The NVIDIA GeForce FX 5700 is cheaper and slower than its “Ultra” namesake. Such graphics cards usually carry chips of ordinary DDR SDRAM, which feel all right without any cooling. Cheaper and cooler, GeForce FX 5700-based cards gained more popularity than their “Ultra” analogs.

I took Leadtek’s WinFast A360 TDH MyVIVO graphics card for my tests:

This card has no additional power connector and its cooling system only removes heat from the GPU. The nominal clock rates of this card – they are the same in 2D and 3D applications – are 425/550MHz. The card had a nice frequency growth at overclocking: up to 470/660MHz.

Here are the results of my tests:

The manufacturer of the card reduced the frequencies and gave up DDR2 memory to achieve an excellent result: the product consumes about half as much power as the GeForce FX 5700 Ultra! Remarkably, they didn’t even have to use separate GPU frequencies and voltages for the 2D and 3D modes. The core clock rate is always 425MHz; its voltage, as measured by me, is about 1.43v.

The results in more detail:

The 3.3v line takes the biggest load: the current in the Burn mode is 3.96A, and the power consumption is 12.36W. Anyway, this is all very modest by today’s standards.

NVIDIA’s GPUs: Results Summary

So, we’re approaching the end of the article. First, I’d like to present a diagram with all the results put together:

As shown in the diagram, the power consumption of top-end graphics cards has diminished after the release of NVIDIA’s new generation of graphics processors. That’s an excellent job!

Still, it is not perfect: new graphics cards on NVIDIA’s GPUs are unbalanced in their power consumption. The most powerful cards put a heavy burden on the +12v power rail, and this, rather than the total power consumption of the device, is the real reason why you must make sure your PSU can take such a strain before purchasing the card.

ATI’s GPUs: Results Summary

I didn’t include a summary diagram in my first report, so let me make up for that now:

The new graphics processors from ATI’s X800 series have slightly higher power consumption than previous-generation GPUs. New graphics cards on ATI’s chips also have a more “balanced” consumption, so you shouldn’t run into any trouble with your current not-very-powerful but high-quality PSU.

Power Consumption: NVIDIA vs. ATI

Now let’s compare the two graphics camps. For better readability, I leave out the results I got when overclocking the cards:

Graphics cards on ATI’s GPUs generally require less power than their counterparts from NVIDIA’s line-up. However, I don’t think the difference is so great as to outweigh cost or speed when shopping. The only problem with graphics cards on the new NVIDIA GPUs is their high load on the +12v power rail, which may be too much for your PSU.

On the other hand, top-end expensive graphics cards like the NVIDIA GeForce 6800/GT/Ultra or the ATI RADEON X800 XT/Pro are usually bought for a blazingly fast gaming station where they won’t be the only big power consumer. For example, modern top-end processors, even non-overclocked, have the same power consumption as those graphics cards or even higher. So, it is probable that your other system components will demand a new PSU earlier than the graphics card does.

Overclocking produces only a small growth in power consumption with cards on any of these GPUs; the power grows less than the frequency does. Thus, overclocking seems to be a safe occupation as far as the power consumption of the graphics card is concerned: if your PSU works with a non-overclocked graphics card without any problems, you can be almost sure that it won’t run into any with the same card overclocked.

In the near future, there will be products on new graphics processors developed specifically for mainstream graphics cards; today, this category is only served by the “power-hungry” cards on previous-generation chips. The advent of NVIDIA’s NV43 (GeForce 6600 and 6600 GT) and ATI’s RV410 (its marketing name has not been revealed yet) is going to change the situation with the power consumption of mainstream graphics cards. The new chips are manufactured with newer tech processes and have every chance of being more economical and running cooler.