Power Consumption of Contemporary Graphics Accelerators: Spring 2010

Graphics card power consumption is always of interest to gaming fans, and every time a new GPU generation comes out, we do our best to cover this subject as thoroughly as possible. So today the time has come for another update on this popular topic.

by Alexey Stepin
03/22/2010 | 01:12 PM

The times when a serious gaming system could do just fine with a 200-250 W power supply are long gone. Today graphics accelerators have become the most power-hungry computer components, leaving even the fastest CPUs far behind; the latter now hold only the proud second position in this rating. From time to time things roll back a little, as ATI/AMD and Nvidia switch to new manufacturing technologies, but the overall tendency is undeniable: graphics card power consumption keeps increasing as the cards become more and more powerful. This is the price you have to pay for 3D technological progress. Choosing the right power supply unit is therefore of utmost importance, especially to dedicated gaming fans who enjoy their favorite titles with maximum image quality settings, the highest level of detail and antialiasing enabled. In this case the most powerful graphics accelerators can easily consume up to 200 W, or maybe even more than that; however, there is very little reliable data about these power consumption levels at this time. Most reviewers usually measure the power consumption of the system as a whole, but even those results are not fault-free, as we explained in our article called PC Power Consumption: How Many Watts Do We Need?.


A more complex methodology, which, however, guarantees more precise and complete results, involves measuring the currents flowing through the individual power lines of the graphics accelerator. Until recently we used one variation of this approach: we installed measuring shunts into all graphics card power circuits, including the PCI Express power lines. However, this approach was not ideal: first, it required us to modify the mainboard quite seriously; second, it didn't allow any automation of the actual measuring process; and third, it used Futuremark 3DMark06 and PCMark05 to load the GPU, both of which have become completely obsolete. Although this method allowed us to gather a substantial database of graphics card electrical characteristics, it is no longer of much interest in 2010 for the reasons mentioned above: outdated software can no longer load contemporary graphics accelerators fully, which means that power consumption readings obtained with our old approach would be seriously underestimated. Moreover, graphics processor developers have been actively migrating to the finer 40 nm process and have released quite a few new generations of energy-efficient GPUs, which are also worth checking out from the power consumption standpoint.

As a result, the time has definitely come for a new article that discusses the power consumption of contemporary graphics processors in different modes and uses a more advanced, up-to-date testing approach. Since we had already developed this methodology and had been using it for quite some time, all we had to do was apply it to all the graphics accelerators available in our test lab and sum up the results in an easy-to-read format, which is exactly what we did today. Our article will tell you about the energy efficiency of the latest and existing graphics accelerators and about the electrical peculiarities of popular GPUs during overclocking. Moreover, we hope that the results will help you better understand what power supply your gaming platform may need depending on its configuration.

Testing Methodology

As we have already mentioned, we currently use a separate test bed for graphics card power consumption tests and for studying graphics cards' electrical characteristics. This testbed has the following hardware and software configuration:

The heart of our testbed is a special measuring board built around Allegro ACS713-30T Hall-effect current sensors and an 8-bit Atmel ATmega168 microcontroller:

We have already described its features and design in our article called PC Power Consumption: How Many Watts Do We Need?. Using software developed specifically for this platform, we now measure all electrical parameters of the tested graphics cards. This equipment lets us simplify and, to some extent, automate the power consumption measuring process. For our today's tests we used the following available versions of ATI and Nvidia drivers:

As usual, we used the following benchmarks to load the tested graphics accelerators in different modes:

We measured the power consumption for 60 seconds in each mode except the maximum load simulation in OCCT. To avoid graphics card failure caused by overloading of the power circuitry during the OCCT: GPU test, we limited the measuring interval to 10 seconds.
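For illustration, here is a minimal sketch of what such an automated measuring loop could look like on the host side. Everything in it is an assumption rather than our actual software: the ACS713-30T constants (roughly 0.5 V output at zero current and about 133 mV/A sensitivity, read through the ATmega168's 10-bit ADC against an assumed 5 V reference) and the read_raw_counts() helper are hypothetical.

```python
import time

ADC_MAX = 1023          # ATmega168 has a 10-bit ADC
V_REF = 5.0             # assumed ADC reference voltage, V
ZERO_CURRENT_V = 0.5    # assumed ACS713-30T output at 0 A, V
SENSITIVITY = 0.133     # assumed ACS713-30T sensitivity, V per A

def counts_to_amps(raw: int) -> float:
    """Convert one raw ADC reading into a current in amps."""
    volts = raw * V_REF / ADC_MAX
    return max(0.0, (volts - ZERO_CURRENT_V) / SENSITIVITY)

def log_session(read_raw_counts, duration_s=60.0, interval_s=0.1):
    """Sample all monitored lines for duration_s; return per-line peak amps."""
    peaks = None
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        amps = [counts_to_amps(c) for c in read_raw_counts()]
        peaks = amps if peaks is None else [max(a, p) for a, p in zip(amps, peaks)]
        time.sleep(interval_s)
    return peaks

# Normal modes are logged for 60 s; the risky OCCT: GPU pass for only 10 s:
# peaks = log_session(read_raw_counts, duration_s=10.0)
```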

Power Consumption of Contemporary Graphics Cards

During our test session we obtained the following results:

It would not be quite correct to call the "Windows 7 desktop" mode idle mode, because even if the user is not doing anything, the graphics adapter still displays the OS interface on the screen. Any work with 2D applications, such as office tasks, falls into the same category. This is where the new Advanced Micro Devices solutions have no equals. Even the top single-processor solution in the family, ATI Radeon HD 5870, consumes only a little over 15 W. The only exception to this rule is Radeon HD 5970, but it would be silly to assume that those who decide to go with this two-headed monster would ever care about energy efficiency. Moreover, even this solution consumes less in "desktop" mode than Radeon HD 4890, which is indeed quite uneconomical, unlike the other Radeon HD 4000 solutions.

As for Nvidia solutions, the best ones are the products using the new 40 nm graphics processors: GeForce GT 240, 220 and 210. In terms of energy efficiency, they can compete successfully against their AMD rivals. However, the readings taken off the G200-based products leave much to be desired: the production process is clearly not optimal for cores of this complexity. Luckily, it looks like GeForce GTX 295 can disable one of its cores in this mode, which makes it more attractive energy-wise than Radeon HD 5970. GeForce 9 solutions don't shine in this test either, which is quite logical. Overall, if you mostly use your computer for office work and other 2D applications, graphics products from Advanced Micro Devices would be your best bet. Just remember that, according to some sources, the Radeon HD 5000 series still experiences some problems with 2D acceleration in Windows 7, which is why Nvidia solutions could be a better choice for AutoCAD and similar tasks.


During HD video playback the laurels go to ATI Radeon HD 5000, which offers decent energy efficiency and fully-fledged HDMI 1.3a support, including flawless transfer of high-definition multi-channel sound formats over HDMI. Nvidia solutions on 55 nm and 65 nm cores are not just uneconomical in this mode, but also do not fully support hardware VC-1 decoding, offloading some of the work to the CPU. Luckily, GeForce GT 240, 220 and 210 are free from this issue, but they still do not support a protected audio path, so you will have to forget about Dolby TrueHD or DTS-HD Master Audio. You can read more about the multimedia capabilities of inexpensive AMD and Nvidia solutions in our latest article called ATI Radeon HD 5670, Radeon HD 5570 and Radeon HD 5450: A Multimedia Ideal?.

Radeon HD 5000 is also ahead in games. Even the top solution in this lineup, the dual-processor Radeon HD 5970, consumes considerably less than Nvidia GeForce GTX 295. Even the 40 nm production process doesn't really help Nvidia much here: GeForce GT 240 GDDR5 is far less energy-efficient than its direct competitor, Radeon HD 5670. And of course, do not forget that all ATI Radeon HD 5000 solutions support DirectX 11, which current Nvidia solutions lack by definition. In other words, the choice for gaming fans is obvious here.

However, we have to say that Radeon HD 5830 may in the end turn out to be a less appealing buy than Radeon HD 4890. Although the latter doesn't support DirectX 11 and consumes considerably more power, it also runs faster in a number of cases. Moreover, Radeon HD 5830 turns out even less energy-efficient in games than the faster Radeon HD 5850, which results from its higher chip clock speed of 800 MHz against 725 MHz.

The results discussed in this part of our review are interesting mostly from the theoretical perspective. We would warn you against repeating our tests in OCCT: GPU, because we have recorded quite a few cases when not only did the PSU safety kick in, but graphics cards actually failed because their electric circuitry couldn't bear the load created by this synthetic benchmark. In fact, the results of the OCCT: GPU test do not reveal anything new: today's most energy-efficient solutions are still the ones from the Radeon HD 5000 family, although the difference between Radeon HD 5870 and GeForce GTX 285 is really minimal in this case.

Overall, the situation is pretty clear: the best choice for those who care about the energy efficiency of their computer system would be one of the Radeon HD 5000 solutions, except the dual-processor Radeon HD 5970. In our opinion, the model offering the best combination of power consumption and high 3D performance is Radeon HD 5850. Nvidia GeForce GTX 275 would be a good alternative to it. Although it consumes considerably more power in games, it may still be the preferable choice in a number of situations, namely in applications that depend heavily on the quality of 2D acceleration or in CUDA apps. As for non-gaming solutions, Radeon HD 5670 and 5570 look pretty good here. I would call them the best choice for a multimedia platform intended for high-definition video and audio formats; among other things, such a platform has every chance of being extremely economical.

Dependence of Power Consumption on Overclocking

In addition to the power consumption tests discussed above, we also decided to find out how greatly the power consumption of a typical graphics accelerator may be affected by overclocking. For this study we picked two of the most typical solutions on today's graphics market: ATI Radeon HD 5850 and Nvidia GeForce GTX 275. The latter is currently a little cheaper in retail, but also a little slower, while Radeon HD 5850 is a true sales hit in the $320-$350 price range.

Since the top Radeon HD 5800 models are equipped with Volterra controllers, they allow software management of the GPU voltage, which is why for our overclocking experiments we resorted to MSI Afterburner and AMD GPU Clock Tool.

The original vGPU setting for Radeon HD 5850 is 1.088 V, and we decided to raise it only if the card lost stability at the given frequencies. To avoid possible overheating and failure of the card, the Radeon HD 5850 cooler was working at its maximum capacity.

As for Nvidia GeForce GTX 275, things got a little more complicated: the reference design of these graphics cards uses a VRM controller that doesn't support any software management. Therefore, there is only one way to increase the GPU voltage besides hardware voltmodding: modifying the BIOS using NiBiTor or another program with similar functionality and then flashing the modified BIOS onto the card. By default, the GeForce GTX 275 graphics processor works at 1.05 V in 3D mode and 1.17 V in extreme mode. Using NiBiTor both settings can be increased to 1.18 V, but unfortunately, no higher than that.

Since we are particularly interested in how the cards behave in games, we decided to use the familiar Crysis Warhead with DirectX 10/Enthusiast settings on the "frost" map at 1600x1200 with 4x MSAA enabled. Here are the obtained results:


* - memory overclocked to 1250 (2500) MHz

First of all, we can see that even though memory overclocking does affect power consumption, the influence is not dramatic: after raising the memory frequency on Radeon HD 5850 from 1000 (4000) MHz to the maximum stable frequency of 1200 (4800) MHz, we detected an increase of only about 10 W in power consumption. The same action performed on GeForce GTX 275, whose memory was overclocked from the nominal 1134 (2268) MHz to 1250 (2500) MHz, produced the same 10 W power consumption increase.

Further overclocking of the Radeon HD 5850 core in 25 MHz increments continued successfully up until 825 MHz, which is when the card became unstable. In the interval from 725 to 850 MHz the power consumption increased by only about 20 W. By raising the core voltage to 1.14 V, we managed to hit a 925 MHz core frequency; the card immediately became far more power-hungry. However, the biggest challenge awaited us ahead: we managed to hit 950 MHz only by pushing the voltage to 1.2 V, which bumped the power consumption from 165 to 200 W right away. We struggled to hit the 1000 MHz barrier until we decided to take the risk and increase vGPU to the pretty dangerous level of 1.35 V. As a result, the height was taken and the card passed our stability tests successfully, but the peak power consumption readings registered 276 W! I believe that with liquid cooling the card could work like that for a long time, but overheating is not the only threat during overclocking with increased vGPU. When the chip voltage is increased by more than 30% over the nominal, electromigration accelerates significantly and the chip may fail at any moment. Moreover, the graphics adapter's voltage regulator circuitry, which is also not designed to withstand such loads for a long time, gets overloaded as well.
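These numbers line up reasonably well with the textbook dynamic-power scaling rule P ~ f * V^2. Below is a minimal sanity-check sketch under that assumption: the ~125 W baseline for a stock Radeon HD 5850 in games is taken from our measurements above, and static leakage is ignored, so treat the output as an estimate rather than a measurement.

```python
# Rough dynamic-power estimate: P scales with frequency and voltage squared.
def scaled_power(p_base, f_base, v_base, f_new, v_new):
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

P0, F0, V0 = 125.0, 725.0, 1.088   # stock HD 5850: watts, MHz, volts (assumed baseline)

for f_mhz, v in [(950, 1.20), (1000, 1.35)]:
    print(f"{f_mhz} MHz @ {v} V -> ~{scaled_power(P0, F0, V0, f_mhz, v):.0f} W")
# Prints ~199 W and ~265 W, close to the measured 200 W and 276 W.
```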

As for GeForce GTX 275, things were a little easier, because it didn't allow any extreme voltmodding without serious hardware modifications. As a result, the card worked stably at 734/1620 MHz without a modified BIOS and at 756/1674 MHz with one. That seems to be the best our GeForce GTX 275 can do: all further attempts to increase the GPU frequencies even a little didn't succeed, and every time we launched Crysis Warhead and started the test, the system would reset the clocks and reload the driver. The power consumption difference between the minimum and maximum core frequencies was only 32.4 W, which is nothing compared with the 148.8 W we have just seen from ATI Radeon HD 5850. However, there is also less risk involved in Nvidia graphics card overclocking, unless you resort to hardware modification of the GPU voltage regulator circuitry.

The results for each power line are pretty interesting:

The +3.3 V line, which seems to feed the auxiliary circuitry of the graphics card, hardly depends on the clock frequencies at all. The internal +12 V line does depend on them, but in a somewhat strange way: the current through it increased slowly as we continued overclocking, but as soon as we resorted to the first voltmodding measures, it dropped from the maximum registered level of 2.6 A down to 2.1 A and stayed there until the end of the experiment. The line marked "12V 6/8-pin" on the graph was connected to the top power connector of Radeon HD 5850. During the entire overclocking session it behaved calmly: the current went up slowly from 2.3 A to 4.3 A, and only when we increased vGPU to the extreme 1.35 V did it jump to 6.5 A. The load on the lower power connector increases much more aggressively. Although the graph remains pretty gentle up to a 1.14 V GPU voltage, we see a sharp surge at 1.2 V, and at 1.35 V the current through this line may easily hit 14 A or more, which is around 170 W! I have to repeat one more time: do not attempt to repeat these experiments at home, especially if your favorite graphics card is the only one you have and there is nothing to replace it with once it fails.
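The wattage figures above are simple Ohm's-law arithmetic on a 12 V rail (P = V * I); here is a quick sketch using the currents quoted in this section:

```python
# Per-connector power on a 12 V rail: P = V * I.
RAIL_V = 12.0
readings = [("top 6/8-pin at 1.35 V vGPU", 6.5),
            ("bottom 6/8-pin at 1.35 V vGPU", 14.0)]
for label, amps in readings:
    print(f"{label}: {amps} A -> {RAIL_V * amps:.0f} W")
# 6.5 A -> 78 W; 14 A -> 168 W, i.e. the "around 170 W" quoted above.
```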

As I have already mentioned, Nvidia GeForce GTX 275 acts more calmly during overclocking, and the graph below proves it clearly. The currents in the internal power lines remain almost unchanged and grow gradually in the external ones, with the second power connector, located closer to the PCB edge, loaded noticeably heavier. However, even here we didn't hit the same extreme numbers as we did with Radeon HD 5850: the maximum current we recorded was only 8.8 A, which means that the load on this connector never exceeded 106 W.

Conclusion

Well, we have tested 23 contemporary and not-so-contemporary graphics accelerators, trying to find out their power consumption in different operational modes. As we expected, the best results in all categories were demonstrated by Radeon HD 5000 solutions. In fact, there is nothing surprising about it: they use the most advanced consumer graphics processors available today. The new 40 nm Nvidia solutions, represented by GeForce GT 240, 220 and 210, follow closely behind the junior ATI offerings from this family, but the top models in the ATI lineup remain unrivalled in this respect and will stay so until GeForce 400 products come out. Since that new family promises to be pretty power-hungry, and since G200b-based solutions will then move over to the GeForce 300 family, Nvidia may remain an energy-efficiency outsider for quite some time.

From the consumer perspective, the best-balanced solution would be ATI Radeon HD 5850: it is pretty energy-efficient in 2D mode and offers high gaming performance while consuming a little over 120 W. Its only drawback is the high price; not everyone can spare over $300 for just a graphics card. The second favorite would be Radeon HD 5670: it is definitely not a gaming solution, but it will be ideal for a quiet and economical HTPC system, because it consumes only 13 W during high-definition video playback, which would be the primary operational mode for this type of system. Yes, theoretically, GeForce GT 220 could be an alternative to it, but it has no proper support for high-definition audio formats. I have to admit that Radeon HD 5830 spoils the overall positive impression of the Radeon HD 5000 family, because in almost all modes it consumes a little more than Radeon HD 5850, and in games it can even yield to Radeon HD 4890.

As for Nvidia solutions, if for some reason you absolutely have to pick one of them, then GeForce GTX 275 is worth considering, because its elder brother costs more and the younger one doesn't run as fast in 3D applications. In this case, however, you will have to forget about energy efficiency as well as about fully-fledged HD video support. By the latter we only mean the absence of proper hardware VC-1 decoding, which will hardly be a big issue if you have a powerful CPU in your system. If you are fond of games supporting PhysX effects, it would be nice to bundle your GeForce GTX 275 with a GeForce GT 220, which will function as a discrete PPU.

As for the PSU choice, we have every right to say the following: you will only need a really powerful PSU if you intend to equip your gaming platform with solutions like Radeon HD 5970, GeForce GTX 295 and, soon, the GeForce GTX 400 series. Under maximum load the current through some of these graphics cards' lines may hit 17 A, which is a pretty serious number, so you must be very careful about the PSU choice. Less monstrous solutions like Radeon HD 5870 have lower energy appetites and should be happy with a quality PSU of 600-650 W capacity. This correlates nicely with our previous conclusions about total system power consumption, which indicated that even a powerful gaming platform could easily fit into a 500 W envelope. It wouldn't hurt, however, to have a little extra power available, just in case. Mainstream gaming systems equipped with solutions like Radeon HD 5770 will easily do with a 400 W PSU; just make sure that the power supply unit you pick is reliable and provides stable output voltages. It has never hurt anyone to be a little more cautious, so why risk your entire system by buying a cheap PSU?
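To put the 17 A figure in perspective, that is about 204 W through a single 12 V connector. Below is a hypothetical back-of-the-envelope PSU sizing sketch; the component budgets and the ~30% headroom rule are illustrative assumptions, not measurements from this article.

```python
# Naive PSU sizing: sum the expected component draws and add headroom.
def psu_watts(component_watts, headroom=0.30):
    return sum(component_watts) * (1 + headroom)

# Assumed budgets: CPU ~125 W, dual-GPU card ~300 W, rest of system ~75 W.
print(f"Suggested PSU: ~{psu_watts([125, 300, 75]):.0f} W")  # ~650 W class
```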

Our overclocking experiments showed that power consumption increases significantly only when you raise the GPU voltage, which is a risky undertaking anyway. Without any extreme measures, the difference between the stock frequencies and the maximum frequencies reached without any vGPU increase was only 20 W for Radeon HD 5850 and 31 W for GeForce GTX 275. I have to note that in the former case some power-saving functions may be disabled during overclocking: namely, the card won't lower its GPU and memory frequencies in 2D mode and during video playback. Moreover, I have to repeat one more time that overclocking rarely makes a really big difference: if performance was high enough at stock speeds, it will remain sufficient during overclocking, and if it wasn't, overclocking will hardly save the day. In other words, overclocking is merely a sport that does have its fair share of excitement, but hardly makes much practical sense, especially in its extreme variations, which carry a high risk of the overclocked components' untimely "death".