Natural Born Winner: Nvidia GeForce GTX 580 Review

The DirectX 11 GeForce GTX 480 arrived seriously late for the battle and failed to become a winner. But today Nvidia has a much stronger candidate for the crown. Please meet the GeForce GTX 580!

by Alexey Stepin , Yaroslav Lyssenko
11/21/2010 | 12:34 PM

Nvidia’s Fermi graphics architecture did not find its way to the market easily, but now a GPU destined to be the flagship of both the company's line-up and the whole industry is moving up that road.


The GF100, the first graphics processor in the Fermi series, was meant to be the best among single-GPU solutions, but Nvidia ran into serious problems with it. Notwithstanding its earlier negative experience with the G200 chip, the company once again tried to come up with the "world's fastest graphics processor" and packed as many as 3 billion transistors into the GF100. As a consequence of such complexity, the GF100 came out too big, too sophisticated and too hot even when manufactured on a 40 nm process, and its power consumption was huge as well. The GF100 had been meant to incorporate 512 unified stream processors, 64 texture-mapping units, 48 raster back-ends and a wide 384-bit bus to access a local bank of GDDR5 memory. While there were no significant problems with the graphics memory and RBEs, Nvidia did not succeed in achieving an acceptable yield of chips of the planned complexity on the available manufacturing process. As a result, the new series' flagship model was released in a cut-down configuration with 480 stream processors and 60 texture-mapping units. Officially announced on March 26, 2010, the new card only reached the market in April, and in very limited quantities, too.

The GeForce GTX 480 delivered good, yet not exceptional, performance for its class due to its cut-down GPU configuration and rather low clock rates. It proved unable to shake the position of the leader, the Radeon HD 5970. One reason for that was the imbalanced texture-mapping subsystem: the GeForce GTX 480 had only 60 TMUs and clocked them at a rather low frequency. Besides, its TMUs could not process textures in the FP16 format, which is rather popular in today's games, at full speed. Hardware reviewers agreed that this was a bottleneck in the GeForce GTX 480 design. That was bad news for Nvidia, and the situation called for improvements. The first news about a successor to the GeForce GTX 480 appeared back in October.

Long before the official announcement of the GF100's successor, it had been rumored on the Web that Nvidia's next flagship core would have not only 512 stream processors but also 128 texture-mapping units, to get rid of one of the bottlenecks of the Fermi architecture and make the new GPU competitive in the upcoming fight with the AMD Radeon HD 6900 series. Of course, winning such a fight doesn't change the GPU developers' fortunes much because expensive graphics cards account for only a small share of total sales. It is the affordable products selling at $200-250 and lower that bring in the main profit for both AMD and Nvidia. However, premium-class graphics cards are important for marketing reasons as they tell the whole world about a company's ability to develop and produce advanced technological solutions. Being the technological leader is very positive publicity, after all. For our part, we must confess that benchmarking top-end graphics cards is far more exciting both for us hardware reviewers and for the spectators, i.e. gamers. And a win at the top puts the rest of the winning company's solutions under the spotlight, too.

So how do things stand right now? Nvidia has managed to strike a preemptive blow against AMD by releasing its GeForce GTX 580 before the announcement of the Radeon HD 6900 series. The new top-end graphics card is meant to prove that Nvidia has not lost its ability to develop super-fast graphics solutions capable of beating all opponents. In this review we will check out what we can really expect from the new claimant to the throne of the 3D graphics realm.

Nvidia GeForce GTX 580

Market Positioning

As we’ve already said above, the GeForce GTX 580 is meant to become Nvidia's new top-of-the-line solution: the flagship of the entire Fermi line-up, the proof of the company's technological superiority in graphics card development, and a trump card against the upcoming Radeon HD 6900. Let's first see how the specifications of the new card have improved over those of the GeForce GTX 480.

The first thing we can see is that the clock rates of both the graphics core and the memory have increased. Other things being equal, this fact alone would make the new card considerably faster. The die has become smaller by 9 square millimeters, an expected outcome of the optimizations the engineers have made to the successor of the GF100. What is somewhat unexpected is that the new GPU is said to incorporate 200 million fewer transistors than its predecessor. This is no trifle: 200 million transistors would make up quite a big portion of the core. Still, we don't think there is any sensation here. Although all the official sources say that the GF100 contains 3.2 billion transistors and the GF110 only "about 3 billion", and although popular diagnostic tools like GPU-Z report the same values, no one except the GPU developers themselves can know the exact number of transistors. The software tools simply report what is written into their databases; it is in fact impossible to learn the exact transistor count of a given chip. The GF110 may indeed be slimmer than its predecessor due to optimizations, but the difference may be something like "3.09 billion now against 3.15 billion before", which would sound far more realistic than a loss of 200 million transistors at a time when the two GPUs share the same architecture and the same number of functional subunits.

Next we see 512 active ALUs. The same number of ALUs was physically present in the GF100, but some of them were not activated there. The structure of the graphics core itself has remained intact:

Each of the four graphics processing clusters incorporates four stream multiprocessors. Each of the multiprocessors consists of 32 general-purpose execution cores and is allotted four texture-mapping modules.
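For the record, the headline figures follow directly from this layout. Here is a minimal arithmetic sketch (the constant names are ours, not Nvidia's):

```python
# Sanity check of the GF110 totals implied by the layout described above.
GPCS = 4              # graphics processing clusters
SMS_PER_GPC = 4       # stream multiprocessors per cluster
CORES_PER_SM = 32     # general-purpose execution cores per SM
TMUS_PER_SM = 4       # texture-mapping modules allotted to each SM

print(GPCS * SMS_PER_GPC * CORES_PER_SM)  # 512 stream processors
print(GPCS * SMS_PER_GPC * TMUS_PER_SM)   # 64 TMUs
```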

There were rumors that Nvidia would increase the number of texture-mapping units in the GF110 to 128 to correct the imbalance of the GF100's TMU subsystem, but the new core still has only 64 TMUs. However, the developers have revised the architecture so that the TMUs can filter all texture formats at full speed, whereas the GF100 used to filter FP16 textures at half speed. This recalls the texture processors of the G80 and G92 chips: the G80 had two filter units for each fetch unit whereas the G92 had the same number of both types of units, so the G80 had an advantage when doing anisotropic filtering. Things must be more sophisticated with the GF100 and GF110, and we won't speculate since we don't know the details of the TMU design in the Fermi architecture. Let's just take it for granted that if the GF110 can process textures of all formats at full speed, it is superior to its predecessor in modern games, which make wide use of the FP16 texture format.
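To illustrate what full-speed FP16 filtering means in numbers, here is a back-of-the-envelope comparison. It is a simplification that assumes filtering throughput scales as TMU count times core clock, taking the GTX 480's 700 MHz core clock and the GTX 580's 772 MHz figure quoted later in this review:

```python
# Rough FP16 filtering rates under the linear-scaling assumption above.
def fp16_rate_gtexels(tmus, core_mhz, fp16_fraction):
    return tmus * core_mhz * fp16_fraction / 1000.0

gtx_480 = fp16_rate_gtexels(60, 700, 0.5)  # GF100: FP16 at half speed
gtx_580 = fp16_rate_gtexels(64, 772, 1.0)  # GF110: FP16 at full speed
print(f"GTX 480: {gtx_480:.1f} GTexel/s")  # ~21.0
print(f"GTX 580: {gtx_580:.1f} GTexel/s")  # ~49.4, well over twice as fast
```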

Besides the improved texture-mapping units, Nvidia has also optimized the occlusion culling algorithms. Coupled with the 512 active ALUs, this should guarantee the GeForce GTX 580 the top place among the fastest single-GPU graphics cards and, perhaps, make it strong enough to beat the long-time leader Radeon HD 5970, which is a dual-processor solution.

As for tessellation, there were no problems in this area with the GF100 or even the GF104. The GF110 is going to be somewhat faster still at tessellation because all 16 of its geometry-processing units, called PolyMorph Engines, are active, whereas its predecessor had only 15 (and these engines are also clocked at a higher frequency in the new GPU). AMD may have something to say on this issue with its upcoming Radeon HD 6900, but so far Nvidia's solutions are superior in this respect. Although there is an opinion that excessive tessellation may be unnecessary or even harmful, it is better to have a graphics card capable of coping with scenes like those in H.A.W.X. 2, with up to 1.5 million polygons per frame, than not to have one. Every argument against too much tessellation looks lame next to a real hardware device capable of doing that much of it. On the other hand, Nvidia's claims that its solutions are 20 times as fast at tessellation as their opponents should not be trusted blindly, without a practical check. Besides, the biggest problem with tessellation is that there are still too few games that make use of it.
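As a rough illustration only (real geometry throughput depends on far more than raw engine count), the expected gain can be estimated like this, assuming the PolyMorph engines scale with the graphics clock:

```python
# First-order estimate: tessellation throughput ~ active engines x clock.
gf100_geometry = 15 * 700   # GTX 480: 15 PolyMorph engines at 700 MHz
gf110_geometry = 16 * 772   # GTX 580: 16 PolyMorph engines at 772 MHz
print(f"{gf110_geometry / gf100_geometry:.2f}x")  # ~1.18x
```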

Summing everything up, the Nvidia GeForce GTX 580 looks like a well-balanced solution of the premium class. Its recommended price equals that of the GeForce GTX 480 at the moment of the latter's announcement, i.e. $499. This makes the new card more appealing than the Radeon HD 5970, which still cannot be found for less than $600.

PCB Design

All modern top-end graphics cards look very much alike, and the new Nvidia GeForce GTX 580 has a standard enough appearance, too:

GeForce GTX 580

GeForce GTX 480

There are only a few minor visual changes compared to the previous flagship. The GeForce GTX 580 lacks the slit in the cooler's casing through which the metallic cap of the heatsink used to peep out on the GeForce GTX 480 (that cap used to get very hot at work, by the way). The new card has a different cooler design and therefore no heat pipes sticking out of the cooler's casing. Nor is there a slit in the PCB below the fan. Nvidia seems to be very optimistic about the new cooler design, but if you install two GeForce GTX 580 cards into a SLI tandem, the top card's fan is going to have problems with air supply, being blocked by the bottom card. The "SLI-optimized" shape of the cooler's casing can hardly help then because the optimizations boil down to a small depression near the fan.

As usual, the most interesting things can be found below the cooler. We took the latter off to have a better view of our GeForce GTX 580.

GeForce GTX 580 (left), GeForce GTX 480 (right)

There are a lot of similarities between the PCBs of the GeForce GTX 580 and GeForce GTX 480. The developers copied some blocks of the older PCB when designing the new one, quite an efficient development method that helps reduce the time and cost of creating a new PCB from scratch. The empty back part of the PCB, which used to carry air vents, suggests that the GeForce GTX 580 might have been made shorter. On the other hand, there is no urgent reason for that: with the power connectors located on the top edge, attaching the power cables poses no problem even in a short system case. This card won't fit into compact system cases, but it is not really meant for them anyway.

The power subsystem of the GeForce GTX 580 is the same as that of its predecessor, following the 6+2 design. The 6-phase GPU voltage regulator is managed by a CHL8266 controller from CHiL Semiconductor located on the reverse side of the PCB.

Lower down the PCB there is a seat for a chip of similar size, but we don't know its purpose. Nvidia says the GeForce GTX 580 is equipped with an advanced power monitoring system but points to a different location in its presentation where we can only spot two tiny chips marked as A219.

There is one difference concerning the memory subsystem: instead of the popular uP6210 controller, the memory voltage regulator is managed by an APW7088 chip from Anpec Electronics. It wasn't easy to find its documentation because it is missing from the official website; we could only find it through Google. The graphics card has two power connectors: a 6-pin and an 8-pin one. The latter doesn't seem redundant even considering the promised improvements in energy efficiency.

Like the GeForce GTX 480, the new card is equipped with GDDR5 memory in K4G10325FE-HC04 chips from Samsung. These 1 Gbit (32M x 32) chips have a rated frequency of 1250 (5000) MHz. And like the previous flagship, the card doesn't make the most of such fast memory, even though its memory frequency is increased to 1002 (4008) MHz. The card carries 12 memory chips for a total capacity of 1536 MB accessed across a 384-bit bus. The peak memory bandwidth is quite impressive for a single-GPU card: 192.4 GBps. The dual-processor Radeon HD 5970 is the only card to boast higher memory bandwidth, and only because it has two independent memory banks with 256-bit access. The GeForce GTX 580 lowers its memory frequency to 162 (648) MHz and to 68 (270) MHz in its two power-saving modes; the popular MSI Afterburner tool reports these frequencies as 2004, 324, and 134 MHz.
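The bandwidth figure is easy to verify from the numbers above:

```python
# Peak memory bandwidth: a 384-bit bus moving data at an effective
# 4008 MT/s (1002 MHz GDDR5 command clock x 4).
bus_bytes = 384 // 8               # 48 bytes per transfer
effective_mts = 4008               # million transfers per second
print(bus_bytes * effective_mts / 1000)  # 192.384 -> ~192.4 GBps
```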

The GPU of our sample of the card is revision A1. Its marking contains the same frequency code of 375 as the marking of the GF100; we guess this number denotes GPU samples that have passed tougher frequency control. The GF110 is no different from the GF100 in dimensions, and the two processors even seem to be pin-compatible. Our sample was manufactured in the 38th week of 2010, i.e. between September 19 and 27. Judging by the low revision number, Nvidia didn't have any serious problems manufacturing the GF110.

GPU-Z version 0.4.8 supports the GF110 but, as we've said above, you shouldn't trust everything it reports: such parameters as the die size and the transistor count are simply written into the utility's database and may differ greatly from the real values. The rest of the GPU parameters are reported correctly. The main domain frequency is indeed 772 MHz whereas the shader domain is clocked at twice that frequency, i.e. at 1544 MHz. The numbers of ALUs and raster operators agree with the official specifications: 512 and 48, respectively. GPU-Z doesn't tell us how many texture-mapping units the GPU has, but the texture and pixel fillrate parameters are correct and agree with the official specs. As a reminder, the GF110 has 64 texture-mapping units capable of processing textures in any popular format at full speed, whereas the GF100 could only process FP16 textures at half speed.
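The fillrate figures are likewise easy to cross-check against the reported clocks:

```python
# Cross-checking GPU-Z's numbers from the clocks it reports.
core_mhz = 772
print(core_mhz * 2)           # 1544 MHz shader domain (2x core on Fermi)
print(48 * core_mhz / 1000)   # ~37.1 GPixel/s (48 raster operators x core clock)
print(64 * core_mhz / 1000)   # ~49.4 GTexel/s (64 TMUs x core clock)
```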

MSI Afterburner identifies the key frequencies of the graphics card correctly but reports one half rather than one quarter of the effective GDDR5 frequency for the card's memory. Version 2.0.0 of this tool cannot control the GPU voltage but offers monitoring options. We can use them to see that the card's core frequencies are lowered to 51/101 MHz in idle mode and to 405/810 MHz in easy tasks like HD video decoding. It is only in 3D applications that the graphics core runs at its full speed.

The mounting bracket of the new card hasn't changed since the GeForce GTX 480. Its first tier is occupied by two DVI-I ports together with a mini-HDMI connector that requires an adapter for a standard HDMI cable. The second tier is a vent grid for exhausting the hot air out of the system case. DisplayPort is not supported. If you want to connect more than two display devices, you have to build a SLI configuration out of two such graphics cards using the pair of standard MIO connectors on the face side of the PCB. This is in fact the typical selection of interfaces for modern Nvidia products, and it looks rather limited compared to the connectivity options of the Radeon series.

Cooling System

As opposed to the PCB design, the cooling system has been thoroughly revised. As we've written above, no heat pipes stick out of the cooler's casing, which is somewhat odd considering that the new card is comparable to the GeForce GTX 480 in terms of heat dissipation. The answer is simple: even without removing the casing and heatsink we can see that Nvidia has selected an evaporation chamber (also known as a vapor chamber) as the basis of the new cooler.

Nvidia is not a pioneer in this field because its opponent employed this solution in one of the two heatsinks of the Radeon HD 4870 X2. Later, a larger evaporation chamber was used for the Radeon HD 5970 where it cooled two Cypress cores (RV870). Sapphire also employed evaporation chambers in its Vapor-X series products.

The heatsink of the GeForce GTX 580 is designed in a similar way:

It is smaller than the heatsink of the Radeon HD 5970 but large enough to effectively cool such a hot chip as the GF110. The photo above shows the evaporation chamber in the base of the heatsink, which is topped with thin aluminum fins. In the bottom right corner there is a soldered opening through which the coolant was poured into the chamber at the factory.

The operation principle of an evaporation chamber is very simple: it is in fact one large and flat heat pipe. The coolant evaporates at the point of contact between the chamber's bottom and the GPU's cap, condenses in the top part of the chamber, transferring the heat to the heatsink fins, and then returns along the capillary surfaces to start the whole process anew. Since there is a partial vacuum inside the chamber, the coolant begins to boil at a lower temperature, while the small distance between the surfaces that receive and give off the heat accelerates the whole heat-transfer process. An important advantage of an evaporation chamber is that it is compact: there is no need to lay out a bunch of thick heat pipes as in the cooling system of the GeForce GTX 480.

The downside of this design is that the top sections of the heatsink fins do not heat up much because they do not receive any heat directly, but the higher efficiency of the chamber design should make up for that. We’ll check this out in the next section of our review.

Power Consumption, Temperature, Noise

Nvidia claims in its marketing materials that the GeForce GTX 580 features higher energy efficiency than its predecessor and also has a lower TDP. Of course, we need to check this out in practice. So we took our sample of the card and measured its power draw on our standard testbed configured as follows:

The new testbed for measuring electric characteristics of graphics cards uses a card designed by one of our engineers, Oleg Artamonov, and described in his article called PC Power Consumption: How Many Watts Do We Need?. As usual, we used the following benchmarks to load the graphics accelerators:

Except for the maximum load simulation with OCCT, we measured power consumption in each mode for 60 seconds. We limited the run time of OCCT: GPU to 10 seconds to avoid overloading the graphics card's power circuitry. Here are the obtained results:


So, the GeForce GTX 580 does not have a lower peak power draw in 3D applications than the previous flagship model. We could hardly have expected that, as the new card has more subunits enabled as well as higher clock rates. However, it is quite impressive that the developers have managed to keep the power consumption of the new card within the limits set by the GeForce GTX 480.

Judging by the load on the individual power lines, we can see that the 8-pin power connector is indeed necessary. Its load may be as high as 13-15 amperes in 3D applications, which at 12 V corresponds to 156-180 watts and is beyond the connector's nominal limit of 150 watts. You can also note the behavior of the card in the OCCT:GPU test: the graph is indicative of the protection mechanism Nvidia implemented in the GeForce GTX 580 to safeguard it against overload in stress tests like OCCT or FurMark, which may damage the graphics card by putting it under unrealistically high load. The protection mechanism seems to work like Intel's Thermal Throttling, making the GPU skip some clock cycles. This supposition is confirmed by the fact that the fuzzy doughnut in OCCT:GPU was rotating in a jerky manner when we launched that test on our GeForce GTX 580.
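For reference, here is the simple arithmetic behind that statement (the PCIe specification budgets a nominal 150 W for an 8-pin connector):

```python
# What the measured 13-15 A on the 8-pin connector's 12 V lines means
# against the connector's nominal 150 W budget.
for amps in (13, 15):
    print(f"{amps} A x 12 V = {amps * 12} W")  # 156 W and 180 W, both > 150 W
```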

The electrical parameters of the GeForce GTX 580 are better than those of its predecessor in the other modes. By the way, the result of 70 watts when playing HD video from a Blu-ray disc is not quite representative. As the power consumption graph suggests, the card doesn't drop its clock rates immediately, and this peak value occurs in the first part of the graph. After a while, the card's power consumption in video playback mode drops to 30-35 watts.

The GeForce GTX 580 has also become more economical in idle mode, even though it is still not as good as the AMD solutions with their PowerPlay technology. On the other hand, the new card from Nvidia looks splendid against the Radeon HD 5970 in this respect.

Nvidia’s new cooling system is very good, too. Despite the increased clock rates and 512 active ALUs, the GPU temperature of the GeForce GTX 580 is never higher than 86°C whereas the GPU of the slower GeForce GTX 480, which has a large heatsink with five heat pipes and direct-contact technology, is as hot as 91°C under the same conditions. The evaporation chamber is obviously more effective while being more compact. The temperature of the new flagship is lower in idle mode, which is due to the improved power management system. All in all, we are quite satisfied with the thermal parameters of the reference GeForce GTX 580 considering that it is a top-of-the-line product.

The high-performance heatsink with its evaporation chamber makes the GeForce GTX 580 far more comfortable in terms of noisiness. The card is virtually silent in 2D mode. With our testbed producing a background noise of 38 dBA, we could not hear the GeForce GTX 580 amidst the noises of the other components such as system fans, the PSU fan, HDDs, etc. The card's fan was rotating at 1300 RPM, according to MSI Afterburner.

Of course, it is next to impossible to make such a premium-class solution absolutely silent as it dissipates over 250 watts of heat. Therefore the fan accelerates when you launch 3D applications, up to 2940 RPM, and the noise level goes higher, too. However, even after running OCCT:GPU for a while, the graphics card remained quite comfortable in terms of noisiness, being but slightly louder than the GeForce GTX 470 and Radeon HD 6870. The reference cooler of the GeForce GTX 480 is far inferior in this respect.

Thus, the GeForce GTX 580 features an excellent combination of electrical, thermal and noise characteristics. The only thing left for us to find out is how well it performs in modern games.

Testbed and Methods

We are going to investigate the gaming performance of Nvidia GeForce GTX 580 using the following universal testbed:

We used the following ATI Catalyst and Nvidia GeForce drivers:

The ATI Catalyst and Nvidia GeForce graphics card drivers were configured in the following way:

ATI Catalyst:

Nvidia GeForce:

Below is the list of games and test applications we used during this test session:

First-Person 3D Shooters

Third-Person 3D Shooters

RPG

Simulators

Strategies

Semi-synthetic and synthetic benchmarks

We selected the highest possible level of detail in each game. If the application supported tessellation, we enabled it for the test session.

We adjusted the settings using the standard tools each game provides in its menu; the games' configuration files weren't modified in any way, because an ordinary user shouldn't have to know how to do that. We ran our tests at the following resolutions: 1600x900, 1920x1080 and 2560x1600. Unless stated otherwise, wherever possible we added 4x MSAA to the standard 16x anisotropic filtering. We enabled antialiasing from the game's menu; if that was not possible, we forced it using the appropriate settings of the ATI Catalyst and Nvidia GeForce drivers.

Besides GeForce GTX 580, we also tested the following solutions:

Performance was measured with the games' own tools, and original demos were recorded where possible. We measured not only the average speed but also the minimum speed of the cards where possible. Otherwise, performance was measured manually with the Fraps utility (version 3.2.3). In the latter case we ran each test three times and took the average of the three for the performance charts.
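For clarity, here is a minimal sketch of that averaging. The fps values are placeholders rather than measured results, and we assume the same averaging applies to the bottom-speed numbers:

```python
# How three manual Fraps runs are reduced to the numbers on our charts.
runs_avg = [58.2, 61.0, 59.5]   # average fps reported by Fraps per run
runs_min = [31.0, 33.5, 30.2]   # minimum fps per run

print(sum(runs_avg) / len(runs_avg))  # average-fps figure for the chart
print(sum(runs_min) / len(runs_min))  # bottom-speed figure for the chart
```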

Performance in First-Person 3D Shooters

Aliens vs. Predator

The new card from Nvidia has a good start, being practically as good as the Radeon HD 5970 in this game. At 1920x1080 the GeForce GTX 580 is somewhat slower than AMD's dual-processor solution but offers the same level of comfort. As for 2560x1600, neither card can maintain a playable bottom speed there, but the GeForce GTX 580 posts the higher numbers.

Battlefield: Bad Company 2

It is hard to compete with two combined Cypress cores even though AMD released its dual-processor flagship as far back as late 2009! However, the Radeon HD 5970 is no cheaper than $600 in retail, so the GeForce GTX 580 just doesn't have any real opponents in its market niche, and it delivers excellent performance in Bad Company 2 across all the resolutions. Interestingly, the new card has no advantage over the GeForce GTX 480 in the Full-HD mode. This may be due to some inherent inaccuracies of our manual measurement method (with Fraps).

Call of Duty: Modern Warfare 2

The new card has no rivals in Modern Warfare 2 and is very close to the Radeon HD 5970 at resolutions up to Full HD. Although it falls farther behind the leader at 2560x1600, the GeForce GTX 580 still never drops below 60 fps, making the more expensive, hotter and noisier Radeon HD 5970 a less appealing choice. Then again, this game runs fast enough even on the Radeon HD 6870.

Crysis Warhead

Although not a new game, Crysis Warhead is still too heavy for a single graphics card to run comfortably at 2560x1600 with the highest graphics quality settings, though the new GeForce GTX 580 has made progress towards that goal. It is also preferable to the Radeon HD 5970 at the lower resolutions due to its higher bottom speed. And we should keep in mind that the Nvidia card is cheaper and quieter than AMD's flagship.

Far Cry 2

Nvidia’s new offering is as good as the Radeon HD 5970 at 1600x900 but falls behind at the higher resolutions in terms of average frame rate. The bottom speeds of the GeForce GTX 580 and Radeon HD 5970 are similar across all the standard resolutions, though. Since Far Cry 2 has modest system requirements, you can play the game even at 2560x1600 without any fear of slowdowns.

Metro 2033

This game is tested without full-screen antialiasing but with tessellation turned on.

For all its 16 geometry-processing units and increased frequencies, the GeForce GTX 580 cannot maintain a playable bottom speed when we turn the tessellation option on, although the new card's average frame rate is as high as that of the Radeon HD 5970. Anyway, the GeForce GTX 580 sets a new standard of performance among premium-class single-GPU graphics cards.

S.T.A.L.K.E.R.: Call of Pripyat

We run this game with tessellation turned on.

The GeForce GTX 580 leaves the Radeon HD 5970 behind in this test! We did not expect the new card to be so fast, but it is 25 to 45% ahead of the GeForce GTX 480 here, depending on the resolution. This must be the effect of the increased number of ALUs clocked at a higher frequency and of the improved texture-mapping units which are now capable of processing FP16 textures at full speed.

Performance in Third-Person 3D Shooters

Just Cause 2

The new GeForce GTX 580 is almost as fast as the Radeon HD 5970 at 1920x1080. The difference is also small at 2560x1600, and the bottom speeds of the two solutions differ by a mere 1-4 fps, depending on the resolution. Thus, each card is suitable for playing this game, but the GeForce GTX 580 is going to be far more comfortable for your ears.

Lost Planet 2

It's the same picture as in Call of Pripyat: the GeForce GTX 580 takes the lead, leaving the Radeon HD 5970 far behind. The GeForce GTX 480 is also faster than AMD's dual-processor card, thanks to higher tessellation performance. The upcoming Radeon HD 6900 series may change this situation in favor of AMD as it is going to feature a better tessellation unit than the Radeon HD 6800, yet it will have a hard time competing with the 16 PolyMorph engines of the GF110 processor.

Performance in RPGs

Fallout: New Vegas

Like the previous games in the Fallout 3 series, New Vegas has modest system requirements. At the highest graphics quality settings with 4x MSAA it runs fast even on the Radeon HD 6870, and the more advanced solutions differ only slightly from that card even at 2560x1600. In fact, monsters like the GeForce GTX 580 just can't show their full potential in this game.

Mass Effect 2

We enforced full-screen antialiasing using the method described in our special Mass Effect 2 review.

The innovations introduced in the GeForce GTX 580 are enough to make it competitive with the Radeon HD 5970 at every resolution, including 2560x1600. Considering its lower price and noise level, Nvidia's solution seems to be the better choice, just like in the majority of the other tests.

Performance in Simulators

Colin McRae: Dirt 2

We turn tessellation on in this game’s settings.

The GeForce GTX 580 beats the Radeon HD 5970 at two out of the three resolutions and is a mere 4% behind the latter at 2560x1600, still keeping the game playable. Thus, the Radeon HD 5970 is no longer the sole offer in the sector of super-fast graphics solutions. It has to share that niche with the GeForce GTX 580 now.

Tom Clancy’s H.A.W.X. 2 Preview Benchmark

This benchmark uses tessellation to render the earth. There can be up to 1.5 million polygons in a single frame.

The visual benefits of extreme tessellation with polygons smaller than 16 pixels (6 pixels in this particular game) can be questioned but it's good to have a graphics card capable of handling such a load. The GeForce GTX 580 with its 16 PolyMorph engines performs superbly whereas the AMD solutions can’t match the leader’s speed. On the other hand, the Radeon HD 5970 and 6870 make the game playable even at 2560x1600. Besides, this benchmark is not the final version of the game.

Performance in Strategies

BattleForge

As in most other games, the GeForce GTX 580 doesn't outperform the Radeon HD 5970. However, Nvidia's solutions deliver higher bottom speeds in this test, and the new card has no rivals in this respect, especially at high resolutions. Occasional slowdowns are not as crucial in strategies as in first-person shooters, so BattleForge seems to be playable on the Radeons. Still, the GeForce GTX 580 is clearly preferable to the Radeon HD 5970 here.

StarCraft II: Wings of Liberty

The GeForce GTX 580 raises the bottom speed to 28 fps at 2560x1600, becoming competitive with the Radeon HD 5970. The difference in average performance is only about 20% and can hardly be felt without benchmarking tools. It may matter for professional gamers but not for an ordinary StarCraft II player. The difference in the level of noise produced by the GeForce GTX 580 and Radeon HD 5970 is much more obvious.

Performance in Synthetic and Semi-Synthetic Benchmarks

Futuremark 3DMark Vantage

We minimize the CPU’s influence by using the Extreme profile (1920x1200, 4x FSAA and anisotropic filtering). We also publish the results of the individual tests across all resolutions.

The GeForce GTX 580 easily scores 13,000 points but cannot get close to the Radeon HD 5970’s record score of over 15,000. Well, it is hard to beat a dual-processor solution in this benchmark.

The individual tests do not show anything exceptional: the GeForce GTX 580 is always faster than the GeForce GTX 480 but inferior to the Radeon HD 5970 (by 8-16% in the first test and by 12-22% in the second test).

Final Fantasy XIV Official Benchmark

This benchmark can only run at 1280x720 and 1920x1080.

Notwithstanding the huge computing resources of the GeForce GTX 580, the modest Radeon HD 6870 outperforms it at 1280x720. It is only at 1920x1080 that the new flagship from Nvidia goes ahead, enjoying an 18% advantage over its predecessor.

Unigine Heaven Benchmark

We use Normal tessellation in this test.

Despite the weaker tessellation units, the Radeon HD 5970 is no more than 7-10% slower than the GeForce GTX 580 which has as many as 16 geometry-processing blocks. The new card from Nvidia has a much higher bottom speed at 2560x1600, though. The results of this benchmark suggest that the GeForce GTX 580 is going to be very fast in upcoming tessellation-heavy games.

Conclusion

So what have we found out about the GeForce GTX 580, the new flagship graphics card from Nvidia? In fact, this card is what the GeForce GTX 480 was meant to be before Nvidia had to cut down its configuration, even though the GF100 physically incorporated all 512 stream processors and 64 texture-mapping units. The GF110 now has all of these resources active and working, while the architecture of the TMUs has been improved so that they can process textures in all formats at full speed. Coupled with the optimizations of the manufacturing process, which enable higher clock rates, the GF110 has got rid of the bottlenecks typical of its predecessor, the GF100. The GeForce GTX 580 comes out a well-balanced premium-class solution with no obvious drawbacks and with excellent performance in modern games. The summary diagrams make everything clear.

The resolution of 1600x900 pixels is not often used by owners of $500 graphics cards since such cards just can't show their best there, the frame rate often being limited by other factors. Even so, the GeForce GTX 580 is an average 14% ahead of its predecessor at 1600x900. It is slower than the dual-processor Radeon HD 5970 in nine out of the 19 tests, but the gap is only large (10% and more) in four games, including the second graphics test from 3DMark Vantage. So, this looks like a tie or even a win for Nvidia, if you consider the difference in price between the two flagship solutions.

The resolution of 1920x1080 (Full HD) is quite popular nowadays as many gaming monitors come with such LCD panels. The GeForce GTX 580 can boast an impressive 38% advantage over its predecessor here. The Radeon HD 5970 wins 10 tests now, but the gap is only really large (24%) in Battlefield: Bad Company 2. Of course, the GeForce GTX 580 ensures a comfortable frame rate in every game, including the notorious Crysis Warhead. The only exception is Metro 2033: if you turn on the tessellation option in that shooter, none of the existing graphics solutions (save, perhaps, for 3-way SLI and 4-way CrossFireX configurations) can maintain a playable frame rate in it.

There are not so many users who own large monitors with a native resolution of over 1920x1080 and such monitors are mostly used for serious work rather than for gaming. However, if you do have a 30-inch or larger display, you may consider purchasing a GeForce GTX 580. It won’t disappoint you even though it doesn’t ensure a playable speed in every game at the highest resolution if you select the highest graphics quality settings together with full-screen antialiasing. In such games as Aliens vs. Predator, Crysis Warhead and Metro 2033 you will have to turn FSAA off. Tessellation should also be turned off for Metro 2033, especially as it doesn’t improve the game’s visuals much. In the other games the new card should be strong enough to maintain a comfortable frame rate even if you launch such heavy applications as S.T.A.L.K.E.R.: Call of Pripyat or Lost Planet 2.

Thus, the Nvidia GeForce GTX 580 has performed in our tests as a well-balanced solution worthy of its price. It is indeed the fastest single-GPU graphics card available today. Moreover, it has challenged the previously unrivalled Radeon HD 5970. Considering the lower price of the GeForce GTX 580, AMD loses its superiority in the sector of premium-class products, at least until the next generation of dual-processor Radeon HD cards. Most importantly, the GeForce GTX 580 is far more comfortable than its opponent in terms of noisiness and is only inferior to it in power consumption. However, it has the same power draw as the GeForce GTX 480 in 3D applications while being faster. It is also more economical than both its predecessor and the Radeon HD 5970 in power-saving modes.

Thus, the GeForce GTX 580 is a perfect offer for any gamer who can afford a $500 graphics card. It can't become a bestseller for obvious reasons, but it deservedly earns the title of the best premium-class product. We will see whether it can hold that title for long, because the ATI Radeon HD 6900 series is already coming up!

Highs:

Lows: