by Alexey Stepin, Yaroslav Lyssenko, Anton Shilov
08/06/2008 | 11:21 AM
The GeForce GTX 200 graphics architecture had been announced a couple of days before the ATI Radeon HD 4800. As you already know from our reviews, Nvidia’s new GPUs were developed following a different strategic concept than the competitor’s chips. While AMD’s graphics department ATI focused on simple and relatively inexpensive GPUs combining them into multi-GPU configurations when necessary, Nvidia did everything to create the highest-performance monolithic core of the new generation. That’s what the G200 is.
Each approach has its pros and cons, and we won’t repeat the arguments here. Suffice it to say, Nvidia’s new chip came out very big and sophisticated but, as our theoretical tests showed, not always the fastest. This is mostly a consequence of the G200’s lower computing power: it has only 240 ALUs as opposed to the RV770’s 800. On the other hand, Nvidia has traditionally put the focus on the performance of texture processors and raster operators, even at the expense of the chip’s computing capacity, and the G200 looked promisingly good in this respect in the theoretical tests.
G200-based graphics cards proved to be difficult and expensive to make, particularly due to the 512-bit memory bus, but it was clear that Nvidia’s original prices on the GeForce GTX 280 and GTX 260 were too high for these cards to be competitive. Soon after the release of the Radeon HD 4800 series, which showed superb performance in games notwithstanding its modest pricing, Nvidia had to step down the official price of the GeForce GTX 280 from a sky-high $649 to a more acceptable $499. The GeForce GTX 260 got a price tag of $299 instead of $399. With such price cuts it is questionable how profitable GeForce GTX 200 cards are considering their high manufacturing cost. Well, this review is not about the price factor after all.
Our goal is to check out the practical potential of the GeForce GTX 200 series and find out how competitive it is on the modern desktop 3D graphics market. What end-users want to know is what they get for their money. So, we will benchmark the GeForce GTX 280 and 260 graphics cards in 15 popular games of different genres as well as in two versions of Futuremark’s 3DMark.
To test the performance of the Nvidia GeForce GTX 280/260 and compare the results against current and previous generation ATI and Nvidia solutions, we put together the following testbed:
According to our testing methodology, the drivers were set up to provide the highest possible quality of texture filtering and to minimize the effect of the software optimizations used by default by both AMD/ATI and Nvidia. Also, to ensure maximum image quality, we enabled transparent texture filtering. As a result, our ATI and Nvidia driver settings looked as follows:
For our tests we used the following games and synthetic benchmarks:
First-Person 3D Shooters
Third-Person 3D Shooters
We selected the highest possible level of detail in each game using the standard tools provided in the game’s own menu. The games’ configuration files weren’t modified in any way, because the average user cannot be expected to edit them. The only exception was Enemy Territory: Quake Wars, where we disabled the built-in frame rate limiter locked at 30fps. Games supporting DirectX 10 were tested in that mode.
Besides Nvidia GeForce GTX 280 and Nvidia GeForce GTX 260 we have also included the following graphics accelerators to participate in our test session:
The tests were performed at the following resolutions: 1280x1024/960, 1600x1200 and 1920x1200. If a game didn’t support the 16:10 display format, we set the last resolution to 1920x1440. We used the “eye candy” mode wherever it was possible without disabling HDR/Shader Model 3.0/Shader Model 4.0. Namely, we ran the tests with 16x anisotropic filtering and 4x MSAA enabled. We enabled them from the game’s menu. If this was not possible, we forced them using the appropriate settings of the ATI Catalyst and Nvidia ForceWare drivers.
Performance was measured with the games’ own tools, using original recorded demos where possible. Otherwise, performance was measured manually with the Fraps utility version 2.9.1. We recorded not only the average speed, but also the minimum speed of the cards where possible.
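As a side note, the two numbers we report per run can be derived from a list of per-frame render times such as the ones Fraps can log. A minimal sketch (the sample frame times below are made up for illustration, not measured data):

```python
# Sketch: deriving average and minimum fps from per-frame render
# times in milliseconds. Sample data is hypothetical.

def fps_stats(frametimes_ms):
    """Return (average_fps, minimum_fps) for a benchmark run."""
    total_ms = sum(frametimes_ms)
    average_fps = 1000.0 * len(frametimes_ms) / total_ms
    # The slowest single frame defines the instantaneous minimum.
    minimum_fps = 1000.0 / max(frametimes_ms)
    return average_fps, minimum_fps

sample = [16.7, 16.7, 33.3, 16.7, 50.0, 16.7]  # hypothetical log, ms
avg, low = fps_stats(sample)
print(f"average: {avg:.1f} fps, minimum: {low:.1f} fps")
```

The average is computed over total frame time rather than by averaging per-frame fps values, which would overweight fast frames.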
This game doesn’t support display resolutions of 16:10 format, so we use a resolution of 1920x1440 pixels (4:3 format) instead of 1920x1200 for it.
The game isn’t new, so it is not a hard test for modern graphics cards, even for mainstream solutions such as the Radeon HD 4850. Nvidia’s new cards can’t show their full potential here, but note that the GeForce GTX 280 is somewhat ahead of the others at 1920x1440.
The new cards are unrivalled in terms of minimum speed, though. Theoretically, it means the frame rate fluctuates less, resulting in smoother gameplay. On the other hand, you won’t be able to feel that because the frame rate is high anyway. The cheaper, smaller and more economical Radeon HD 4850 provides as much comfort in this game as the GeForce GTX 260, so it is in more demanding games that we should look for any benefits from Nvidia’s new solutions.
BioShock doesn’t support FSAA when running in Windows Vista’s DirectX 10 environment. We benchmark graphics cards without FSAA in this game.
Not a very demanding application, BioShock can however show the difference between graphics cards.
Although benchmarked without FSAA, the GeForce GTX 280 is superior to every other single graphics card in every display mode. It is 12% faster than the Radeon HD 4870 at 1280x1024. Not much for a $200 difference in price, however. The gap shrinks to 7-9% at the higher resolutions, so that’s hardly a convincing win even considering the better minimum speed. The GeForce GTX 260 is somewhat slower than the Radeon HD 4870 at every resolution, save for 1280x1024.
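The arithmetic behind that verdict can be made explicit as performance per dollar. A sketch, using the official prices cited in this article and illustrative placeholder frame rates (not measured results):

```python
# Sketch: why a 12% speed lead for a $200 premium is a poor deal.
# Prices are the official ones cited in the article; frame rates
# are hypothetical placeholders.

def fps_per_dollar(fps, price):
    return fps / price

hd_4870 = fps_per_dollar(100.0, 299)   # baseline, hypothetical fps
gtx_280 = fps_per_dollar(112.0, 499)   # 12% faster, $200 dearer
print(f"HD 4870: {hd_4870:.3f} fps/$, GTX 280: {gtx_280:.3f} fps/$")
```

At these prices the GTX 280 would need to be roughly 67% faster, not 12%, just to match the Radeon HD 4870 in fps per dollar.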
The GeForce 9800 GX2 wins at the first two resolutions, suggesting that the concept of super-fast monolithic GPUs has nearly exhausted its potential. The senior model of the new series should be given credit for delivering similar minimum speed, but that’s the only achievement of the GeForce GTX 280 in this test. Its average frame rate is not very high: for example, it is 13% slower than the Radeon HD 4870 at 1920x1200 even though it has twice the latter’s amount of memory and raster back-ends.
The GeForce GTX 260 is comparable to the cheaper Radeon HD 4850 in average frame rate but has a higher minimum speed than the same-priced Radeon HD 4870. It doesn’t make the game more enjoyable, though, because the frame rate bottoms out at an uncomfortable 20fps and lower.
The GeForce GTX 200 series is no winner in Call of Duty 4, either. The GeForce 9800 GX2 is about as fast as the new cards at 1280x1024 but leaves them behind at the higher resolutions.
Quite disappointingly, the GTX 280 falls behind the HD 4870 at 1600x1200. With a 512-bit memory bus and 32 raster back-ends, the GeForce GTX 280 should feel more confident at high resolutions, but it does not. The senior model is 17-18% slower than the Radeon HD 4870 at 1920x1200 and comparable to the Radeon HD 4850, which costs a mere $199!
The results of the GeForce GTX 260 are surprising. It is about as fast as the $199 Radeon HD 4850 in two of the three display modes and cannot compete with the senior Radeon equipped with GDDR5 memory. Nvidia’s solution has a higher bottom speed, yet the Radeon HD 4850 delivers a comfortable frame rate at every resolution, too.
This game is tested at the High level of detail, except for the Shaders option, which is set to Very High. This way we try to achieve a compromise between image quality and speed.
The GeForce 9800 GX2 is faster than the GeForce GTX 280 everywhere. This is logical since the former has a total of 256 shader and 128 texture processors against the latter’s 240 and 80, respectively. On the other hand, the G200 shows itself as the fastest monolithic chip in this test. The Radeon HD 4870 is not far behind, though: the gap between the two chips is negligible at 1920x1200.
The GeForce GTX 260 doesn’t look good and has almost no advantage over the GeForce 9800 GTX. The newer card is even somewhat slower at 1920x1200!
We should note the higher minimum speed of the new solutions from Nvidia. This is the benefit of the reinforced texture-mapping section of the G200 chip. However, the new cards are unable to deliver comfortable performance in Crysis. We can only hope the game will be playable at the highest settings on such solutions as the Radeon HD 4870 X2, GeForce GTX 280 SLI, etc.
The frame rate is capped at 30fps in this game, as this is the rate at which the physics model is updated on the server. Thus, 30fps is the required minimum for playing the game.
Nvidia wins back the lost advantage on the OpenGL field. The senior and junior models of the new series both deliver excellent performance, outperforming the Radeon HD 4870 greatly. At a resolution of 1920x1200 the gap is as large as 24% for the GeForce GTX 280 and 11-12% for the GeForce GTX 260. Although this win has no practical value due to the frame rate limitation, it is a win for Nvidia anyway.
The increased number of texture processors doesn’t help the GeForce GTX 200 much here although Half-Life 2 abounds in high-quality high-resolution textures. The GeForce GTX 280 is about as fast as the GeForce 9800 GTX at high resolutions, the latter even coming out on top at 1280x1024, probably due to its higher core clock rate. Thus, the senior model of the new series doesn’t seem to be worth its price because it does not improve performance much over the previous-generation solutions or ATI’s new cards.
The GeForce GTX 260 looks better. Judging by its performance in Episode Two, it is worth its new price of $299 even though it is inferior to the Radeon HD 4870 in certain consumer properties. Note that the Radeon HD 4850 doesn’t look good in this test although it is occasionally faster than the ex-flagship Radeon HD 3870 X2.
The game doesn’t support FSAA when you enable the dynamic lighting model, but loses much of its visual appeal with the static model. This is the reason why we benchmarked the cards in S.T.A.L.K.E.R. using anisotropic filtering only.
This game is traditionally Nvidia’s home turf. The new series proves this point, but the GeForce 9800 GX2 outperforms the new flagship in two of the three tested resolutions. The multi-GPU concept shows its worth here. Its downside can be seen, too: the minimum speed of Nvidia’s dual-chip card is below comfortable at 1920x1200 whereas the single-chip GeForce GTX 280 and 260 keep the frame rate above 35fps.
ATI’s multi-GPU solution is good, too. Judging by the results of the Radeon HD 3870 X2, we can expect the Radeon HD 4870 X2 to challenge Nvidia’s superiority in this game.
The GeForce GTX 200 series is far from brilliant here, being inferior to the GeForce 9800 GX2 as well as to ATI’s Radeon HD 4800 cards. The latter are actually the only solutions to ensure smooth gameplay even in the most complex scenes. Their speed is never lower than 24fps at 1280x1024. The rest of the tested cards can’t do that.
It’s unclear why the GeForce GTX 200 series cards are so slow here. It must have something to do with the game engine’s specifics. The game is too demanding at the highest graphics quality settings.
It is the first real win for the GeForce GTX 280 and 260. The senior model of the new series leaves the other cards behind, notching 130fps. The GeForce GTX 260 is good, too. It is only inferior to the GeForce 9800 GX2 at resolutions above 1280x1024. On the other hand, this speed is redundant for a third-person shooter, and the Radeon HD 4870 also ensures smooth gameplay in every display mode including 1920x1200.
Being in fact a mix of a role-playing game and a first-person shooter, this game should be enjoyed at a high frame rate like that delivered by the GeForce GTX 200 cards. It is one of the few cases when the G200 processor achieves an unprecedented level of performance. The GeForce GTX 260 makes the game playable at 1920x1200. The Radeon HD 4870 can’t do the same due to its low bottom speed.
The game loses much of its visual appeal without HDR. Although some gamers argue that point, we think TES IV looks best with FP HDR enabled and test it in this mode.
The GeForce GTX 200 series doesn’t show anything exceptional in this test. It is even inferior to Nvidia’s previous-generation cards due to the lower core clock rates. TES IV doesn’t require anything special from the graphics card, though. Every tested card, including the Radeon HD 3870, ensures comfortable playing conditions at resolutions up to 1920x1200 pixels. We can note that the Radeon HD 4870 and 4850 offer a higher bottom speed at 1920x1200 than the rest of the cards including the GeForce GTX 280. Perhaps we’ll check out even higher display modes in another review.
The new add-on to Company of Heroes is tested in DirectX 10 mode only since it provides the highest quality of the visuals.
The Company of Heroes results are mixed: the GeForce GTX 260 and 280 are unrivalled in terms of average frame rate, but their bottom speed is below comfortable at 1600x1200 whereas the ATI solution always maintains a near-comfortable frame rate.
ATI seems to be on the winning side in this test because it offers the only graphics card capable of delivering acceptable average and bottom speed at 1600x1200/1680x1050. Nvidia’s solutions are limited to 1280x1024 but may improve with driver updates. The new cards seem to have big potential.
The add-on to C&C 3: Tiberium Wars brought no changes to the technical aspect of the game. Since the game still has a frame rate limiter, you should consider the minimum speed of the cards in the first place.
The new cards from Nvidia are inferior to the Radeon HD 4870 and 4850 as their bottom speeds are 2fps lower at 1920x1200. Of course, this is a negligible difference, so all the cards are suitable for playing this game.
Unfortunately, the GeForce GTX 200 cards have a low bottom speed in this test. The average frame rates are not exceptional, either. It is only at a resolution of 1280x1024 that the GeForce GTX 280 differs notably from the other cards. The new flagship products don’t have any breakthroughs here.
The overall 3DMark06 score is disappointing. The GeForce GTX 280 couldn’t outperform the dual-chip GeForce 9800 GX2 and Radeon HD 3870 X2 or the single-chip Radeon HD 4870 on our testbed. That’s not the result you could expect from a 1.4-billion-transistor chip claiming to be the fastest GPU available. The junior model of the new series is not good, either. It barely scored 12,000 points.
Oddly enough, the GeForce 9800 GX2 can’t repeat the successful performance of the Radeon HD 3870 X2 in the SM2.0 tests. It is somewhat slower than Nvidia’s single-chip solutions (including the new G200-based ones) as well as the ATI Radeon HD 4870. In the SM3.0/HDR tests the ATI Radeon HD 3870 X2 is only barely better than the GeForce 9800 GX2. The new Radeon HD 4870 is behind both dual-processor cards but scores 7,000 points. The GeForce GTX 280 and 260 do not impress in this test.
It is in the first SM2.0 test that the GeForce GTX 280 shows what a monolithic GPU with 80 texture processors is capable of. It is almost as fast as the dual-processor GeForce 9800 GX2 that has a total of 128 TMUs. The GeForce GTX 260 is good, too. It is ahead of the Radeon HD 4870 and inferior only to the mentioned two cards from Nvidia.
The second test is indicative of the more efficient processing of complex geometry in the G200 chip. The senior graphics card with the new GPU is only inferior to the two G92 chips installed on the GeForce 9800 GX2 whereas the junior model is as fast as the Radeon HD 4870, which is a success considering that they come at the same recommended price.
ATI is still on the winning side in the SM3.0/HDR tests with its high-computing-capacity solutions. The GeForce GTX 280 is barely ahead of the Radeon HD 4870 in both tests but is beaten by the Radeon HD 3870 X2. The GeForce GTX 260 has disappointing results due to the cut-down configuration of its core.
So, we don’t have any new records in 3DMark06. Will 3DMark Vantage reveal the full potential of Nvidia’s new graphics cards?
We minimize the CPU’s influence by using the Extreme profile (1920x1200, 4x FSAA and anisotropic filtering).
The GeForce GTX 200 series shows its best in this benchmark. The senior model is far ahead of the rest of the graphics cards, including the dual-processor GeForce 9800 GX2, while the junior model is just as confidently ahead of the Radeon HD 4870. Perhaps things will change for the better for ATI’s cards with driver updates, but Nvidia’s new solutions are unrivalled in 3DMark Vantage as yet.
The GeForce GTX 280 is on top in both gaming tests. The GeForce 9800 GX2 is very close to the leader in the second test, though. It is in the second test too that the GeForce GTX 260 is barely ahead of the Radeon HD 4870. Although the former card has more texture processors, ATI’s solution makes up for that with its higher clock rate. 3DMark Vantage doesn’t seem to make full use of the RV770’s computing capabilities and we can expect improved results with driver updates. But even without them, the Radeon HD 4870 X2 is going to beat the GeForce GTX 280 in this benchmark.
Having tested Nvidia’s new G200-based graphics cards, GeForce GTX 280 and GTX 260, we should acknowledge that the GTX 280 is indeed the fastest single graphics card available now. However, it doesn’t push the performance bar much higher in comparison with the previous cards notwithstanding Nvidia’s efforts. The summary diagrams show that quite clearly:
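Summary diagrams of this kind are typically built by normalizing each card’s result to a reference card in every game and averaging the ratios; a geometric mean keeps any single title from dominating the score. A sketch of the idea (the per-game numbers below are hypothetical, not our measured results):

```python
# Sketch: folding per-game results into one summary figure.
# Each card is normalized to a reference card in every game and
# the ratios are combined with a geometric mean. All numbers
# below are hypothetical.
import math

def relative_score(card_fps, reference_fps):
    """Geometric mean of per-game fps ratios, as a percentage."""
    ratios = [c / r for c, r in zip(card_fps, reference_fps)]
    gmean = math.exp(sum(math.log(x) for x in ratios) / len(ratios))
    return 100.0 * gmean

# Hypothetical 1920x1200 results in three games:
gtx_280 = [60.0, 25.0, 90.0]
hd_4870 = [55.0, 27.0, 80.0]
print(f"GTX 280 vs HD 4870: {relative_score(gtx_280, hd_4870):.1f}%")
```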
The GeForce GTX 280 is supposed to outperform the cheaper Radeon HD 4870 as well as the ex-flagship GeForce 9800 GX2 but that’s not the case:
The junior model of the series, the $299 GeForce GTX 260, did rather well in comparison with its market opponent:
Thus, both cards deliver similar performance, but the 4870 model is better in terms of consumer properties such as PCB dimensions, an integrated audio core, and hardware support for VC-1 decoding.
Few people buy cards costing $300 and more to play at 1280x1024, so the high resolutions are going to be the most representative.
The GeForce GTX 280 and its opponents, the 9800 GX2 and the Radeon HD 4870, are not much different at a resolution of 1600x1200:
Notwithstanding its advanced memory subsystem with a 512-bit bus and its 32 raster back-ends, the GeForce GTX 280 has rather modest results for a would-be leader.
The GeForce GTX 260 compares well enough with its opponent. There is no definite leader between them:
The 1920x1200 results are not good for the GeForce GTX 280 because the Radeon HD 4870 gets closer, especially in Hellgate: London. Here is the comparison for the highest tested resolution:
That’s not the performance you might expect from a graphics card that had been intended to come at a price of over $600. It is hardly worth even its new price of $499. It is a serious blow for the whole concept of top-performance monolithic GPUs because it is clear that two RV770 cores won’t leave the G200 a chance if the multi-GPU technology works correctly.
The GeForce GTX 260 looks better than the flagship:
So the choice between the GeForce GTX 260 and the Radeon HD 4870 will depend on the games you intend to play. Our tests don’t show a definite advantage for either card.
Although it is but slightly faster than the Radeon HD 4870 and GeForce 9800 GX2, the GeForce GTX 280 can be viewed as the fastest single graphics card in the price category of $499 because the GeForce 9800 GX2 doesn’t always deliver its maximum performance and doesn’t support multi-monitor configurations.
It is yet unclear at what official price the Radeon HD 4870 X2 will come out, but it can hardly be much cheaper than two Radeon HD 4870 cards. Most likely it will be in a higher price category, especially its version with GDDR5 memory. That’s why it would be interesting to compare the GeForce GTX 280 with a couple of Radeon HD 4850 cards working in a CrossFireX tandem. The GeForce GTX 280 will remain the best of single graphics cards for a while yet, even though it is not the undisputed overall winner Nvidia had wanted it to be.
The new solution will find its customers. Not everyone is fond of the multi-GPU concept, particularly because a dual-chip card may not show its full potential in some applications. The GeForce GTX 280 will be the best choice for such gamers. You can also use it in a 3-way SLI configuration to achieve unrivalled performance (in supported games). And still, the GeForce GTX 280 is not as superior a solution as the legendary GeForce 8800 GTX was, for example.
The GeForce GTX 260 is a good product but it is somewhat slower than the Radeon HD 4870. This struggle may lead to price cuts for both models, which is going to be good for the end user.
So, Nvidia’s new solutions are far from a breakthrough as their results in the new-generation games, Call of Juarez and especially Crysis, suggest. We are now waiting for the Radeon HD 4870 X2. Perhaps it will be this kind of a breakthrough.