The Fastest Graphics Cards of Summer 2004

Summer is here, and it is time to buy a new graphics card and play the newest games. Since the market is flooded with offerings, it is pretty hard to decide which one is best for you. We have taken 13 high-end and mainstream graphics cards and run them through 35 gaming benchmarks to find out which are worth their money.

by Alexey Stepin, Tim Tscheblockov, Anton Shilov
07/31/2004 | 04:02 PM

Introduction

The first half of 2004 established a new landmark in the development of consumer 3D graphics as the two graphics giants – NVIDIA Corporation and ATI Technologies – both unveiled their new generations of graphics processors. The birth of a new architecture is always an outstanding event; long before it really happens, the community starts boiling with variegated and contradictory rumors, based on scraps of information from the companies themselves as well as from the mass media.

That was roughly the situation at the beginning of 2004: the press was engaged in a heated discussion of the characteristics of the upcoming solutions from NVIDIA and ATI, although no one seemed to have absolutely trustworthy information. We knew that the new processors would both work with power-frugal and high-clocked GDDR3 memory; NVIDIA’s NV40 was also certain to have 16 pixel pipelines against 8 or 12 or 16 – who knew? – pipelines in ATI’s R420. The frequencies were indefinite, too, somewhere in a range of 500-600MHz. As usual, the truth became apparent only after we saw the real silicon.

So, let’s try to evaluate the progress of the developers of graphics processors from our traditional point of view: are these GPUs good for us as end-users? As usual, we aim to bring you the most detailed picture through a massive amount of benchmarks – today we have 35 of them!

GeForce 6800 and RADEON X800: Debutants of this Season!

NVIDIA GeForce 6800

NVIDIA struck first in this fight, announcing its next-generation graphics processor codenamed NV40 and a new graphics card series under the GeForce 6800 brand. The “FX” suffix was probably removed to avoid associations with the rather unlucky NV3x architecture that had been selling under the GeForce FX brand.

The announcement, which was in fact a proclamation of a new era in GPU making, happened on April 14, 2004. Out of the development labs crept a mind-boggling creature: the huge die of the new GPU consisted of as many as 220 million transistors! Its internals had been so thoroughly revised that you could hardly see any trace of the NV3x architecture (if there was any). Unfortunately, this level of complexity was incompatible with high frequencies; NVIDIA could only reach 400MHz while keeping an acceptable chip yield.

The deficiencies of the previous architecture were all worked upon, and a number of new technologies, both for increasing the performance and improving the image quality, were introduced. Particularly, the new GPU boasts support of a high dynamic range (HDR) color space. However, the main innovation of the GeForce 6xxx is the comprehensive support of Shader Model 3.0, i.e. support of pixel and vertex shaders of the next version. The new GeForce was the world’s first GPU to comply with this new shader standard. This exciting technology along with the rest of them is discussed at length in the theoretical part of our NV40 review.

The technological breakthrough came at a high price in terms of money as well as power: the topmost model of the family came with two power connectors, which we had previously seen only on XGI’s Volari Duo V8 Ultra card. Moreover, NVIDIA’s advice about the PSU to use with a GeForce 6800 Ultra card was simply shocking: a 480W PSU was recommended! It transpired later that this was a bit of an overstatement – a less powerful PSU would suffice. However, high-wattage and expensive PSUs are usually made of high-quality parts and don’t use any simplified circuitry, so they provide the necessary voltage stability – and that’s the most important thing for the GeForce 6800 Ultra.

The new graphics cards from NVIDIA, as mentioned above, come into market under the name of GeForce 6800. The family includes:

All the cards have got the full-width 256-bit memory bus.

The new series of high-end products from NVIDIA looks complete. You may be wondering why they shipped the GeForce 6800 GT, a card only slightly differing from the 6800 Ultra in the frequencies. Probably, the NV40 proved to be too complex a processor to allow for an acceptable yield of 400MHz chips and the manufacturer decided to clock such chips a bit lower, labeling them as GeForce 6800 GT. This GT version has certain advantages over the 6800 Ultra – its cooling system takes only one expansion slot, and the card has only one power connector.

Our tests of the GeForce 6800 Ultra confirmed its excellent performance; there’s no trace of that sluggishness that GeForce FX cards used to exhibit at executing pixel shaders! The new product from NVIDIA became the best where creations of ATI Technologies had been traditionally superior. Of course, NVIDIA’s blow couldn’t go unanswered. On May 4, ATI gave back a shattering punch.

ATI RADEON X800

The “retribution weapon” forged in the Canadian laboratories was known under the codename of R420. Interestingly, this chip had originally been prepared to ship with 12 pipelines, but the company executives decided to add performance right before the launch and enabled four additional pipelines. The R420 is smaller, cooler and less power-hungry than the NV40. In fact, this GPU was the next step in the evolution of the R3xx architecture and, unlike the NV40, it never received support for Shader Model 3.0.

Again unlike the GeForce 6800, an embodiment of new technologies and capabilities, the RADEON X800 is less exciting from the architectural point of view (see our review). ATI adjusted the structure of the pipelines so that the chip in fact is four-pipelined, but each of the pipelines can process a group of four pixels at a time. This organization considerably decreases the performance loss suffered when full-screen anti-aliasing and texture filtering are enabled. They also increased the number of vertex pipelines from four to six and somewhat improved the vertex processors themselves.

There’s a new version of the well-known HyperZ technology. Its purpose remained the same, though. HyperZ HD is still responsible for optimizing the communication between the GPU and the memory subsystem.

As for truly new technologies, I can only name the compression algorithm now used with normal maps. It is called 3Dc and helps to increase the level of detail of game objects without spending many resources or compromising the image quality. Besides that, the so-called Temporal Anti-aliasing was offered to the public – it improves the quality of anti-aliasing where there’s enough speed for it.
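A normal map stores unit-length vectors, which is what makes a two-channel format like 3Dc possible: only X and Y are kept (block-compressed), and the pixel shader recovers Z from the unit-length constraint. A minimal Python sketch of that reconstruction step (the function name is ours, purely for illustration):

```python
import math

def reconstruct_normal(x, y):
    """Recover the Z component of a unit surface normal from its stored
    X and Y components, assuming the normal points out of the surface
    (z >= 0), as a shader does when sampling a two-channel normal map."""
    z_sq = max(0.0, 1.0 - x * x - y * y)   # clamp against rounding error
    return (x, y, math.sqrt(z_sq))

print(reconstruct_normal(0.6, 0.0))   # a normal tilted along X -> (0.6, 0.0, 0.8)
```

Dropping one channel is cheap for the GPU to undo, which is why the technique raises detail without a visible quality penalty.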

However, the major innovation came from the manufacturing field: the new 0.13-micron technological process incorporates a new dielectric material, the so-called low-k dielectric. Using this process, ATI reached beyond 500MHz, thus outpacing NVIDIA, although NVIDIA used to clock its GPUs at higher frequencies than ATI. Moreover, the relative simplicity of the R420 (about 160 million transistors against 220 million in the NV40) and the thinner manufacturing process helped to keep the heat dissipation at the level of the RADEON 9800 XT and to cool the card with a simple one-slot cooling system. As a final touch to the new product, the Canadians at last decided to use high-speed memory working at 1GHz and higher – exactly the thing that the older RADEONs needed desperately.

The family of R420-based graphics cards was originally comprised of three names: RADEON X800 XT Platinum Edition, RADEON X800 PRO and RADEON X800 SE, but the last version, with eight pipelines and a 128-bit memory bus, will only appear (if it ever does) sometime near the fall. This summer ATI will be offering the good old RADEON 9800 in its price sector. Some sources say that this decision came as ATI found the R420 chip yield very satisfying and didn’t want to cripple normal chips to make RADEON X800 SE GPUs. Thus, ATI is now offering only two new products:

Both cards use the same PCB design and are equipped with one DVI-I connector, one D-Sub connector and one Molex power plug. The memory bus has a width of 256 bits.

R420-based graphics cards were overall successful in our tests, often surpassing GeForce 6800 products, although the R420 can hardly claim to be a revolution. Once again, a simple and efficient architecture wins where a complex and universal one loses. This is especially true for heavy workloads, when we use both full-screen anti-aliasing and anisotropic filtering. Well, frankly speaking, ATI has been strong in such tests before, too.

Overall, there’s parity in the market of computer 3D graphics, only the weapons have become much more refined and powerful. It is yet unclear who’s going to win this war, if there’s going to be any winner at all. ATI has high frequencies, low power requirements, and a time-tested efficient architecture that does nicely in modern games, while NVIDIA offers the most sophisticated and universal GPU of today, with support for next-generation pixel and vertex shaders and a bunch of unique technologies. In the future, with the release of the new version of DirectX and games that use Shader Model 3.0, this equilibrium may change, though.

Second-Tier Developers in 2004

The other developers of desktop graphics solutions behaved listlessly this year. S3 Graphics issued another series of announcements dedicated to the quite interesting DeltaChrome processor that belongs somewhere at the bottom of the mainstream class. Graphics cards on this GPU even emerged in Japan (they had taken half a year to reach the shops!), and then in Europe, too, supplied mostly by Club 3D. S3 Graphics is of course unlikely to claim a good slice of the market, but at least they got going. Otherwise, the company follows in the wake of the giants, developing a shader compiler for faster execution of shaders by its GPUs and preparing to launch the higher-performing GammaChrome processor, which also supports the new PCI Express bus.

XGI’s position is less solid: the aggressive attempt to invade the graphics market ended in a total failure. You can refer to our review of the Club 3D Volari Duo V8 Ultra graphics card for details and reasons for that. Nasty performance, undisguised reduction of the image quality to boost this performance, various bugs and flaws in the software – this all crushed the XGI Volari series in the very bud. Still, the company doesn’t give up, but goes on developing new GPUs. Particularly, they promise a GPU with support of Shader Model 3.0. They also showcased an operational prototype of a PCI Express graphics card at Computex Taipei 2004, but the support of the new interface is realized through an AGP-to-PCI Express bridge.

XGI’s future remains vague: besides the aggressive attitude it takes a competitive architecture to release a competitive graphics processor. Right now, XGI doesn’t seem to have such an architecture on its hands. The Volari, however thoroughly redesigned, won’t solve the problem, while the development of a new architecture would require too much time, and XGI doesn’t have that time since ATI, NVIDIA and even S3 Graphics are not just marking time. The desktop graphics market is a dainty pie and one that seems to have been knifed up already – the current players don’t want others to join. Of course, it’s possible to pick up the crumbs left after the feast of the giants, but it’s a resource-consuming, unrewarding and unprofitable business.

PCI Express Is Coming! (To Your System, Too)

This review would be incomplete without mentioning the products that support the new data-transfer standard actively promoted by Intel. PCI Express is a next-generation bus, born to replace the PCI and AGP interfaces.

The PCI Express bus does offer numerous advantages over its predecessors. Particularly, these are the point-to-point topology, bi-directional data transfers, and higher bandwidth. Even the slowest version, PCI Express x1, provides nearly twice the bandwidth of PCI (250MB/s against 133MB/s), and that in each direction, for a total of 500MB/s! As for the PCI Express x16 slot, intended for installation of graphics cards, its peak bandwidth is 4GB/s in each direction, while AGP 8x provides 2.1GB/s from the chipset to the GPU and about 200MB/s in the opposite direction.
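These figures are simple multiples of the per-lane rate, which makes them easy to sanity-check. A small illustrative calculation (the numbers are peak payload rates, per direction):

```python
# Quick sanity check of the bandwidth figures quoted above.
# PCI Express delivers 250 MB/s of payload per lane, per direction
# (2.5 GT/s per lane with 8b/10b encoding).

PCI_BANDWIDTH_MB = 133          # classic shared 32-bit/33MHz PCI bus
LANE_MB = 250                   # PCI Express payload per lane, per direction

def pcie_bandwidth_mb(lanes):
    """Peak payload bandwidth of a PCI Express link, per direction, in MB/s."""
    return lanes * LANE_MB

x1 = pcie_bandwidth_mb(1)       # 250 MB/s each way, 500 MB/s total
x16 = pcie_bandwidth_mb(16)     # 4000 MB/s, i.e. 4 GB/s each way

print(f"x1:  {x1} MB/s per direction ({2 * x1} MB/s total)")
print(f"x16: {x16 / 1000} GB/s per direction vs 2.1 GB/s for AGP 8x")
```

Note that the totals double because, unlike PCI and AGP, the link is fully bi-directional.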

Intel has already announced platforms with PCI Express x1 and x16, but they started developing graphics cards capable of working with this bus much earlier. As you know, PCI Express x16 will replace AGP 8x in computer systems of the future, so let’s consider this transition closely.

ATI Technologies and NVIDIA Corporation took diametrically opposite approaches to developing PCI Express-compatible products. The former taught the GPU itself to talk with this bus, while the latter devised a special HSI chip that functions as an AGP-to-PCI Express bridge. This chip also allowed NVIDIA to transfer the existing products to the new interface without any redesign. The result was the announcement of the GeForce PCX series that includes:

They also issued some GeForce PCX 5900 cards, which are based on the GeForce FX 5900 and use a HSI bridge.

Besides the obvious advantage of an easy transition, the use of the bridge has an inherent deficiency: it’s impossible to use the full potential of the PCI Express bus this way. Besides that, the bridge generates quite an amount of heat and requires a passive heatsink. This design looks rather clumsy, and the company admitted as much, moving the HSI bridge onto the chip’s substrate in the NV45, which is going to be used in PCI Express GeForce 6800 cards. This is a temporary measure, too, but the dual-die chip looks more secure and reliable than two separate chips. So, the NV45, against everyone’s expectations, turned out to be nothing more than a combination of the NV40 and the HSI bridge.

ATI Technologies looks more advanced technologically, as its GPUs have inborn support of PCI Express. They are the R420 (RADEON X800), RV380 (RADEON X600, the PCI Express counterpart of the RADEON 9600 XT) and RV370 (made with the 0.11-micron tech process).

Recently, NVIDIA questioned the implementation of the PCI Express support in GPUs from the Canadian developer. You may have seen the snapshots on the Internet that compare RV380 and RV360 cores – they suggest that ATI just integrated an AGP-to-PCI Express bridge into the die, without using external chips. Well, this supposition may be true, but it is more likely that the part of the die that was responsible for communicating with the AGP was replaced with other circuitry, responsible for PCI Express x16. As proof of that point we have the results of our tests of the bandwidth of PCI Express graphics cards. Although the results of the RADEON X600 are far from the theoretical maximum, they are anyway higher than the numbers the GeForce PCX 5900 got, meaning a better realization of PCI Express support in ATI’s products.

The whole transition affair is reminiscent of the move to SerialATA-150: of all the hard disk drive makers, only Seagate equipped its devices with “native” support of the new interface, while the others were content to use bridge chips. It is only today that we see hard disk drives emerge that fully utilize the capabilities of the SerialATA interface. That’s probably the scenario the PCI Express x16 bus will be following: products with this interface won’t take the market by storm in a matter of days. Moreover, appropriate drivers are necessary, so PCI Express will most likely uncover its full potential only with the release of the new version of Windows, codenamed Longhorn.

So, the first half of the current year brought the following into the realm of desktop 3D graphics:

Optimization Wars: What about the Image Quality?

It is not a secret that NVIDIA and ATI resort to certain software tricks to boost the performance of their products. For example, they simplify tri-linear and anisotropic filtering, reduce the precision of pixel shaders and do some other things, often resulting in a certain loss of quality as concerns the rendering of a 3D scene.

Of course, we are past those sensational scandals around 3DMark, and the companies behave much more discreetly nowadays. There are situations when the optimizations are overt, but in most cases they are barely noticeable during the game process – only a close scrutiny reveals certain differences. Quite naturally, the player doesn’t look for any traces of those optimizations but keeps track of the gameplay in order not to see that awful Game Over screen much too soon.

Anyway, this optimization problem remains urgent even today, only its wording has changed. Now the question is: “Should the user be allowed to control the optimizations?” The two leading graphics companies answer this question in diametrically opposite ways, like the old sworn enemies they are. NVIDIA says yes and offers the option of disabling the optimizations in the latest versions of its driver, whereas ATI thinks it senseless to provide such options. Quoting the representatives of the Canadian company, disabling the texture filtering optimizations in its newest products leads to nothing but performance degradation, since even the most fastidious user wouldn’t notice any image quality improvement without them – so negligible they are! We can’t deny some truth in this statement, but this is rather a question of respect towards the end-user – and from this point of view, NVIDIA’s approach is friendlier.

We decided to check for ourselves whether the optimizations were really as negligible as ATI and NVIDIA would have us think, and made a few screenshots in several modern games. We think this approach is better than using test programs that output artificial scenes with and without highlighted mip-levels: such tools only indicate the presence or absence of optimizations (and only when the optimizations are not automatically disabled upon turning the highlighting on), but cannot say how much the overall rendering quality degraded. Besides that, the purchaser of a gaming graphics card doesn’t buy it to enjoy the look of a simple checkerboard texture or the patterns of highlighted mip-levels – they buy it to play games!

And it is the real games that we should turn to and search for any defects in the image. We took several technologically advanced games as examples. They are:

We made the screenshots in the 1280x1024 resolution and in the “eye-candy” mode (it means that we had enabled the maximum available anisotropy level as well as 4x full-screen antialiasing). The only exception was Halo, which didn’t support FSAA due to its method of rendering the scene. Every screenshot was taken two times, with enabled texture filtering optimizations and without them, on the following four graphics cards:

As you know, ATI’s current drivers do not allow you to control the optimizations in the driver’s control panel – they are always enabled. Especially for those of you who are interested in the opportunity of turning off the optimizations on RADEON-family cards, we offer the following instructions:

  1. Search the registry for the AnisoDegree string variable.
  2. Correlate its value with the settings in ATI’s Control Panel.
  3. Change the anisotropy level in ATI’s Control Panel.
  4. Press F5 in RegEdit to see if the value of AnisoDegree has changed.
  5. If it has, go to the next step; otherwise, continue searching for the necessary branch.
  6. Add a new string variable “RV350TRPER” and give it a value of 1.
  7. Add a new string variable “RV350ANTHRESH” and give it a value of 1.
  8. Add a new string variable “R420AnisoLOD” and give it a value of 2.
  9. Reboot the computer.
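For those comfortable with registry files, steps 6-8 can be applied in one go by merging a .reg file. The key path below is a hypothetical placeholder – substitute the branch you actually located in steps 1-5 (the one whose AnisoDegree value tracks the Control Panel setting):

```reg
Windows Registry Editor Version 5.00

; Hypothetical example path - replace it with the driver branch
; found in steps 1-5 before merging this file.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{YOUR-GUID}\0000]
"RV350TRPER"="1"
"RV350ANTHRESH"="1"
"R420AnisoLOD"="2"
```

A reboot is still required afterwards, as in step 9.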

We used the screenshots made on the GeForce 6800 Ultra in the High Quality mode as the etalon. In this mode, all the optimizations are disabled on the card.

So, let’s see how the optimizations implemented by the GPU manufacturers affect the image quality.

FarCry Image Quality Comparison

GeForce 6800, optimizations enabled

GeForce 6800, optimizations disabled

RADEON X800, optimizations enabled

RADEON X800, optimizations disabled

GeForce FX 5950, optimizations enabled

GeForce FX 5950, optimizations disabled

RADEON 9800, optimizations enabled

RADEON 9800, optimizations disabled

FarCry reference image quality (GeForce 6800 in High Quality mode)

You can see that all the cards produce an image of similar quality. On a closer examination we notice that the GeForce FX 5950 Ultra sometimes produces sharper-looking textures than the other cards. This is due to the 5950 Ultra’s resource-consuming, but more honest anisotropic filtering algorithm that doesn’t have any “inconvenient angles” at which the new graphics processors reduce the anisotropy level abruptly.

Halo Image Quality Comparison

GeForce 6800, optimizations enabled

GeForce 6800, optimizations disabled

RADEON X800, optimizations enabled

RADEON X800, optimizations disabled

GeForce FX 5950, optimizations enabled

GeForce FX 5950, optimizations disabled

RADEON 9800, optimizations enabled

RADEON 9800, optimizations disabled

Halo reference image quality (GeForce 6800 in High Quality mode)

The differences are more conspicuous in Halo, especially on the walls of the passageway: the GeForce FX 5950 Ultra and the GeForce 6800 Ultra with disabled optimizations share the title of the best image quality card. In the other cases, we see a step-like rather than smooth display of the detailed texture that is meant to simulate the micro-relief of the wall surface.

Thanks to the complexity and saturation of the game scene, there are practically no visible differences between the cards and operational modes in Max Payne 2.

Max Payne 2 Image Quality Comparison

GeForce 6800, optimizations enabled

GeForce 6800, optimizations disabled

RADEON X800, optimizations enabled

RADEON X800, optimizations disabled

GeForce FX 5950, optimizations enabled

GeForce FX 5950, optimizations disabled

RADEON 9800, optimizations enabled

RADEON 9800, optimizations disabled

Max Payne 2 reference image quality (GeForce 6800 in High Quality mode)

You can find differences in the pictures produced by the GeForce 6800 Ultra and the RADEON X800 XT with and without their optimizations, but it’s hard to discern them at a glance, really.

Painkiller Image Quality Comparison

GeForce 6800, optimizations enabled

GeForce 6800, optimizations disabled

RADEON X800, optimizations enabled

RADEON X800, optimizations disabled

GeForce FX 5950, optimizations enabled

GeForce FX 5950, optimizations disabled

RADEON 9800, optimizations enabled

RADEON 9800, optimizations disabled

Painkiller reference image quality (GeForce 6800 in High Quality mode)

As you see, optimizations are not always equivalent to image quality degradation. On the contrary, texture filtering optimizations don’t affect the visual perception of the scene in the majority of cases. Things may be different in motion, though. For example, the noise and the boundaries between mip-levels, not visible in a static screenshot, become most conspicuous in the process of the game. But, as you understand, the player of Painkiller, Far Cry, Halo and other such games has no time to examine the transitions between mip-levels for any hidden optimizations: he or she is first of all concerned about self-preservation in the game world.

Of course, it doesn’t mean we should condone the optimizations altogether. For example, XGI’s approach to making the Volari-based cards perform faster is downright unacceptable and provoked our immediate and negative reaction. However, such rough “optimizations” are so destructive that they catch your eye immediately, contrary to those from ATI and NVIDIA, which can only be traced by scrutinizing each game screenshot or searching for scenes that make a particular optimization conspicuous. As for real gaming situations, again, it’s hard to see anything wrong unless you do it on purpose.

Anyway, our opinion is that the end-user should have the right to choose between a dozen or so extra frames per second and true tri-linear and anisotropic filtering. For example, optimizations may become visible and annoying in a certain game due to peculiarities of its engine or some other factors, and the user must have an opportunity to avoid that. Other users may be quite content with the texturing quality with optimizations enabled and may want to just leave them on.

The customer’s trust in the manufacturer is a very important factor, since it conditions the success of the product in the market. This trust is easy to lose, but hard to regain, even with titanic efforts.

We are glad NVIDIA chose to be open to its own customers, offering them the control over the optimizations. We hope this practice will be continued by the company as well as by other graphics processor manufacturers.

Testbed and Methods

It’s time we moved on to the practical part of the review. We decided to expand our benchmarking toolset to 35 names to get a more comprehensive picture of the performance of modern top-end and mainstream graphics cards. These are “gaming” products, since junior models like the RADEON 9200 or the GeForce FX 5200 are hardly suitable for playing modern games. Cards on GPUs like the RADEON 9600 and GeForce FX 5600 can also be put into this category (we don’t even mention various “mutants” with a 64-bit memory bus) – there are ever more really demanding games on the market that can become a real burden even for the most advanced graphics card! Here’s the full list of our benchmarks:

First Person 3D Shooters:

Third Person 3D Shooters:

Simulators:

Sport Games:

Real-time strategies:

Semi-synthetic benchmarks:

Synthetic benchmarks:

We’ve added several new games – their names are given in bold in the list above. The 3D strategy game called Perimeter is perhaps the most interesting among them, since it uses an absolutely original and unusual engine. In this game the battlefield is not only three-dimensional, but also fully interactive: it can be destroyed, deformed, and terraformed by means of special units. The secret is simple: the landscapes of Perimeter are realized with a technology similar to voxel graphics – each landscape in the game is effectively a three-dimensional texture. Regrettably, there’s a price for all those beautiful things you see in Perimeter: at its maximum graphics quality settings the game is only comfortably playable on a system with a powerful CPU and a graphics adapter of the latest generation like the RADEON X800 or GeForce 6800 Ultra.

The rest of the new games, highlighted in our list, are rather traditional, although they are good representatives of their respective genres.

All new games not equipped with a built-in benchmark were tested with the help of the Fraps utility, and we now post the minimum as well as the average fps rate for a more exact representation of the gameplay. We are going to do our benchmarking this way in the future, too.
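Both numbers can be derived from the per-frame render times that Fraps logs. A simplified sketch (Fraps itself reports the worst one-second interval as the minimum; here we take the single slowest frame, and the frame times are made up for illustration):

```python
# Derive minimum and average fps from a list of per-frame render times.

def fps_stats(frame_times_ms):
    """Return (minimum fps, average fps) for a list of per-frame
    render times in milliseconds."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s        # frames per elapsed second
    min_fps = 1000.0 / max(frame_times_ms)         # slowest frame -> lowest fps
    return min_fps, avg_fps

frames = [10.0, 12.5, 40.0, 10.0, 12.5]            # one 40 ms hitch
mn, avg = fps_stats(frames)
print(f"min {mn:.1f} fps, avg {avg:.1f} fps")      # min 25.0 fps, avg 58.8 fps
```

The example shows why we post both figures: a single hitch barely moves the average but defines the minimum, and it is the minimum that you feel while playing.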

The testbed hasn’t changed since our last report:

It is a kind of our tradition already to test each game at the maximum possible graphics quality settings in three resolutions and in two modes: “pure speed” and “eye candy” (the latter mode features full-screen anti-aliasing and anisotropic filtering). We used the latest version of the driver for the cards of the new generation (GeForce 6800/GT/Ultra and RADEON X800 Pro/XT). For older cards we used the latest official version of the driver.

We include the results of the GeForce 6800 Ultra overclocked to 435/1150MHz frequencies for comparison’s sake. Since the GeForce 6800 Ultra Extreme Edition hasn’t been officially announced, these numbers are of purely theoretical interest. On the other hand, some graphics card manufacturers may come up with overclocked versions of the 6800 Ultra to sell at a higher price. In that case, these results may help you determine whether such a product is worth the extra money you are invited to pay for it.
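For a rough idea of what such an overclock buys in theory, assuming the stock GeForce 6800 Ultra figures of 400MHz core and 1100MHz effective memory, with 16 pixel pipelines and a 256-bit bus:

```python
# Theoretical gain of the 435/1150MHz overclock over the stock
# 400/1100MHz GeForce 6800 Ultra (16 pixel pipelines, 256-bit bus).

def fillrate_gpix(core_mhz, pipelines=16):
    """Peak pixel fillrate in Gpixel/s: one pixel per pipeline per clock."""
    return core_mhz * pipelines / 1000.0

def mem_bandwidth_gb(effective_mhz, bus_bits=256):
    """Peak memory bandwidth in GB/s from the effective DDR clock."""
    return effective_mhz * bus_bits / 8 / 1000.0

for core, mem in ((400, 1100), (435, 1150)):
    print(f"{core}/{mem}MHz: {fillrate_gpix(core):.2f} Gpix/s, "
          f"{mem_bandwidth_gb(mem):.1f} GB/s")
```

The overclock adds about 9% of fillrate but under 5% of memory bandwidth, which is why its real-world benefit shrinks in bandwidth-bound “eye candy” modes.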

All in all, the following graphics cards were used for testing:

First person 3D shooters

Call of Duty




It’s all natural here: the cards line up in orderly fashion from top-end models down to junior ones. The graphics cards with NVIDIA’s chips feel at home in low resolutions. In high ones, the competing products from NVIDIA and ATI are nearly equal.



We turn on full-screen anti-aliasing and anisotropic filtering to see NVIDIA rule the low resolutions again. However, the new RADEON X800 GPUs strike back in high display modes. Well, RADEONs have always been most efficient under difficult working conditions.

RTCW: Enemy Territory

We use only the “eye candy” mode in this game since the “pure speed” results were non-representative: all the cards, except the junior models of the last generation, were stopped by a kind of fps limiter, which we don’t see with FSAA and AF enabled.



You can’t tell what GPU manufacturer is better in this test. On the one hand, the RADEON X800 XT performs much like the GeForce 6800 Ultra, only achieving a perceptible advantage in the 1600x1200 resolution. On the other hand, the RADEON X800 Pro is a little behind the GeForce 6800 GT.

The GeForce 6800 is greatly impeded by its slow memory (compared to NVIDIA’s other products), so that it even loses the 1600x1200 resolution to the GeForce FX 5950 Ultra! However, both these cards offer some competition to the RADEON 9800 XT. The junior products are on a similar level of performance.

Star Trek: Elite Force 2



NVIDIA’s GPUs have usually been record-setters in this rather old 3D shooter. We see the same story repeat again; the RADEON X800 XT can only surpass the GeForce 6800 series in the 1600x1200 resolution, in the “eye candy” mode.



Unreal Tournament 2004

Torlan level



Without full-screen anti-aliasing and anisotropic filtering all the cards reached the ceiling set by the speed of the system’s central processor. The models of the older generation fell somewhat behind in high resolutions. The ex-king of 3D graphics, the RADEON 9800 XT, is the best among them.



The new-generation GPUs handle the higher load well, except that the slow memory once again hamstrung the GeForce 6800 – it is behind the GeForce FX 5950 Ultra and the RADEON 9800 XT in high resolutions, although they belong to the older generation.

Metallurgy level

It’s similar to the Torlan level, but the fps rates are overall higher – this level is less complex. The GeForce 6800 architecture shows its best qualities here.




After we turn on anti-aliasing and anisotropic filtering, the leader changes – now it is the RADEON X800 XT and RADEON X800 Pro. Meanwhile, the GeForce 6800 series is a worthy rival, never falling too far behind the leaders. The older graphics cards perform similarly to each other.



We should note that the above games make practically no use of the innovations implemented in the graphics processors and cards of the new generation. Let’s see what comes next.

Halo: Combat Evolved



The RADEON X800 XT remains the leader in this game, although we can’t say it beats the GeForce 6800 Ultra heavily. The GeForce 6800 GT has a big advantage over the RADEON X800 Pro that belongs to the same price category. The GeForce 6800 is also confident in its own price sector, outperforming the RADEON 9800 XT and the 9800 Pro that are themselves a little faster than the GeForce FX cards.

Deus Ex: Invisible War

Thanks to its special effects, Deus Ex is an extremely heavy game, although not for the new generation of graphics cards.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

With its efficient execution of pixel shaders and high clock rates, the RADEON X800 XT goes unrivalled through this test. The RADEON X800 Pro doesn’t have it as easy but performs at the level of the GeForce 6800 GT. The GeForce 6800 doesn’t provide even a bare minimum of playability, not to mention the graphics cards of the previous generation.
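Throughout this article the diagrams quote two figures per card: an average fps and a minimal fps. A minimal sketch (in Python, purely for illustration – the actual benchmark tools work from their own frame logs) of how both numbers can be derived from per-frame render times:

```python
# Illustrative only: derive average and minimal fps figures, like those on the
# diagrams, from a list of per-frame render times in milliseconds.
def fps_stats(frame_times_ms):
    fps = [1000.0 / t for t in frame_times_ms]            # instantaneous fps per frame
    avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)  # time-weighted average
    min_fps = min(fps)                                    # worst single frame
    return round(avg_fps, 1), round(min_fps, 1)

# e.g. a run with mostly smooth frames and one stutter
print(fps_stats([16.7, 16.7, 16.7, 50.0, 16.7]))  # -> (42.8, 20.0)
```

Note how a single 50ms stutter drags the minimal fps far below the average – which is why the minimal figure often tells you more about playability than the average one.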

Far Cry

When this article was being prepared, the media got access to the version 1.2 patch for Far Cry, which gives a certain performance boost to GeForce 6800 Ultra graphics cards but deprives the GeForce FX series of any possible optimizations. To offer you the complete picture, we post the results of both versions of the game (1.1 and 1.2):

Far Cry 1.1, MP_Dune

Far Cry is probably the most advanced first-person shooter of our days. It demands everything the graphics subsystem of the computer can deliver, devours all the resources and asks for more.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The new graphics card families from ATI and NVIDIA are both excellent in this game, and it’s rather hard to name a winner. The RADEON 9800 XT/Pro family is good, too, and the RADEON 9800 XT again closely follows the GeForce 6800. The old GeForce FX architecture hardly meets the requirements of Far Cry.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Turning on the “eye candy” mode, we see certain changes in the ranks: the RADEON X800 XT steps up to the top position! The 12-pipeline RADEON X800 Pro doesn’t handle the load well anymore and loses to the GeForce 6800 Ultra as well as to the GeForce 6800 GT. The GeForce 6800 is again found incapable of working quickly with FSAA and AF enabled and only competes with the RADEON 9800 XT.

Far Cry 1.2, Pier

In order to present more adequate results for high-end graphics cards, we decided to include Shader Model 2.0b and Shader Model 3.0 render paths results as well. We used ForceWare 61.45 and CATALYST 4.8 beta drivers for these 3 levels.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.


NVIDIA’s GeForce 6800 series clearly wins in “pure mode”, but once “eye candy” features are enabled, the GeForce products lose their crown to the RADEON X800 XT. On the other hand, the GeForce 6800 GT is much faster than the RADEON X800 PRO, while the RADEON 9800 XT cannot deliver results comparable to the GeForce 6800.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Far Cry 1.2, Research

Both the RADEON X800 and GeForce 6800 series got only the slightest of speed increases from calculating a few “long” pixel shaders instead of a multitude of “short” ones.
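The reason such pass-collapsing helps only modestly can be shown with a rough cost model: every extra rendering pass re-submits the scene geometry, so folding several short per-light shaders into one long shader saves the repeated geometry work but none of the shading work. The constants below are made-up assumptions, not measurements:

```python
# Illustrative model: multi-pass lighting (many short shaders) vs. a single
# pass with one long shader. All cost constants are invented for the example.
GEOMETRY_COST = 1.0      # cost of submitting and transforming the scene once
PER_LIGHT_SHADING = 0.5  # shading cost attributable to one light source

def multi_pass_cost(lights):
    # one full pass (geometry + shading) per light
    return lights * (GEOMETRY_COST + PER_LIGHT_SHADING)

def single_pass_cost(lights):
    # one geometry submission, all lights handled in a single long shader
    return GEOMETRY_COST + lights * PER_LIGHT_SHADING

for n in (1, 2, 4):
    print(n, multi_pass_cost(n), single_pass_cost(n))
```

With one light the two approaches cost the same; the saving grows with the light count, but since shading work dominates on these levels, the net gain stays small – consistent with the slight increases seen here.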




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Neither the ATI RADEON X800 XT nor the RADEON X800 PRO could outperform the competing offerings from NVIDIA, not to mention the RADEON 9800 XT, which remains a galaxy behind the GeForce 6800 in "pure mode"...



Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

...but with FSAA and anisotropic filtering enabled the RADEON X800 XT leads all the way, the GeForce 6800 Ultra and GeForce 6800 GT follow the leader, while the RADEON X800 PRO is behind them all. The performance of the GeForce 6800 drops as a result of slow memory and not very effective RAM management; as a consequence, the RADEON 9800 XT is a bit faster.

Far Cry 1.2, Volcano

“Volcano” is certainly the best example of what Shader Model 3.0 or Shader Model 2.0b optimizations can bring in terms of performance to the latest graphics cards.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

ATI Technologies’ RADEON X800 XT leads on the “Volcano” level, ahead of NVIDIA’s GeForce 6800 Ultra even in “pure mode”, but the RADEON X800 PRO is still outperformed by the GeForce 6800 GT.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

In “eye candy” mode NVIDIA’s high-end card is a bit behind ATI’s, while the results of the RADEON X800 PRO and the GeForce 6800 GT seem to be close. The GeForce 6800 delivers higher results compared to the rivaling RADEON 9800 XT card.

Painkiller




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The RADEON X800 XT and the top-end representatives of the GeForce 6800 family (Ultra and GT models) achieve practically identical results; the RADEON X800 Pro and the GeForce 6800 look good, too. The game remains playable even on the least powerful graphics cards like the RADEON 9600 XT and the GeForce FX 5700 Ultra. Note also a curious fact: all the tested graphics cards have the same minimal fps rate in Painkiller.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

With full-screen anti-aliasing and anisotropic filtering turned on, the new cards feel excellent; you can play this game in resolutions up to 1600x1200. The RADEON X800 XT wins here, while the RADEON X800 Pro is somewhat worse than the GeForce 6800 GT. An exception to the rule is the GeForce 6800, burdened by its slow memory. Anyway, the game’s engine is a well-made piece of machinery, which runs smoothly in the “eye candy” mode even on the last-generation graphics cards.

Tron 2.0

Tron obviously favors the GeForce 6800 series graphics cards unless you use full-screen anti-aliasing and anisotropic filtering. Only the GeForce 6800 GPU loses in performance to the RADEON X800 family.




Among the older cards, on the contrary, the RADEON 9800 XT/Pro family easily wins the test. The RADEON 9600 XT and the GeForce FX 5700 Ultra are equals.



The above-described situation remains the same in the “eye candy” mode with FSAA and AF in the first two resolutions. However, in 1600x1200 the GeForce 6800 Ultra loses its ground to the RADEON X800 XT, the GeForce 6800 GT sinks to the level of the RADEON X800 Pro, while the GeForce 6800 is left to compete with the RADEON 9800 XT.

FireStarter

This game uses a non-standard resolution of 1600x1024, which ATI’s RADEON chips don’t support in the full-screen mode without third-party utilities. So we only offer the results for the first two resolutions.


The GeForce 6800 Ultra is ahead followed by the RADEON X800 XT; the GeForce 6800 GT occupies the third position. The RADEON X800 Pro easily leaves the GeForce 6800 behind but still loses to its own immediate market competitor – the GeForce 6800 GT. As for the graphics cards of the previous generation, we can ascertain the superiority of the RADEON 9800 XT over the GeForce 5950 Ultra and of the GeForce FX 5700 Ultra over the RADEON 9600 XT.


There’s only one change in the “eye candy” mode: the GeForce FX 5700 Ultra is now slower than the RADEON 9600 XT.

Breed

This shooter refuses to run in the 1600x1200 resolution on graphics cards with NVIDIA’s chips – that is, this display mode was just missing in the options, so we offer the results for two resolutions only.



Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The game evidently has a liking for NVIDIA’s GPUs: the RADEON X800 XT and the X800 Pro are even slower than the GeForce FX 5950 Ultra! In the 1280x1024 resolution the RADEON X800 XT improves the situation somewhat, although it doesn’t make it to the level of the GeForce 6800. This fact is hard to explain; at least, we can’t say that the game’s graphics are too complex or technologically advanced.



Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Strangely, the RADEONs don’t look as advantageous in the “eye candy” mode as they usually do. The reason may lie somewhere deep in the game engine’s bowels. Note also that the minimal fps rate is similar for all the graphics cards, so the user shouldn’t feel any great difference between them.

America’s Army 2

We used the latest version of this free online shooter (ver. 2.1), downloaded from the developer’s website.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The RADEON X800 XT/Pro cards run this game a bit faster than the GeForce 6800 Ultra/GT, respectively. The game’s engine is rather simple, without any special treats, so all the cards are found capable of providing the necessary speed in all resolutions.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The advantages of the RADEON X800 architecture show up under a higher load, especially in the 1600x1200 resolution. Without the encumbrance of modern special effects, the game engine only requires fast memory and an efficient use of it – that’s why the GeForce 6800 loses to the GeForce 5950 Ultra again.

Highly Anticipated DirectX 9 Game 1

Seafloor level



There’s a small difference in performance of the GeForce 6800 Ultra and the RADEON X800 XT in this game – about 3-5 fps. The RADEON X800 Pro starts out by repeating the result of the GeForce 6800, but leaves it behind in higher resolutions thanks to faster memory and efficient methods of using it. However, it still cannot reach the GeForce 6800 GT. As for the previous generation of GPUs, the RADEON 9800 and 9600 series are leaders in their classes.

Under 2 level

The GeForce 6800 Ultra and the 6800 GT are far ahead of the RADEON X800 XT and the RADEON X800 Pro, although the difference between the two top-end solutions diminishes in high resolutions.



The RADEON X800 Pro lost to its immediate market rival, but runs faster than the GeForce 6800. The GeForce FX family suffers a bitter defeat in the race with the RADEON 9800 XT/Pro, and even with the RADEON 9600 XT.

Highly Anticipated DirectX 9 Game 2

Danger level



NVIDIA’s solutions are evidently better in this game, but it’s hard to account for this fact. Probably, NVIDIA’s better driver plays its role, since this preliminary version of the game doesn’t have any sparkling special effects. At least we can’t lay the blame on low geometry processing speed anymore – ATI’s new graphics processor is better than NVIDIA’s in this respect, as you know.



With FSAA and AF enabled, NVIDIA’s graphics cards look unsure of themselves, as we’ve seen a number of times, since their memory management algorithms are less efficient. No wonder the RADEON X800 XT is at the level of the GeForce 6800 Ultra here, and the RADEON X800 Pro easily beats the GeForce 6800, competing on equal terms with the 16-pipelined GeForce 6800 GT. As for the advantages of NVIDIA’s new architecture, you can see them in the example of the GeForce 6800, although the RADEON 9800 XT is very close to it in the 1600x1200 resolution.

Escape level



The Escape level has more complex geometry than Danger, so the gap between the topmost models from ATI and NVIDIA is smaller than in the previous case. The game favors graphics cards from NVIDIA, and the GeForce 6800 looks preferable even to the RADEON X800 Pro, although it has worse formal characteristics. The cards of the RADEON 9800 family are the leaders among the previous generation.



The situation changes in the predictable direction in the “eye candy” mode. The RADEON X800 XT bears the higher load easily, and the RADEON X800 Pro can even catch up with the GeForce 6800 GT. The GeForce 6800 once again gives up the fight with the RADEON X800 family. The GeForce FX graphics cards also lose their advantage here.

FPS, Quick Summary

You’ve caught the overall picture: the new graphics processors from NVIDIA and ATI are ideally suited to running next-generation computer games. However, we should acknowledge that ATI’s solutions often look preferable when complex shader-based special effects are in use along with full-screen anti-aliasing and anisotropic filtering.

Well, it’s too early to give any advice concerning such upcoming games as Doom 3, Half-Life 2 and Stalker: Shadows of Chernobyl – everything is in the hands of the game developers. As we remember, NVIDIA has a nice reserve for the future in the form of support for vertex and pixel shaders version 3.0, but we are yet to see if the industry makes any use of it.

Third Person 3D Shooters

Star Wars: Knights of the Old Republic




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The graphics cards based on NVIDIA’s GPUs run this game better than their competitors from ATI. The GeForce FX 5950 Ultra looks good here, while the RADEON 9800 XT and lower models don’t provide a comfortable fps rate.



Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Practically nothing changes with FSAA and AF turned on; only the RADEON X800 XT is slightly better than the GeForce 6800 Ultra, and the RADEON X800 Pro reaches and sometimes outperforms the GeForce 6800 GT. The “oldies” – the GeForce FX 5950 Ultra/5900/5900 XT and RADEON 9800/9600/XT/PRO – all cease to output the necessary fps count.

Splinter Cell: Pandora Tomorrow



The RADEON X800 XT equals the GeForce 6800 Ultra in this game, and the RADEON X800 Pro performs like the GeForce 6800 GT. The 12-pipelined GeForce 6800 GPU can only compete with the RADEON 9800 XT. The GeForce FX 5950 Ultra, 5900 and 5900 XT cards only allow playing in the 1024x768 resolution with any comfort, whereas the GeForce FX 5700 Ultra and RADEON 9600 XT don’t allow even that.

Tomb Raider: Angel of Darkness



This game extensively uses sophisticated pixel shaders to create various visual effects, so the RADEON X800 series shows its best qualities here. The GeForce 6800 cards are behind, but post good results nevertheless (save for the GeForce 6800), unlike the GeForce FX series, which only provides playability in the 1024x768 resolution. The RADEON 9800 XT and the RADEON 9800 Pro, on the contrary, feel at ease and beat the GeForce 6800 card.



The overall picture is the same in the “eye candy” mode, only the fps rates are down.

Prince of Persia: Sands of Time





The GeForce 6800 Ultra looks better than the RADEON X800 XT, whose performance is at the level of the GeForce 6800 GT. The GeForce 6800 competes with the RADEON X800 Pro, and this may be considered NVIDIA’s success. The RADEON 9800 XT is faster than the GeForce FX 5950 Ultra. The rest of the test participants get rather low scores, but still provide playability in the 1024x768 resolution.

Max Payne 2: The Fall of Max Payne



All the new graphics cards, and some of the old ones, are limited by the performance of the system’s central processor in the 1024x768 resolution, but the GeForce 6800 and the RADEON X800 Pro can’t keep up the tempo starting from 1280x1024. The GeForce 6800 is practically exhausted in 1600x1200.

Among the second-tier participants, the RADEON 9800 XT seems preferable to the GeForce FX 5950 Ultra, and the RADEON 9800 Pro to the GeForce FX 5900/5900 XT. The same goes for the junior models: the RADEON 9600 XT is better than the GeForce FX 5700 Ultra in this test.



Anti-aliasing and anisotropic filtering put a higher load on the cards and the GeForce 6800 suddenly slows down, although it doesn’t lose much to the GeForce FX 5950 Ultra. Otherwise, things remain the same: the RADEONs are somewhat faster than their competitors from NVIDIA.

The Lord of the Rings: The Return of the King




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

With FSAA and AF disabled, the new ones from ATI and NVIDIA are slowed down by the central processor’s performance in this game. It is only in the 1600x1200 resolution that we see any differences in the leading group. Particularly, the RADEON X800 Pro starts lagging behind the leaders. Strangely enough, the GeForce 6800, also with 12 pipelines and with much slower memory, suffers a smaller performance hit.

There’s parity in the camp of 8-pipeline GPUs of the previous generation: the results of the RADEON 9800 XT match those of the GeForce 5950 Ultra. The GeForce FX 5900/XT loses to the RADEON 9800 Pro somewhat. The RADEON 9600 XT surpasses the GeForce FX 5700, but in two low resolutions only – in 1600x1200 it evidently feels the lack of memory bandwidth and loses to its rival.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Nothing new happens when the load becomes heavier: the RADEON X800 XT and the GeForce 6800 Ultra go on fighting each other, while the RADEON X800 Pro obviously cannot handle the GeForce 6800 GT; sometimes it even sinks down below the GeForce 6800, which costs $100 less!

We can see a few interesting things in the sector of previous-generation solutions: the RADEON 9800 XT performs like the GeForce 5900/XT, but slower than the GeForce FX 5950 Ultra. The RADEON 9600 XT loses its ground to the attack of the GeForce FX 5700 with its fast memory and high geometry processing speed.

Thief: Deadly Shadows

Similar to Splinter Cell, this game also uses the Bloom effect, which is made incompatible with full-screen anti-aliasing to avoid image quality problems.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Fast execution of complex pixel shaders is the decisive factor in this test, and the RADEON X800 XT/Pro confirm their superiority in this aspect once again. Well, the GeForce 6800 family is not at all hopeless here. Among the older cards, only the RADEON 9800 XT and the 9800 Pro run this game relatively fast.

Hitman: Contracts

This game uses the same engine as Splinter Cell and Thief, which means it doesn’t support full-screen anti-aliasing if you switch the Bloom effect on, so we only have results for the pure speed mode.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The results suggest an abundance of pixel shaders 2.0 in this game, so the GeForce 6800 as well as the FX family cards cannot boast any really high performance even in the 1280x1024 resolution. On the other hand, only the X800 XT is brilliant here among the RADEONs, while the RADEON X800 Pro is merely close to the GeForce 6800 Ultra in performance.

Manhunt




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The RADEON X800 XT and the GeForce 6800 Ultra are the performance leaders in this game, while the RADEON X800 Pro failed this test and lost even to the GeForce 6800. The RADEON 9800 XT was the best among the 8-pipeline GPUs, leaving the GeForce FX 5950 Ultra behind.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

TPS, Quick Summary

Once again ATI’s GPUs show their exceptional effectiveness at doing full-screen anti-aliasing and anisotropic filtering.

The RADEON X800 XT surpassed both 16-pipeline solutions from NVIDIA, and the RADEON X800 Pro managed to keep very close to the GeForce 6800 GT. As for the past generation, the RADEON 9800 XT outruns the GeForce FX 5950 Ultra, nearly catching up with the GeForce 6800. The RADEON 9600 XT seems very poor – in spite of its architectural advantages, it loses to the GeForce FX 5700 Ultra.

Simulators

IL-2 Sturmovik: Aces in the Sky

The new version of IL-2 Sturmovik got improved graphics, particularly support for pixel shaders version 2.0. When enabled, it renders water surfaces with more realism. We set the game to the maximum image quality level to put the biggest load on the graphics cards.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The results – the RADEON X800 XT is 1.5 to 2 times faster than the GeForce 6800 Ultra – are simply astonishing. Even the 12-pipelined RADEON X800 Pro outperforms the flagship of the GeForce 6800 series. The game must be employing all the advantages of the new architecture from ATI: fast execution of pixel shaders, high frequencies of the RADEON X800, fast and efficient memory.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The victory of the RADEON X800 XT is less impressive than in the first test mode, but this card still remains an unrivalled leader with FSAA and AF enabled.

Lock On: Modern Air Combat



The test participants all get very low results in this game – this air sim is a very complex thing, quite unplayable at the maximum graphics quality settings. However, the new GeForce cards look better in this test than the new RADEONs. Among the 8-pipeline solutions, the RADEON 9800 XT has the best result.



The RADEON X800 XT is either a little faster than the GeForce 6800 Ultra or equal to it in the “eye candy” mode. The same goes for the pair of GeForce 6800 GT and RADEON X800 Pro. The GeForce 6800 is once again worse than the GeForce FX 5950 Ultra due to the insufficiently fast memory subsystem.

Microsoft Flight Simulator 2004

In fact, all the graphics cards are stopped by the central processor, with the only exception of the GeForce FX 5700 Ultra. In higher resolutions, the other GeForce FX cards and the RADEON 9600 XT join it. Curiously, the minimal fps rate in this game is very close to the average fps rate.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The top-end members of the new GPU series from ATI and NVIDIA again give out the same number of frames per second, but the GeForce 6800 GT starts slowing down in 1280x1024. In the 1600x1200 resolution, the RADEON X800 XT and Pro get superiority over the GeForce 6800 Ultra and GT, respectively; the GeForce 6800 suffers again from its slow memory.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Among the cards of the previous generation, we see a fight between the RADEON 9800 XT and the GeForce FX 5950 Ultra as well as between the RADEON 9800 Pro and the GeForce FX 5900. The RADEON 9600 XT outperforms the GeForce FX 5700 Ultra in all resolutions, save for 1600x1200 – its slower memory starts playing its negative role here.

X2: The Threat



The RADEON X800 XT seems to be winning in this test, while the RADEON X800 Pro is slightly slower than the GeForce 6800 GT. The GeForce 6800 does well in the low resolution against the competing RADEON 9800 XT, but its lack of fast memory negates its results in the higher resolutions.

As for the graphics cards of the previous generation, the GeForce FX series feels overall better in this game than the RADEON 9800 line due to more effective work with shadows. The GeForce FX 5700 Ultra outperforms the RADEON 9600 XT.



The situation hardly changes in the “eye candy” mode, only the GeForce 6800 Ultra is now better than the RADEON X800 XT in low resolutions.

F1 Challenge 99-02




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

All the cards, except the RADEON 9600 XT, have their results cut short by a certain frame rate limiter in the 1024x768 resolution – it seems like the limiter cannot be disabled at all. In higher resolutions the cards move away from that barrier – all but the GeForce 6800 series that provide an exceptionally high performance here.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

There’s more variegation in the “eye candy” mode but with no sensations: the GeForce 6800 Ultra is a bit faster than the RADEON X800 XT; the GeForce 6800 GT matches the performance of the same RADEON, outperforming the X800 Pro; the GeForce 6800 again feels the lack of fast memory.

In the bottom tier, the GeForce FX 5950 Ultra and 5900 are better than the RADEON 9800 XT in this test, while the GeForce FX 5700 Ultra is somewhat faster than the RADEON 9600 XT.

Colin McRae Rally 04



The RADEON X800 XT Platinum Edition is on top, but the X800 PRO falls behind the GeForce 6800 GT as the resolution grows. The GeForce 6800 gives out good fps rates, better than the RADEON 9800 XT. Among the two older generations of cards, ATI’s products are overall better.



The RADEON X800 Pro catches its breath in the “eye candy” mode and, with only 12 pipelines, manages to outperform the GeForce 6800 GT. The RADEON 9800 XT is overall slower than the GeForce 6800 but perceptibly faster than the GeForce FX 5950.

Sport Simulation Games

FIFA 2004




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

The RADEON X800 series starts off with banners waving, but loses to the GeForce 6800 Ultra/GT in the 1280x1024 resolution and thereafter. This is explained by the relative simplicity of the game engine as well as by the numerous shadows to be drawn. The GeForce 6800 also slows down in 1280x1024 and for the already trite reason – the same slow memory! The RADEON 9600 XT found itself behind the rest of the participants, and quite expectedly so – it has the slowest memory subsystem and the lowest geometry processing speed of all the graphics cards present.





Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

Enabling FSAA and AF widens the gap between the RADEON X800 XT/Pro and the GeForce 6800 Ultra/GT. Even the GeForce 6800 finds itself capable of competing with ATI’s products, although only in 1024x768. The RADEON 9800 XT lags behind the GeForce FX 5950 Ultra as well as the GeForce 6800. The GeForce FX 5700 Ultra loses the 1024x768 resolution to the RADEON 9600 XT but then outruns the competitor thanks to its fast memory.

Real-time Strategies

C&C Generals: Zero Hour



There’s nothing interesting here as none of the graphics cards can show its highest speed due to the restricting influence of the central processor. We only see some differences in high resolutions, but all the new-generation cards still get the same scores.



On enabling FSAA and AF we see the graphics cards on ATI’s chips gaining superiority. In the 1600x1200 resolution, the GeForce 6800 is again behind the GeForce FX 5950 Ultra.

Perimeter




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

You can play Perimeter with some comfort on the RADEON X800 XT only. The other cards don’t provide even the bare minimum of 30 fps. Subjectively, though, the GeForce 6800 Ultra and the RADEON X800 Pro give a similar level of comfort. All of the above refers to the 1024x768 resolution (!), and only the same RADEON X800 XT hits 30 fps in 1280x1024.




Note: minimal fps are marked with white numbers on the diagrams, black numbers represent average fps.

In the “eye candy” mode Perimeter is only playable on the RADEON X800 XT and only in the 1024x768 resolution. Everything else looks more like a slideshow. We haven’t yet met such a demanding game, by the way, but we’re sure there’ll be more such games in the future. It is for them that the computer industry is rolling out the fastest and most advanced graphics cards and processors.

Semi-Synthetic Benchmarks

Final Fantasy XI Official Benchmark 2


The RADEON X800 XT dominates this test, closely followed by the GeForce 6800 Ultra. The RADEON X800 Pro and the GeForce 6800 GT occupy the third and fourth places, respectively. The performance of the GeForce 6800 matches that of the RADEON 9800 XT; the GeForce FX 5700 Ultra is the last one here.

The cards remain in their respective positions; only the RADEON X800 Pro cannot compete with the GeForce 6800 GT anymore. Overall, the RADEON family seems preferable to its counterparts from the GeForce series.

Aquamark3



The RADEON X800 family is less sparkling in Aquamark: the RADEON X800 XT races with the GeForce 6800 GT, while the RADEON X800 Pro mostly lags behind the GeForce 6800. The same goes for 8- and 4-pipeline GPUs: NVIDIA’s solutions are equal to or better than ATI’s ones.



Under a higher load, the RADEON X800 cards show their best – the RADEON X800 Pro competes with the GeForce 6800 GT and even with the Ultra! There’s no clear leader among the 8-pipeline cards: the GeForce FX 5950 Ultra and the RADEON 9800 XT match each other. The RADEON 9600 XT wins in the lightweight category.

Synthetic benchmarks

Futuremark 3DMark03, build 340

Overall Score

This diagram shows that the top-end graphics cards from ATI and NVIDIA score practically the same number of points. Among the junior models, the GeForce 6800 GT outscores the RADEON X800 Pro due to its 16 pipelines. The slowest model of the new generation from NVIDIA, the GeForce 6800, is better than the RADEON 9800 XT but worse than the RADEON X800 Pro.

Let’s examine the results of this benchmark in more detail.

Game Test 1



The first game test is very simple, without any of the newer graphical tricks. There’s no opportunity to show high efficiency at executing pixel shaders here – fill rate is more important! That’s why the RADEON X800 XT is slightly behind the GeForce 6800 Ultra, but the RADEON X800 Pro outperforms the GeForce 6800 due to higher frequencies. The GeForce FX 5950 Ultra is faster than the RADEON 9800 XT for the same reason.
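For reference, the theoretical pixel fill rate the paragraph appeals to is simply the core clock multiplied by the number of pixel pipelines. A quick sketch comparing the two 12-pipeline cards, using their commonly quoted reference clock speeds (treat the figures as assumptions):

```python
# Theoretical pixel fill rate = core clock (MHz) x pixel pipelines, in Mpixels/s.
# Clock speeds below are the widely reported reference figures for these GPUs.
cards = {
    "RADEON X800 Pro": (475, 12),  # MHz, pipelines
    "GeForce 6800":    (325, 12),
}

def fill_rate(mhz, pipelines):
    return mhz * pipelines  # Mpixels/s

for name, (mhz, pipes) in cards.items():
    print(f"{name}: {fill_rate(mhz, pipes)} Mpixels/s")
# With an equal pipeline count, the X800 Pro's clock advantage alone gives it
# roughly a 46% higher theoretical fill rate than the GeForce 6800.
```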



More effective work with the memory subsystem helps the RADEON X800 family outdo the competitors with FSAA and anisotropic filtering enabled.

Game Test 2



There are many shadows to be drawn in the second game test, and NVIDIA’s products are on top here – the GeForce 6800 Ultra is beyond competition. The RADEON X800 XT goes neck and neck with the GeForce 6800 GT, and the RADEON X800 Pro with the GeForce 6800. Among the previous-generation cards we see parity between the RADEON 9800 XT/Pro and the GeForce FX 5950 Ultra.




After we turn on full-screen anti-aliasing and anisotropic texture filtering, NVIDIA’s advantage grows.

Game Test 3



The third game test is nearly the same as the second, so they produce similar results.



Again, we witness the same picture as in the second test.

Game Test 4

Fast execution of pixel shaders is the paramount thing for the fourth gaming test of 3DMark03 – we have witnessed that a number of times. That’s why the new RADEONs are on top here, and the old ones are better than their competitors, too.



The same happens when we enable FSAA in combination with anisotropic filtering.



Overall, the graphics cards on NVIDIA’s GPUs have an advantage in the first three tests of 3DMark03. However, ATI’s products are still superior in the fourth – probably the most important – test!

Conclusion

So we have benchmarked thirteen modern graphics cards in thirty-five computer games. What’s the big picture? Which card is the best buy? Are they future-proof? We’ll try to answer these questions one by one.

 

First of all, this testing session didn’t reveal an absolute leader in the consumer 3D graphics field. ATI Technologies and NVIDIA Corporation both have wide product assortments for every category of users. NVIDIA has claims on technological superiority since its latest graphics processor NV40 features a score of exciting capabilities and technologies. However, a sophisticated and feature-rich GPU architecture doesn’t necessarily mean a success – just remember the NV3x series, which would lose in performance to ATI’s analogous products notwithstanding all its innovations. So we have no winners here, but we still have something to say about products of each price category.

Graphics Cards with a recommended price of $499

Probably the most difficult thing in this review is to find an absolute performance leader among all graphics cards widely available today. Neither of the fastest cards – the RADEON X800 XT and GeForce 6800 Ultra – demonstrated a breathtaking performance advantage in modern games.



The NVIDIA GeForce 6800 Ultra feels at ease at low resolutions and when anisotropic filtering and anti-aliasing are disabled. Besides that, NVIDIA’s new GPU outperforms its rival in older games and – however strange it seems in view of the defeat of the GeForce FX – in a number of pre-release versions of upcoming applications.

Paradoxically, the GeForce 6800 Ultra doesn’t have much need for overclocking. It doesn’t require a frequency of over 400MHz in the games where it outruns its main rival at the nominal clock rates, while in the games where ATI’s GPU is superior, the extra megahertz cannot help the GeForce 6800 Ultra.

It is possible that the GeForce 6800 graphics cards will get some speed boost through additional optimization of the driver and applications themselves, but you may not see it right after new games come out.



The support of Shader Model 3.0 is listed among the advantages of the GeForce 6800 Ultra. It means a slightly higher speed and new special effects. Today, support of Shader Model 3.0 is a weak argument, giving just a small gain in Far Cry, but if this shader version becomes widely accepted throughout the industry, there will be more and more such games and Shader Model 3.0 will have more influence. If this scenario develops rapidly enough, the lack of support for this standard may become a serious disadvantage of the RADEON X800 XT/Pro in comparison to the GeForce 6800 Ultra.

The ATI RADEON X800 XT Platinum Edition delivers excellent speed across the entire spectrum of gaming applications, but loses to the GeForce 6800 Ultra in a number of games. With full-screen anti-aliasing and anisotropic filtering enabled, this chip usually confirms its superiority. If you invest in a RADEON X800 XT, you can rest assured that the speed of games actively using DirectX 9 shaders will be stably high, irrespective of ATI programmers’ efforts to optimize the drivers.



Besides that, the RADEON X800 XT boasts such complementary advantages as small size, low noise level, only one power connector, and only one slot occupied – all in contrast to NVIDIA’s flagship product.

Note also that the RADEON X800 XT doesn’t require a super-high-quality power-supply unit like its competitor does. If you’re going to purchase this powerful graphics accelerator, however, keep in mind that it should be accompanied by plenty of RAM and a powerful central processor to realize its full potential – and these components by themselves may call for a new, more powerful PSU.



The RADEON X800 XT has its own drawbacks, too. Its reference cooling system could be more efficient, and it doesn’t support Shader Model 3.0. The latter may affect its performance in upcoming games that use this shader version. The first game to use this innovation was the 3D shooter Far Cry (see our review).

We think that you can formulate your own requirements for $499 graphics cards and select a product that suits your particular needs – just examine the diagrams for the best results in your favorite games.

Graphics Cards with a Recommended Price of $399

Getting down the price stairs we meet the GeForce 6800 GT and the RADEON X800 Pro, both priced at $399 by their manufacturers. The GeForce 6800 GT seems preferable of the two due to its 16 pipelines and a performance that is only slightly below the level of NVIDIA’s topmost product. However, there is a dilemma here as well: in a number of games, when we enable anisotropic filtering and anti-aliasing, the lead is taken by the ATI RADEON X800 PRO, mostly due to its efficient pixel shader algorithms and high-quality implementation of anisotropic filtering and anti-aliasing techniques.
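The pipeline-count argument is easy to quantify. Here is a minimal back-of-the-envelope sketch, assuming the reference core clocks of the time (roughly 350MHz for the GeForce 6800 GT and 475MHz for the RADEON X800 Pro – figures we did not measure in this review):

```python
# Theoretical peak pixel fill rate = pipelines * core clock.
# The clock figures below are assumed reference specs, not our measurements.
def fill_rate_mpix(pipelines: int, core_mhz: int) -> int:
    """Peak fill rate in megapixels per second."""
    return pipelines * core_mhz

geforce_6800_gt = fill_rate_mpix(16, 350)  # 16 pipes at ~350MHz -> 5600 Mpix/s
radeon_x800_pro = fill_rate_mpix(12, 475)  # 12 pipes at ~475MHz -> 5700 Mpix/s
print(geforce_6800_gt, radeon_x800_pro)
```

Under these assumptions, the RADEON’s higher clock nearly offsets its four missing pipelines, which is why the real-world outcome depends so much on shader efficiency and the quality of the filtering implementations rather than raw fill rate.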



Besides that, many such cards are likely to be operational at the frequencies of the GeForce 6800 Ultra, so the GeForce 6800 GT looks like the optimal choice for users who don’t want to spend more than $400 on a graphics card.



So, the GeForce 6800 GT is your card if you want the highest performance in games and support of Shader Model 3.0 at a discount.



If silence, compactness and power-saving are notions that matter much to you, or if you’re a real hardware enthusiast, consider the RADEON X800 Pro. Modders should be interested in this product since it can be converted into a RADEON X800 XT Platinum Edition with a bit of skill and luck, in which case you’ll get a nice performance boost. ATI’s R420 chip yield is very high, and some fully operational dies go to produce 12-pipeline cards. This is the ground for the conversion, although you should approach the matter soberly and not risk it if you don’t feel confident about the outcome.



Besides the possibility of turning it into a faster product, the RADEON X800 Pro offers effective anisotropic filtering and full-screen anti-aliasing algorithms, which often help it to surpass the GeForce 6800 GT in tests. However, this is still not enough for us to say that this GPU from ATI is overall better than its rival.

Graphics Cards with a Recommended Price of around $299

Lower still, we see a natural clash as the ex-leaders – the fastest GPUs of the previous generation – try to save their dignity against the onslaught of the slowest representatives of the new top-end GPU series from both manufacturers.

The GeForce 6800 stands somewhat aloof from the row of new-generation solutions. With its 12 pipelines, this GPU should have been pitted against the RADEON X800 Pro, but NVIDIA equipped it with slow memory clocked at 700MHz. Thus they reduced the cost of the product, but also its performance. As a result, the GeForce 6800 isn’t well suited to high resolutions and hard modes, since its good NV40 architecture is hamstrung by the slow memory and less efficient methods of using it. Sometimes the GeForce 6800 even loses to the GeForce FX 5950 Ultra, not to mention the RADEON 9800 XT and the X800 Pro. In new games, however, this graphics card feels at ease, especially if you don’t enable full-screen anti-aliasing. At a recommended price of $299 it can make a good buy.
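The memory handicap shows up directly in the raw numbers. A rough sketch follows, assuming a 256-bit memory bus on both cards and an 1100MHz effective memory clock for the GeForce 6800 Ultra (figures we did not verify in this review):

```python
# Peak memory bandwidth = effective data rate * bus width in bytes.
# The bus width and the Ultra's memory clock are assumptions, not measurements.
def bandwidth_gb_s(effective_mhz: float, bus_bits: int = 256) -> float:
    """Peak bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return effective_mhz * 1e6 * (bus_bits // 8) / 1e9

geforce_6800       = bandwidth_gb_s(700)   # 700MHz memory -> 22.4 GB/s
geforce_6800_ultra = bandwidth_gb_s(1100)  # ~1100MHz memory -> 35.2 GB/s
```

Under these assumptions, the plain GeForce 6800 has only about two thirds of the Ultra’s bandwidth, which explains why it struggles in high resolutions and anti-aliased modes, where bandwidth matters most.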



In spite of its not very efficient memory subsystem, the GeForce 6800 feels better than its competitors at 1600x1200. Unfortunately, $300 graphics cards don’t run newer games at a comfortable speed in this resolution, so you should consider 1280x1024 first, and in this resolution the GeForce 6800, the RADEON 9800 XT and even the GeForce FX 5950 Ultra all feel quite confident.



As for alternatives, you have one – the RADEON 9800 XT. In most cases, this graphics card provided performance close to that of the GeForce 6800. On the other hand, it does not feature Shader Model 3.0 support, which may be important for some users – this makes the GeForce 6800 the more advanced product in terms of feature set and possibly more future-proof in terms of technology.



The fate of the GeForce FX 5950 Ultra looks rather vague: this card has less potential than the RADEON 9800 XT but costs more! It can run rather simple games, but doesn’t suit very well for playing modern shooters and simulators rich in pixel shaders. No one is going to make simple game engines just to let the 5950 Ultra live longer – this GPU is rapidly becoming obsolete and seems to be an unreasonable investment.



Right now, the RADEON 9800 XT costs from $380 to $410, but prices will be going down as more new-generation graphics cards appear on the market. So if you can wait for a while, you will have an opportunity to get a fast graphics card at a modest price. Even without Shader Model 3.0 support, the RADEON 9800 XT may be an excellent choice.

Graphics Cards with a Recommended Price of $199 and Below

At the bottom of the stairs we meet the good old GeForce FX 5900 and 5900 XT embracing the RADEON 9800 Pro. What’s your choice if your budget is rather tight but you still want to enjoy modern games? The answer is simple: with the prices being roughly the same, you may want to prefer the RADEON 9800 Pro to the GeForce FX 5900/5900 XT for the same reason you’d prefer a RADEON 9800 XT to a GeForce FX 5950 Ultra.



Then, if you cannot have a RADEON 9800 Pro, and you choose between the RADEON 9600 XT and the GeForce FX 5900 XT, the latter graphics card seems preferable due to its 8 pipelines as well as good overclockability.



And lastly, if you’re choosing between the GeForce FX 5700 Ultra and the RADEON 9600 XT, you should again consider the games you’re playing or intending to play. Due to its architectural peculiarities, the RADEON 9600 XT is better in pixel-shader-heavy games, while the GeForce FX 5700 Ultra shows its best in games where geometry processing and fast work with shadows are important.



Anyway, you can hardly play any modern game on one of these cards at a resolution above 1024x768, especially with FSAA and AF enabled.



A Few Final Remarks

Today’s testing session confirmed that modern computer games are making ever wider use of complex version 2.0 pixel shaders. The geometrical complexity of scenes is growing, too, and this affects the performance of the available graphics cards. That’s why you should start shopping for a new graphics card by identifying the range of games you’re going to play and the video modes you intend to use.

You see that we can’t give one simple recommendation, since each user’s choice depends on many factors, including some we haven’t counted in. The main thing you should have when shopping for a new graphics card is a clear idea of what you want to use it for. In this case, you are unlikely to make a wrong decision.

So far, any recommendations for the PCI Express platform would be premature, but we’re going to discuss this subject in one of our upcoming reviews.