by Alexey Stepin, Yaroslav Lyssenko
09/09/2005 | 12:37 AM
The summer season being over in the northern hemisphere, we all get back from our vacations to work or study and, of course, to playing our favorite games. There's always summer in the virtual jungles of Far Cry, after all!
An appropriate graphics card is needed to get the full satisfaction from newest game releases and we, at X-bit labs, are offering you the results of our comprehensive test session with 17 graphics cards and 30 benchmarks. This review was conceived to be a shopper’s guide for the time being and to make your choice of a graphics card in any price category easier. But before we see how fast popular games run on currently available graphics hardware, we want to remind you in brief what happened in the first half of this year.
The first half of 2005 began quite peacefully, even though new-generation solutions were under development at both NVIDIA Corporation and ATI Technologies. The excitement about the last year's releases of the GeForce 6800 and RADEON X800/X850 had already calmed down, and the first release of 2005 only occurred on the 28th of February, when ATI Technologies introduced its R481 graphics processor, which was in fact an AGP modification of the R480 core.
The Canadian company was also preparing an answer to NVIDIA SLI technology which allowed using multiple GPUs as a single graphics subsystem. SLI was already commercially available then and had even become popular to some extent among the PC enthusiasts crowd. Two GeForce 6800 Ultra or GT cards outperformed the fastest solution ATI offered, RADEON X850 XT Platinum Edition, but the price of the SLI configuration was still rather high.
Thus, there was no obvious leader in that market in early 2005: ATI offered the fastest single graphics card and a wide range of GPUs for both PCI Express and AGP platforms, but NVIDIA solutions were somewhat more advanced technologically, supporting Shader Model 3.0, HDR, etc, and could work in the SLI mode, delivering the highest performance then possible.
The new-generation processors developed by both companies had given rise to numerous rumors and speculations, as usual. First samples of ATI's new chip, codenamed R520, were made back in October or November of 2004, but it is only this fall that the product will become commercially available because ATI ran into problems trying to clock the 90nm GPU at high enough rates. So we will only know the real technical characteristics of the new solution at the time of its official announcement. One thing is certain so far – ATI Technologies' chip will support Shader Model 3.0 and will thus be equal to NVIDIA solutions in this respect.
As for NVIDIA, it rolled out its G70 exactly according to the schedule and its characteristics are not a mystery anymore. We’ll give you a brief summary of the new-generation GPU from NVIDIA in the next section.
Unlike in the same period of last year, when two new graphics architectures were brought to market, only NVIDIA introduced a new architecture in the first half of 2005. On June 22, the company presented the G70 chip and the G70-based GeForce 7800 GTX graphics card to the public.
NVIDIA changed the nomenclature of its GPUs, and the chip that had earlier been referred to as NV47 got the new name of G70, which indicated its belonging to the GeForce 7 series. The NV40 chip had earlier set a record by consisting of as many as 222 million transistors, but the G70 turned out to be a much more complex die – 302 million transistors! That was an astounding level of complexity for consumer graphics hardware, yet quite explicable: the number of pixel pipelines had been increased from 16 to 24 in the G70, and the number of vertex processors from 6 to 8. The number of raster operators remained the same (16 ROPs), however, as the developers must have thought it unreasonable to increase it to 24.
Then, each pixel pipeline got two additional mini-ALUs besides the two existing ALUs and that addition increased the complexity of the chip even more. The die area of the G70 got larger in comparison with the NV40/45, even despite the use of a thinner 0.11-micron tech process. The power consumption and heat dissipation remained almost on the same level, however.
The frequency of the new graphics processor was a little higher than that of the previous model and equaled 430MHz, but only for the pixel pipelines. As we wrote in our earlier review, the G70 chip was the first solution in the consumer 3D graphics market to feature independent clocking of its subunits. That is, the pixel pipelines of the GeForce 7800 GTX work at 430MHz, but the ROPs and vertex processors of that card are clocked at 470MHz. This independent clocking feature gives more flexibility in controlling the GPU than usual and also helps in making a well-balanced product.
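As a rough illustration of what the pipeline counts and clock domains mean for theoretical throughput, here is a back-of-the-envelope sketch. These are peak figures implied by the numbers above only; real-world performance depends on memory bandwidth, drivers and the game itself:

```python
# Back-of-the-envelope peak throughput implied by the clocks and unit counts
# quoted above. Theoretical maximums only, not measured performance.

def peak_rate(units: int, clock_mhz: int) -> float:
    """Peak operations per second in billions (units x clock)."""
    return units * clock_mhz / 1000.0

# GeForce 7800 GTX (G70): 24 pixel pipelines @ 430MHz, 16 ROPs @ 470MHz
g70_texel = peak_rate(24, 430)   # texturing/shading throughput, Gtexels/s
g70_pixel = peak_rate(16, 470)   # raster output, Gpixels/s

# GeForce 6800 Ultra (NV40): 16 pipelines and 16 ROPs @ 400MHz
nv40_texel = peak_rate(16, 400)
nv40_pixel = peak_rate(16, 400)

print(f"G70:  {g70_texel:.2f} Gtexels/s, {g70_pixel:.2f} Gpixels/s")
print(f"NV40: {nv40_texel:.2f} Gtexels/s, {nv40_pixel:.2f} Gpixels/s")
```

The arithmetic shows why the 7800 GTX pulls ahead most in shader-heavy scenes: texturing/shading throughput grows by roughly 60%, while raw pixel output grows far less.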
Generally speaking, the G70 was not a completely new solution, despite the fact that NVIDIA claimed otherwise. From the architectural standpoint this chip is rather a greatly improved NV40/45. The functionality of the new chip remained the same (i.e. the widest available on the market), but its performance improved considerably (you can refer to our special report for more details about the G70 architecture).
The new family of graphics cards from NVIDIA, based around the G70 GPU, was quite logically named GeForce 7800. Right now this family includes two graphics card models with the following characteristics:
We will surely see new products in this family in the near future, but so far the rest of NVIDIA's product line-up is represented by graphics cards based on NV40/45 and NV41/42 chips. The latter have 12 pixel pipelines and seem to differ in tech process alone (the NV42 is a 0.11-micron chip).
The GeForce 7800 GTX graphics card was an ingeniously designed product. Its power consumption was only 3 watts higher than that of the GeForce 6800 Ultra, but the newer card was equipped with an almost silent single-slot cooling system on heat pipes with “smart” control over the fan speed. The card wasn’t much larger in size (just a single centimeter longer than the predecessor), while the power requirements for it were even relaxed: NVIDIA recommended a 480W and higher power supply for the original GeForce 6800 Ultra, while the GeForce 7800 GTX is satisfied with 350W and higher units that can output 22 amperes on the +12V rail. The new card exists only in a PCI Express version, and its AGP version is unlikely to ever appear – the ATI RADEON X850 XT Platinum Edition AGP (R481) will probably remain in history as the fastest solution with the AGP interface.
The GeForce 7800 GTX met no rivals as concerns speed, using the power of its 24 improved pixel pipelines to beat the RADEON X850 XT Platinum Edition in nearly all gaming tests. Two GeForce 7800 GTX cards joined in a SLI system also set a new record as the fastest consumer 3D solution in the world. We won’t dwell much on the performance characteristics of the GeForce 7800 GTX as we already wrote a special review on this subject.
Quite importantly, the problem of market availability of the new product never arose. NVIDIA had learned much with the GeForce 6800 Ultra and made the GeForce 7800 GTX available for purchase almost on the day it was announced, even though slightly overpriced. Right now the price has stabilized and new cards can even be found at a lower price than the officially recommended $599. A side effect of that laudable promptness in bringing the new product to market was that all GeForce 7800 GTX cards come out almost identical irrespective of their manufacturer, differing only in the design of the cooler casing. There's nothing wrong with that, however, as the new cooling system from NVIDIA is quiet and highly efficient for the G70 chip. So, NVIDIA again got the crown of a market leader, but ATI didn't react immediately as in the previous year. GeForce 7800 GTX and GT cards found themselves competitor-less in their market sector.
As for other NVIDIA-related news in the first half of this year, we can mention the further improvement of the multi-GPU SLI technology. It has become compatible with more games, has acquired new rendering modes, and now offers the opportunity to use two graphics cards in an SLI configuration without the bridge adapter. Besides that, NVIDIA developed and implemented SLI-exclusive high-quality antialiasing modes (we are going to publish a review of these modes soon).
That’s all about NVIDIA. Now let’s recall what happened in the ATI camp in the same time stretch.
ATI Technologies entered the year 2005 with the widest range of products for any user, from the high-performance RADEON X850 XT Platinum Edition to the low-end RADEON X300 and RADEON 9550. Its top-end graphics cards were only inferior in performance to SLI configurations made out of two GeForce 6800 Ultra or GT cards. The only drawback of ATI solutions was the lack of support for Shader Model 3.0, but that was hardly a serious problem as few available games required it.
At the beginning of May, the Canadians launched another new product, the RADEON X800 XL 512MB, but it was hardly a significant event. As our tests proved, the 512 megabytes of memory this graphics card is equipped with do not improve its performance much. It is only in Half-Life 2 and in the F.E.A.R. demo that we can see some performance gains, and only in the highest resolutions. So much memory might be of some help in resolutions above 1600x1200, but the performance of the RADEON X800 XL GPU would become a bottleneck then. Moreover, a majority of CRT monitors do not support resolutions above 1600x1200, while mass-produced TFT panels are limited to 1280x1024. So, the RADEON X800 XL 512MB turned out to be a niche product targeted only at enthusiasts who just wanted to have a graphics card with 512MB of memory and were ready to pay $449 for it.
On the last day of May ATI also announced its own vision of a multi-GPU technology. Long known under the codenames of AMR and MVP, the new technology was called CrossFire in the end. Two new chipsets, RD400 and RD480, were introduced alongside for building CrossFire-compatible mainboards for Intel and AMD processors, respectively.
Regrettably, the flexibility of the CrossFire technology fell short of what had been rumored and expected. ATI graphics processors don't have any built-in logic that would allow them to join in pairs, so CrossFire is implemented by means of an external chip called the Compositing Engine. This engine is responsible for stitching together the image fragments rendered by the different cards. RADEON X800 and X850 graphics cards that carry that chip acquired the words "CrossFire Edition" in their names. In effect, graphics cards were divided into "masters" and "slaves": the master was equipped with a special DMS connector the slave graphics card was connected to via an external cable.
So, CrossFire proved to be even less configuration-flexible than NVIDIA SLI. With SLI, you only needed a couple of identical cards, while ATI's technology required a special CrossFire Edition card to be installed in the system. NVIDIA SLI was implemented not only in high-end GeForce 6800/7800 cards but also in the cheaper GeForce 6600 GT (and currently in the GeForce 6600). CrossFire, on the contrary, was limited to the RADEON X850 and X800 families. In other words, the following graphics card models were announced:
It was not a drawback as two mainstream graphics cards could always be replaced with a single high-performance one, but this approach left no trace of the alleged flexibility of ATI CrossFire. By the way, a 12-pipelined RADEON X850 or X800 can also be used as a “slave” device, but the “master” card will then disable its four “surplus” pipelines, giving you a total of 24 pixel pipelines.
ATI did implement a few exciting things in CrossFire like the SuperTiling rendering mode in which the frame is divided into squares of a 32x32 pixels size and each card renders half of these squares. Another innovation is the Super AA mode that improves the antialiasing quality using the combined power of two graphics cards. Well, we already dedicated a separate review to the CrossFire technology, so you can refer to it for details, if you wish.
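The SuperTiling scheme described above can be sketched in a few lines. This is a hypothetical illustration of the checkerboard tile assignment, not ATI's actual driver logic:

```python
# Hypothetical sketch of SuperTiling-style work splitting: the frame is cut
# into 32x32-pixel tiles, and the tiles are dealt out to the two GPUs in a
# checkerboard pattern so that each card renders half of them.

def assign_tiles(width: int, height: int, tile: int = 32):
    """Map each tile coordinate (x, y) to GPU 0 or GPU 1, checkerboard-style."""
    cols = (width + tile - 1) // tile   # tiles per row (round up)
    rows = (height + tile - 1) // tile  # tiles per column (round up)
    return {(x, y): (x + y) % 2 for y in range(rows) for x in range(cols)}

tiles = assign_tiles(1600, 1200)
gpu0 = sum(1 for gpu in tiles.values() if gpu == 0)
print(f"{len(tiles)} tiles: GPU0 renders {gpu0}, GPU1 renders {len(tiles) - gpu0}")
```

The appeal of such a split is load balancing: because the tiles are small and interleaved, both cards get a roughly equal share of the expensive regions of the frame, unlike a simple top/bottom split.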
One unpleasant thing about CrossFire is that since the announcement it has remained largely on paper. It seems that ATI’s multi-GPU technology will only come to market for real with the R520 chip. Until that moment there is just no sense in promoting CrossFire because a system with two RADEON X850 cards will hardly be faster than a single GeForce 7800 GTX, but will cost much more money.
The second-tier graphics developers presented nothing particularly interesting to the public in the past half-year. Two events can be mentioned, though. On March 14, S3 Graphics announced its entry-level GammaChrome S18 processor which, however, never really took off for the same reasons that had earlier ruined the DeltaChrome: driver-related problems and inadequate support of modern games. As a result, none of the major graphics card and computer manufacturers wanted to have anything to do with that product. You can read our review of the S3 GammaChrome S18 Pro here.
On July 13, Matrox Graphics, another once-successful 3D graphics company, announced the world's first graphics card with the PCI Express x1 interface. The Millennium G550 PCIe is in fact a PCI Express x1 version of the Matrox Millennium G550 graphics card that was originally released back in 2001. Of course, 3D performance is not the point here; this solution is intended for systems without a PCI Express x16 port, such as servers. It also provides an opportunity to connect more than two monitors to the same machine simultaneously. The price of this niche product was set at $139.
So, here are the three most important events in the desktop graphics world that happened in the first half of the current year:
It seems NVIDIA’s position is stronger today. Yes, it is, but only from the technological point of view. What’s the market situation at large?
The market at large grew in the first half of 2005 in comparison with the same period of 2004.
Intel still had the biggest share of sales (43% of the whole graphics market) in the second quarter, while ATI's and NVIDIA's shares were 26% and 16%, respectively. Intel and ATI Technologies performed just like in the previous quarter, but NVIDIA's share shrank by 2%, mostly due to the company's failure in the low-end sector.
Yet if we only talk about discrete graphics solutions, there are no changes: NVIDIA still holds 41% of the market and ATI has 56%. The remaining 3% are shared by smaller suppliers like S3 Graphics, Matrox Graphics, XGI and others.
Meanwhile, NVIDIA has got stronger in the top-end market where it now has 73% (3% more than in the first quarter). This percentage results from the mass shipments of the GeForce 7800 GTX and from certain popularity of SLI solutions. ATI’s performance in this market sector has been worse: only 27% of the market, 3% down since the previous quarter.
On the other hand, ATI Technologies strengthened its position in the mainstream DirectX 9 compatibles sector, holding now 56% of it against NVIDIA's 46%. That's quite natural since the Canadian company offers the widest range of mainstream and performance-mainstream graphics cards and collaborates actively with OEMs. ATI's pricing policy is very aggressive, too.
The overall picture seems clear enough: NVIDIA is the leader in the top-end sector, enjoying its technological superiority, whereas ATI has opportunities to expand its influence in the mainstream sector. This situation will probably remain so until mass shipments of R520-based products begin. We shouldn't forget, however, that a market situation can't change dramatically in a single moment, so we expect ATI's market share to shrink a little further as the company delays commercial shipments of new products.
The chief goal of this review is to provide some bearings for people who are shopping today for a new graphics card (as a standalone device or as part of a new PC). We wanted to give this review more practical value by including as many popular games as possible, so that you have a full picture of the performance of currently available graphics cards. 26 games of various genres, 2 semi-synthetic and 2 synthetic benchmarks were used, giving a total of 30 test applications. The full list follows below.
First Person 3D Shooters:
Third Person 3D Shooters:
We think these tests suffice to make this review helpful as a shopping guide. You may note that 3D shooters with a first-person view have the most positions in the list. We think it proper since it is in this genre that most new games come out, and new 3D technologies are polished there, too.
PCI Express platforms are steadily ousting AGP ones from the market and upcoming graphics solutions are unlikely to support the latter interface, so we decided to include only PCI Express devices as the most likely candidates for purchase.
We used the games' integrated benchmarks where possible. If a benchmark could produce the minimal fps rate besides the average one, we put it into the table of results and marked it in white. Games without built-in benchmarking tools were tested using the FRAPS utility, and the minimal fps rates are also indicated in the diagrams.
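Where a FRAPS-style log of per-frame render times is available, the average and minimal fps figures follow from simple arithmetic. A minimal sketch, using hypothetical frame-time data rather than FRAPS' actual internals:

```python
# Derive average and minimal fps from a list of per-frame render times
# (in milliseconds), the way a FRAPS-style frame-time log is post-processed.

def fps_stats(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s  # frames divided by total time
    min_fps = 1000.0 / max(frame_times_ms)   # the single slowest frame
    return avg_fps, min_fps

# Hypothetical 4-frame sample: three fast frames and one 50ms stutter.
avg, minimum = fps_stats([10.0, 10.0, 10.0, 50.0])
print(f"avg: {avg:.1f} fps, min: {minimum:.1f} fps")
```

The sample shows why we report the minimal fps at all: an average of 50 fps can hide a stutter that momentarily drops the game to 20 fps, and it is the minimum that determines playability.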
We tested graphics cards with the PCI Express x16 interface on the following testbed:
Sticking to our standard testing procedure, we set up the drivers from ATI and NVIDIA in the following way:
NVIDIA ForceWare 77.72:
ATI CATALYST 5.7:
We turn on full-screen antialiasing and anisotropic filtering from the game menu, if possible. Otherwise, we force the necessary mode from the driver. We do not edit the games’ configuration files. We select the highest graphics quality settings in each game, the same for graphics cards from ATI and NVIDIA, but choose the rendering mode depending on the capabilities of the graphics card. If a game supports Shader Model 3.0, we use this mode for NVIDIA solutions. ATI cards work in Shader Model 1.1/2.0/2.0b mode depending on the game.
We included 16 graphics cards with the PCI Express interface into this test session, including SLI configurations. Here’s a full list of the cards with links to their respective reviews:
Ultra High End
Entry-Level (below $149)
The GeForce 6600 and the RADEON X700 graphics card models were emulated by down-clocking the GeForce 6600 GT and RADEON X700 PRO, respectively. Unfortunately, we didn’t have a RADEON X850 PRO for our tests, so we could not compare this 12-pipelined solution with the 16-pipelined RADEON X800 XL and GeForce 6800 GT.
It takes a while to prepare a truly comprehensive review, so we are a little behind the newest releases. We didn't have time to test the recently announced ATI RADEON X800 GT and NVIDIA GeForce 7800 GT for this review, even though the latter is present in some of the diagrams.
This is a representative of the very popular genre of tactical 3D shooters that focus on planning and teamwork rather than on the gamer's reflexes and shooting skill. Unlike many projects of that kind, Battlefield 2 is a large-scale game and, as a result, is rather sluggish even with one gigabyte of system memory. The gaming process involves using ground vehicles and watercraft as well as helicopters and jets, so you shouldn't wonder at the high memory requirements – you'll want 2 gigabytes or even more to play with maximum comfort.
The game is very appealing visually; its numerous special effects make it beautiful, yet not as demanding as, for example, F.E.A.R.
Since the game depicts large-scale battles, the CPU unavoidably becomes a bottleneck at some point. We can see this with the top-end graphics cards like the GeForce 7800 GTX and the SLI platforms with two GeForce 7800 GTX or GeForce 6800 Ultra cards.
In a lower sector the GeForce 6800 Ultra and GT are faster than the RADEON X850 XT Platinum Edition and X850 XT in resolutions above 1024x768, but the GeForce 6800 is quite expectedly worse than the RADEON X800 XL, the latter having 16 pixel pipelines and costing $100 more.
And lower yet, the GeForce 6600 GT leaves the RADEON X700 PRO behind thanks to the high frequencies – 256 megabytes of onboard memory give no advantage to the RADEON, at least in the “pure speed” mode. The RADEON X700 rules in the entry-level sector as it works at 400/700MHz frequencies (GPU/memory) against the 300/500MHz of GeForce 6600.
It’s all different in the “eye candy” mode. The SLI platform with two GeForce 7800 GTX remains on top, but the RADEON X850 XT PE and the RADEON X850 XT are ahead of the GeForce 6800 Ultra and even leave the single GeForce 7800 GTX behind in high resolutions. This is one of those cases when an efficient graphics memory subsystem is more important for the end result than the architectural features of a GPU. The same is true for the less advanced graphics card models.
Brothers in Arms belongs to the same genre as Battlefield 2, but has a World War II setting. Not as large-scale as Battlefield 2, this game is still very beautiful with numerous DirectX 9 effects and highly detailed and superbly animated characters. If you like games about World War II, you will surely appreciate the unique atmosphere created in this game.
Brothers in Arms: Road to Hill 30 uses a modified Unreal Engine 2, but for some reason prefers the GeForce 6/7 architecture. You can see that everywhere except for the performance-mainstream class, where the more expensive RADEON X800 XL is slightly faster than the GeForce 6800 just because it has more pixel pipelines.
The SLI configuration with two GeForce 6800 cards is a little ahead of the GeForce 6600 GT SLI – by not more than 6-8fps.
It’s all roughly the same in the “eye candy” mode: even the RADEON X850 XT Platinum Edition cannot yield a playable frame rate in resolutions above 1280x1024. The GeForce 7800 GTX, on the other hand, allows playing this game with enabled FSAA and anisotropic filtering even in 1600x1200.
As for the SLI configuration based around GeForce 6800 Ultra and GeForce 7800 GTX, they are beyond competition. Note also that the SLI platforms with GeForce 6800 and GeForce 6600 GT cards have almost the same performance in the “eye candy” mode.
Using per-pixel lighting, normal maps and realistic stencil shadows, this shooter runs faster on the GeForce 6/7 architecture, which, as you know, handles shadows excellently. That is why NVIDIA-based cards triumph over their ATI rivals here. The only exception is the GeForce 6800 – RADEON X800 XL pair: the higher performance of the (more expensive) RADEON counterbalances the architectural advantages of the GeForce. SLI technology behaves inconsistently here: the two GeForce 7800 GTX perform just superbly, but the SLI configurations on the GeForce 6800 Ultra/GT and 6600 GT don't enjoy any big advantage over the corresponding single cards.
It’s all roughly the same in the “eye candy” mode, except that the RADEON X800 XL is noticeably faster than the GeForce 6800, while the RADEON X700 is ahead of the GeForce 6600. In the other product classes NVIDIA solutions are still superior to their respective opponents from ATI Technologies.
This is one more tactical 3D shooter. This time you lead a four-man squad of U.S. marines along the streets of Beirut, the capital city of Lebanon. The game is based around a unique engine from Destineer Studios that supports volumetric shadows, normal maps and per-pixel specularity. The graphical aspect of First to Fight is up to the level of recent projects, even though we don’t see any particular innovations. This is just another product of quality targeted at the fans of the genre.
The game is not very demanding on the graphics subsystem and runs rapidly even on fixed-function T&L cards. The ATI RADEON X850 XT Platinum Edition and X850 XT perform better overall in First to Fight than the GeForce 6800 Ultra and GT, but the GeForce 6600 GT quite successfully contends with the RADEON X700 PRO in the lower class.
There are no problems with NVIDIA SLI technology in this game. It ensures a hefty performance gain in high resolutions.
The same can be said again about the “eye candy” mode. ATI solutions even enjoy a bigger advantage here, again except the RADEON X700 PRO that is a little behind the GeForce 6600 GT – the clock rates of the GPU and memory still remain the major performance-influencing factor.
Counter-Strike: Source uses the same engine as Half-Life 2 (refer to our Half the Way to Half-Life 2: Counter Strike: Source Benchmarked review for details).
The more advanced graphics cards hit the performance ceiling set by the central processor in the "pure speed" modes. In higher resolutions, however, you can discern that the RADEON cards are one step ahead of the same-class GeForce cards, save for the GeForce 6600 GT.
The GeForce 7800 GTX SLI, GeForce 7800 GTX and GeForce 6800 Ultra SLI deservedly occupy the top positions on the podium, but ATI RADEON X850/X800 enjoy an advantage in the lower product categories.
The Piranesi scene is somewhat more difficult, but produces the same results: the ATI products are better than their respective rivals from the NVIDIA camp, save for the RADEON X700 PRO.
When we turn on FSAA with anisotropic filtering, even the GeForce 6600 GT which has earlier been faster than the RADEON X700 PRO falls behind it. The gap between these two cards amounts to 15% in 1600x1200.
SLI technology works fine in Counter-Strike: Source, but there's not much sense in using it with the GeForce 7800 GTX – the performance of the system CPU becomes the limiting factor. As for the GeForce 6800 Ultra, you can achieve the level of the single GeForce 7800 GTX by joining two such cards. The less advanced SLI configurations are not that efficient. For example, the GeForce 6600 GT SLI performs worse than the single GeForce 6800 GT, so that configuration is hardly justified, either.
There’s a frame rate limiter set at 100fps in the single player mode of Doom 3. The senior SLI configurations and the single GeForce 7800 GTX reach this point in lower resolutions; the GeForce 6800 Ultra stops a mere 5-6fps short of it. You should already know from our reviews that the game engine from id Software runs better on the GeForce 6/7 architecture, so there is no need for long comments. The diagrams clearly illustrate the superiority of NVIDIA’s solutions in this game.
The same goes for the results we got on the same level with enabled full-screen antialiasing and anisotropic filtering. Note, however, that only top-end solutions ensure a playable speed in high resolutions here. Owners of less advanced graphics cards have to choose between playing in a high resolution or with enabled antialiasing.
The above-mentioned frame limiter does not work in the multiplayer mode, so the topmost-class solutions yield close to 200fps on the d3dm4 map. The general picture is the same as in the previous case: NVIDIA GeForce 6/7 cards beat their opponents everywhere.
There are no big changes in the “eye candy” mode: NVIDIA is still victorious. The new-generation graphics card GeForce 7800 GTX, alone and as a SLI pair, and the GeForce 6800 Ultra SLI are especially brilliant in this test. The products from the GeForce 6600 and RADEON X700 families only provide a playable frame rate in 1024x768, just like they did on the Hellhole map.
The open vistas of the Pier map lead to the CPU rather than the graphics card becoming the main performance-limiting factor as you can see with all the top-end solutions. In the lower categories the GeForce 6600 GT triumphs over the RADEON X700 PRO again, while the GeForce 6600, on the contrary, is defeated by the RADEON X700.
The “eye candy” mode clearly shows which card is better for playing Far Cry. The RADEON X850 XT Platinum Edition takes top places in all resolutions while the GeForce 7800 GTX can only compete with the RADEON X850 XT, a less powerful model from ATI. SLI technology, however, helps NVIDIA strike back – the GeForce 6800 Ultra SLI performs superbly in high resolutions, being only inferior to the SLI configuration with two GeForce 7800 GTX cards. The GeForce 6600 GT is faster than the RADEON X700 PRO, but only by 1-3fps. Anyway, these mainstream cards don’t allow playing this game at the “eye candy” settings with comfort.
Enclosed spaces are typical for the Research map and this affects the standings of the participating cards: the senior RADEON cards and the GeForce 6800 Ultra have similar speeds. The RADEON X800 XL is slower than the GeForce 6800 GT but faster than the 12-pipelined GeForce 6800.
The GeForce 6600 GT is the best choice in the mainstream sector, while the RADEON X700 with its higher frequencies is preferable to the GeForce 6600 among the entry-level solutions.
Nothing changes in the "eye candy" mode. Moreover, all RADEON X850/X800 cards and all GeForce products, except for the 12-pipelined GeForce 6800 and the 6600 series, ensure a rather high level of performance even in 1600x1200.
Graphics cards no weaker than the RADEON X800 XL and the GeForce 6800 GT will most likely be required to enjoy the beauty of this game. Well, we’ll only know it for sure when the final version of F.E.A.R. is released.
If you usually play with enabled full-screen antialiasing, you may be disappointed here: the GeForce 7800 GTX and GeForce 6800 Ultra based SLI configurations are the only solutions to deliver a playable frame rate in that case. Even the single GeForce 7800 GTX can’t cope with F.E.A.R. in the “eye candy” mode, despite the power of its 24 pixel pipelines.
It’s hard to tell which card is better in the “pure speed” mode, yet the members of the RADEON X850/X800 families seem to be somewhat faster than the GeForce 6800 Ultra/GT. As usual, the GeForce 6600 GT achieves a win for NVIDIA in the mainstream sector, having higher frequencies than its immediate opponent.
The overall standings remain the same in the “eye candy” mode, but we can now evaluate the performance of the ultra-high-end products. The SLI platforms with GeForce 7800 GTX and 6800 Ultra and the single GeForce 7800 GTX are competitor-less, of course, and the latter isn’t much slower than the GeForce 6800 Ultra SLI. Note also that the RADEON X700 PRO overtakes the GeForce 6600 GT in 1600x1200 resolution.
The previous test scene contained a trip on a speedboat through the Half-Life 2 world, but the d3_c17_02 record shows a battle between Dog, Gordon Freeman's robot companion, and a squad of Combine soldiers in one of the narrow streets of City 17. Though the character of the scene is different, the results of the test are nonetheless similar to what we've seen above.
The speed ceiling goes at about 88fps here and is easily reached in all resolutions by the SLI pair of two GeForce 7800 GTX cards. The single GeForce 7800 GTX stops a little short of that mark in 1600x1200. As for the rest of the participating cards, ATI RADEON cards enjoy a bigger advantage over the same-class GeForce cards in the "eye candy" mode than at the "pure speed" settings. Even the GeForce 6600 GT is no exception.
Nearly all the graphics cards included in this review have the same speed in this game, being limited by the system central processor. The products from the GeForce 6600 and RADEON X700 series make an exception.
We can compare even high-performance products in high resolutions after we have enabled full-screen antialiasing and anisotropic filtering. Since Painkiller prefers cards with a high GPU clock rate, it is natural that the RADEON X850 XT and XT Platinum Edition beat the GeForce 6800 Ultra here. The GeForce 6600 GT, however, is only 1fps ahead of the RADEON X700 PRO despite having higher clock rates.
Pariah is a rather ordinary sci-fi shooter, a bit resembling Halo, with a hackneyed plot but beautiful and complex visuals. The latter fact makes it a good benchmarking tool. The game is based on a modified Unreal Engine with various post-effects rendered with the help of pixel shaders.
The game obviously prefers graphics cards from ATI Technologies in low resolutions: the senior RADEON models are even faster than the GeForce 7800 GTX! In higher resolutions the G70-based products gain the lead, but the RADEON X850/X800 are still ahead of their market competitors. In its price category the GeForce 6600 GT is far ahead of the RADEON X700 PRO, the gap amounting to 30-35% in high resolutions. SLI technology ensures a colossal, nearly twofold performance gain.
This cyberpunk shooter has a strong resemblance to Deus Ex in the plot and style. Unfortunately, it was ported from the PlayStation 2 console, so you won’t find high-resolution textures here (well, this defect is well masked with an abundance of shader-based special effects). Project Snowblind is a typical representative of its genre, with rich character upgrade opportunities but also with a rather dull “kill ‘em all” gameplay.
SLI technology isn’t compatible with this game: the performance of a SLI configuration is the same as that of a single card. The GeForce 7800 GTX firmly holds its first place – the pixel shader-heavy game is a natural playground for the G70 chip with its 24 improved pixel pipelines. The GeForce 6800 Ultra also looks very nice, despite being slower than the RADEON X850 XT/XT Platinum Edition. Graphics cards based on NVIDIA chips lose their battle only in the low-end category where the RADEON X700 beats the GeForce 6600 due to the difference in their clock rates. Note also how the GeForce 6800 loses the first two resolutions to the RADEON X800 XL, but overtakes it in 1600x1200.
Using such effects as Bloom, the game can still work correctly in FSAA modes. The RADEON X850 family do better in this test, getting very close to the GeForce 7800 GTX – the gap is no more than 6-8fps, the absolute frame rates being about 50-60fps. So, ATI RADEON cards once again confirm their efficiency under high graphics memory loads, i.e. in high resolutions with enabled full-screen antialiasing.
Unreal Tournament 2004 is an old game and not a hard trial for modern graphics hardware. The participating cards have almost the same speeds in the "pure speed" mode, except for the RADEON X700 and the GeForce 6600, of which the ATI solution is the faster.
The “eye candy” mode allows comparing top-end graphics cards, too, but only in high resolutions. The RADEON X850/X800 cards have higher speeds than their lower-frequency competitors: the game not featuring any special technological tricks, the sheer speed of the GPU and the number of pixel pipelines determine the winner.
It’s different on the Metallurgy map: the GeForce 6800 Ultra competes with the RADEON X850 XT, and the GeForce 6800 GT/6800 with the RADEON X800 XL. The GeForce 6800 is even a little faster than the GeForce 6800 GT despite having fewer pixel pipelines. The only defeat NVIDIA suffers in this test is the GeForce 6600 losing to the RADEON X700. This map isn’t as large as Torlan and loads mostly the vertex processors of the card, so the frequency of the GPU is the main performance-influencing factor here.
The RADEON cards regain their advantage in the “eye candy” mode, but the RADEON X700 PRO still cannot overtake the GeForce 6600 GT. Since Unreal Tournament 2004 isn’t a very demanding game by today’s standards, only the RADEON X600 and GeForce 6600 don’t allow playing it with comfort in high resolutions of the “eye candy” mode.
Unlike Project: Snowblind, Hitman: Contracts can’t work normally if you enable FSAA and the Bloom effect simultaneously. The game prefers graphics processors with high pixel shader performance, and current solutions from both GPU makers fit that description. So, the GeForce 7800 GTX stands firmly on the top of the podium, and the GeForce 6 family wins the low resolution. In higher resolutions, however, the RADEON solutions gain the lead. SLI technology fails with this game, bringing no positive effect or even reducing the performance.
While there’s parity between solutions from ATI and NVIDIA in every product category, the GeForce 7800 GTX goes unrivalled. The game isn’t too hard, so any graphics card from our list will suffice, excepting the GeForce 6600 and the RADEON X600 XT that have a low GPU clock rate and an obsolete architecture, respectively. The value of the SLI effect here amounts to 15-20%.
The GeForce 6/7 cards were benchmarked in the Shader Model 3.0 mode here, so they are quite naturally slower than the RADEONs that use Shader Model 1.1. Yet the visuals are much nicer in the Shader Model 3.0 mode, so graphics cards that use it are preferable for playing Chaos Theory. Note the highly demanding character of this game – the cheaper solutions yield a playable frame rate in 1024x768 only, while owners of GeForce 6600 and RADEON X600/X700 cards will have to reduce the level of detail.
The “eye candy” results are overall similar to the “pure speed” ones. The GeForce 7800 GTX and the GeForce 6800 Ultra SLI are the only solutions to deliver a satisfactory performance in 1600x1200. And only one solution – the SLI configuration with two GeForce 7800 GTX cards – provides a really high speed.
Full-screen antialiasing is automatically disabled here when the Bloom effect is on to avoid image-quality problems. The GeForce 6 family rules in the lowest resolution, but the RADEON X850/X800 pull ahead as early as 1280x1024. Moreover, the RADEON X800 XL even matches the performance of the GeForce 7800 GTX in 1600x1200! SLI technology works correctly, but produces a very small performance gain in this game.
The game depends rather too much on the CPU performance. As soon as you go out into the streets, the frame rate goes down below 50-52fps whatever graphics subsystem you may have. The RADEON X700 PRO, RADEON X700 and GeForce 6600 cards are too weak to reach that speed ceiling.
The “eye candy” mode makes a better comparison possible, yet the performance of the topmost solutions is still the same, about 40-50fps. In 1600x1200, however, you can see that the RADEON X850 XT Platinum Edition, RADEON X850 XT and RADEON X800 XL feel better in this game, the GeForce 7800 GTX being 3-5fps behind. We couldn’t observe any positive SLI effect, because of the game being too CPU-dependent.
Thanks to their high graphics core frequency, the RADEON X850 XT and XT Platinum Edition manage to leave the GeForce 7800 GTX behind in low resolutions, but the latter regains its top place in higher display modes. The GeForce 6600 GT outperforms the RADEON X700 PRO because of the high GPU frequency, too.
The “eye candy” mode brings interesting results: the RADEON X850 XT and XT Platinum Edition are ahead of the GeForce 7800 GTX in all resolutions, except 1600x1200 where they have identical speeds. The most interesting thing, however, is that the GeForce 6600 GT based SLI platform almost overtakes the same GeForce 7800 GTX. It is another confirmation of the fact that the ability of the graphics processor to quickly execute relatively simple pixel shaders is the crucial factor that determines the speed of this racing simulator.
This car simulator has a fixed frame rate limiter set at 100fps. The game is indifferent to the number of pixel pipelines, but reacts most sensitively to the GPU clock rate. That’s why the GeForce 6600 GT has a better result than the GeForce 6800, even though the difference between them is only 4 fps in 1600x1200.
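The fixed 100fps cap mentioned above is typically implemented as a per-frame time budget: if the GPU finishes its work inside the budget, the game simply waits out the remainder. A minimal sketch of the idea (an illustration, not the game’s actual code):

```python
import time

TARGET_FPS = 100
FRAME_BUDGET = 1.0 / TARGET_FPS  # 10 ms per frame

def limit_frame(frame_start: float) -> None:
    """Sleep away whatever remains of the frame's 10 ms budget,
    so the game never renders faster than TARGET_FPS."""
    elapsed = time.perf_counter() - frame_start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
```

On a fast card the render loop finishes well inside the 10ms budget, which is why every top-end solution clusters at the same 100fps ceiling regardless of its raw power.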
It is only the SLI configuration with two GeForce 7800 GTX cards that can reach the speed limit in high resolutions and with enabled full-screen antialiasing. The single GeForce 7800 GTX and the GeForce 6800 Ultra SLI platform have excellent results, too. The members of the RADEON X8 family are victorious among the less powerful products.
Lock On also prefers the GeForce 6/7 architecture: NVIDIA solutions are generally faster in this flight simulator than RADEON X800/X850-based cards. SLI technology works correctly, but provides a very small performance gain. There is no positive effect even from joining two GeForce 7800 GTX cards because Lock On is a highly CPU-dependent game at the maximum graphics quality settings, and that super-powerful SLI configuration just can’t get above the speed bar set by the system central processor.
The RADEON X850/X800 cards are getting more confident in the “eye candy” mode. They are as fast as their GeForce counterparts in 1024x768, and in 1600x1200 the RADEON X850 XT Platinum Edition almost overtakes the SLI configuration with two GeForce 6800 Ultra. The GeForce 7800 GTX remains on top, but it is no more than 3fps ahead of the senior RADEON in 1600x1200.
This flight simulator comes from 1C: Maddox Games, the creators of the famous IL-2 Sturmovik series. Pacific Fighters uses the same engine as Aces in the Sky and the visual quality is similar in both games, but Pacific Fighters carries the action over from the USSR territory to the Pacific. You can fly for either the U.S. armed forces or the military aviation of Japan.
Like IL-2 Sturmovik, Pacific Fighters uses OpenGL by default, so NVIDIA based solutions are overall superior to the RADEON X850/X800, having a better driver for that API. You can just note the GeForce 6600 outperforming the RADEON X700 PRO and almost overtaking the RADEON X800 XL! Judging by the results of the GeForce 6600 GT SLI and GeForce 6800 SLI platforms, the multi-GPU technology from NVIDIA works correctly in this game, but our test record depends rather heavily on the CPU performance, making it impossible to evaluate the SLI effect for more advanced configurations. Note also that all solutions from NVIDIA, with the sole exception of the GeForce 6600, allow playing this game comfortably in 1600x1200 at the “pure speed” settings.
With enabled full-screen antialiasing and anisotropic filtering we can better see the value of the SLI effect with some configurations. It is about 15% for the GeForce 7800 GTX SLI in 1600x1200 – these two graphics cards reach the performance ceiling. The GeForce 6800 Ultra SLI is 30% faster than a single card, which is far from the theoretical maximum but makes that resolution playable nonetheless. As for ATI based cards, they are still rather far behind the GeForce 6 and 7 solutions, so RADEON X850/X800-based devices won’t suit you if Pacific Fighters is among your favorite games.
Act of War: Direct Action resembles the Command & Conquer Generals series in gameplay, but with a strong focus on tactics. In other words, your success depends on your skill in commanding the available forces in the best way rather than on building a base and creating the highest possible number of units in the shortest time. The graphics in Act of War resembles C&C, too, but is head and shoulders above it in terms of complexity and level of detail.
The graphics cards from ATI Technologies have the best results in this test, enjoying an enormous advantage over their GeForce counterparts. For example, the RADEON X850 XT Platinum Edition is either equal to or better than the GeForce 7800 GTX! But ATI beats NVIDIA here only in the top-end sector. A step lower, the GeForce 6600 GT has higher frequencies than the RADEON X700 PRO and leaves it no chance in their particular race. SLI technology does not work in this game, the SLI effect equaling zero.
ATI graphics cards feel even better in the “eye candy” mode, and even the GeForce 6600 GT falls further behind the RADEON X700 PRO as the resolution grows. We suppose that the RADEON X850/X800/X700 cards run Act of War so well not only because of their more efficient memory subsystem but also because of the higher performance of their vertex processors, which directly depends on the GPU clock rate. That’s why only the GeForce 7800 GTX with its 8 vertex processors clocked at 470MHz can challenge the RADEON X850 XT PE and X850 XT here. SLI technology doesn’t work correctly in this game, giving either no effect at all or reducing the performance, especially in the “eye candy” mode.
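The vertex-throughput argument above can be put into rough numbers. A back-of-envelope estimate, assuming one cycle per transformed vertex (real throughput depends on shader length and setup limits):

```python
def peak_vertex_rate(units: int, clock_mhz: float,
                     cycles_per_vertex: float = 1.0) -> float:
    """Crude peak vertices/second: parallel vertex units x clock,
    divided by an assumed cycle cost per vertex."""
    return units * clock_mhz * 1e6 / cycles_per_vertex

g70  = peak_vertex_rate(8, 470)  # GeForce 7800 GTX: 8 units at 470MHz
r480 = peak_vertex_rate(6, 540)  # RADEON X850 XT PE: 6 units at 540MHz
```

Despite its clock deficit, the G70’s two extra vertex units give it roughly 16% more raw vertex throughput than the R480, which is why it alone can challenge the senior RADEONs in this vertex-heavy game.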
The GeForce 7800 GTX takes the first place in the lowest resolution, but the RADEON X850 XT Platinum Edition gets very close to it in 1280x1024 and wins 1600x1200 resolution. The RADEON X850 XT successfully challenges the GeForce 6800 Ultra, but the RADEON X800 XL has a lower speed than the GeForce 6800 GT even though the former has a higher GPU frequency. The GeForce 6600 also passes this test well.
The RADEON X850/X800 cards usually show their best in FSAA + anisotropic filtering modes, but it’s all upside down this time around: the GeForce 6/7 cards feel more confident the higher the resolution. The only exception is the GeForce 6600 – its 300MHz frequency is too low for today. Note also that only 1024x768 is playable in the “eye candy” mode, and only on such cards as the senior GeForce 6 models and the GeForce 7800 GTX.
This curious game is a medieval castle simulator and the player has to consider numerous factors for the castle to grow and prosper. In other words, Stronghold 2 is a management strategy game, even though it allows the player to draw a sword as the castle must be defended against enemies, too.
On the technical side, the game engine supports all rendering modes from Shader Model 1.1 to 3.0. The graphics abounds in small details and cute special effects; the water surface is modeled using pixel shaders. Since ATI cards do not support Shader Model 3.0, they were all tested in the Shader Model 2.0 mode. The GeForce cards all worked in the Shader Model 3.0 mode – this mode doesn’t affect the quality of the game graphics, but maybe it has an effect on the game speed?
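The per-card mode selection described above boils down to picking the highest shader model a GPU exposes among the render paths the engine implements. A hypothetical sketch of such a path chooser:

```python
def pick_shader_path(supported: set) -> str:
    """Return the highest shader model among the render paths this
    engine implements (1.1, 2.0, 3.0), preferring the most advanced."""
    for sm in ("3.0", "2.0", "1.1"):
        if sm in supported:
            return sm
    raise ValueError("no supported shader path")

pick_shader_path({"1.1", "2.0"})         # RADEON X850/X800 -> "2.0"
pick_shader_path({"1.1", "2.0", "3.0"})  # GeForce 6/7      -> "3.0"
```

This is exactly why the two GPU families end up on different code paths in our test: the image looks the same, but the per-frame shader work may not be.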
Like many other strategies, Stronghold 2 is a highly CPU-dependent game. No wonder, as the CPU has to process all the numerous units that appear on the screen at the same time. So we couldn’t get more than 40fps even with our GeForce 7800 GTX. Moreover, the results of the participating cards differ much only in 1600x1200 resolution where the GeForce 6/7 cards (except the GeForce 6600) are generally faster than the RADEON X850/X800/X700 devices.
RADEON and GeForce solutions of the same class have either the same speed in the “eye candy” mode or there’s a small advantage on the GeForce part, again excepting the same GeForce 6600. It seems that the Shader Model 3.0 helps NVIDIA solutions show such good results even with enabled full-screen antialiasing and anisotropic filtering. SLI doesn’t work here: the performance of two cards working in the multi-GPU mode is the same or even slightly worse than that of a single such card.
The GeForce 6/7 solutions beat their rivals in low resolutions, but in high display modes ATI and NVIDIA cards match each other. The SLI configurations with two GeForce 6800 Ultra and GeForce 7800 GTX and the single GeForce 7800 GTX are an exception – they just have no rivals in this test.
Rather unusually for an “eye candy” mode, the RADEON X850/X800 cards lose their ground in this test. Meanwhile, the RADEON X700 PRO and the RADEON X700 are faster than the GeForce 6600 GT and GeForce 6600, respectively, in high resolutions, as the latter have only three vertex pipelines against the ATI cards’ six.
There’s parity between ATI and NVIDIA solutions in this benchmark, save for the lower product categories where the GeForce 6600 GT is far ahead of the RADEON X700 PRO and the RADEON X700 is likewise better than the GeForce 6600. The GeForce 7800 GTX SLI, GeForce 6800 Ultra SLI and the single GeForce 7800 GTX are quite expectedly the fastest, with the SLI pair of GeForce 6800 Ultra being a little faster than the single GeForce 7800 GTX.
The mentioned parity vanishes in the “eye candy” mode – the RADEON X850 and X800 products enjoy an advantage over their opponents, but the RADEON X850 XT Platinum Edition can’t overtake the GeForce 7800 GTX, not to mention the SLI platform on two such cards.
The GeForce 7800 GTX got the maximum number of points, but the RADEON X850 XT Platinum Edition scored only 104 points less. That’s the only defeat of ATI Technologies in this benchmark that stresses the graphics memory bandwidth and high geometrical performance of the card. In all the other cases the RADEON cards beat their GeForce counterparts.
As you can see, the peaks of 20,000 and 25,000 points are conquered by NVIDIA based solutions – the SLI configurations on two GeForce 6800 Ultra and GeForce 7800 GTX, respectively. The SLI platform with two GeForce 6800 cards got more than 15,000 points, outperforming the RADEON X850 XT Platinum Edition. The graphics cards from the GeForce 6 and 7 families are overall faster in 3DMark03 than the RADEON X850/X800/X700 series cards. Not surprising as three out of four game tests from this benchmarking suite are technically better suited for the GeForce 6/7 architecture.
NVIDIA GeForce 6 solutions enjoy a bigger advantage over same-class RADEON cards as the resolution grows. That’s expected: the first test is based on DirectX 7 and fixed-function T&L, and graphics processors from NVIDIA are generally most efficient in such applications.
The GeForce 6/7 cards are less successful in the “eye candy” mode: the GeForce 7800 GTX and the senior SLI configurations are still on top, but the GeForce 6800 Ultra, for example, can only compete with the RADEON X800 XL. The GeForce 6600 GT is once again saved by its high frequencies.
The textural and geometrical load is low in the second test, but normal maps and dynamic stencil shadows are employed, thus making it an excellent match for the GeForce 6/7 architecture. There’s no definite winner here: the higher frequencies of ATI solutions are counterbalanced with NVIDIA UltraShadow II technology.
This balance between the products from the two manufacturers can be observed in the “eye candy” mode, too, with a minor advantage on the NVIDIA part in low resolutions.
The third test is much like the second one, but puts a higher geometrical load on the card. Anyway, ATI and NVIDIA based graphics cards of the same class have similar speeds here.
Unlike in the second test, ATI’s solutions enjoy a certain advantage in the “eye candy” mode here. ATI cards have a high vertex shader performance and an efficient memory controller.
The fourth test features version 2.0 pixel and vertex shaders and the highest mathematical load. The GeForce 7800 GTX still holds its leadership, but the gap between it and the RADEON X850 XT Platinum Edition shrinks to less than 10% in 1600x1200. The SLI configurations are of course the fastest, but they are the most expensive solutions as well. In the rest of the product categories, the RADEON cards look better than the corresponding GeForce cards.
In the “eye candy” mode the GeForce 6800 family isn’t that much worse than the RADEON X850/X800 series. Thus, the overall performance scores in 3DMark03 are quite correct: the GeForce 6/7 architecture performs better in three tests out of four and is just a little worse in the fourth test. Moreover, SLI technology works smoothly in this benchmark, everywhere ensuring a huge performance boost.
The SLI configuration with two GeForce 6800 Ultra stops less than 150 points short of the 10,000 mark, while the more advanced GeForce 7800 GTX cards easily overcome it in the SLI mode, scoring 11,458 points. The single GeForce 7800 GTX also manages to get more than 7,000 points, but that’s the last of NVIDIA’s triumphs. The single GeForce 6800 Ultra is slower than the RADEON X850 XT Platinum Edition and the GeForce 6800 GT can’t compete with the RADEON X800 XL. Only the GeForce 6600 GT with its high frequencies (500/1000MHz) beats the sluggish RADEON X700 PRO that has a lot of memory but works at low clock rates. Unlike the previous version of the benchmark, 3DMark05 seems to favor solutions from ATI Technologies.
The two senior SLI configurations and the GeForce 7800 GTX take the top places in this shooter-like test, but ATI RADEON X850/X800 are supreme in the lower echelons.
Our turning on full-screen antialiasing and anisotropic filtering changes the situation: the GeForce 6 cards have better results than the RADEON X850/X800 in low resolutions. As you remember, graphics cards with only 128MB of onboard memory can’t do full-screen antialiasing in resolutions above 1024x768 in 3DMark05.
The second test is in fact a geometrical one as there’s lush dynamically generated vegetation in this scene, with complex shadows and lighting effects. The senior RADEON models start as confidently as in the previous test, but lose their advantage in higher resolutions. In 1600x1200 they are equal to their GeForce 6800 counterparts. This does not affect the overall performance score of the cards, by the way, because 3DMark05 uses the 1024x768 results to calculate it, which may already seem inadequate today.
The “eye candy” results differ from the first test: the GeForce 6800 Ultra can’t overtake the RADEON X850 XT and XT Platinum Edition, being only able to beat the RADEON X800 XL.
The last test from 3DMark05 is traditionally the most difficult one, creating an enormous load on the pixel pipelines. The shaders employed here use almost every function from Shader Model 2.0. Despite its Shader Model 3.0 support, the GeForce 6800 Ultra is slower than the RADEON X850 XT Platinum Edition in all resolutions. The GeForce 6800 GT, however, is a good match for the RADEON X800 XL, being only 1fps behind in 1600x1200 resolution. The GeForce 6600 GT beats the RADEON X700 PRO across all resolutions. Of course, the GeForce 7800 GTX and the senior SLI configurations are beyond competition.
The GeForce 6 cards are faster than the RADEON X850/X800 cards of their class in all resolutions. The GeForce 7800 GTX ensures a playable frame rate in resolutions up to 1280x1024, while two such graphics cards would allow playing this scene comfortably in 1600x1200, if it were part of a real game. The overall performance scores seem to be correct: ATI RADEON X850/X800 cards look really better in 3DMark05, yet the 24 pipelines of the GeForce 7800 GTX just have no rivals at all.
What can we advise a potential customer after testing 17 graphics cards in 30 gaming and benchmarking applications? It depends on what category of devices you are interested in. Let’s start from the top.
This class is represented exclusively by NVIDIA’s products in this review:
For an easier comparison, we prepared summary diagrams with results of same-category products only:
The SLI kit of two GeForce 7800 GTX cards will be the most expensive graphics subsystem, yet the fastest, too. So if you don’t care much about the price factor, it might be your choice. You will have the maximum possible speed in all modes and resolutions, including HDR modes, and will also have an opportunity to enjoy such an exciting feature as transparency antialiasing.
It is harder to choose between the GeForce 6800 Ultra SLI and the single GeForce 7800 GTX: the former solution ensures a somewhat higher level of performance on average, but does not have some exclusive features of the G70 architecture and depends greatly on driver support of particular games. If a given game is not explicitly supported, you’ll most likely get the speed of a single GeForce 6800 Ultra, while the GeForce 7800 GTX will always give you its maximum performance in all applications. Besides that, the GeForce 7800 GTX does not require a SLI-compatible mainboard – any mainboard with a PCI Express x16 slot will do.
We also do not think it wise to purchase two GeForce 6800 Ultra cards to join them in a SLI pair because the GeForce 7800 GT has become available. Unfortunately, we don’t yet have a second sample of the GeForce 7800 GT to check it in the SLI mode, but we still think such a configuration is going to be faster than the GeForce 6800 Ultra SLI. The fact that the GeForce 7800 GT has all the advantages of the G70 architecture is another point against the GeForce 6800 Ultra SLI.
So, the GeForce 7800 GTX SLI is the best graphics subsystem available today. The cost of that solution being quite steep, you may want to go for the fastest single graphics card available, GeForce 7800 GTX.
This category is represented by the following solutions:
We have products from ATI Technologies, too, in this category. The RADEON X850 XT Platinum Edition and the RADEON X850 XT are the fastest of the RADEON solutions as yet. They are rarely slower than the GeForce 6800 Ultra (mostly in games that use OpenGL or are specially optimized for NVIDIA’s architecture) and sometimes leave the competitor far behind. Until recently, the RADEON X850 XT Platinum Edition was in fact unrivalled, being obviously preferable to the GeForce 6800 Ultra, but now it has got a dangerous opponent.
We mean the GeForce 7800 GT, a card we unfortunately had no time to test in all the applications included into this review. But our experience with this card suggests that the GeForce 7800 GT is either faster than or equal to the RADEON X850 XT Platinum Edition, very rarely being slower than it. The new product is also endowed with such features of the G70 architecture as HDR and Shader Model 3.0, which the RADEON X850 XT Platinum Edition/X850 XT lack. Thus, the GeForce 7800 GT seems to be a more appealing purchase today than ATI RADEON X850 models. If you don’t want to wait for R520-based products, but want to have a good high-end graphics card, the GeForce 7800 GT will be an optimal choice.
SLI configurations with GeForce 6800 and 6600 GT cards, on the contrary, can’t be considered an optimal solution. These graphics cards have 128MB of graphics memory. Both cards in a SLI configuration have identical memory contents (i.e. the memories of the two cards do not double), and 128 megabytes may prove too little sometimes. For example, such graphics cards simply lack memory to work in resolutions above 1024x768 with enabled full-screen antialiasing in 3DMark05. This problem is not very urgent as yet, but as soon as games like F.E.A.R. are released, a graphics subsystem with 256MB of memory may become a real necessity. Don’t also forget that NVIDIA SLI is a driver-dependent technology and you may find yourself with the performance of a single GeForce 6800/6600 GT if a given game is not supported by ForceWare.
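The memory point is easy to overlook: SLI mirrors textures and buffers across both cards instead of pooling them, so the usable amount stays that of a single card. A toy illustration of the arithmetic:

```python
def effective_sli_memory(per_card_mb: int, cards: int = 2) -> int:
    """Usable graphics memory of an SLI pair: resources are
    duplicated on every card, so capacity does not add up."""
    return per_card_mb  # mirrored, not pooled

effective_sli_memory(128)  # two GeForce 6600 GTs still behave like 128MB
effective_sli_memory(256)  # two GeForce 7800 GTXs -> 256MB usable
```

This is exactly why a pair of 128MB cards hits the same FSAA resolution wall in 3DMark05 as a single one.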
So our opinion about multi-GPU systems remains unchanged – we think such systems only make sense when based on the topmost products available (today, these are the GeForce 7800 GTX and GeForce 7800 GT). Otherwise, it would be better to buy a single, faster card than two slower ones (well, if the single card costs much more money, the choice may become more complicated).
So, the NVIDIA GeForce 7800 GT card is the best buy in the below $500 category.
Going one step lower, we meet three graphics cards:
The main choice is between the ATI RADEON X800 XL and the NVIDIA GeForce 6800 GT here. The RADEON X800 XL seems preferable from the point of view of speed as it delivers a higher performance in FSAA + anisotropic filtering modes, making good use of a more advanced memory controller than the one implemented in the GeForce 6800 GT. The advantage of ATI’s product is, however, smaller in the “pure speed” mode and it doesn’t support Shader Model 3.0. Thus, the performance and low power consumption of the RADEON X800 XL should be weighed against the wider functionality of the GeForce 6800 GT which also supports SLI technology, by the way.
As for the GeForce 6800, it is slower than the above-mentioned RADEON X800 XL and GeForce 6800 GT, but has all the advantages of the NV40 architecture and can occupy the graphics slot of your computer if you can’t or don’t want to buy one of the other two products. The NV42 is manufactured on 0.11-micron tech process, so you will get a power-thrifty graphics card, too. Note, however, that unlike the other two solutions in this price category, the standard version of the GeForce 6800 comes with only 128 megabytes of graphics memory.
So, the ATI RADEON X800 XL offers the highest performance among performance-mainstream solutions, but the NVIDIA GeForce 6800 GT features a better price/features ratio. The choice depends on your personal tastes.
Two products belong to this class:
We should also add a third product to this category – the newly announced ATI RADEON X800 GT graphics card. ATI’s new card has 8 pixel pipelines, 6 vertex processors and a 256-bit memory bus. This bus is wider than those employed in the RADEON X700 PRO and GeForce 6600 GT and should ensure the new product an advantage in high resolutions and/or with enabled full-screen antialiasing. The memory amount may vary from 128 to 512MB; the 256MB version will probably become the most common.
The market future of the RADEON X700 PRO thus becomes rather vague after the release of the RADEON X800 GT as the new solution has a potentially higher performance but fits into the same price category as the older one. The RADEON X700 PRO will probably get much cheaper soon or will just disappear from shops altogether. Anyway, the GeForce 6600 GT has got a serious opponent in its price niche.
Unfortunately, we couldn’t get a sample of the RADEON X800 GT in time for this roundup and if we compare the remaining two products, the GeForce 6600 GT is most of the time faster than the RADEON X700 PRO. This is easily explained by the difference in their clock rates: 500/1000MHz against 425/864MHz. (The RADEON X700 XT model was overall comparable to the GeForce 6600 GT, but that model remained a rare product because its production was stopped very soon.) The RADEON X700 PRO gets closer to the GeForce 6600 GT in FSAA + anisotropic filtering modes, but not to become a real threat to it. Note also that graphics cards of that class find the “eye candy” modes difficult and their speed is often below playable in 1600x1200 (we publish test results for such resolutions mostly for the sake of comparison).
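The clock-rate explanation can be checked with theoretical fillrates. Both chips have 8 pixel pipelines, so a rough estimate (which deliberately ignores memory bandwidth) reduces to pipelines times core clock:

```python
def fillrate_mpix(pipelines: int, core_mhz: int) -> int:
    """Theoretical pixel fillrate in megapixels per second."""
    return pipelines * core_mhz

gf6600gt = fillrate_mpix(8, 500)  # GeForce 6600 GT: 4000 Mpix/s
x700pro  = fillrate_mpix(8, 425)  # RADEON X700 PRO: 3400 Mpix/s
advantage = (gf6600gt - x700pro) / x700pro * 100  # roughly 17-18%
```

That 17-18% theoretical edge matches the GeForce 6600 GT’s typical lead in the “pure speed” mode; in FSAA modes the X700 PRO’s faster memory narrows the gap, just as the results show.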
Of course, it doesn’t make much sense to buy a RADEON X700 PRO now that the RADEON X800 GT has appeared in the market. If we compare it only with the GeForce 6600 GT, NVIDIA’s solution is certainly preferable. So, while the RADEON X800 GT is unavailable, the GeForce 6600 GT is the best choice.
We would put the following products into the category of entry-level graphics cards that have a high enough performance for playing modern computer games:
The RADEON X600 XT should be struck off your shopping list even if you have a very limited budget. This graphics card is based on an obsolete architecture and has the worst technical characteristics of the three (only 4 pixel pipelines and 2 vertex processors). In some games it can outperform the GeForce 6600 due to the faster memory, but most often it is slower than the NVIDIA card, not to mention the ATI RADEON X700.
The GeForce 6600 is evidently hamstrung by its down-clocked frequencies (300/500MHz against ATI’s 400/700MHz). No wonder, then, that the RADEON X700 wins in its price category. It just has no real rival. There are numerous modifications of that card available, mostly with varying type and frequency of the memory chips.
Like the RADEON X700, the GeForce 6600 comes in many flavors that reflect the fantasies of the particular manufacturer. Most often, however, it is equipped with slow TSOP-packaged memory with limited overclockability. The GPU frequency of the GeForce 6600 can usually be increased to 400-500MHz and even higher, but the memory frequency seldom gets as high as 600MHz, so it cannot compete with the RADEON X700 even in low resolutions where the influence of the memory subsystem is the lowest. You should also keep in mind that graphics cards of this category do not usually give out playable frame rates with enabled full-screen antialiasing, with the exception of some particular games and low resolutions.
As for our recommendation, we would advise you to buy an ATI RADEON X700 rather than a GeForce 6600, their prices being similar. The RADEON is just faster in almost all modern games, with rare exceptions like Doom 3. Try to find a card with PGA-packaged memory as it overclocks better. Avoid the RADEON X600 XT which is an obsolete and slow graphics card.
P. S.: So, this is the end of our comprehensive testing of graphics cards with the PCI Express interface. You have seen 13 graphics cards and 4 SLI configurations perform in 30 gaming and theoretical applications and we hope the information you’ve received will help you make a wise choice as you’re upgrading the graphics subsystem of your existing PC or buying a whole new computer. We’ve given you our recommendations for each product category above based on the results of the tests, but after all it’s you who make the choice.
You should also be aware that this test session is a snapshot of the 3D graphics market as it is today. The market is changing constantly and we may have quite a different situation in no time (after the upcoming releases from ATI Technologies, for example). So, stay tuned to us!