The Fastest Graphics Cards of 2003

Many people buy hardware and software to play with during the holidays later this month. Some buy consoles, some get new games, others acquire new graphics cards. This review will help you choose among nine premier graphics cards based on VPUs and GPUs from ATI and NVIDIA, run through 16 benchmarks representing performance in past, present and upcoming 3D games! In addition to the benchmarks, we have a market analysis for you.

by Alexey Stepin
12/10/2003 | 10:57 PM

History 2003

There is next to nothing left of the year 2003 – just enough time to make summaries and predictions. Let’s try to sum up everything the past year has brought to the graphics card market. I guess this year was one of the most interesting periods in the history of consumer graphics processors, as solutions supporting Microsoft’s DirectX 9 API were gaining ground throughout it.

I am not only going to talk about the historical significance of the year 2003, but will also offer you a comparative test of the graphics cards that have appeared in the market. This should help you refresh your memory, form your own opinion and feel confident about the graphics card you are going to buy for this Christmas.

Q1 2003: Failure of GeForce FX 5800 Ultra, Renewal of ATI’s R300

NVIDIA starts the new year creaking at the joints. The company has problems with the launch of its new graphics processor, codenamed NV30. According to the promises NVIDIA lavishly throws about, the chip is going to be a real revolution. Meanwhile, the Canadian ATI Technologies, NVIDIA’s immediate rival, already has a whole series of DirectX 9.0-compatible solutions based on the R300 VPU. That graphics processor appeared back on July 14, 2002, about six months before the formal release of the NV30 – an unprecedented lead for the graphics industry.

The formidable RADEON 9700 PRO meets resistance from NVIDIA’s somewhat outdated GeForce4 Ti series chips, which don’t support the new API from Microsoft. Although NV25-based products show acceptable performance and sometimes outperform the RADEON 9500/9700/PRO in a number of applications in the so-called “raw speed” mode, reviewers are sure this is not for long – ATI’s chip will come out on top. Meanwhile, customers were making their own choices: according to Jon Peddie Research, the bulk of the graphics cards bought for Christmas 2002 were based on NVIDIA’s GeForce4 Titanium chips, first announced around the end of February 2002.

GeForce4 Ti series processors are ill-suited for full-screen anti-aliasing and anisotropic filtering, where the RADEON 9700/9500 beats them black and blue. Nevertheless, they remain the sales leaders in the performance-mainstream and high-end segments until the middle of Q1 2003.

By the end of February 2003, three months have passed since the formal announcement of GeForce FX technology. The publication of the benchmark results for the GeForce FX 5800 Ultra (NV30) is bad news for NVIDIA, and ATI’s R300 becomes the star of the market next to the unlucky chip from Santa Clara. NVIDIA has nothing comparable to offer, and some analysts suggest that it won’t be able to in the near future, either.

However, the Markham, Ontario-based ATI is not riding a gravy train either, as it uses the same graphics processor (R300) for both mainstream and high-end solutions. This is not profitable and cannot go on forever. The most logical move is to create two independent graphics processors for the two different market segments. We’ll see this happen in the second quarter of 2003.

NVIDIA: Test Time

NVIDIA has a hard time transitioning to the 0.13-micron tech process while developing the new chip. December, January and February pass by without a sign of NV30-based cards in the market. It is only in March (four months after the announcement!) that the monsters are spotted on shelves around the world.

It also transpires that the difficulties with the NV30 are truly colossal and the chip yield is utterly low. The chip goes into mass production even though the yield is only 6-10 usable dies per wafer; its cost is close to $60 instead of the expected $10-20! NVIDIA is quick to shut down the production of this mockery of a revolution: the lifecycle of the NV30 is short, with only about 100,000 samples produced.

Among other things, NVIDIA is the first in the industry to employ fast and expensive DDR-II memory working at the incredible frequency of 500MHz (1000MHz DDR). Relying on frequency alone, the company forgoes the 256-bit memory bus ATI implemented in its RADEON 9700/9700 PRO. It is revealed later that the 128-bit bus was not a result of haste with the graphics chip, but the original plan for the NV30, approved by the company’s executives long before the announcement.
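
For perspective, here is a rough back-of-the-envelope comparison of the two approaches (a sketch; the RADEON 9700 PRO’s 310MHz (620MHz DDR) memory clock is taken from its published specifications):

```python
def bandwidth_gb_s(bus_width_bits, effective_mhz):
    # peak bandwidth = bytes per transfer * transfers per second
    return bus_width_bits / 8 * effective_mhz / 1000

print(bandwidth_gb_s(128, 1000))  # NV30, 128-bit @ 1000MHz DDR-II: 16.0 GB/s
print(bandwidth_gb_s(256, 620))   # R300, 256-bit @ 620MHz DDR: ~19.8 GB/s
```

Despite the exotic DDR-II, the narrow bus leaves the NV30 with less peak memory bandwidth than its rival.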

The GeForce FX 5800 Ultra seemed to usher us into a new era, the era of cinematic-quality 3D. Nothing of the kind: the flexibility and programmability came at a high cost, and the new chip is miserable at executing pixel shaders, losing hopelessly to ATI’s GPU. One of the reasons is that the NV30 works most of the time in the inefficient 4 pipelines x 2 texture-mapping units mode and rarely switches to the 8x0 formula (see our GeForce FX 5800 Ultra review).

High frequencies cannot compensate for the inefficient architecture. Moreover, the heat dissipation becomes too high for an ordinary cooler to handle. The exclusive FlowFX cooling system can cool the GeForce FX 5800 Ultra, but at the cost of the user’s ears. Some jeering advocates of silence (and of ATI Technologies) assemble collages like “GeForce FX 5800 Ultra – vacuum cleaner” or “GeForce FX 5800 Ultra – hair-dryer”.

This complex and sophisticated GPU is clocked at 500MHz. Its flexibility and capabilities cover all the basic DirectX 9 specifications and go far beyond. Still, it is not the revolution NVIDIA had promised back in 2002, but rather the biggest disappointment of 2002 and the beginning of 2003.

One of the advantages of the NV3x series shows up soon, as NVIDIA unveils a full range of DirectX 9-compatible GPUs for every market sector. A couple of new chips, the GeForce FX 5600 and 5200, appear as early as March 2003. The former is intended as a replacement for the outdated GeForce4 Titanium series, and the latter is to become the first value GPU compatible with DirectX 9.

The NV31 is architecturally similar to the NV30, but has half the number of pipelines. The chip is manufactured with the same 0.13-micron process and works at 350MHz, while the memory frequency reaches an effective 700MHz. This GPU later proves incapable of competing effectively with its rival, the RADEON 9600 PRO, so NVIDIA rolls out a faster version clocked at 400MHz. This improves the speed of the 5600 Ultra somewhat, but gives no advantage over the competitor.

The NV34 (GeForce FX 5200) is a simpler solution meant to occupy the seat of the GeForce4 MX. A cheaper 0.15-micron tech process is employed to manufacture this chip. To reduce the surface area of the die, NVIDIA resorts to draconian measures, cutting down everything possible. The memory controller suffers most, being practically deprived of data compression algorithms, which degrades the performance of the chip. Notwithstanding its poor speed, the NV34 at least provides DirectX 9 support, while the value VPUs from ATI Technologies – RADEON 9000 and 9200 (RV250 and RV280) – support DirectX 8.1 only.

ATI Technologies: At the Crossroads

The necessity of splitting the RADEON 9500/9700/PRO series into two independent branches has long been felt. So, on March 3, 2003, ATI Technologies announces two new graphics processors – the RADEON 9800 and RADEON 9600 (R350 and RV350).

The first of the chips, the RADEON 9800 (R350), is a true heir to the R300, combining the best of its ancestor with some innovations, capped with the capacity to process shaders of unlimited length. This makes it King of 3D Graphics for a long time. ATI still clings to the 0.15-micron tech process for this chip, so its frequency is relatively low, only 380MHz. This fact, as well as the “moderate” 680MHz memory frequency, doesn’t prevent the RADEON 9800 PRO with 128MB (and later 256MB) of memory from working as fast as the GeForce FX 5800 Ultra and even outperforming the GeForce FX 5900 Ultra on occasion.

The RADEON 9600 (RV350), the first 0.13-micron VPU from ATI Technologies, meets a mixed reception. In planning a mainstream graphics processor, the company takes the road of reducing the number of pixel pipelines. As a result, the RADEON 9600 (RV350) has only four of them, while the older RADEON 9500 PRO had eight (the plain RADEON 9500 had four pipelines, too).

Thanks to the new technological process, the frequency of the RADEON 9600 PRO easily notches 400MHz, while the energy consumption goes down – the chip doesn’t require any additional power. RADEON 9600 PRO-based cards are simple and compact. Regrettably, the memory on such cards is never clocked above 300MHz (600MHz DDR). Notwithstanding all the innovations, the RADEON 9600 PRO loses to the eight-pipelined RADEON 9500 PRO in a majority of tests, causing some confusion among users. At the same time, its R300 architecture allows it to easily leave both variants of the GeForce FX 5600 Ultra behind.
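
A quick fillrate estimate explains why the four-pipeline newcomer loses (a sketch; the RADEON 9500 PRO’s 275MHz core clock is taken from its published specifications):

```python
def peak_fillrate_mpix_s(pipelines, core_mhz):
    # at best, one pixel per pipeline per clock
    return pipelines * core_mhz

print(peak_fillrate_mpix_s(4, 400))  # RADEON 9600 PRO: 1600 Mpixel/s
print(peak_fillrate_mpix_s(8, 275))  # RADEON 9500 PRO: 2200 Mpixel/s
```

Even at a 125MHz lower clock, eight pipelines simply push more pixels per second than four.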

Q2: 3DMark03, Cheats, SARS, GeForce FX 5900

NV35 Coming to Replace NV30

The SARS virus (atypical pneumonia) is rampaging around the world. Asian countries bear the main blow, although the computer industry feels the outcome of the epidemic a little later. Meanwhile, NVIDIA finds itself beaten on all fronts (at least as concerns technological superiority) for the first time in its history. With sales falling after the publication of the performance results for the GeForce FX 5800 Ultra, the company tries to stem the tide by launching a competitive top-end graphics processor, and it is quick to appear: on May 12, NVIDIA announces the launch of the NV35, to be officially named GeForce FX 5900.

The GeForce FX 5900 resembles the NV30, but lacks a number of the latter’s flaws. In particular, it is much faster at floating-point calculations, and the memory controller received a 256-bit bus at last.

The flagship of the new line, the GeForce FX 5900 Ultra, works at 450/850MHz (GPU/memory), and its heat dissipation is much lower due to the use of ordinary DDR-I memory. GeForce FX 5900 (non-“Ultra”) cards become quite popular among fans of NVIDIA products, as they only differ from the “Ultra” in a smaller amount of memory (128MB) and a 50MHz-lower GPU frequency.

Alas, although the performance of the NV35 GPU and NV35-based cards is higher than that of the GeForce FX 5800 Ultra, the intrinsic disadvantages of the NV3x architecture never allow the new product to reach the level of the RADEON 9800 PRO in demanding applications. NVIDIA has to chase the leader again.

Cheats

In an attempt to show the strength of its architecture, NVIDIA chose to implement various cheats and optimizations in its drivers to boost performance in specific applications – the very applications that hardware testers and PC manufacturers use to determine the best graphics cards.

3DMark is traditionally among them. The 2003 version of this benchmarking suite is a highly demanding application whose results reflect well enough the rankings in popular games like Tomb Raider: Angel of Darkness and Half-Life 2.

The suite uses pixel shaders of versions 1.4 and 2.0, so it was no wonder the results of GeForce FX-series chips were miserable, especially in the last gaming test, “Mother Nature”. NVIDIA decided to “improve” the driver to get more acceptable results, but it didn’t work. Futuremark exposed NVIDIA with its long list of tricks for 3DMark03, as well as ATI Technologies with its single attempt at boosting the speed (see our news post from May 23, 2003, for details). Futuremark also nipped every future attempt at “cheating” in the bud by releasing a new patch and promising to keep checking for 3DMark03-targeted software optimizations in the future.

Although the situation was clear to everyone who had eyes, NVIDIA refused to plead guilty, saying that Futuremark’s tests were prejudiced and specifically designed to trample the GeForce FX into the dirt. Admirers and minions of NVIDIA were indignant, accusing ATI of bribing Futuremark, and some NVIDIA-friendly media spurned the newest version of the industry benchmark. ATI had boosted its 3DMark03 score by only 1.9%, and in a relatively fair way, but later agreed that even this method was unacceptable and removed the optimization from subsequent driver versions. NVIDIA kept optimizing for Benchmark No.1 with the purpose, as some say, of straightening out its results. As more cheats were exposed in a number of other well-known benchmarks, few people were still listening to NVIDIA’s defenders.

Q3 2003: ATI on Top, XGI Appearing, New GPUs from Matrox Graphics

The second half of 2003 starts out quiet. There are no new products announced, but the leading graphics companies are grappling for market share. Computex Taipei used to reveal the main trends of the industry, but it is postponed until September (SARS is at large!). So, the industry has nothing else to do but showcase products on company websites and feed rumors to the Internet.

SiS, Xabre, XGI – Three’s a Crowd?

Silicon Integrated Systems, known first of all for its chipsets, decided to test its strength in the graphics processor market. Back in 2002 the company gave birth to the Xabre series. Cheap enough, these chips were never a success, mostly due to bad drivers. The notorious turbo-texturing mode provided enough speed, but blurred most of the textures into a soapy mess. There were other drawbacks too, so the Xabre never really took off.

A year passed since the announcement of the Xabre. Having failed to conquer the graphics market, SiS spun off its graphics division into an independent company called eXtreme Graphics Innovation, or XGI, which started working on June 1, 2003. About a month later, the newly-minted company voiced its intention to buy the graphics division of Trident Microsystems to enhance its assortment of solutions and lure some specialists along the way.

According to an optimistic claim by the company’s CEO, XGI was going to become profitable within the next three years. Journalists and analysts could only wonder what weapon XGI was going to fight ATI Technologies and NVIDIA Corporation with.

In September 2003, the new graphics company announces a new series of graphics chips – XGI Volari. The Volari V5 and V8 chips take in all the development work from the Xabre II project and receive DirectX 9 support. But the most exciting thing about them is their ability to work in dual-processor configurations.

Historically, multi-chip graphics solutions of the consumer class have never been popular. There are precedents: the Voodoo2 SLI was too expensive to be of much use, while the ATI Rage Fury MAXX performed well enough but met a formidable competitor, the higher-performing GeForce256; ATI’s creation was also undercut by driver-related problems. The Voodoo5 5500 from 3dfx also carried two chips onboard; it cost $399, but was still slower than the GeForce2 GTS. Thus, dual-processor consumer graphics cards have never been a success, but XGI decides to give them another try, hoping to avoid the traps the other companies fell into.

The Volari V8 carries two vertex-processing units (4 in the R350/360 and 3 in the NV35/36/38), four pixel shader units and eight rendering pipelines. The cheaper Volari V5 has only two shader units and four pixel pipelines. Both Volaris access the graphics memory across a 128-bit bus – that’s a mainstream solution by the standards of the end of 2003.

The Volari looks appealing on paper. It is also on paper that XGI signs contracts with graphics card makers. In October, Club3D announces its plans to produce Volari-based products, and companies like ASUS, CP Technology, Gigabyte and MSI are rumored to be interested in this graphics processor as well.

XGI assures everyone that it has all it takes to manufacture and sell the chips, and in August and September people believe it. Yet Volari-based cards don’t appear in retail in October, nor in November or December, and XGI keeps silent. In spite of the zero sales, the heads of the company are optimistic about their plans to take 10% of the world’s GPU market in 2004.

Matrox Graphics, PowerVR – Living Underground

In early summer, new versions of last year’s Parhelia from Matrox Graphics sidle into the market. They are even slower in 3D now and come under the names of Millennium P650 and P750. Being basically 2D solutions, the new Matrox Millennium GPUs are too expensive to be accepted by the mass market.

As a first rumble of the battle to be fought in the fall, the employees of the half-forgotten PowerVR suddenly release two papers on pixel and vertex shaders of the next generation. So far, PowerVR has no DirectX 9.1-compatible chip; we’ll see next year whether it gets one.

Meanwhile, ATI Technologies is enlarging its presence in the graphics market quite dramatically. According to Mercury Research data, published in summer, ATI owns 87% of the high-end market in the second quarter of the year.

Q4: All-Out Overclocking, Automatic Optimizations

The last quarter of the passing year brought us a bunch of new graphics processors: the RADEON 9800 XT (R360) and GeForce FX 5950 Ultra (NV38), high-end solutions priced at about $500, and the RADEON 9600 XT and GeForce FX 5700 Ultra, mainstream solutions priced at about $200.

The $500 newcomers were faster versions of the previously released products.

The R360 is in fact an overclocked R350. It is manufactured at TSMC facilities using the 0.15-micron process and powers RADEON 9800 XT graphics cards. The frequency of the R360 is 412MHz with an option of dynamic overclocking; the memory works at 730MHz. The NVIDIA NV38 (GeForce FX 5950 Ultra) is a development of the NV35 that runs just a little faster in real applications, much like ATI’s competitor. The core of the new solution works at 457MHz, and the memory at 475MHz (950MHz DDR).

ATI Technologies takes it easy when improving its “middle-range” product, the RADEON 9600 PRO. The company rolls out the RADEON 9600 XT clocked at 500MHz thanks to the low-k 0.13-micron tech process. Graphics cards on the RADEON 9600 XT can at last show performance similar to the old RADEON 9500 PRO.

NVIDIA approaches the problem of developing a new mainstream solution with much more responsibility. The NV36 graphics processor (GeForce FX 5700) is a very well-done product with three times the vertex processing speed of its predecessors. Besides, it is equipped with CineFX 2.0 technology, meaning some advantage in processing pixel shaders. Well, we had been expecting that, as the NV36 is nothing else but the NV35 (GeForce FX 5900) with half the pipelines, working at a higher frequency (475MHz). NVIDIA takes another try at the expensive and hot DDR-II memory, working at 450MHz (900MHz DDR). The reasoning here is not quite comprehensible, and the first cards on the new GPU cost somewhat above the declared $200. Anyway, the new product is all right: its performance is far above that of the GeForce FX 5600 Ultra. In other words, the RADEON 9600 PRO and XT get a dangerous new rival to fight.

The new products from ATI Technologies are mostly interesting for their dynamic overclocking technology dubbed OVERDRIVE. It allows the card to control the GPU frequency depending on its temperature. The RADEON 9800 XT uses the new option to the full, gaining a small performance benefit, but the RADEON 9600 XT supports it on paper only.

Notwithstanding the growth of frequencies, the performance of NVIDIA’s GPUs remains worse when it comes to complex pixel shaders. NVIDIA says this is the price of the flexibility and programmability of the NV3x architecture. The situation resembles that of Intel’s Itanium processors, which are rather slow at executing ordinary x86 code. NVIDIA introduces a special code translator into the new generation of its drivers, changing their name from Detonator to ForceWare along the way.

Here is how the GPU executes DirectX 9.0 shaders now, according to NVIDIA:

DirectX 9 pixel shaders are not fed directly to the graphics processor. The driver has a compiler that translates DirectX 9 shader code into GPU commands. The compiler “takes apart”, analyzes and then “puts back together” the shader, rearranging the commands into pairs of texturing and mathematical ones (this suits the GeForce FX architecture best), reducing the number of involved registers and the overall number of commands. It also uses only those commands that are supported by the GeForce FX hardware.

The main optimization criterion is the maximum processing speed of the output code. Besides, the outputs of the original and the optimized shader must coincide perfectly. In other words, NVIDIA claims that the use of the optimizing compiler eliminates any possibility of quality degradation.
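
To illustrate the pairing idea, here is a minimal, purely hypothetical sketch of such a scheduler. NVIDIA has not published its compiler internals, so the instruction format and the greedy strategy below are our own invention for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Instr:
    kind: str    # "TEX" (texture fetch) or "ALU" (math)
    dst: str     # register written
    srcs: tuple  # registers read

def depends_on(a, b):
    """True if instruction a reads the register that b writes."""
    return b.dst in a.srcs

def pair_schedule(prog):
    """Greedily pack each TEX op with an independent ALU op.

    An ALU op is only pulled forward past instructions it does not
    depend on; everything else issues in program order.
    """
    pending = list(prog)
    slots = []
    while pending:
        head = pending.pop(0)
        if head.kind != "TEX":
            slots.append((head, None))       # plain single issue
            continue
        for i, cand in enumerate(pending):
            if cand.kind != "ALU":
                continue
            blockers = [head] + pending[:i]  # ops cand would jump over
            if not any(depends_on(cand, b) for b in blockers):
                pending.pop(i)
                slots.append((head, cand))   # TEX+ALU dual-issue slot
                break
        else:
            slots.append((head, None))
    return slots

# A four-instruction shader: the first math op is independent and can
# pair with the first fetch; the second must wait for both fetches.
prog = [
    Instr("TEX", "r0", ("t0",)),
    Instr("TEX", "r1", ("t1",)),
    Instr("ALU", "r2", ("c0", "c1")),
    Instr("ALU", "r3", ("r0", "r1")),
]
for slot in pair_schedule(prog):
    print(slot)
```

A real compiler would also rename registers and merge commands, but even this toy version shows how reordering can fill both halves of a texturing-plus-math issue slot.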

The new approach to the NV3x performance problem is acceptable, since NVIDIA no longer optimizes for specific applications, but rather boosts the execution speed of DirectX 9.0 shaders wherever they are used. The first try is not perfect, though: while performance has increased, there are still visual problems in certain games that had never been seen before. Shadows suffer most: they vanish altogether or become distorted, up to plain black squares.

The new shader code compiler from NVIDIA’s programmers evidently requires some more work. It is sure to get it, and future versions of the ForceWare driver should be free of the flaws we see today. The arrival of the new driver is good in itself, allowing NVIDIA’s GPUs to run fast where they used to slow down. People who want to play modern games comfortably are no longer limited to VPUs from ATI Technologies, as NVIDIA’s solutions have become competitive.

However, while implementing this general optimization, NVIDIA continues with its application-specific ones. Well, it is not as bad as it seems, until it comes to benchmarks like 3DMark03. Back in summer Futuremark voiced its intention to fight such optimizations, and the company remains true to its word, releasing a patch in the middle of the fall that defeats NVIDIA’s tricks for boosting performance in Benchmark No.1.

Some of NVIDIA’s partners are very negative about the new patch for 3DMark03 and try to disparage Futuremark’s work. In particular, a representative of Gainward said the new patch disabled NVIDIA’s shader optimizer, making 3DMark03 inappropriate for objective testing. A little later, an NVIDIA spokesman refuted these words, saying that the optimizer was working correctly after the installation of the new patch for 3DMark03.

Market Analysis 2003

Graphics Wars in 2003: The Big Picture

Talking about the year 2003, I can venture a few suppositions about the results of the year as well as the directions the industry is going to move in:

The battle of the two graphics giants can be viewed from several angles, not only with respect to technological superiority or inferiority. Let’s consider something tangible – market shares.

If we take the computer graphics industry at large, we’ll find Intel dominating with its integrated chipsets. So, we’ll only focus on the manufacturers of discrete solutions – ATI Technologies and NVIDIA Corporation, as well as a handful of smaller players.

Q1 2003

At the beginning of the year, the biggest share of the standalone graphics chip market belonged to NVIDIA (57%). That’s quite logical, considering the company had solid ground underfoot then; its reputation was only beginning to be questioned. The Canadian ATI Technologies, its most dangerous rival, came second with 35%. Other players had from 1% to 3% of the total.

Talking about graphics processors for desktop systems, NVIDIA’s share was 64% in the first quarter, while ATI had 25% of the market. It seemed ATI had little to be optimistic about, but we can view the situation from another angle. For example, in the market for sixth-generation GPUs (DirectX 8-compatibles), NVIDIA had 44% and ATI 40% – very close, as you see.

As for portable graphics chips, ATI was actually in the lead with 62% of the market; NVIDIA had 28%.

Time was working for the Canadians. ATI Technologies had an enormous potential which showed up in the second quarter.

Q2 2003

In the second quarter NVIDIA lost 3% of the desktop GPU market due to the delays with the launch of its new graphics processor and the customers’ rising mistrust. ATI Technologies, on the contrary, enlarged its share from 35% to 37%. As for DirectX 8 solutions, ATI’s share grew from 40% to 53%, while NVIDIA’s dropped from 44% to 11%. The humble SiS somehow nipped off a 36% morsel with its Xabre, probably thanks to the company’s contacts in Asia, as few Xabre GPUs are offered in Europe and the USA.

The sector of DirectX 9.0-compatible GPUs was most curious: ATI practically monopolized the high end of the market, controlling about 87%, but had only 30% of the inexpensive GPU market against NVIDIA’s 70%. That is natural, since the latter had two new GPUs, the NV31 and NV34, while ATI had nothing of the kind – and still doesn’t.

I should also mention that the shares of NVIDIA and ATI in the discrete desktop GPU market remained the same compared to the first quarter: 64% and 28%, respectively.

As for the mobile graphics market, the Canadians fortified their positions further, going from 62% to 68%, while NVIDIA’s share shrank from 28% to 19%.

Q3 2003

The third quarter brought nothing fundamentally new. ATI’s share of the market grew by another 3%, from 37% to 40%, while NVIDIA’s dropped from 54% to 53%. The DirectX 8-compatibles market was in fact monopolized by ATI (80%), while SiS receded from 36% to 18%.

NVIDIA recovered from the blows it had been receiving and was boosting its influence in both DirectX 9.0 graphics chip market sectors. By the end of the third quarter it had 72% of the low-end sector and 32% of the high-end; the shares of ATI Technologies diminished to 27% and 68%, respectively.

NVIDIA also won 2% of the mobile GPU market, increasing its share to 21%, while ATI gained 3 points to control 71% of the market.

ATI’s share of the discrete desktop GPU pie grew to 32%, while NVIDIA’s dropped to 62%.

The fourth quarter is still going on, so there’s no sense in talking about its results – we don’t have the data yet. Anyway, I suppose it will be much like the third: NVIDIA is regaining, with the help of its shader compiler, what it has lost. The two giants are getting ready for the next round.

DirectX Installation Base

Of course, market shares show a company’s momentum. But to be the leader, you also have to have your solutions installed in users’ PCs – this is the parameter game developers consider when determining which platform or platforms to create new games for.

Talking about high-performance DirectX 9.0 solutions – those that don’t support DirectX 9 just for show, but provide an acceptable level of performance – ATI shipped about 85% of them from August 2002 to September 2003, while NVIDIA shipped about 15%. As the calculations were based on open as well as non-public information (no one kept track of DirectX 9 graphics cards in the market until the second quarter of 2003), some fluctuations are possible, but the trend is clear: ATI has gained superiority in shipments of high-performance DirectX 9.0 GPUs.

As for inexpensive DirectX 9.0 accelerators, where DirectX 9.0 support is there mostly for marketing reasons rather than as a real option (I mean graphics cards of the GeForce FX 5200 and RADEON 9600 SE classes, as well as inexpensive cards on the GeForce FX 5600 and RADEON 9600), NVIDIA is on top, having shipped 72% of such devices against ATI’s 28%.

If we sum up the numbers (low-cost and expensive DirectX 9.0 GPUs together), NVIDIA would have 59% and ATI 41%.
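
As a quick sanity check (a sketch, assuming the combined figure is simply a shipment-weighted average of the two segments), the quoted numbers are mutually consistent and imply that high-performance parts made up roughly a quarter of all DirectX 9.0 shipments:

```python
# Let f be the high-end fraction of all DirectX 9.0 shipments.
# NVIDIA's combined share: 0.72 * (1 - f) + 0.15 * f = 0.59
f = (0.72 - 0.59) / (0.72 - 0.15)
print(f"implied high-end fraction: {f:.1%}")          # ~22.8%
print(f"ATI check: {0.28 * (1 - f) + 0.85 * f:.0%}")  # 41%
```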

The numbers suggest that enthusiasts all around the world have taken a liking to the products of ATI Technologies and its partners, and have developed some reservations about NVIDIA. It’s no secret that it is very hard to develop a new graphics processor, but it is even harder to convince a potential buyer to become an actual one. NVIDIA has to make up lost ground, which is going to be a daunting task, as it was the GeForce brand, rather than technical superiority, that helped the company so much in the previous year. The third player, XGI, is going to have an even harder time competing with both ATI and NVIDIA. While ATI enjoys the customers’ trust, and NVIDIA is repairing its own by releasing the automatic shader optimizer, XGI can offer neither. It is also unlikely to offer speeds comparable to the leaders’.

Graphics Cards 2003

We are going to undertake an all-out benchmarking session with contemporary graphics cards of the mainstream and top-end classes to help you find the king of 3D graphics and make up your mind about your shopping. It’s no secret that modern graphics cards are mostly used for gaming, so we excluded theoretical (synthetic) tests in favor of modern games. We’ve got nine graphics cards in total. Place your bets – here’s the list of participants from the “Premier League”:

Unfortunately, we don’t have an NVIDIA GeForce FX 5900 Ultra and an XGI Volari Duo V8 at our disposal. The “Middle-range League” comprises five cards:

There’s no typo – we’ve really included a RADEON 9500 in the review. This “oldie” is hale enough to show its teeth (eight pipelines!) to the “youngsters”. As for the GeForce FX 5600 Ultra, we of course used the new version of the card with 400MHz/400MHz (800MHz DDR) frequencies.

You may have encountered some of the cards before, as we reviewed them earlier. New to our site are the GeXcube RADEON 9800 XT and the Built by ATI RADEON 9800 PRO 128MB. We’ll examine them shortly.

Built by ATI RADEON 9800 PRO 128MB

We tested some RADEON 9800 PRO-based graphics cards previously, but this time we were lucky enough to lay our hands on the original RADEON 9800 PRO Built by ATI solution with 128MB of memory.

The graphics card came to our test lab in its retail package, designed in sober-looking red-black-silver colors. Its unusual shape – the length nearly matches the height – distinguishes it from other cards’ packages, which are usually oblong. Inside, we found the card and an ordinary set of accessories:

There’s no use searching for any games: even the DVD player came on the driver CD. ATI Technologies must have decided to ship its legendary quality without any sweeteners.

Closer Look

 

The PCB of the card is compact and simple, similar to the PCB of the RADEON 9700 PRO, but redesigned for stable work at high frequencies. There’s a row of pins to the left of the chip: it is a special-purpose interface for connecting a TV-tuner or an additional TMDS transmitter. We’ve seen all this before on the RADEON 9700 PRO.

The GPU boasts a new cooling system with a silver-colored needle-plate heat-spreader of a curious (parallelogram) shape. The fan now has more blades, and the blades themselves are thinner. ATI heard the indignant cries of overclockers at last: the heat-spreader has got a bump in its sole to provide good contact with the die surface, although the protective gasket, which hindered proper heat transfer on the RADEON 9700 PRO, remains as it was. Normal thermal paste is used as the thermal interface instead of the “gum” every overclocker hates for its poor heat conductivity. The cooler is fastened rather carelessly, though – there’s a real chance of damaging the GPU die. Moreover, the cooler itself looks underpowered, possibly incapable of doing its job well. We’ll see shortly whether that is really so.

The memory chips are not cooled at all. The PCB carries eight memory chips from Hynix with 2.8ns access time. By default, the memory works at 340MHz (680MHz DDR), while the GPU is clocked at 380MHz.

The new power connector is a joy: ATI installed a standard 4-pin connector instead of the flimsy floppy-drive type. There’s no danger of accidentally tearing it off. Overall, the card can’t boast any extraordinary features, but it is a well-made, high-quality product.

2D Quality and Overclocking

The ATI RADEON 9800 PRO graphics card displayed an excellent image in all modes, including 1600x1200@85Hz.

As for overclocking, I approached it quite cautiously (remember my apprehensions about the cooler?) and used an additional 92mm fan blowing at the card from the side. The GPU worked stably at 445MHz (!), while the memory did somewhat worse – 378MHz (756MHz DDR). When I increased the frequencies further, I got all manner of visual artifacts, so these frequencies were the limit. The flimsy nature of the cooler showed up, though: after about an hour of continuous testing, artifacts appeared in 3DMark2001 and 3DMark03. The card was very hot by then, in spite of the additional cooling, and the memory chips hurt to touch. So, it’s simple: although the cooling system of the RADEON 9800 PRO handles the temperature well enough at stock settings, I don’t recommend installing the card into a case without additional exhaust fans. Better yet, take a system case with a side fan that blows at the expansion cards.

Thus, the ATI RADEON 9800 PRO deserves our praise for its manufacturing quality, exterior and accessories, but its cooling system is only satisfactory: it keeps the card from overheating, but the temperatures remain very high.

GeXcube RADEON 9800 XT

You should know the name of this graphics card’s manufacturer, as we posted a review of the multimedia combo GeXcube All-In-Wonder RADEON 9600 PRO on our site. This time we’ve got another product from this company, the top-end GeXcube RADEON 9800 XT. It also came to us in its retail package:

The design of the package is a bit gaudy. At least the combination of red, black and gold has nothing to do with sobriety, but tastes differ and some may find this design acceptable. You may know the three-headed hell creature from the exclusive screensaver from ATI Technologies. The package carries captions telling you the amount of graphics memory (256MB), the interface (AGP 8x) and the supported DirectX version. Moreover, there’s a sticker telling you that a full version of Delta Force: Black Hawk Down is included with the card. The back side of the package briefly lists the product’s specifications, supported resolutions and refresh rates.

Here are the things we found in the box:

I’ve seen better accessory sets, of course, but this will do, too.

Closer Look

  

The card copies the RADEON 9800 XT reference design developed by ATI Technologies. There’s nothing wrong with that, as the reference design is well thought out with respect to the compact size of the PCB and the efficiency of the cooling system. The latter is a massive copper plate with a folded copper band soldered to it, profiled so that the air from the fan is evenly distributed along the surface of the heat-spreader. The fan is huge, 80mm in diameter; thanks to this size, it can rotate slowly (and noiselessly). Running a little ahead, I’d say my expectations about the noise came true, although only partially. The whole cooling system is covered with a plastic casing.

The memory chips on the back side of the PCB are covered with a figured copper plate, although its size makes it look more like a decorative element than a component of the cooling system. Part of the plate has a bump that is pressed against the PCB in the chip area with a special spring brace; it helps take heat off the back of the graphics processor. The two parts of the cooling system are fitted together by means of spring-loaded screws; the springs help avoid damaging the GPU. As an additional fastening element, there’s a pressure plate at the back. Overall, the cooling system looks solid enough, although it lacks the glamour of the solution from ASUS Computer. Besides the copper plate and the fastening elements, the back side of the PCB has a landing place for an ATI Rage Theater 200 chip to provide VIVO functions; this place was empty on our sample.

Contrary to the RADEON 9800 PRO DDR-II, the PCB of the GeXcube RADEON 9800 XT carries only eight DDR memory chips, but the capacity of each chip is 256Mbit rather than 128Mbit, so we have a total of 256MB of graphics memory. The chips come from Hynix, work at 365MHz (730MHz DDR) and have 2.5ns access time. Alas, the relatively simple PCB design doesn’t allow the memory to be clocked at its nominal 400MHz (800MHz DDR). The VPU works at 412MHz, but when you enable ATI’s OVERDRIVE technology, the frequency may vary from 412 to 432MHz depending on the temperature measured by the core-integrated thermal diode. If you want precise numbers: 432MHz is set when the temperature is below 52°C, the frequency goes down to 418MHz as the temperature rises, and returns to the nominal when the temperature reaches 62°C. You shouldn’t foster any hopes of a performance gain from OVERDRIVE, as it is highly improbable that a chip as complex as the R360 will stay below 52°C under a workload.
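
As a rough sketch of the stepping logic just described (the thresholds are the ones we measured; ATI has not documented the exact polling mechanism, so treat the rest as an assumption):

```python
def overdrive_clock_mhz(diode_temp_c):
    """Pick the R360 core clock from the thermal diode reading."""
    if diode_temp_c < 52:
        return 432  # full OVERDRIVE clock while the die is cool
    elif diode_temp_c < 62:
        return 418  # one step down as the die warms up
    else:
        return 412  # back to the nominal clock at 62 degrees C and above

for temp in (45, 55, 65):
    print(temp, overdrive_clock_mhz(temp))  # 432, 418, 412
```

Given that the R360 runs well above 52°C under load, the card spends most of its gaming time at the lower steps, which is exactly why OVERDRIVE yields so little.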

Noise, Overclocking and 2D Quality

This time the engineering team at ATI Technologies did a nice piece of work on the cooling system. First of all, it has become practically noiseless. There’s a reservation, though: it is noiseless while the GPU temperature is below the critical mark; otherwise the cooling system increases the rotational speed of the fan. However, the noise – a rustle, rather – is quite bearable even in the worst case. You may only find it unacceptable if you’ve got a very quiet system with few fans.

As for the operational temperatures, they are quite normal, save for the memory chips on the back side of the PCB. Under high workloads, the plate covering the chips heats up to 55-56°C. I guess a bigger plate or some additional ribs would help.

2D quality was up to the mark in all resolutions up to 1600x1200@85Hz. Overclocking brought some pleasant emotions: the core worked at 452MHz. When the frequency was increased further, 3DMark03 yielded artifacts: areas of the image where pixel shaders were used became covered with white dots. The memory stopped short of the nominal 400MHz (800MHz DDR), notching 390MHz (780MHz DDR); again, there were visual artifacts at further memory overclocking. 452MHz is a good result for so complex a GPU manufactured with a not-very-thin tech process. The memory overclocking was evidently limited by the relative simplicity of the PCB – it is not intended to clock the memory that high.

Taking the GeXcube RADEON 9800 XT as a whole, I’m quite positive about it. It is free from obvious flaws, although the cooling of the memory chips on the back side of the PCB might be better. The accessories are not exuberant either, but that’s not really a requirement for a card without VIVO functionality. So, if you are looking for a high-performance and quiet graphics card, the GeXcube RADEON 9800 XT may suit you.

3D Games 2003

The list of games used for benchmarking has become completely new by now and looks as follows:

First-person 3D shooters:

Third-person 3D shooters:

Simulators:

Real-time strategies:

Semi-synthetic benchmarks:

Synthetic benchmarks:

So, altogether there are 12 modern games and 3 semi-synthetic benchmarks, which can also be regarded as gaming tests because they are built on real gaming engines. In particular, the X2 – The Threat Rolling Demo demonstrates all the beauty of the upcoming space simulator, the successor to X – Beyond the Frontier, and the Final Fantasy XI Official Benchmark 2 shows us how the world will look in one of the most highly anticipated games from Square Enix. As you can see, we are going to have a purely gaming test session this time, so those of you who are considering the purchase of a gaming graphics card might find this article pretty helpful. We also included the performance of the testing participants in the 3DMark03 test package, because it is considered a near-industry standard and many PC makers use its results to choose graphics card models for their systems.

Of course, we tried to make the benchmark results independent of CPU performance, which is why our test system was configured as follows:

The major modes used for testing were still the so-called “Pure Mode” and “Eye Candy”. The first is intended to reveal the maximum performance of the tested graphics accelerators, while the second shows what they are capable of when we enable full-screen anti-aliasing and anisotropic filtering, which significantly improve the image quality and make it very pleasing to the eye. FSAA 4x is a high-quality mode, yet not too heavy for contemporary graphics adapters, and 8x anisotropic filtering is the maximum NVIDIA’s GPUs can support. Graphics cards based on ATI Technologies’ VPUs support up to 16x anisotropic filtering; however, the image quality hardly becomes any different in this case – at least you cannot notice it with the naked eye. Now, I believe it is high time we figured out which card deserves to be called “The Best Graphics Accelerator of the Year 2003”.

3D Performance 2003: First-Person 3D Shooter Games

We will start from the very beginning, that is, with first-person 3D shooters, because this type of game seems to be the most widespread today:

Return to Castle Wolfenstein: Enemy Territory

The engine of this game and the GeForce FX are simply destined for each other: high vertex processing speed and high clock frequencies help NV3x-based graphics cards take the lead here. Clear evidence that RTCW: Enemy Territory loves high frequencies is the performance of the RADEON 9500 PRO, which trails the RADEON 9600 XT even though it boasts 8 powerful pipelines. The RADEON 9600 PRO is defeated by everyone, including the morally outdated GeForce FX 5600 Ultra.

The same picture can be observed as the resolution grows, although the RADEON 9800/XT family manages to catch up with the GeForce FX 5950 Ultra. The RADEON 9600 XT and GeForce FX 5700 Ultra try to rip the victory from each other’s hands, running almost neck and neck all the time.

In 1600x1200 the fastest RADEON-based solution manages to outperform its rivals; the mainstream RADEONs, however, are still behind their counterparts because of slower onboard graphics memory.

Eye Candy mode has always looked better on ATI graphics cards, and today’s test session is no exception. As soon as the workload increased, the senior GeForce FX solutions immediately gave in, while the slower representatives of the NVIDIA GeForce family managed to retain pretty good positions despite the unfavorable testing conditions. Now let’s see what happens as the resolution keeps growing.

The proportions remain the same. The GeForce FX 5700 Ultra is still very stable: it keeps pace with the RADEON 9600 XT, thanks in part to its faster memory.

Although the RADEON 9500 PRO works faster than its newer fellows, it is still too slow for comfortable gaming in 1600x1200 with full-screen anti-aliasing and anisotropic filtering enabled. The more powerful graphics accelerators, outpacing one another by 3-5 fps, cannot boast acceptable performance in this mode either.

As the practical tests showed, almost all the graphics cards pass the Return to Castle Wolfenstein: Enemy Territory tests with FSAA and anisotropic filtering disabled. If we are talking about higher-quality graphics settings, the mainstream solutions will definitely not do; graphics cards based on the RADEON 9800 and GeForce FX 5900 will cope with these tasks much better.

Star Trek: Elite Force 2

This science-fiction shooter is based on the same good old Quake 3 engine, which is why we have every reason to expect NVIDIA-based graphics cards to show good performance results.

And we were absolutely right: all the graphics adapters based on NV3x chips defeated their competitors in no time, thanks to the drivers.

Here the situation is slightly different: the senior GeForce FX models get ahead of the slower ones thanks to their wider memory bus.

Here the gap between the high-end solutions and the mainstream ones becomes even bigger.

Despite the enabled FSAA and anisotropic filtering, the GeForce FX family manages to retain its leadership.

The higher the resolution, the farther behind the GeForce FX 5600 Ultra falls. The only card that failed to defeat it was the RADEON 9600 PRO.

In the highest resolution the situation we observed in RTCW repeats itself: the RADEON 9500 PRO outperforms the RADEON 9600 XT/PRO quite significantly. The gap between the fastest graphics cards is not that big: from 3.3 to 5.4 fps.

Judging by the benchmark results, the GeForce FX graphics card family suits Star Trek: Elite Force 2 better than the RADEON-based solutions. As for the high-quality graphics modes, the RTCW situation repeats itself here: none of the mainstream graphics accelerators are capable of ensuring sufficient speed in modern games with FSAA 4x and 8x anisotropic filtering enabled.

Unreal Tournament 2003

This super-popular 3D shooter has nothing to do with Quake 3, which is why the results obtained in it turned out absolutely different from what we have seen above. For testing the cards in Unreal Tournament 2003 we used the two most typical demos: Inferno and Antalus.

The new drivers definitely help NVIDIA’s products retain their advantage: we haven’t seen results this high before. The RADEON 9800 PRO looks better than the GeForce FX 5900, while the RADEON 9800 XT yields a little to the GeForce FX 5950 Ultra. In the mainstream segment the laurels belong to the RADEON 9500 PRO and GeForce FX 5700 Ultra. Maybe ATI Technologies shouldn’t have given up the 8-pipeline architecture in its mainstream graphics solutions.

As soon as we enabled Eye Candy mode, the senior RADEON models dashed forward. Even such a monster as the GeForce FX 5950 Ultra couldn’t compete with them on equal terms. The GeForce FX 5900 looks very pale here: a really expensive toy, it performed only as fast as the RADEON 9500 PRO/RADEON 9600 XT. As you may have already guessed, the junior GeForce FX models are hardly worth mentioning at all: they are at the very end of the race.

In the Flyby Antalus demo the situation is very similar: the GeForce FX 5950 Ultra runs ahead of all the other testing participants, the GeForce FX 5900 competes with the RADEON 9800 PRO for second position, while the RADEON 9500 PRO continues showing off, just a little behind the GeForce FX 5700 Ultra.

Again the RADEON family is at the top of Olympus, while the senior GeForce FX solutions lie ruined at the foot of the pedestal. The 5950 Ultra looks a little better than the other solutions from the same family, but is still not fast enough to compete on equal terms with the RADEON 9800 PRO/XT. The RADEON 9500 PRO is again faster than its more up-to-date brothers; however, it failed to catch up with the GeForce FX 5900 this time. I would say the Antalus demo is less pitiless towards the GeForce FX, but not merciful enough to let these solutions win in the heavy modes.

Summing up the results of all the Unreal Tournament 2003 tests, I can state that RADEON solutions are more suitable for the high-quality modes. If you use full-screen anti-aliasing and anisotropic filtering, it would be better to think about getting a RADEON-based solution for your system, since even the mainstream models perform pretty well in Unreal Tournament 2003. If you are not hunting for high image quality at high speed, the GeForce FX family will also do great.

Halo: Combat Evolved

I would like to digress a little at the very beginning. As you probably know, FSAA cannot be enabled on NVIDIA graphics cards in this game. Therefore, we decided not to provide the FSAA results demonstrated by the solutions from ATI Technologies, because the comparison would be unfair. Anyway, this game is very demanding, and enabling FSAA would surely drop the fps rate below the comfortable gaming level.

What do we see here? The newcomer from NVIDIA manages to compete with the RADEON 9800 XT quite successfully thanks to the ForceWare optimizations. Second position is split between the RADEON 9800 PRO and GeForce FX 5900. The remaining testing participants simply failed to provide any acceptable gaming performance in this beautiful and exciting shooter. Such are the new generation of games, and they are going to become more and more numerous with time.

Serious Sam: The Second Encounter

Well, Serious Sam is probably performing in our roundup for the last time today; after this we will no longer use it, as it has become too outdated. For now, let’s check what has changed since the times when we used this game as a serious tool for evaluating the performance of contemporary graphics accelerators.

Wow, this is cool! The 1.5-2 times higher performance of the NVIDIA solutions could hardly be overlooked. We repeated the tests a few times, just to make sure, but the results didn’t change. To tell the truth, it is pretty impressive how far ahead the GeForce FX managed to get here. Of course, NVIDIA’s solutions have always felt at home in this game, but we never expected them to feel this much at home! NVIDIA is the indisputable leader here: no one could even try to compete with the GeForce FX.

With FSAA and anisotropic filtering enabled, the situation changed dramatically. The GeForce FX managed to retain its leadership in this test, but the gap got significantly smaller. In fact, this is pretty logical: these graphics cards have always been known for their inability to retain high speed as soon as the visual-quality features are enabled.

So, the indisputable victory in Serious Sam: The Second Encounter belongs to the GeForce FX family. However, we cannot consider this game a next-generation one, which unfortunately makes the victory less important.

Highly-Anticipated DirectX9 Game

And now comes the hit of today’s show: a new-generation game based on a revolutionary engine with outstanding shader graphics and a realistic physical model of the environment around the player. NVIDIA has already suffered a few very upsetting defeats here, and we have no reason to think the situation will change this time. Nevertheless, it is still worth checking out:

The SF Demo record we created reflects an actual gaming situation and is very suitable for performance evaluation. Here the victory goes to the ATI-based graphics cards. Moreover, the good old RADEON 9500 PRO looks just excellent against the background of the new mainstream solutions. The monstrous GeForce FX 5950 Ultra is absolutely harmless and can only be regarded as a competitor to the RADEON 9600 PRO. The GeForce FX 5700 Ultra tries really hard to work in DirectX 9 mode but hardly copes with the task, as the GeForce FX 5600 Ultra working in DirectX 8.1 mode easily outperforms its elder brother and almost catches up with the GeForce FX 5950 Ultra.

Another record we created also presents a real situation that may occur in the game; this time, however, there are huge water surfaces, and of course the water is drawn with pixel shaders. Moreover, this time we had to fight a few monsters, which led to severe injuries to three zombies and two crows :)

No wonder the RADEON 9800 XT/PRO and RADEON 9500 PRO appear among the leaders in the high-end and mainstream tests. The latter is definitely more suitable for next-generation games than the RADEON 9600 XT/PRO, which works at higher frequencies but lacks a few important things. In applications like this, you cannot have too many pipelines. :)

The GeForce FX solutions are slowly crawling along at the very end of the pack, destined to win no serious prize, although this time the GeForce FX 5900 and 5950 Ultra still manage to ensure more or less sufficient performance for comfortable gaming. Of course, everything may change for the better for the GeForce FX when the final version of the game is released, but so far the situation looks as described.

The third and last demo scene was also created in our lab during real gameplay. This time our virtual hero had to crawl through dark underground corridors, using his gun to cool down a few especially annoying flying robots. There is also water in this record, namely a stream at the bottom of a drain canal; this stream is much smaller than in the previous demo, of course. The monsters and fellow allies were, on the contrary, more numerous this time, and they were moving really actively. Again the GeForce FX appeared unable to compete with the RADEON family. No doubt the latter copes with the new gaming engine much more successfully, and I see no cause for concern about its performance when the final version of the game comes out. :)

Not all performance issues can be solved by optimizing the software: this simple truth is proven once again by the diagrams above. However, nobody prohibits hoping for better: NVIDIA and the game developers may yet find a way to optimize the gaming engine later on.

Tron 2.0

Not so long ago our collection of test games acquired a new member: an original first-person shooter, remarkable in that the player lives the life not of a brave space soldier or a magic hero, but of a… computer program. The theme of the game clearly shaped its environments: there are no complex textures, while the geometry is pretty heavy and the lighting effects are truly impressive.

The game is based on the fairly progressive LithTech Triton engine, capable of using the abilities of contemporary graphics accelerators to the full extent. The GeForce FX 5700 Ultra demonstrates pretty good results, similar to what we saw from the RADEON 9600 XT/9500 PRO. As for the senior GeForce FX models, they lag behind the leaders, but they are not that far behind and the gap is not critical at all.

Actually, the situation lived up to our expectations: the enabled anisotropic filtering and full-screen anti-aliasing pushed the RADEONs to the top. Moreover, the outdated and already discontinued RADEON 9500 PRO managed to outperform the more expensive GeForce FX 5900 and got really close to the flagship of NVIDIA’s lineup, the GeForce FX 5950 Ultra. The GeForce FX 5700 suffered an indisputable defeat, beaten by the RADEON 9600 PRO/XT in two of the three resolutions; it managed to make up for its failures only in 1600x1200.

Again we have to state that the RADEON 9800 family is superior to its competitors and to the slower cards on ATI chips. If you are a big fan of Tron 2.0 with all its graphics tricks, your choice of graphics adapter is evident.

Actually, we cannot single out an indisputable leader in the first-person 3D shooter games. According to the benchmark results, first prize is shared by the ATI RADEON 9800 XT and NVIDIA GeForce FX 5950 Ultra, though the former looks much more advantageous, as it is more suitable for next-generation games and performs faster with FSAA and anisotropic filtering enabled.

3D Performance 2003: Third-Person 3D Shooter Games

The second group of our tests includes games offering a third-person view of the scene. Let’s check out the results of our testing participants here.

Tom Clancy’s Splinter Cell

This game is going to leave the pages of our reviews pretty soon, because its popularity among gamers is decreasing little by little. Nevertheless, it is still well loved, so we simply couldn’t leave it out this time.

Well, there is nothing new here: the RADEON family is still ahead of everyone, which is actually not at all surprising. Curiously, this game is more sensitive to frequencies than to the number of working pipelines, which explains the failure of the RADEON 9500 PRO. By the way, the problems with shadows, described in one of our previous articles, have the same roots as the problems with FSAA in Halo mentioned above. Hopefully, NVIDIA will improve the image quality situation with time.

Tomb Raider: Angel of Darkness

This game reminds us of Splinter Cell, but the interesting thing about it is the great number of image quality settings it offers: you can adjust nearly everything, which may even confuse an unsophisticated user. For our tests we used the settings ensuring maximum graphics quality.

Splinter Cell is very sensitive to the graphics chip frequency, while Tomb Raider, on the contrary, favors 8-pipeline architectures, which brings RADEON 9500 PRO victory among the mainstream solutions. In the high-end segment the situation is more than clear: the laurels belong to RADEON 9800 XT/PRO, which handle the shader code very efficiently. The scores of the GeForce FX cards, however, turn out to be a disaster, even despite the new shader compiler integrated into ForceWare 52.16.
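
A quick back-of-the-envelope calculation shows why the two approaches can trade places from game to game. Here is a minimal sketch of the theoretical pixel fillrate; the pipeline counts and clocks are the commonly cited specifications for these chips, quoted purely for illustration:

```python
# Back-of-the-envelope pixel fillrate: pipelines x core clock.
# The pipeline counts and clocks below are the commonly cited specs
# for these chips, quoted here for illustration only.
def fillrate_mpix_s(pipelines, core_clock_mhz):
    """Theoretical pixel fillrate in megapixels per second."""
    return pipelines * core_clock_mhz

print(fillrate_mpix_s(8, 275))  # RADEON 9500 PRO: 8 x 275MHz = 2200 Mpix/s
print(fillrate_mpix_s(4, 500))  # RADEON 9600 XT:  4 x 500MHz = 2000 Mpix/s
```

On paper the two cards are almost even, so whichever resource a given engine stresses more, raw clock speed or parallel pipelines, decides the winner.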

Enabling FSAA and anisotropic filtering didn't change the picture at all: the leadership still belongs to the ATI RADEON based solutions. Moreover, NVIDIA's mainstream products refused to work at 1024x768, and the game simply dropped us back to the Windows desktop.

Both third-person shooters we used for testing today are based on progressive shader-heavy engines, which is why the RADEON family feels at home here and wins the tests. NVIDIA's shader code compiler, for some reason, is not enough to help its cards win.

3D Performance: Simulators

The next type of gaming application simulates, exactly or not very exactly, some means of transportation, military or civil. These games have specifics of their own. In particular, the remarkable thing about all airplane simulators is the presence of two huge textures in every frame: the sky and the ground; the models are all designed with the highest quality and obey the major laws of physics. Much the same is typical of auto simulators, too, although in their case anisotropic filtering matters much more, because it is responsible for the quality of the road rushing under the car's wheels.

We believe we selected the two most suitable games among today's simulators: the World War II airplane simulator IL-2 Shturmovik: Forgotten Battles and the Formula 1 racing simulator F1 Challenge 99-02. IL-2 Shturmovik: Forgotten Battles seems to be the best game of its kind so far: the airplane models are rendered in the most thorough and detailed way, and the image quality makes you forget that it is just a game, carrying you away into air battles where radars and rocket launchers with laser targeting do not matter much, and it is only the pilot's skill and experience that bring you through to the sacred victory.

IL-2 Shturmovik: Forgotten Battles

For our tests we selected one of the heaviest records, TheBlackDeath.ntrk, so you shouldn't be surprised at the low fps rates. In the heavyweight class, RADEON 9800 XT and 9800 PRO manage to get slightly ahead of their rivals from NVIDIA; the latter are only 1fps behind the leader in 1024x768. The performance of RADEON 9500 PRO is probably hampered by its low memory frequency, as its results are even lower than those of RADEON 9600 XT/PRO. As for the GeForce FX, it feels fine in this game thanks to its fast graphics memory (remember the large textures we mentioned when describing the tests?). Surprisingly, in 1024x768 GeForce FX 5700 Ultra manages to outperform its faster brothers, though only by 2fps.

In Eye Candy mode the situation is slightly different: the 8 pipelines of the RADEON 9500 PRO do it a priceless service when it comes to processing large textures, which is why it wins for sure among the mainstream products. The elder RADEON based solutions also handle the textures perfectly well; GeForce FX 5950 Ultra falls just a tiny bit behind them.

F1 Challenge 99-02

This game is a Formula 1 racing simulator, and we have already used it once in our testing (see our latest reviews in the Video section). Of course, it is quite possible that we will replace it later with a different, more advanced auto simulator, but this is just a plan so far.

For some unknown reason, the RADEON graphics cards rarely exceed 45fps here, while the NVIDIA cards hardly overcome the 51fps barrier. We should probably question the correctness of the results obtained in this gaming benchmark. Nevertheless, we clearly see that on the whole the GeForce FX family seems to perform somewhat better in F1 Challenge 99-02 than the RADEON family.

As the workload increases, the RADEON cards start performing better, especially in the higher resolutions. Simpler anisotropic filtering algorithms do not hurt the image quality here, as the road in front of the racer always lies in a single reference plane.
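
For the curious, here is a minimal sketch of where the degree of anisotropy comes from. It is a deliberately simplified model (real hardware works per 2x2 pixel quad and fits an ellipse to the texture footprint), not any vendor's actual algorithm:

```python
import math

def anisotropy_ratio(du_dx, dv_dx, du_dy, dv_dy):
    """Approximate degree of anisotropy of a pixel's texture footprint
    from its screen-space texture-coordinate derivatives, using a common
    simplification: compare the lengths of the footprint's two axes."""
    len_x = math.hypot(du_dx, dv_dx)   # footprint axis along screen X
    len_y = math.hypot(du_dy, dv_dy)   # footprint axis along screen Y
    major, minor = max(len_x, len_y), min(len_x, len_y)
    # The ratio tells the filter how many extra samples to take
    # along the major axis; 1.0 means the footprint is isotropic.
    return major / max(minor, 1e-9)

# A road receding straight ahead: the texture is compressed strongly
# along the screen's vertical axis only, so the footprint is long,
# thin and axis-aligned, an easy case for adaptive algorithms.
print(anisotropy_ratio(1.0, 0.0, 0.0, 8.0))  # -> 8.0 (up to 8 taps)
```

A surface like this produces long but well-aligned footprints, which is exactly why even a simplified, adaptive anisotropic filter handles the road at full quality.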

3D Performance: Real-Time Strategies

This category was initially supposed to include two popular contemporary real-time strategy games: Command & Conquer Generals: Zero Hour and Homeworld 2. The former continues the world-famous C&C series, though this time the developers focus on multi-player gaming. Homeworld 2 is remarkable for the fact that the action takes place not on a planet but in open space. Both games are very beautiful and load contemporary graphics accelerators quite tangibly. Unfortunately, Homeworld 2 behaved inadequately throughout the entire test session with the latest driver versions, so we had to give it up this time.

Command & Conquer Generals: Zero Hour

To make this game report correct performance values, we had to dig into its settings a little: we disabled the fps limiter, because otherwise the frame rate would never exceed 30fps.
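
A trivial sketch (with purely hypothetical numbers) of why the cap had to go: with the limiter active, every card fast enough to reach 30fps reports the same figure, and the benchmark tells you nothing about the hardware.

```python
# With a 30fps limiter active, every card fast enough to reach the cap
# reports the same figure, so the benchmark tells you nothing.
def reported_fps(raw_fps, cap=30):
    """Frame rate the game would report with its fps limiter enabled."""
    return min(raw_fps, cap)

for raw_fps in (45, 90, 180):      # three hypothetical cards
    print(reported_fps(raw_fps))   # all three print 30
```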

As you can see, an actual performance difference between the tested graphics accelerators can only be observed in 1280x1024 and up. Moreover, one tendency is clearly noticeable: all graphics cards with a 256bit memory bus show almost identical results, and the same is true for the cards with a 128bit bus. The winner here is RADEON 9800 XT, though GeForce FX 5950 Ultra also proved pretty successful. In the mainstream segment we can state parity between RADEON 9600 XT/9500 PRO and GeForce FX 5700 Ultra.

When we enable FSAA, the memory subsystem workload shoots up immensely, so all graphics cards with a narrow memory bus immediately give up their positions, whatever other advantages they may boast. RADEON 9800 XT remains the winner of the high-end competition, while on the mainstream side victory goes to GeForce FX 5700 Ultra, which elbows its way to the pedestal thanks to its extremely high working frequencies.
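
The effect of bus width is easy to see in numbers. Below is a minimal sketch of the peak-bandwidth arithmetic; the clock figures are the commonly cited specifications, quoted here for illustration only:

```python
def peak_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# RADEON 9800 XT: 256bit bus at 730MHz effective DDR -> ~23.4GB/s
print(peak_bandwidth_gb_s(256, 730))
# GeForce FX 5700 Ultra: 128bit bus at 906MHz effective DDR-II -> ~14.5GB/s
print(peak_bandwidth_gb_s(128, 906))
```

Even a very fast 128bit card ends up with roughly 60 percent of the bandwidth of a 256bit one, and FSAA is exactly the workload that exposes this gap.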

3D Performance: Semi-Synthetic Benchmarks

This type of test is certainly of interest to us, because the benchmarks in this group are prototypes of real games or are based on real gaming engines. Therefore, the results obtained in these tests can be used to objectively evaluate the performance of today's testing participants.

AquaMark3

I don’t think I need to introduce this benchmark to you any more: see our article called ASUS RADEON 9800 XT and LeadTek WinFast A380 TDH Ultra: The Battle for AquaMark3 for more details.

The leader here is GeForce FX 5950 Ultra, which owes its victory to high working frequencies as well as to ForceWare 52.16. The rest of the GeForce FX family cannot boast any really remarkable results, but competes pretty successfully with the elder RADEON solutions. In the mainstream group the leading positions are occupied by RADEON 9500 PRO/9600 XT and GeForce FX 5700 Ultra.

The successful architecture of the RADEON 9800 XT/PRO, as well as its fast anisotropic filtering and anti-aliasing algorithms, again helps these cards win when FSAA and AF are both enabled. GeForce FX immediately loses its advantages in both the high-end and the mainstream segments. In the mainstream group the best result in the lowest resolution is demonstrated by RADEON 9600 XT, while in the highest resolution the winner is definitely RADEON 9500 PRO.

However, I have to admit that really high results in AquaMark3 are only shown by graphics solutions from the higher price group, while the mainstream products hardly allow playing with high image quality settings.

Final Fantasy XI Official Benchmark 2

This is an official benchmark created by Square Enix. It includes a number of scenes from the upcoming Final Fantasy XI, a game that promises to become the most beautiful and interesting online RPG.

The scenes involved in the testing are in no way inferior in quality to the Nature scenes from 3DMark2001 and 3DMark03. Moreover, unlike the latter, these scenes are not static but very lively.

Unfortunately, Final Fantasy XI Official Benchmark 2 doesn't support changing the resolution and reports its results not in fps but as the overall number of frames rendered within a single pass. A result above 2000 implies that Final Fantasy XI will run well on your system, while a result above 4000 means your system is powerful enough to ensure high gaming performance even with maximum image quality settings.

As you can see, all testing participants coped with this task well enough, except GeForce FX 5600 Ultra. Nevertheless, the developer's official position implies that if the result lies between 3000 and 3999, your PC may be considered “Very tough”: “Your computer can run FINAL FANTASY XI for Windows enjoyably with the default settings. If your video card exceeds the recommended system requirements, it may be possible to run FINAL FANTASY XI for Windows easily even in high resolution mode.”
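
To put the scoring system together in one place, here is a minimal sketch of how the score bands translate into verdicts. Only the “Very tough” label for the 3000-3999 band is the developer's official wording quoted above; the other descriptions paraphrase the thresholds mentioned earlier, and the function itself is purely illustrative.

```python
def ffxi_bench2_verdict(frames_rendered):
    """Interpret a Final Fantasy XI Official Benchmark 2 score (the total
    number of frames rendered in a single pass, not an fps figure)."""
    if frames_rendered >= 4000:
        return "High performance even at maximum image quality settings"
    if frames_rendered >= 3000:
        return "Very tough: runs enjoyably at the default settings"
    if frames_rendered >= 2000:
        return "The game should run well on this system"
    return "Below the comfortable threshold"

print(ffxi_bench2_verdict(3500))  # -> "Very tough: ..."
```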

Even with FSAA and anisotropic filtering enabled, the results do not drop below 3000. However, in this case the eldest RADEON solutions managed to outperform all other testing participants. RADEON 9500 PRO also performed very well, easily reaching the level of GeForce FX 5900.

X2 – The Threat Rolling Demo

Now you can see the new space simulator, X2 – The Threat, in action. We have already mentioned this test in one of our previous reviews.

This game evidently loves high frequencies, as you can clearly see from the diagrams above. The winners in the high-end and mainstream classes are GeForce FX 5950 Ultra and GeForce FX 5700 Ultra respectively, with the second positions firmly held by RADEON 9800 XT and RADEON 9600 XT.

Enabling the Eye Candy mode slightly reduces the gap between the GeForce FX and RADEON families: now the latter do not fall that far behind the leaders. Moreover, the mainstream RADEON 9600 XT dashes forward, and in 1600x1200 RADEON 9500 PRO outperforms it. The slower GeForce FX solutions naturally roll back to the very end of the race.

Summing up the results obtained in X2 – The Threat, we can see that the GeForce FX is much better fit for this game. However, the gap between the GeForce and RADEON families is not dramatic enough to draw a definitive conclusion once and for all.

Synthetic Benchmarks

This time we decided to skip the full set of 3DMark03 diagrams. For your convenience, here is a summary of the graphics cards' performance in this test package:

As we expected, the fastest card here is RADEON 9800 XT; the second prize goes to RADEON 9800 PRO and the third to GeForce FX 5950 Ultra. In the mainstream segment RADEON 9600 XT wins, while the second and third positions are also occupied by cards on ATI's VPUs.

Conclusion 2003

So, we have just tested nine of the most well-known and popular graphics solutions of 2003. We didn't include any low-end or intermediate solutions, trying instead to draw your attention to the best products available in the market.

Well, what have we found out?

The main conclusion is: do not expect mainstream graphics cards to show rocking performance in the heavy modes. Practice showed that solutions like RADEON 9600 XT and GeForce FX 5700 Ultra cope well with games as long as anisotropic filtering and full-screen anti-aliasing are disabled. Once you activate these options, the performance of both these graphics accelerators drops too much for you to enjoy gaming any longer.

We also didn't dwell on graphics quality in contemporary gaming applications or try to figure out how the various cheats and optimizations affect the image. The major criterion of our "quality control" was the absence of serious, eye-catching image artifacts. Today, when NVIDIA recommends that game developers sacrifice image quality for the sake of speed (see this document), we sincerely hope the developers will not follow the hint, because otherwise we will very soon "enjoy" low-quality graphics on NV40 and R420, on NV50 and R500. For now, I suggest we leave alone the driver developers with their desperate attempts to drag the performance of not-so-successful hardware up to an acceptable level, the graphics chip designers with their intentions to create a worthy solution, and the game developers working really hard on interesting and visually pleasing games.

Graphics Accelerators Based on Chips from NVIDIA Corporation

I should say that NVIDIA has finally managed to animate the NV3x family, greatly helped by the release of the ForceWare drivers. The graphics accelerators based on these chips no longer perform so drastically slowly in the new games using pixel shaders. Of course, this is far from true in all cases. Moreover, there is a certain tendency: the more up-to-date the game and the more complex its engine, the worse the performance of NVIDIA's offspring in it. However, in a number of cases they still run really fast, especially in the modes with full-screen anti-aliasing and/or anisotropic filtering.

The new GeForce FX 5950 Ultra is really not bad at all; however, it has a few drawbacks worth paying attention to, such as the huge cooling solution and the long PCB, which in our case blocked the IDE connectors and touched the DIMM slot clips. The same is true of GeForce FX 5700 Ultra: the new mainstream graphics adapter hardly yields to its elder brother in size and weight. Moreover, DDR-II memory can never be called "cool", as it dissipates a lot of heat at work. Those who cannot afford a GeForce FX 5950 Ultra but do not want to put up with only four rendering pipelines and a 128bit memory bus could go for a nice GeForce FX 5900 model. We consider the Albatron GiGi GeForce FX 5900 and ASUS V9950 the most successful among solutions based on the GeForce FX 5900 chip, as they boast quiet and compact cooling systems, which is a definite advantage.

Unfortunately, many graphics card makers believe that noise is the last thing gamers care about, which is why they often equip their products with real turbines so noisy that you can hardly enjoy sitting in front of your PC for long. If you really don't care about noise, you will like the cards from Leadtek, which boast excellent PCB and mounting quality; their only drawback, from our point of view, is the noise produced by the cooling system.

All in all, we would recommend the GeForce FX family of graphics accelerators to those of you who need maximum fps rates and do not use the high quality modes at all. It is quite possible that NVIDIA's software developers will fail to compensate for all the flaws of the NV3x architecture with the shader code compiler built into the ForceWare driver; as a result, the performance of GeForce FX based solutions in next generation games such as Doom III, Half-Life 2 and others may turn out absolutely unacceptable. An indirect proof of the point is the section of this article devoted to benchmark results in a beta version of one such game: despite all the tricks, the huge and hard-to-optimize NV3x architecture proved a real stopper for all the NVIDIA graphics solutions participating in the test session. Of course, some game developers may optimize their software specifically for this architecture, but there is hardly any hope that all of them will do so, especially since ATI has so far been selling the absolute majority of performance-mainstream and high-end DirectX 9 products.

Graphics Accelerators Based on Chips from ATI Technologies

The situation here is relatively cloudless: the products split clearly into two major categories, high-end and mainstream. The first includes RADEON 9800/PRO/XT, while the second includes RADEON 9600/PRO/XT. Unfortunately, RADEON 9500 PRO has already been discontinued; however, as we have just seen, this graphics adapter is a worthy competitor to RADEON 9600 XT and can even outperform the latter in some cases, namely in high resolutions with FSAA and anisotropic filtering enabled. RADEON 9600 XT also looks very progressive. Both of these cards boast a simple design, are very compact, do not require any additional power supply, dissipate little heat and work really fast in shader applications. I would recommend them to anyone who wishes to play modern games but doesn't have the budget for a RADEON 9800.

As a rule, these graphics cards feature very quiet coolers, because the 0.13-micron chip they are based on dissipates very little heat, even at working frequencies of 400-500MHz.

RADEON 9800 PRO is also a great choice for those who do not want to pay extra for the 256MB of onboard graphics memory and the slightly higher frequencies of the RADEON 9800 XT. In fact, 128MB of memory is more than enough for most of today's games, and the situation is unlikely to change in the near future: these cards will last for at least another half a year for sure.

And those of you who wish to be at the cutting edge of modern technology should definitely take a closer look at RADEON 9800 XT. This is today's top-notch solution in the 3D graphics market, perfectly fit for any type of game: older titles, new ones, and the upcoming next generation. RADEON 9800 XT cards are manufactured mostly according to ATI's reference design and use ATI Technologies' reference cooling solution with its huge but surprisingly quiet fan. However, ASUS released its own version of the RADEON 9800 XT with an even quieter and more efficient cooler, so if you care a lot about noise, you should definitely check it out.

All graphics cards based on ATI VPUs feel great in the heaviest graphics modes with full-screen anti-aliasing and anisotropic filtering. That is why I have every right to recommend these cards to everybody who would like to enjoy a high-quality image at a pretty high fps rate.

This way, we can conclude that graphics adapters based on NVIDIA GPUs are more suitable for gamers who need the maximum fps rates in today's games and do not use full-screen anti-aliasing or anisotropic filtering.

The graphics solutions based on VPUs from ATI Technologies are more suitable for gamers who want to enjoy a high-quality image with FSAA and anisotropic filtering enabled, as well as for those looking ahead to the next generation of shader-heavy games.

In conclusion, let me point out a simple truth: we can only advise you to consider this or that product, but the final decision is made by you, our readers. Enjoy your Christmas shopping!