The Fastest Graphics Cards of 2004: Ultimate Testing of 27 GPUs. Part I: Major Events of the Year 2004 in 3D Graphics

Today we would like to present to you an unprecedented testing of 27 different graphics cards based on GPUs from ATI and NVIDIA, supporting different interfaces (AGP and PCI Express) and representing all of today’s market segments, from Ultra-High-End solutions supporting NVIDIA’s multi-GPU configurations down to the Budget-Mainstream segment. Check out the first part of our article to read about the major events of the 3D graphics market in the past year that brought us all these products!

by Alexey Stepin, Yaroslav Lyssenko, Anton Shilov
02/14/2005 | 10:53 PM

Today we would like to present to you an unprecedented testing of 27 different graphics cards based on GPUs from ATI and NVIDIA, supporting different interfaces (AGP and PCI Express) and representing all of today’s market segments, from Ultra-High-End solutions supporting NVIDIA’s multi-GPU configurations down to the Budget-Mainstream segment. Do you feel lost in the tons of graphics cards out there? Then our guide will definitely help you out!

 

Now that we are in the year 2005, we have enough perspective to take a clearer look at the year that has passed, particularly at the computer graphics market, with the purpose of finding the fastest and most technologically advanced graphics card of 2004.

The past year brought us a lot of exciting events, brand-new graphics architectures among them. The performance of consumer graphics cards reached previously unthinkable heights, and games appeared that raised the bar in terms of realism of the 3D image. The year was also remarkable in that, after three years of tough competition, ATI Technologies became the leader in the number of graphics chips sold, which only made the fight more intense. Now, besides the desktop and mobile GPU fields, the companies are also challenging each other as makers of mainboard chipsets.

But let’s be meticulous and methodical – we are first going to give a brief description of the events of each quarter of the past year.

Content Rules the World

It is an undeniable fact that the influx of new games using technologies developed over the previous 2-3 years had an even greater impact on the graphics market than the unprecedented leap in the performance of graphics cards and, to some extent, of microprocessors. The new-generation games amaze with their image and sound, but their system requirements are sometimes shocking as well – the prominent titles of 2004 only run really smoothly on graphics cards and processors released the same year. Far Cry, Doom 3 and Half-Life 2 did their job well – people of all ages and professions rushed to upgrade their computers with new hardware to play them comfortably. The shortage of high-end graphics cards is explained precisely by the releases of these demanding games: system integrators (catered to mostly by ATI) and retail shops wanted massive quantities of expensive cards to meet the exceptionally high demand.

So, the importance of games for graphics card sales, evident right from the first quarter of 2004, was felt throughout the entire year.

Q1 2004: Calm before the Storm

ATI Remains Unbeatable

It was all calm and quiet at the beginning of 2004. Although ATI and NVIDIA had been hastily preparing their new-generation processors, codenamed R420 and NV40 respectively, there were no special events in the 3D graphics market. All kinds of fantastic rumors were circulating about the upcoming chips, but few things were certain. Both solutions were expected to have high-speed GDDR3 memory, 12-16 pixel pipelines, and clock rates of up to 500-600MHz. Not all of these rumors came true, though.

ATI’s RADEON 9800 XT processor kept its crown throughout the first quarter. In fact, the RADEON 9800 family had no rivals, since GeForce FX cards delivered lower performance in games, especially those that made heavy use of sophisticated pixel shaders. The NV38 (GeForce FX 5950 Ultra) processor, released late in 2003, couldn’t save the day – it was the GeForce FX architecture itself that couldn’t cope with the increasing demands of modern computer games.

At the end of 2003 NVIDIA introduced its new technology of dynamic compilation of pixel and vertex shaders, intended to increase the performance of NVIDIA’s products. The new ForceWare driver, which replaced the earlier Detonator and featured that compiler, did add to the performance of the GeForce FX, but didn’t solve the problem completely. NVIDIA’s fans had to wait for the upcoming graphics architecture and hope it would be free from the drawbacks of the older one.

As for the mainstream sector of the market, ATI’s RADEON 9600 and NVIDIA’s GeForce FX 5700 families were grappling there. The latter could roughly match the former, could even pull ahead in applications that made no use of pixel shaders, and beat it at geometry processing (the GeForce FX 5700 Ultra had three vertex processors).

NVIDIA had a steadier standing in the low-end market with its GeForce FX 5200. Although not much in terms of speed, this card fully supported DirectX 9, while ATI’s RADEON 9200 could only support DirectX 8.1.

Ambitious Attempts

Late in 2003 XGI staked its claim on the high-end graphics market, and in early 2004 it said it was going to boost its chip shipment volume twentyfold, from 200 thousand to 4-5 million units. That was a brave claim indeed, but the attempt failed. Our tests of the top model of the Volari series, equipped with two Volari V8 processors, proved the card nonviable in its then-current form: its performance was comparable to that of the RADEON 9600 PRO, and it had problems with image quality and, as we found out later, with compatibility.

For more details see our article called Club3D Volari Duo V8 Ultra Review: XGI Volari Family Coming to Graphics Market.

S3 Graphics also didn’t give up its attempts to carve out a slice of the market for itself and announced the new DeltaChrome S8 Nitro on March 3. The new graphics card was fully analogous to the earlier announced DeltaChrome S8, but was clocked at higher frequencies and delivered somewhat higher performance. For more details see our article called S3 DeltaChrome S8 Gets Nitro Acceleration: Review of the Revamped S8.

As it turned out later, though, these graphics cards never made it to the market.

CeBIT 2004: Failed Hopes

Against all expectations, neither ATI nor NVIDIA announced their new GPUs at CeBIT Hannover 2004, which opened on the 18th of March. NVIDIA, however, showed its NV40 to a few select partners among the mass media and manufacturers. ATI’s R420 wasn’t heard about at all: none of ATI’s partners could say they had seen the new chip in silicon.

Instead, there were other curious samples showcased, like graphics cards with the PCI Express interface. We should note, however, that the early PCI Express graphics cards from both ATI and NVIDIA had no architectural advantages over their AGP counterparts and were in fact the same RADEON 9600 and GeForce FX chips renamed as RADEON X600 and GeForce PCX. The GeForce FX supported the PCI Express interface by means of a special NVIDIA HSI converter, which allowed using old AGP-compatible chips on the new platform. This seemingly awkward solution would later help NVIDIA update its series of middle-range products with the AGP interface.

As for the platform proper, there were numerous mainboards with the PCI Express bus at the exhibition, and ASUS showcased a working system based on the Intel i925X chipset (Alderwood), without giving any info as to its performance, though.

On the whole, the beginning of 2004 was calm and uneventful as far as the 3D graphics realm was concerned. Everything went silent before the storm of imminent announcements coming from both ATI Technologies and NVIDIA Corporation.

For more details see our CeBIT 2004 coverage.

Q2 2004: Nalu Attacks Ruby

The long wait ended in the second quarter when the mermaid Nalu, a symbol of the GeForce 6800, and the sports beauty Ruby, a symbol of the RADEON X800, stepped on the scene.

NVIDIA NV40: A Revolution?

NVIDIA found itself in a tight corner in the spring of 2004, as it had nothing to fight ATI’s highly successful RADEON 9800 series with. It had become clear by then that the GeForce FX wasn’t fit for the role: the architecture couldn’t provide enough performance in modern games, which were making wide use of the capabilities of DirectX 9. So, NVIDIA had to hasten the release of a new product that would be absolutely different from its predecessor.

Urged on by its failures in the sector of top-end graphics cards, NVIDIA was the first to show its new weapon – the graphics processor of the new generation, codenamed NV40, was announced on the 14th of April along with a new family of graphics cards based around it (for more details read our review called NVIDIA GeForce 6800 Ultra and GeForce 6800: NV40 Enters the Scene). It was a completely new architecture, free from a score of drawbacks of the earlier one. NVIDIA packed a heap of advanced technologies into the chip, endowing the NV40 with support for Shader Model 3.0 and for the enhanced dynamic range described by the OpenEXR standard, yet the GeForce 6 still had a few discernible features left over from the GeForce FX. Besides that, the NV40 featured a special video processor capable of encoding and decoding video streams in MPEG-4, MPEG-2, and HDTV formats, thus freeing the CPU for other tasks. This luxurious functionality came at a price – the die of the new processor came out very complex, consisting of a whopping 220 million transistors.

The complexity of the chip made NVIDIA release the GeForce 6800 Ultra processor at a lower frequency than its earlier GPUs. Until then, NVIDIA’s top-end GPUs had been clocked at close to 500MHz, but NVIDIA could only achieve an acceptable chip yield at 350-400MHz with the NV40. NVIDIA’s new family at first included two graphics cards, the GeForce 6800 Ultra and the GeForce 6800, but was later complemented with the GeForce 6800 GT and GeForce 6800 LE.



In fact, the yield of NV40 dies capable of working at 400MHz was too low, so the company manufactured the GeForce 6800 GT out of chips that couldn’t work at 400MHz but were stable at a reduced clock rate.

The power consumption of the top-end model remained almost the same as that of the GeForce FX 5950 Ultra, but the requirements for power quality had grown, especially for the current drawn from the +12V rail. NVIDIA even recommended using 480W or higher PSUs with the GeForce 6800 Ultra: such units are expensive, but employ high-quality components and provide the necessary stability of the output voltages.

The blow was strong from the technological point of view. The GeForce 6800 Ultra dethroned the R360 in no time, showing a much higher level of performance. Alas, in spite of the success, there were no graphics cards with that GPU in shops, so NVIDIA’s victory remained largely on paper. The problem was that IBM, the first company NVIDIA had chosen to manufacture the new GPU, couldn’t supply GeForce 6800 chips in sufficient quantities until the fourth quarter.

ATI R420: Shader Model 2.0 in Need for Speed…

On May 4, ATI Technologies rolled out its R420 chip as a response to NVIDIA’s earlier announcement (for more details about the ATI R420 read our review called ATI RADEON X800: R420 Totally Exposed). Unlike its competitor, ATI took an evolutionary rather than revolutionary approach with the new chip, basing it on the R300 architecture, albeit revised, improved and stripped of some earlier drawbacks.

Besides everything else, the RADEON X800 was made using a new 0.13-micron technological process with so-called low-k dielectrics. As a result, the chip was more compact and worked at a higher frequency than its competitor from NVIDIA. The relative simplicity – only about 160 million transistors – and the new tech process helped it reach frequencies of about 500MHz. Curiously, the RADEON X8xx was originally to come out with 12 pipelines, but ATI changed its mind and released a 16-pipeline processor, like the NV40.

As for new technologies, there were no revolutionary innovations in the R420 – things had just been enhanced a little. The organization of the pixel pipelines had changed – the 16 pipelines were split into 4 groups, each of which could process 4 pixels at once. The HyperZ technology was once again improved and dubbed HyperZ HD. Besides, ATI equipped the new architecture with a new algorithm of normal map compression called 3Dc.
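To illustrate the general idea behind two-channel normal map compression, which 3Dc builds upon, here is a minimal sketch in Python: the texture keeps only the X and Y components of a unit-length tangent-space normal, and the missing Z component is reconstructed at sampling time. This is only the principle; the block-compression format of 3Dc itself is not detailed in this article.

import math

def reconstruct_normal(x, y):
    # Two-channel normal map compression stores only the X and Y
    # components of a unit-length tangent-space normal; Z is rebuilt
    # from the constraint x^2 + y^2 + z^2 = 1. Tangent-space normals
    # point away from the surface, so the positive root is taken.
    z_squared = max(0.0, 1.0 - x * x - y * y)   # clamp against rounding noise
    return (x, y, math.sqrt(z_squared))

# Example: a normal tilted slightly along the X axis
print(reconstruct_normal(0.3, 0.0))   # -> (0.3, 0.0, ~0.954)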

Two graphics card models were announced, with two more on the way. By the end of the year the RADEON X800 family included 9 models, which made it the biggest family in ATI’s line-up.



All new RADEONs were equipped with a single power connector and a standard set of outputs (DVI-I, D-Sub and S-Video). As for heat dissipation, the new chip from ATI, thanks to the improved tech process, proved quite undemanding and made do with a modest cooler, even smaller than the one installed on the RADEON 9800 XT. The power consumption figures were also better than those of the GeForce 6800 Ultra – even the top-end model, the RADEON X800 XT Platinum Edition, came with a single power connector.

The evolutionary approach of ATI Technologies to developing the new GPU proved to be correct. As our tests showed, the new RADEON X800 family was most successful in terms of performance, often delivering more speed than the GeForce 6800 series (for the detailed benchmark results see our article called Clash of the Titans: ATI RADEON X800 PRO and ATI RADEON X800 XT Against the NVIDIA GeForce 6800 Ultra). The availability of the RADEON X800 was also better than that of the GeForce 6800 – these graphics cards appeared in retail soon after the announcement, although at rather steep prices.

PCI Express: Green Light Ahead

The beginning of the transition to the PCI Express bus was certainly the most anticipated Q2 event in the whole IT industry. On June 21 Intel announced a series of new LGA775 processors as well as three chipsets with support for PCI Express: i925X, i915G, and i915P. For more details see our article called LGA775: New CPUs and Chipsets.

The new chipsets from Intel had no trace of the AGP bus whatsoever. Instead, they offered PCI Express x16 and x1 slots. The use of PCI Express x16 instead of AGP 8x raised the bandwidth of the chipset-to-graphics-card link from 2.1GB/s to 8GB/s (4GB/s in each direction). Moreover, the new slot could provide up to 75 watts of power to the graphics card (previously an additional power connector had to be put on such cards).
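As a quick sanity check of these figures, here is the back-of-envelope arithmetic behind them (a sketch in Python; the per-lane rate of first-generation PCI Express and the AGP 8x clocking are standard interface specifications rather than numbers taken from this article):

# AGP 8x: 32-bit bus, 66 MHz base clock, 8 transfers per clock.
agp8x_mb_s = 66.66e6 * 8 * 4 / 1e6              # ~2133 MB/s, i.e. ~2.1GB/s

# First-generation PCI Express: 2.5Gbit/s per lane; 8b/10b encoding
# leaves 2.0Gbit/s = 250MB/s of payload per lane, per direction.
pcie_lane_mb_s = 2.5e9 * 8 / 10 / 8 / 1e6       # 250 MB/s
pcie_x16_per_direction = 16 * pcie_lane_mb_s    # 4000 MB/s each way
pcie_x16_total = 2 * pcie_x16_per_direction     # 8000 MB/s aggregate

print(f"AGP 8x:             ~{agp8x_mb_s:.0f} MB/s")
print(f"PCIe x16, one way:   {pcie_x16_per_direction:.0f} MB/s")
print(f"PCIe x16, both ways: {pcie_x16_total:.0f} MB/s")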

Unfortunately, the expansion rate of the PCI Express platform was rather slow, as Intel combined the new chipsets with the new Socket T (not much in terms of mechanical robustness, by the way) as well as with expensive and barely available memory of the DDR2 SDRAM standard. That’s why manufacturers were not hasty in adopting the new interface, but the situation had become normal by the fourth quarter. Today about 50 percent of new computers come with the PCI Express bus.

Unlike Q1, the second quarter was filled with various thrilling events. We saw the birth of two graphics architectures, the arrival of graphics cards with the PCI Express interface, and the official start of Intel’s new platforms that opened a new era in consumer 3D graphics – all in three months!

Native vs. Bridged PCI Express

Long before the appearance of PCI Express-supporting chipsets, ATI and NVIDIA announced their new products intended for the upcoming platforms. Both companies just added support for the new interface to their existing architectures, but ATI did it on the chip level, while NVIDIA introduced a special AGP-to-PCI Express converter for its GeForce 4, FX and 6 series chips.

ATI Technologies put an emphasis on the “native” support of PCI Express in its chips, winning close to 100 percent of the PCI Express graphics market with major OEMs, while NVIDIA was talking about a bright future when its PCI Express chips would work on AGP-interfaced graphics cards thanks to the same HSI bridge, keeping NVIDIA popular among people who upgrade their PCs often.

Both companies proved right in their own ways: ATI is currently developing its own PCI Express-to-AGP bridge, while NVIDIA ships GPUs with native support of PCI Express. From the economic standpoint, however, ATI’s approach was more realistic, as the huge share of the PCI Express market the company now owns confirms.

NVIDIA SLI: Luxurious Tech for All?

The PCI Express variant of the GeForce 6800 got one more thrilling innovation: support for the SLI technology. Read more about this technology in our article called NVIDIA SLI: Sometimes They Come Back...

3dfx, once the leader of the graphics market, used the SLI abbreviation to denote a technology that allowed two graphics processors to work as a pair. Now NVIDIA, which devoured 3dfx a few years ago by the way, has revived the concept. NVIDIA’s SLI technology makes it possible to install two GeForce 6800 PCI Express graphics cards in one system and achieve higher graphics performance. The only prerequisites are two PCI Express x16 slots on the mainboard and a special connector on each of the graphics cards.

In spite of much clamor and promises to ship SLI systems by the end of the summer or beginning of the fall, the first PCs with two graphics cards became available only in November, priced at $5000-6000. NVIDIA, however, says the SLI technology can be employed in rather inexpensive PCs – everything needed is already built into the nForce4 SLI chipset and the GeForce 6600 GT graphics processor.

While other chipset makers are preparing chipsets that would provide two PCI Express x16 slots (perhaps reduced to x8 in the number of PCI Express lanes), and ATI is going to unveil its own vision of multi-processor graphics solutions soon, mass shipments of NVIDIA’s SLI haven’t yet begun, and such solutions – at $5000 in the implementations from Alienware and VoodooPC – are a luxury item rather than a typical computer.

Only in 2005 will we see whether the SLI technology can really become an affordable option.

Q3 2004: Nalu and Ruby Step One Floor Down

As usual, some time after the release of the top-end representatives of a new graphics architecture, ATI and NVIDIA roll out more affordable solutions with functionality similar to the top-end models. It was the same this time around: both companies introduced their new mainstream chips, the GeForce 6600 and the RADEON X700, in the third quarter. But a little bit earlier…

S3 is Plowing up the Low-End Market

Unsuccessful in the top-end graphics market, S3 Graphics announced its low-end DeltaChrome-based product, the DeltaChrome S4 Pro, on the 14-th of July. For more details about these products read our review called The DeltaChrome S4 Pro: S3 Graphics' Entry-Level Offspring.

Unlike XGI, S3 Graphics didn’t foster any puffed-up ambitions, but targeted its products at a specific category of users. The DeltaChrome series is mainly intended for people who want to use their PCs as a multimedia entertainment center with support for the HDTV format. The DeltaChrome S4 Pro is ideally suited for this job: it is power-efficient, rather fast in 3D, supports version 2.0 pixel and vertex shaders, and can output HDTV video, facilitate its decoding and apply special effects in real time using the S3 Chromotion engine.

Our tests showed that Chromotion does reduce the CPU load when decoding video streams in MPEG-2 and MPEG-4 formats, but the load during HDTV playback was the same as with other cards – probably due to the lack of software support on the player’s part. The S3 DeltaChrome S4 Pro is overall a worthy product, especially if we compare it to the RADEON 9600 SE and GeForce FX 5200 Ultra chips the manufacturer actually positioned it against.

NVIDIA GeForce 6600 GT: A Doomer’s Dream

NVIDIA was the first to hear the market’s cry for new technologies at affordable prices by introducing its GeForce 6600 GT in the middle of August and disclosing its features in early September. Read more about it in our article called Knowing the Depths: NVIDIA GeForce 6600 GT Architecture.

The newcomer was in fact one half of the GeForce 6800 Ultra, having half the pixel and vertex processors of the latter; the width of the memory bus was also halved. In other words, the chip had 8 pixel pipelines, 3 vertex processors, a 128-bit memory bus and all the technologies peculiar to the GeForce 6800.

The GeForce 6600 core consisted of 143 million transistors. Combined with NVIDIA’s new 0.11-micron tech process, this allowed it to reach frequencies of about 500MHz. The power consumption and heat dissipation remained at the same level, thus sparing the company the trouble of putting on a sophisticated cooling system. The additional power connector was removed, as the PCI Express slot could supply up to 75 watts of power to the graphics card. The new product was also NVIDIA’s first chip to natively support PCI Express, without any external bridges.

NVIDIA announced two graphics cards with the new chip: the GeForce 6600 GT was clocked at 500/1000MHz and cost $199, while the PCB design, the amount and frequency of graphics memory and the price of the cheaper GeForce 6600 were left to the card manufacturer’s discretion. Like the GeForce 6800 series for PCI Express, the GeForce 6600 GT supported the SLI mode, but the junior model didn’t.

The GeForce 6600 GT turned out to be a very well-made product, with a simple PCB design and a compact cooler. In tests it would surpass the RADEON 9800 XT save for the 4x FSAA + 16x anisotropic filtering mode in 1600x1200 resolution, where the narrowness of the 128-bit bus played a negative role (for more detailed benchmarking results see our article called NVIDIA GeForce 6600 GT Assaults Mainstream Gaming Market). Besides that, the video processor that NVIDIA had been touting since the announcement of the GeForce 6800 Ultra worked normally at last in the GeForce 6600 GT, easily handling the decoding of HDTV, for example.
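The effect of the narrower bus is easy to estimate with a quick calculation (a sketch using the 500/1000MHz figures quoted above; the 256-bit card used for comparison is a generic assumption, not a specific model):

def memory_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    # Peak memory bandwidth = bus width in bytes * effective data rate.
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# GeForce 6600 GT: 128-bit bus, 1000MHz effective GDDR3 (figures above).
print(memory_bandwidth_gb_s(128, 1000))   # 16.0 GB/s

# A hypothetical 256-bit card at the same memory clock, for comparison.
print(memory_bandwidth_gb_s(256, 1000))   # 32.0 GB/s

Peak memory bandwidth is thus half that of a 256-bit design at the same memory clock, which is exactly what shows up once 4x FSAA at 1600x1200 multiplies the frame buffer traffic.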

Thanks to UltraShadow II technology, inherited from the top-end GeForce 6800, NVIDIA advertised the GeForce 6600 GT as a graphics card most suitable for playing Doom 3, which had been released by that time.

ATI RADEON X700 XT – Preferred for Half-Life 2?

So, NVIDIA was ahead of ATI Technologies in releasing a mass graphics card with a new architecture and the PCI Express interface. It was ATI’s turn now.

The answer came on the 21st of September. The Canadians announced the RADEON X700 GPU as a competitor to the GeForce 6600 in the mainstream market. The new GPU also filled the gap between the RADEON X800 and X600 families. ATI went about turning a top-end product into a mainstream one somewhat differently: like NVIDIA with its GeForce 6600, it halved the number of pixel pipelines and the width of the memory bus, but kept the full number of vertex processors. Judging by the tests, the texture caches were reduced instead.

This 0.11-micron chip proved to be very simple, consisting of only 110 million transistors against the 143 million of the NV43. That’s natural, since the X700 couldn’t boast support for Shader Model 3.0, 32-bit floating-point pixel processors or other technologies available in NVIDIA’s new-generation products, but it had all of ATI’s exclusive technologies like 3Dc, SMOOTHVISION HD, SmartShader HD, VideoShader HD, HyperZ HD and others.

Three X700-based graphics cards were announced: the RADEON X700 XT, the RADEON X700 PRO and the RADEON X700.

A curious fact: the RADEON X700 XT and the X700 PRO had the same recommended price and only differed in the amount and frequency of the graphics memory. The user thus had a choice between a card with lower frequencies but more graphics memory and a higher-frequency device with less memory.

Like the GeForce 6600 GT, the RADEON X700 XT/PRO were small cards, although their PCBs were somewhat more complex. The seemingly simple and efficient cooler proved to be very noisy, and the GPU itself ran very hot. For more details read our article called ATI RADEON X700 XT: Architecture Preview.

Our gaming tests showed that the graphics card accomplishes its purpose, matching the performance of the GeForce 6600 GT and outperforming the RADEON 9800 XT in a majority of modern games thanks to the new architecture (for more detailed benchmarking results see our article called 3DMark05: The Future of Computer Games in Numbers). Some driver-related problems were reported, as well as an insufficiently high texturing speed of the X700; the latter is probably the result of the reduced texture caches.

Following NVIDIA’s example with Doom 3, ATI began to label its cards as “Preferred for Half-Life 2”. The absence of the X700 XT model in the market put a question mark after the phrase, though.

Thus, by the end of the third quarter both ATI and NVIDIA had worthy graphics processors in their ranks, suitable for building mainstream graphics cards with the PCI Express interface.

Q4: Improving Along

XGI’s Plans

The company whose first attempt to enter the desktop 3D graphics field had been a disaster seemed to be ready for a second try.

On October 20, XGI made a brave statement that a series of new GPUs from this developer was to be expected in early 2005. XGI’s second attempt is going to be more daring – the XG45/47 processors will support Shader Model 3.0, according to the announcement! The company will abandon the ill-fated multi-chip technology and will start collaborating seriously with game developers.

Well, it remains to be seen whether XGI’s second try will be a success. So far the company has been supplying the Volari Z7 chip, announced on October 12. This GPU doesn’t support 3D functions and is intended for areas where 3D graphics isn’t required.

GeForce 6600 GT AGP: Interface-Lifting

About that time, in the middle of November, another interesting thing happened – NVIDIA announced the AGP version of the GeForce 6600 GT.

By that time the company had found itself incapable of offering anything to users with a graphics card budget of about $200, while the graphics cards of the GeForce FX series were too slow in modern games to be an appealing buy. Here NVIDIA put its experience with the early PCI Express graphics cards to good use. The HSI bridge, much criticized by ATI, can work in both directions, i.e. it can also transform a PCI Express graphics card into an AGP-compatible one. The GeForce 6600 GT AGP was a realization of this idea – the company redesigned the original PCB, rotating the GPU and the memory chips by 45 degrees and placing a bridge chip near the GPU.

The memory frequency of the resulting product had to be reduced from 1000MHz to 900MHz, probably to avoid stability-related problems. The AGP version of the card also acquired a Molex power connector. Thus, NVIDIA got a middle-range AGP graphics card with a new architecture at practically no extra effort.

From that moment on, the GeForce FX architecture could be considered definitively dead. The GeForce 6600 GT AGP is much faster in modern games than the pinnacle of the earlier architecture, the GeForce FX 5950 Ultra.

RADEON X850: Turbo-Charged X800 Hits the Scene

The announcement of the new RADEON X850 processor (codenamed R480) was the event of the fourth quarter in the ATI camp. It is just an improved version of the RADEON X800 capable of working at higher frequencies. According to information on the Internet, NVIDIA is not preparing an answer (the NV48), probably relying on the power of its SLI technology instead.

Save for some minor improvements, the RADEON X850 brought nothing new. The RADEON X800 XT Platinum Edition is unrivalled anyway in high resolutions with full-screen anti-aliasing and anisotropic filtering enabled, so a few extra percent of speed hardly make any difference. For more details see our article called ATI RADEON X850 Platinum Edition: Good Things Go Better.

ATI’s 0.11 Micron Gets Higher Performance amid Lower Clock-Speeds

The release of the RADEON X850 pushed the RADEON X800 a step down, as is often the case with ATI’s product line-up. Once a single product series based on the same GPU, it split in two as the company accompanied the announcement of the RADEON X850 with a reshuffle of the RADEON X800 family.

Now the cards of this series use the new R430 processor manufactured with a 0.11-micron tech process. Having started with the low-end RADEON X300, this process has finally reached the X800.

Note that ATI decided to press its own RADEON X700 XT with another graphics chip, the RADEON X800, which works at a much lower frequency but often shows better performance in popular games and benchmarks.

As you see, any user with a budget of $200 and higher will soon be able to choose an X800-based graphics card with enough performance for playing modern games.

Overall, the past year brought us a lot of exciting events, and the new year 2005 will hopefully be just as eventful. The next part of this article will be concerned with the market activities of ATI and NVIDIA as well as with the tendencies that rule the computer graphics field today.

Check out Part II of our article to find detailed performance coverage of the 27 different graphics accelerators!