ATI RADEON 9100 Based Graphics Cards Review: Gigabyte and PowerColor Solutions

We undertook a short tour of the sub-$100 graphics card market. Let’s take a closer look at ATI’s offering in this segment, aka the RADEON 9100, and test a few products from the company’s major AIB partners: Gigabyte and PowerColor.

by Tim Tscheblockov
05/02/2003 | 09:07 PM

From articles posted on our site, you know that ATI’s RADEON 9500-9700 graphics chip family was the most attractive one from the price-to-performance point of view until quite recently, when NVIDIA launched its GeForce FX 5200-5800 solutions.


But this only concerns graphics cards priced at $120 and higher. What do we have in the sub-$100 market sector, where demand is traditionally the highest? Do the Canadians do well there, too? We know that GeForce4 MX GPUs from NVIDIA have been playing first fiddle there. Of course, the appearance of the RADEON 9000/PRO graphics chips from ATI pressed the MXs a little, all the more so as the RADEON 9000 features full DirectX 8.1 support, which NVIDIA’s low-end chips lack.

Nevertheless, the RADEON 9000 didn’t manage to oust the GeForce4 MX altogether and only took a small share of the market. There are several reasons for that, among them lower performance compared to its predecessor, the RADEON 8500 (even though the model number is higher, a clear blunder on the part of the marketing people).

So, ATI faced two problems: how to increase RADEON 9000 sales and (more importantly!) how to clear RADEON 8500 based cards out of its warehouses. You understand that it would not be right to sell the latter at a higher price than the former: that would be an obvious inconsistency in the naming. But if they lowered the price, the RADEON 9000 wouldn’t sell at all: the whole world knows it’s slower than the 8500. So marketing missteps end up costing ATI money.

But ATI is no newcomer to this business. Its marketing people sat down together and found a simple solution: re-mark the RADEON 8500 as the 9100 and sell it under this new name. Now the number corresponds to the speed characteristics of the chip, so there is a chance to move the old stock. Moreover, the chip yield of the RADEON 8500 is very high (the manufacturing process has been polished at last!), so it makes sense to keep producing these chips, although under a new name.

And so we have got the RADEON 9100, which is nothing else but the RADEON 8500LE (moreover, it’s not even the fully-fledged GPU clocked at 275MHz, but a lower-frequency variant). In theory, graphics cards based on this chip should be faster than RADEON 9000 based ones and, accordingly, more expensive. In fact, price lists show these cards at about $65-90 depending on the memory size (64MB or 128MB), which is exactly the price range of RADEON 9000/9000 PRO based cards. Strange enough, but let’s first get acquainted with the RADEON 9100 and check its performance level.

Closer Look

We have got two graphics cards based on the ATI RADEON 9100 chip from two quite respectable manufacturers: Gigabyte and CP.Technology (which sells under the “PowerColor” trademark).

Gigabyte MAYA II RADEON 9100 64MB


This card has 64MB of DDR SDRAM onboard working at 250MHz (500MHz DDR); the chips have a 4ns cycle time. The graphics processor is also clocked at 250MHz. The card is equipped with the traditional RAGE Theater co-processor; here it is responsible for TV-Out.
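For those who like to see where such numbers come from, here is a quick back-of-the-envelope sketch (our own illustration, nothing shipped with the card) that derives the rated memory clock from the 4ns cycle time and estimates peak memory bandwidth, assuming the 128-bit memory bus of the RADEON 8500/9100 core:

```python
# Back-of-the-envelope memory figures for the Gigabyte card.
# Assumptions: 4ns memory chips, DDR signalling, 128-bit bus (RADEON 8500/9100 class).

cycle_time_ns = 4.0       # rated cycle time of the memory chips
bus_width_bits = 128      # memory interface width of the GPU (assumed)

rated_clock_mhz = 1000.0 / cycle_time_ns      # 1 / 4ns = 250 MHz real clock
effective_clock_mhz = 2 * rated_clock_mhz     # 500 MHz effective (DDR)
peak_bandwidth_gb_s = effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(f"Rated clock:     {rated_clock_mhz:.0f} MHz")
print(f"Effective (DDR): {effective_clock_mhz:.0f} MHz")
print(f"Peak bandwidth:  {peak_bandwidth_gb_s:.1f} GB/s")  # about 8 GB/s
```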

PowerColor RADEON 9100 128MB (VIVO)


This graphics card carries 128MB of DDR SDRAM. The frequencies are the same: 250MHz for the core and 250MHz (500MHz DDR) for the memory. The memory chips have the same 4ns cycle time, but a larger capacity than those on the Gigabyte card. The RAGE Theater co-processor implements Video-In and Video-Out (VIVO) on this card, which is why it comes with an adapter-splitter for connecting analog signal sources and/or receivers. Among the accompanying software we find CyberLink PowerDirector, a video capturing program. Video capture itself is performed by ATI’s WDM drivers that sit on the CD next to the main drivers. MPEG-1, -2 and -4 as well as the PAL/NTSC/SECAM standards are all supported, so you can record AVI files in ordinary, uncompressed form or encode them on the fly with any codec installed in the OS. PowerDirector lets you set it all up and put it to (good) use.

Note also that the second reviewed card copies the RADEON 8500 reference design: it has a second RAMDAC (situated to the left of the graphics chip). Gigabyte’s product, on the contrary, looks like a unique thing developed and designed by the company itself (they even changed the lacquer color from red back to blue, the traditional color of Gigabyte’s graphics cards), and there is no second RAMDAC. Let us remind you that the RADEON 8500 (now called 9100) has only one integrated RAMDAC; the second one is implemented as a separate chip. As you can see in the snapshots, the graphics chip has no clear marking and is just labeled with numbers and the word “RADEON”. Its size indicates that it’s the good old RADEON 8500, though.

We won’t go into a detailed description of the various extra features of the RADEON 9100, like TV-Out and dual-display support, because we’ve covered them in our RADEON 8500 related materials (see our Video section for more details). The RADEON 9100 brings nothing new here, as might have been expected.

So, let’s go straight to the practical part of today’s review, that is, to testing the RADEON 9100’s speed. Recall that the RADEON 8500 used to perform quite differently with 64MB and 128MB of graphics memory onboard. This phenomenon had an explanation connected with the memory interleave technology, but those cards also differed physically: their designs were different, as were the memory chips (which came in TSOP and BGA packages). Now we have two cards that are nearly identical in design but carry 64MB and 128MB of graphics memory, so we are curious to find out how the RAM amount affects their performance.

Testbed and Methods

We used the following testbed:

Driver settings were left at their defaults, with two exceptions: we turned off VSync and set the texture level of detail (LOD) in OpenGL to High Quality.

The results were compared to those of the following graphics cards:

Performance

It’s clear that both reviewed cards show a predictable level of performance: they are very nearly the leaders among their rivals. Why nearly? Note that the RADEON 8500LE 128MB with BGA memory is still ahead, while the 128MB RADEON 9100 is even slower than its 64MB mate! Why? The answer is simple: look at the snapshots of the memory chips above. They are 16-bit chips in both cards, and eight chips give us exactly 128 bits. The RADEON 8500LE 128MB, on the contrary, has 32-bit chips and thus enables interleaving.
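To make the arithmetic behind this explanation explicit, here is a minimal sketch (our own illustration; the 8500LE 128MB layout of eight 32-bit chips is an assumption made purely for the sake of the example) showing how chip count and chip width determine the number of independent banks on the 128-bit bus, and hence whether interleaving is possible at all:

```python
# Bank count = combined width of the memory chips / the GPU's 128-bit bus width.
# Interleaving needs at least two banks to alternate between.
# The two reviewed cards use eight 16-bit chips each; the 8500LE 128MB entry
# assumes eight 32-bit chips purely to illustrate the point.

BUS_WIDTH_BITS = 128  # RADEON 8500/9100 memory interface

boards = [
    ("RADEON 9100 64MB (Gigabyte)",    8, 16),
    ("RADEON 9100 128MB (PowerColor)", 8, 16),  # denser chips, same width
    ("RADEON 8500LE 128MB",            8, 32),  # assumed layout
]

for name, chips, chip_width in boards:
    banks = chips * chip_width // BUS_WIDTH_BITS
    verdict = "interleaving possible" if banks >= 2 else "no interleaving"
    print(f"{name}: {chips} x {chip_width}-bit = {chips * chip_width} bits "
          f"-> {banks} bank(s), {verdict}")
```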

The situation here is absolutely the same as in the previous case.

Here the results are also very similar to what we had above. Only in the highest resolution, 1600x1200, does the RADEON 9000 PRO nearly catch up with the RADEON 9100.

This test depends a lot on fast shader processing, and the RADEON 9000/PRO is better at such things than the RADEON 8500/9100. That’s why both RADEON 9100 cards and the RADEON 9000 PRO are equally fast in Game4. Note also how far ahead the RADEON 8500LE 128MB has got.

So, summing it all up, we see that RADEON 8500LE 64MB and RADEON 9100 64MB are one and the same thing. 128MB RADEON 9100 is somewhat slower, probably because of less aggressive memory timings. Both cards tested today lag behind RADEON 8500LE 128MB, but prove true to their new name, outperforming RADEON 9000/PRO in every test.

Let’s check the Return to Castle Wolfenstein game. We used the Checkpoint demo with all detail settings at maximum.

The smooth and polished OpenGL driver from NVIDIA makes the GeForce4 MX an unquestioned winner of this lap. But that’s not all. Note how badly the 128MB RADEON 9100 behaves: it fell behind the 64MB card, especially in 1024x768, and nearly dropped down to the level of the RADEON 9000! ATI’s programmers must have spent a lot of time on the driver for the 9000 series, and thus they spoil all the fun for their marketing colleagues: the 9100 and 9000 swap places here!

Now comes Serious Sam: The Second Encounter. We used the Grand Cathedral demo with “Quality” settings, forced trilinear filtering and 32-bit color depth.

Although NVIDIA’s programmers didn’t optimize for the GeForce4 MX here (or maybe didn’t want to), the curious situation of RADEON 9000 cards outperforming the RADEON 9100 ones repeats itself.

Unreal Tournament 2003 Demo is next. We used the flyby-antalus demo with default settings.

Things clear up in this test: RADEON 9100 is faster than RADEON 9000, though RADEON 8500LE 128MB keeps its leading position.

Winding up our test session, we would like to note that both cards we have just tested are of very high quality. They proved very stable in operation. As for 2D, we didn’t detect any problems with image quality: the picture didn’t lose its sharpness at 1600x1200@85Hz or at 1280x1024@120Hz.

Conclusion

We guess you are troubled by one single question: why the RADEON 8500 again, when there has already been so much talk about it? Well, it does make sense. First of all, our benchmarking proved that the RADEON 9100 64MB is identical to the RADEON 8500LE with the same amount of memory. Second, we saw that there is no reason to buy a 9100 card with 128MB of memory: the extra memory will not pay off as well as it did in the case of the RADEON 8500.

Third, and most important of all, these cards are very appealing today from the pricing point of view. We have already mentioned at the beginning of the review that RADEON 9100 and 9000 PRO based cards cost about the same money, while the former is faster nearly everywhere (the incident with the OpenGL driver is surely going to be remedied soon). Moreover, the card from CP.Technology boasts the VIVO feature, an excellent choice if you are into video editing.

Overall, the RADEON 9100 is a nice buy today (you can find a brand-name GeForce4 MX440-8x for the same money, but that GPU doesn’t have as many features to boast of).

There is only one question left: won’t the re-marked RADEON 8500LE spoil the day for the RADEON 9000/PRO? Aren’t ATI and its partners running into the same trap once again?

Or will they reduce the price of the RADEON 9000/PRO even further, just to give a warm welcome to the upcoming GeForce FX 5200? :) It’s none of our business, though; let them decide for themselves. We deal with what we have: low-cost graphics cards supporting all the necessary features and technologies (DirectX 9.0 won’t come onto the scene for at least another half a year, while DirectX 8.1 offers enough functionality). That’s good for users, and game developers might get an incentive to use these functions in their products.

Yes, GeForce FX 5200 GPUs are going to flood the market soon, and they will probably cost even less. But how many “cripples” are there going to be among them, i.e. cards with a 64-bit memory bus or downright slow memory (we have already seen such cards in stores)? Even DirectX 9.0 support won’t make those very popular.