
Last summer ATI released two new graphics chips at once: RADEON 8500 and RADEON 7500.

RADEON 8500 incorporated ATI's latest developments. We won't be mistaken if we call it today's most feature-rich chip with the richest set of new functions, especially compared with the only nominally new Titanium chips from NVIDIA. RADEON 8500 turned out remarkably fast and capable; not flawless, but good enough to confidently challenge NVIDIA's top models.

The younger member of the RADEON family, RADEON 7500, stayed in the shadow of RADEON 8500 and did not attract as much attention. It provoked fewer compliments and less criticism on the web, yet it deserves no less attention than its elder brother. ATI RADEON 7500 is manufactured with 0.15-micron technology; its 3D part is entirely inherited from the original ATI RADEON, and its 2D part from RADEON VE.

Thus, RADEON 7500 combines well-proven architectural elements, while the reworked chip layout and finer manufacturing process allow it to run at really high frequencies (300MHz and more). As a result, the new chip retains the ATI RADEON architecture but is nearly twice as fast and offers full support for dual-monitor configurations. ATI RADEON 7500 was meant to press its mainstream "sworn friends": NVIDIA GeForce2 Pro, GeForce2 Ti and, to some extent, GeForce3 Ti200.

Nowadays ATI RADEON 7500 based graphics cards are available worldwide, partly thanks to ATI's decision to finally delegate card production to third-party manufacturers. This move helped reduce costs and increase output.

Today our aim is to find out what RADEON 7500 is capable of and to impartially compare RADEON 7500 based graphics cards with similarly priced solutions based on NVIDIA chips. Let's start…

Closer Look: ATI RADEON 7500 Chip

Basic RADEON 7500 characteristics and 3D features:

  • 270MHz-290MHz core frequency;
  • 64/128bit SDRAM or DDR SDRAM graphics memory interface;
  • 2 pixel pipelines;
  • 3 texturing units per pipeline;
  • Up to three textures are overlaid per clock;
  • Bi-linear, tri-linear and anisotropic texture filtering;
  • Bump mapping: Emboss, Dot3, EMBM;
  • S3TC/DXTC texture compression support;
  • Full-Screen Anti-Aliasing;
  • Hardware T&L unit;
  • HyperZ support.

2D abilities and video playback:

  • Two integrated CRT controllers;
  • Two integrated 350MHz RAMDACs;
  • Integrated TMDS transmitter for digital monitor output;
  • Integrated TV encoder for TV-Out;
  • Adaptive de-interlacing support;
  • Hardware iDCT support for DVD decoding.

As follows from the specs listed above, ATI RADEON 7500 supports dual-monitor configurations. Four combinations are possible:

  • Analog monitor + analog monitor (DVI-I-to-VGA adapter needed)
  • Analog monitor + digital monitor
  • Analog monitor + TV
  • Digital monitor + TV

It is noteworthy that with RADEON 7500 any display device can be primary or secondary, since both CRT controllers of RADEON 7500 (as well as of RADEON VE) have absolutely equal rights.

In our ATI RADEON VE Review we discussed dual-monitor configurations in every detail, so we will not write about it again here.

Closer Look: ATI RADEON 7500 Graphics Card

The ATI RADEON 7500 graphics card is equipped with VGA, DVI and S-Video outputs, but it doesn't feature a lot of additional chips onboard, as almost everything necessary is integrated into the RADEON 7500 core:

  

The card's heart is a 0.15micron ATI RADEON 7500 chip:

The card is equipped with 64MB graphics DDR SDRAM from Hynix with 4ns access time:

The default core and memory frequencies of the tested ATI RADEON 7500 OEM sample were 270MHz/460MHz (230MHz DDR) respectively.

The situations with RADEON 7500 and RADEON 8500 core frequencies look alike: only retail ATI RADEON 7500 cards have a 290MHz core frequency, while all other ATI RADEON 7500 cards (including the RADEON 7500 OEM from ATI) are clocked at 270MHz. As for the graphics memory, it is (luckily!) clocked at the same 230MHz (460MHz DDR) on all ATI RADEON 7500 cards.

During the tests we set the ATI RADEON 7500 frequencies to 290MHz/230MHz (460MHz DDR) to match the retail ATI RADEON 7500.

Testbed and Methods

For test purposes we assembled the following testbed:

  • AMD Athlon XP 1500+ CPU;
  • MSI K7T266 Pro2 v2.0 (VIA KT266A) mainboard;
  • 2 x 128MB PC2100 CL2 DDR SDRAM by Nanya;
  • Fujitsu MPF3153AH HDD.

Software:

  • ATI RADEON 7500: driver version 6.13.10.6011 for Windows XP;
  • Graphics cards based on NVIDIA chips: Detonator 23.11 driver for Windows XP;
  • Max Payne;
  • Serious Sam v1.05;
  • 3DMark 2001;
  • Quake3 Arena v1.27;
  • Windows XP.

For a better comparison we tested ATI RADEON 7500 together with the following graphics cards:

  • SUMA Platinum GeForce2 Pro (NVIDIA GeForce2 Pro, 200MHz/400MHz, 64MB DDR SDRAM);
  • VisionTek Xtasy 5864 (NVIDIA GeForce2 Ti, 250MHz/460MHz, 64MB DDR SDRAM);
  • VisionTek Xtasy 6564 (NVIDIA GeForce3 Ti200, 175MHz/400MHz, 64MB DDR SDRAM).

Performance

First, we ran 3DMark2001 synthetic tests to check the fillrate and polygon processing speed:

The optimized Z-buffer of ATI RADEON 7500 and its ability to lay three textures per clock result in the lowest fillrate losses when switching from 16-bit to 32-bit mode. Besides, its higher graphics memory frequency gives it higher memory bandwidth than any of its rivals.

Still, in spite of the lowest losses, ATI RADEON 7500 showed poorer performance than the GeForce3 Ti200 based card, even though the latter runs its graphics memory at a lower frequency. The reason is that GeForce3 Ti200 has twice as many pixel pipelines as RADEON 7500, which gives it a higher theoretical fillrate, and the Lightspeed Memory Architecture, which uses the available memory bandwidth more efficiently. So, the leaders in this test are NVIDIA GeForce3 Ti200 and ATI RADEON 7500.
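For reference, the theoretical fillrates behind this reasoning follow directly from the clock speed and pipeline configuration. The sketch below is only a back-of-the-envelope estimate: the clock speeds are the ones quoted in this review, while the pipeline/TMU layout of the NVIDIA chips (4 pipelines with 2 texturing units each) is taken from NVIDIA's public specs rather than from our measurements.

    # Back-of-the-envelope fillrate estimates, not measured values.
    # pixel fillrate = pipelines x core clock
    # texel fillrate = pipelines x texture units per pipeline x core clock
    chips = {
        # name: (core MHz, pixel pipelines, texture units per pipeline)
        "RADEON 7500":    (290, 2, 3),
        "GeForce2 Ti":    (250, 4, 2),
        "GeForce3 Ti200": (175, 4, 2),
    }

    for name, (mhz, pipes, tmus) in chips.items():
        print(f"{name:15s} {mhz * pipes:5d} Mpix/s  {mhz * pipes * tmus:5d} Mtex/s")

These raw numbers explain why GeForce3 Ti200 can stay ahead despite its lower clock, and they do not yet account for Lightspeed Memory Architecture or HyperZ, which decide how much of the theoretical fillrate survives in practice.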

ATI RADEON 7500 breaks ahead when the T&L unit is involved. However, this does not imply that the T&L unit of RADEON 7500 is much more powerful than that of GeForce2 Ti or GeForce2 Pro. Remember that ATI RADEON 7500 has the highest core frequency of all the cards tested. If we do some simple calculations to estimate how fast ATI RADEON 7500 would be at a 175MHz-200MHz core frequency, we'll see that the abilities of its T&L unit are close to those of GeForce2 Pro / Ti. In other words, at equal frequencies RADEON 7500 is a little slower than these two cards with one light source and a bit faster with eight light sources.
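Those "simple calculations" amount to scaling the measured triangle rate by the clock ratio, since the fixed-function T&L unit runs at the core frequency. A minimal sketch; the 30 Mtriangles/s figure below is a placeholder for illustration, not our measured result:

    # Scale a T&L triangle rate measured at one core clock to another clock,
    # assuming throughput is proportional to the core frequency.
    def scale_tnl_rate(measured_mtris, measured_mhz, target_mhz):
        return measured_mtris * target_mhz / measured_mhz

    # Hypothetical example: 30 Mtriangles/s at 290MHz corresponds to
    # roughly 18 Mtriangles/s at a GeForce2/GeForce3-like 175MHz clock.
    print(scale_tnl_rate(30.0, 290.0, 175.0))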

As for software processing of the scene geometry, ATI RADEON 7500 is a clear-cut outsider, and only the poorly optimized drivers are to blame here.

Frankly speaking, we could also run some other tests to reveal other peculiarities of ATI RADEON 7500 architecture (for example, the tests showing the performance of three texturing units or HyperZ), but firstly, we don't think it would be that interesting, and secondly, there is nothing new to expect from ATI RADEON 7500. Its 3D part is identical to that of the good old RADEON.

So, enough of synthetic tests; let's move on to games.

In 3DMark2001 we ran only the Dragothic and Lobby scenes. Car Chase gives widely scattered results that depend too much on CPU performance, whereas Nature runs only on GeForce3 Ti200, as you know.


The achievements of ATI RADEON 7500 are really encouraging. Thanks to the chip's ability to lay three textures per clock, and since the overdraw is high enough for HyperZ to show what it is capable of, ATI RADEON 7500 is not much slower than GeForce2 Pro / Ti in Low Detail mode, and in High Detail mode it firmly holds second place. Of course, ATI RADEON 7500 proved unable to beat NVIDIA GeForce3 Ti200 with its more up-to-date architecture.


With only two pixel pipelines and a lower fillrate, ATI RADEON 7500 lags behind GeForce2 Pro / Ti in 16-bit color mode. In 32-bit mode it uses its large caches and HyperZ to strengthen its position and outpace GeForce2 Pro / Ti by a good 20%.

NVIDIA GeForce3 Ti200 is at the top of the list again.

For Max Payne we used the benchmark mod and PCGH's Final Scene No.1 test scene (both are described in detail on the German 3DCenter website). We tested in two modes:

  • Mode 1 - "quality": image quality set to the maximum, 32-bit textures and frame buffer color depth, tri-linear texture filtering instead of anisotropic, full-screen anti-aliasing disabled (after all, it was not GeForce3 Ti500 or RADEON 8500 that we tested :) );
  • Mode 2 - "speed": The image quality is set to the minimum, 16bit textures and frame buffer color depth.

We hope these tests will satisfy both camps: the quality-over-fps and the speed-at-all-costs supporters:

As you can see, there is a significant performance difference between the "quality" and "speed" results, but all four participants run very close to one another: ATI RADEON 7500, NVIDIA GeForce2 Pro and GeForce2 Ti are at about the same level, and GeForce3 Ti200 is leading again.

ATI RADEON 7500 graphics quality in Max Payne deserves no criticism whatsoever. Only in 1600x1200 did it refuse to work, showing an error message:

We ran Quake3 Arena tests in traditional conditions: with maximum image quality, enabled tri-linear filtering and disabled texture compression:

As we predicted, ATI RADEON 7500 is an outsider in 16-bit color mode. In 32-bit mode it takes advantage of its well-balanced architecture to catch up with GeForce2 Pro / Ti and even overtake them at higher resolutions thanks to HyperZ technology. NVIDIA GeForce3 Ti200 still remains at the top.

As with Max Payne, we ran the Serious Sam tests in two modes:

  • Mode 1 - quality graphics settings, 32bit frame buffer;
  • Mode 2 - speed graphics settings, 16bit frame buffer.

For this test we used the standard DemoSP03 demo:

The results are really exciting. ATI RADEON 7500 lost at every point in the Speed mode, but fully recovered in the Quality mode: at 1600x1200 it even broke ahead of the hitherto unreachable GeForce3 Ti200!

In the Quality mode the Serious Sam engine enables anisotropic filtering, which ensures the success of ATI RADEON 7500. Unlike GeForce2 Pro / Ti, not to mention GeForce3, RADEON 7500 hardly suffers any performance losses when anisotropic filtering comes into play.

Here we'd like to cite the parts of the Serious Sam configuration files that specify which anisotropic filtering level different graphics cards use in the Quality mode:

  • NVIDIA GeForce256 / GeForce2 / GeForce3:
    if( sam_iVideoSetup==2) {
    gap_iTextureAnisotropy = 4;
    gap_fTextureLODBias = -0.5;
    }
  • ATI RADEON, RADEON 7xxx, RADEON 8xxx:
    if( sam_iVideoSetup==2) {
    gap_iTextureAnisotropy = 16;
    gap_fTextureLODBias = -0.5;
    }

As follows from the above, the anisotropic filtering level the game developers set for ATI RADEON chips is even higher, yet RADEON 7500 still wins.

How do RADEON chips manage to perform anisotropic filtering so painlessly? We'll try to explain it in the "3D Image Quality" section below. For now, a few words about the new abilities of the Serious Sam engine.

Serious Sam version 1.05 offers the option of using Direct3D, and naturally we couldn't resist trying it. The results of the NVIDIA based cards turned out close to what they showed in OpenGL. Expecting no surprises, we were about to compare these results with ATI RADEON 7500 performance, but… when we launched Serious Sam in Direct3D on ATI RADEON 7500, we witnessed a frightful picture:

Needless to say, these results make any comparison completely impossible. The question is who is to blame: ATI's Direct3D driver or the Croteam developers who tested Direct3D only with NVIDIA graphics cards? :)

3D Image Quality

The most attractive feature of ATI RADEON 7500/8500 is their ability to perform anisotropic filtering so fast.

Let us remind you that anisotropic filtering is the most correct texture filtering method and yields the best image quality. To determine the color of a pixel, the graphics card takes neither the texture color at the corresponding point on the object's surface, nor the interpolated color of the four neighboring texels surrounding the pixel projection, as happens with bi-linear filtering. Instead, during anisotropic filtering a pixel is treated as a small circle or rectangle whose projection onto the texture is an ellipse or a quadrangle. The color of the pixel is determined by the colors of all the texels falling into this projection.

As the angle between the line of sight and the observed surface gets smaller, the ellipse (the pixel projection) stretches, and ever more texel colors get involved in the averaging. The computational burden in this case is very high, but it guarantees extremely high image quality. It is not for nothing that all modern 3D modeling packages use anisotropic filtering for final scene rendering.
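In more formal terms (our own notation, nothing vendor-specific), ideal anisotropic filtering is simply a coverage-weighted average over the pixel footprint:

    % C(p): color of pixel p; F(p): set of texels covered by the footprint
    % (the ellipse the pixel projects to); w_t: weight, e.g. the covered area of texel t.
    C(p) = \frac{\sum_{t \in F(p)} w_t \, c(t)}{\sum_{t \in F(p)} w_t}, \qquad w_t \ge 0

The more the footprint stretches, the more texels fall into F(p) and the more work a literal implementation of this formula requires.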

Anisotropic filtering methods used in graphics accelerators are simplified. For example, NVIDIA GeForce3 apparently calculates the final pixel color by placing several sample points along the longer axis of the ellipse (the pixel projection). There can be 1, 2, 4, 6, 8 or more points depending on the anisotropy level, i.e. on how strongly the ellipse is stretched. At each of these points the chip performs bi-linear filtering and then averages the resulting colors (possibly with different weight coefficients).

These are just our assumptions, but they match practice well: GeForce3 needs an extra clock for each such sample point. For instance, anisotropic filtering with 32 texel samples (8 sample points, 8 bi-linear filtering operations, 8x4=32) takes it 8 times longer than plain bi-linear filtering.
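Our reading of this scheme, expressed as code. This is only a sketch of the general "several bi-linear taps along the major axis" idea under the assumptions above, not NVIDIA's actual hardware algorithm; bilinear_sample() stands in for an ordinary bi-linear texture fetch:

    import numpy as np

    def bilinear_sample(texture, u, v):
        # Ordinary bi-linear fetch from an (H, W, 3) float texture; u, v in texel units.
        h, w = texture.shape[:2]
        x0, y0 = int(np.floor(u)), int(np.floor(v))
        fx, fy = u - x0, v - y0
        x0, y0 = x0 % w, y0 % h                  # wrap addressing
        x1, y1 = (x0 + 1) % w, (y0 + 1) % h
        top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
        bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
        return (1 - fy) * top + fy * bot

    def aniso_sample(texture, center_uv, half_axis_uv, num_taps=8):
        # Spread num_taps equally weighted bi-linear samples along the major
        # axis of the pixel footprint and average them.
        cu, cv = center_uv
        du, dv = half_axis_uv                    # half of the ellipse's long axis
        offsets = np.linspace(-1.0, 1.0, num_taps)
        taps = [bilinear_sample(texture, cu + t * du, cv + t * dv) for t in offsets]
        return np.mean(taps, axis=0)

Each extra tap is one more bi-linear fetch, which matches the observation that the cost grows roughly linearly with the number of taps.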

The way anisotropic filtering is implemented within the ATI RADEON family seems to be completely different.

Let's start from the very fundamentals :).

To prevent textures from looking noisy and grainy on distant objects, MIP-mapping is used: the original texture is replaced with progressively lower-detailed variants as the object moves away from the viewer. In the picture, the original texture is in the upper left corner and its MIP levels run diagonally to the lower right corner:

At every MIP level the texture gets two times smaller and the color of every texel is an average of four corresponding texels of the previous MIP level.
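For illustration, here is a minimal sketch of how such a MIP chain can be built, assuming a square power-of-two texture (this is our own helper, not any driver code):

    import numpy as np

    def build_mip_chain(texture):
        # texture: (H, W, 3) float array; H and W are equal powers of two.
        levels = [texture]
        while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
            prev = levels[-1]
            # Each new texel is the average of a 2x2 block of the previous level.
            levels.append(0.25 * (prev[0::2, 0::2] + prev[1::2, 0::2] +
                                  prev[0::2, 1::2] + prev[1::2, 1::2]))
        return levels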

However, this is not what interests us most. We'd like to draw your attention to the other two rows, in which the texture is compressed along only one of the two axes. In the picture these rows run down and to the right of the original texture.

Let us call them RIP levels. What is so special about them? The color of each texel of any RIP level is the average of two texels of the previous RIP level. Why do we need them? Imagine the following: we look at a surface with our texture at an acute angle, like this:

The projection of one of the pixels onto the texture is highlighted with a red ellipse. For correct anisotropic filtering we are supposed to average the colors of all the texels covered by the ellipse (framed in green).

Now let us recall the RIP levels we have prepared. From them we can choose the one whose compression factor is closest to the anisotropy level, i.e. to how strongly the ellipse is stretched. Then we apply bi-linear filtering to it and get a color that is effectively an average of the necessary texels of the original texture. We hope the pictures show it clearly enough.

As a result, with a set of pre-built variants of the original texture (RIP levels), we can perform filtering at any reasonable anisotropy level using bi-linear filtering alone, thus minimizing the performance losses.

The method we've just described is known as RIP-mapping. It works best when the ellipse is inclined close to one of the texture axes. At "inconvenient" angles close to the diagonals, RIP-mapping is no better than common bi-linear filtering. So, to keep the image quality from degrading at such angles, one can add combined RIP levels compressed along both axes by different factors, introduce a row of diagonal RIP levels, or perform anisotropic filtering in a different way altogether, like NVIDIA GeForce3 does.
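To make this concrete, here is our rough sketch of the RIP-mapping idea: RIP levels are built by averaging pairs of texels along one axis only, and at sample time the level whose compression factor best matches the footprint's anisotropy is picked and sampled bi-linearly. This is purely our illustration of the technique described above, not ATI's actual hardware implementation; bilinear_sample() is the helper from the GeForce3 sketch earlier:

    import numpy as np

    def build_rip_levels(texture, axis=0):
        # RIP levels: the texture repeatedly halved along a single axis
        # (axis=0 squeezes the height); each texel averages two texels of the
        # previous level. Assumes a power-of-two size along that axis.
        levels = [texture]
        while levels[-1].shape[axis] > 1:
            prev = levels[-1]
            if axis == 0:
                levels.append(0.5 * (prev[0::2, :] + prev[1::2, :]))
            else:
                levels.append(0.5 * (prev[:, 0::2] + prev[:, 1::2]))
        return levels

    def rip_sample(rip_levels, u, v, anisotropy):
        # Pick the level whose compression factor (2**i) is closest to the
        # anisotropy of the footprint, then do a single bi-linear fetch in it.
        i = min(int(round(np.log2(max(anisotropy, 1.0)))), len(rip_levels) - 1)
        # Coordinates shrink along the compressed (here vertical, axis=0) direction.
        return bilinear_sample(rip_levels[i], u, v / 2 ** i)

One bi-linear fetch per pixel, regardless of the anisotropy level, is exactly why the performance cost is so small; the price is the axis-aligned limitation described above.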

It looks as though the ATI RADEON family uses exactly this RIP-mapping technique. With this method the borders between MIP (or RIP) levels appear as broken lines.

It was pretty easy to check this assumption. We enabled anisotropic filtering in a small test application from NVIDIA, which uses standard OpenGL extensions and works with any graphics card, and made screenshots that show these broken lines very clearly. The ATI RADEON 7500 result is on the left, NVIDIA GeForce2 Ti in the middle, NVIDIA GeForce3 Ti200 on the right:

ATI RADEON 7500 | NVIDIA GeForce2 Ti | NVIDIA GeForce3 Ti200

On ATI RADEON 7500 the borders between MIP levels are heavily broken and uneven; they make any thought of tri-linear filtering absolutely impossible. The MIP levels of NVIDIA GeForce2 and GeForce3, in contrast, are quite even and show no anomalies.

By the way, users sometimes notice artifacts caused by anisotropic filtering on ATI graphics cards. We could show you some illustrative game screenshots, but firstly, there are not that many such glitches, and secondly, these artifacts are most noticeable in motion, not in static screenshots.

That's about all we wanted to say about the negative effects of anisotropic filtering. Now for the positive points. First of all, it is a very fast method: with anisotropic filtering enabled, ATI RADEON cards lose only a few percent of their performance, which is really insignificant. Secondly, under favorable conditions anisotropic filtering on RADEON chips produces better results than on NVIDIA chips.

To illustrate the point, here are some screenshots from Serious Sam with anisotropic filtering quality set to the maximum for each card. The arrangement stays the same: ATI RADEON 7500 screenshots are on the left, NVIDIA GeForce2 Ti in the middle, NVIDIA GeForce3 Ti200 on the right:

ATI RADEON 7500 | NVIDIA GeForce2 Ti | NVIDIA GeForce3 Ti200

Summing up all we've written about anisotropic filtering on ATI RADEON 7500: NVIDIA GeForce2 / GeForce3 and ATI RADEON 7500 use totally different algorithms, each with its own highs and lows. It is up to you to choose which you like best. This summary may be of help:

ATI RADEON 7500/8500 anisotropic filtering:

  • High: top quality;
  • High: high speed;
  • Low: cannot be combined with tri-linear filtering;
  • Low: occasional artifacts may appear.

NVIDIA GeForce3 anisotropic filtering:

  • High: top quality;
  • Low: great performance losses.

Overclocking

To overclock ATI RADEON 7500, we used the PowerStrip 3.12 utility.

During the overclocking experiments a very interesting thing happened. As we had expected, raising the core frequency resulted in performance growth, while raising the memory frequency produced no effect whatsoever. We were free to set any memory frequency, say 800MHz, but the card did not react to it at all.

After searching through reports from RADEON 7500 owners in various forums, we had to agree that graphics memory overclocking is locked either in the RADEON 7500 chip or in the ATI drivers.

So, we overclocked the core alone. The maximum core frequency at which the card worked stably was 340MHz. The following graph shows the performance dynamics:

Indeed, the 15% (Quake3) and 8% (Serious Sam) performance gains from a 17% core frequency increase (from 290MHz to 340MHz) are not bad at all. At the same time, there is nothing to be surprised at: both the old RADEON and RADEON 7500 have well-balanced architectures, and the card's performance is not always limited by the graphics memory bandwidth.

Conclusion

ATI RADEON 7500 is a very interesting card providing marvelous graphics quality, full support for dual-monitor configurations, TV-Out and digital monitor output. It is also quite successful in 3D applications.

Compared with graphics cards based on NVIDIA GeForce2 Pro / GeForce2 Ti, ATI RADEON 7500 turns out much better in 2D (in terms of both quality and functionality). In 3D games its performance falls between that of GeForce2 Pro and GeForce2 Ti. Graphics cards based on these NVIDIA chips are somewhat cheaper, so if you are buying a RADEON 7500 based card, consider that you are paying extra for higher 2D quality.

A comparison of ATI RADEON 7500 and NVIDIA GeForce3 Ti200 shows that the latter is faster in most 3D games. Without fully-fledged DirectX 8 support, RADEON 7500 has very poor chances against GeForce3 Ti200.

On the other hand, GeForce3 Ti200 is no rival to ATI RADEON 7500 in terms of 2D functionality. The picture quality of NVIDIA based cards may also turn out relatively low (it depends on the manufacturer), while graphics cards based on RADEON 7500 / 8500 are of high quality whoever the manufacturer is. Perhaps this is because ATI keeps such a strict eye on the quality of its products?

All in all, if you need a purely gaming card, we advise you to buy a GeForce3 Ti200 based product: it will be a more expensive and faster solution than ATI RADEON 7500, though you can never be fully sure about its quality.

Highs:

  • High quality mounting;
  • Full support for dual-monitor configurations;
  • DVI and high-quality TV-Out;
  • Excellent 2D quality;
  • Good 3D performance.

Lows:

  • No support for pixel and vertex shaders in DirectX 8;
  • Meager accessory bundle.
 