EVGA e-GeForce FX 5950 Ultra and PowerColor RADEON 9800 XT: Extreme Overclocking and More!

We tested two top products from EVGA and PowerColor based on two eternal rivals: NVIDIA GeForce FX 5950 Ultra and ATI RADEON 9800 XT. New hints for extreme overclocking lovers, new gaming benchmarks including Lock On: Modern Air Combat, new performance heights. Read on to find out more exciting things!

by Tim Tscheblockov
01/04/2004 | 09:43 PM

It’s been a while since we tested the topmost graphics solutions from NVIDIA and ATI. Graphics card manufacturers have embraced the GeForce FX 5950 Ultra and RADEON 9800 XT graphics chips. Well, it’s holiday sales season, after all. So the recent newcomers are now ordinary off-the-shelf products, if I can refer to graphics cards of that class as “ordinary”.

 

This review will be another opportunity to pit the chips from NVIDIA and ATI against each other. They arrived in our test lab on two graphics cards: the e-GeForce FX 5950 Ultra from EVGA and the RADEON 9800 XT from PowerColor. Let’s get started, eh?

Closer Look: EVGA e-GeForce FX 5950 Ultra

The graphics card on the GeForce FX 5950 Ultra GPU that will go through the unmerciful tests in our lab cannot boast a noiseless water cooling system or heavy-weight heatsinks with thermal pipe technology. It doesn’t catch the eye with an acid-colored PCB or feature a couple of “spare” fans in case the main one fails. There are no light-emitting diodes, highlights or any other illumination, either. EVGA doesn’t need all that: the company’s product enjoys popularity as a benchmark of reliability and high quality.

Although belonging to the “premier league” of gaming graphics cards, this one comes in an average-sized box. I would say it is quite modest considering the current tendency toward gigantic packaging. The box is decorated with a collage of well-known characters from NVIDIA’s demo programs.

The EVGA e-GeForce FX 5950 Ultra doesn’t deviate a single step from the reference design: we’ve got the same PCB and the same cooling solution consisting of an active cooler on the face side and a heatsink mounted on the memory chips at the back:

 

Like the reference card, the EVGA e-GeForce FX 5950 Ultra is equipped with DVI-I, D-Sub and composite video-in/video-out connectors. The familiar additional power supply connector sits on the face side of the PCB:

 

Special units integrated into the GPU are responsible for supplying the analog signal to the D-Sub and DVI-I outputs as well as shaping the low-frequency TV signal for the video-out. The SiI164CT64 chip from Silicon Image outputs the digital signal to the DVI, and the SAA7108 chip from Philips decodes the video signal from the video-in:

 

The set of accompanying accessories includes a DVI-I-to-D-Sub adapter, an S-Video cable and an adapter cable from the composite video-in/video-out to two RCA and two S-Video connectors:

We’ve also got a user’s manual and a pile of CDs with games and software, among which we should definitely point out the full versions of Ghost Recon and America’s Army and the exclusive pack from EVGA called Automated Driver Management (ADM). The ADM tool makes it very easy for a beginner to install the drivers for the card and complement them with chipset and AGP drivers, if necessary.

The card boasts the powerful NVIDIA GeForce FX 5950 Ultra GPU and 256MB of DDR SDRAM memory in chips from Hynix with a 2ns cycle time:

 

The nominal frequencies of the card are 950MHz (475MHz DDR) for the memory, 300MHz for the GPU in 2D and 475MHz for the GPU in 3D.
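As a quick aside, here is a minimal sketch (in Python, purely for illustration) of what the chips’ 2ns cycle time implies about their rated speed compared with the card’s nominal memory clock:

```python
# Rated clock of a memory chip is roughly 1 / cycle time; the effective DDR rate
# is twice that. Figures below come from the review; the calculation is illustrative.
cycle_time_ns = 2.0            # Hynix chips on the EVGA card
nominal_ddr_mhz = 950          # card's nominal memory clock (475 MHz real, DDR)

rated_real_mhz = 1000.0 / cycle_time_ns   # 500 MHz real clock
rated_ddr_mhz = 2 * rated_real_mhz        # 1000 MHz effective

print(f"rated:   {rated_real_mhz:.0f} MHz ({rated_ddr_mhz:.0f} MHz DDR)")
print(f"nominal: {nominal_ddr_mhz / 2:.0f} MHz ({nominal_ddr_mhz} MHz DDR)")
# The chips run slightly below their rating, leaving a little guaranteed headroom.
```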

The overclocking potential of the EVGA e-GeForce FX 5950 Ultra graphics card is good enough, but no more than that. Without additional cooling or voltage tweaking, the card worked stably at 530MHz/1000MHz (500MHz DDR) for the GPU and memory, respectively. In a race with a RADEON 9800 XT, every extra megahertz counts, so I didn’t stop at “ordinary” overclocking, but went to extremes.

My usual preparations for extreme overclocking include two steps: increasing the voltage of the GPU and the graphics memory, and modding the cooling system.

Extreme Overclocking Experience Step 1: Pulling Up the GPU Voltage

The GeForce FX 5950 Ultra (and the previous model, the GeForce FX 5900 Ultra) uses an ISL6569ACR controller chip from Intersil as the GPU voltage regulator. This chip can adjust its output voltage “on the fly” by changing the state of its digital inputs (VID0…VID4). I discussed the chip in detail when I carried out extreme-overclocking experiments with the GeForce FX 5900 Ultra (see our article called Extreme Overclocking Experience: NVIDIA GeForce FX 5900 Ultra against ATI RADEON 9800 Pro).

The EVGA e-GeForce FX 5950 Ultra, of course, uses these additional features of the regulator. The GPU voltage is not a constant value on the EVGA card: it is equal to 1.1V at startup, 1.2V in 2D and 1.6V in 3D modes.

To increase the core voltage, I used the same OFS input of the controller chip as I did with the GeForce FX 5900 Ultra:

According to the documentation, when there is a resistor with a resistance R between the OFS input and ground, the output voltage of the regulator goes up by V = (R × 100µA) / 10.

So that’s what I did. The red arrows in the snapshot below point at the spots where the resistor with zero resistance originally stood (by default, the OFS input is grounded and the bias voltage equals zero):

I soldered in a 22KOhm variable resistor in place of the zero-Ohm one. By increasing its resistance I could raise the GPU voltage. I settled on 10KOhm, which resulted in an increase of 0.1V for the GPU voltage in all operational modes. In other words, it became 1.2V at startup, 1.3V in 2D and 1.7V in 3D modes. That’s how we dealt with the GPU. Now it is the memory’s turn.
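To make the arithmetic explicit, here is a small sketch of the formula quoted above; the 10KOhm value is the one actually used, and the other values simply illustrate the adjustment range:

```python
# Voltage offset added by a resistor R between the ISL6569ACR OFS pin and ground,
# per the formula quoted above: dV = (R * 100uA) / 10.

def ofs_offset_volts(r_ohms: float) -> float:
    """Extra voltage added to the regulator output for a given OFS resistor."""
    ofs_current_a = 100e-6                 # 100 uA on the OFS pin
    return (r_ohms * ofs_current_a) / 10

for r_ohms in (0, 5_000, 10_000, 22_000):
    print(f"R = {r_ohms / 1000:>4.0f} kOhm -> +{ofs_offset_volts(r_ohms):.2f} V")
# At 10 kOhm the offset is +0.10 V, turning 1.1/1.2/1.6 V (startup/2D/3D)
# into 1.2/1.3/1.7 V, exactly as measured on the card.
```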

Extreme Overclocking Experience Step 2: Increasing the Graphics Memory Voltage

A switching regulator based on the HIP6012CB chip from Intersil supplies power to the internal circuitry (VDD) of the graphics memory chips. The chip is connected as shown below:

The output voltage of the regulator is defined by the resistances of the R2 and R3 resistors according to the formula V = 1.27 × (1 + R2/R3). To boost the regulator’s output voltage, we can reduce the resistance of the R3 resistor (marked in red in the diagram) by soldering another resistor in parallel with it.

I used a 5.6KOhm resistor, attaching it with wires for convenience. The orange arrows below show where the additional resistor is:

This modification helped me to increase the voltage of the internal circuitry of the memory chips from 3.2V to 3.44V.
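To illustrate the feedback-divider math behind this mod, here is a small sketch. The article doesn’t list the original R2/R3 values, so the ones below are purely hypothetical, chosen only so that the numbers come out close to the measured 3.2V and 3.44V; only the 5.6KOhm parallel resistor is taken from the actual modification:

```python
# Output of the VDD regulator, assuming the usual divider relation
# Vout = Vref * (1 + R_top / R_bottom) with Vref = 1.27 V and the extra
# resistor paralleled across the bottom (R3) leg of the divider.

def parallel(r_a: float, r_b: float) -> float:
    """Resistance of two resistors connected in parallel."""
    return r_a * r_b / (r_a + r_b)

def vout(v_ref: float, r_top: float, r_bottom: float) -> float:
    return v_ref * (1 + r_top / r_bottom)

V_REF = 1.27        # V, reference of the HIP6012CB
R_TOP = 1_050.0     # Ohm, hypothetical value
R_BOTTOM = 690.0    # Ohm, hypothetical value

stock = vout(V_REF, R_TOP, R_BOTTOM)
modded = vout(V_REF, R_TOP, parallel(R_BOTTOM, 5_600.0))
print(f"VDD before the mod: {stock:.2f} V")    # ~3.20 V
print(f"VDD after the mod:  {modded:.2f} V")   # ~3.44 V
```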

The input/output circuitry (VDDQ) of the memory chips is powered by another regulator, based on the ISL6225CA chip, which was designed specifically for memory voltage regulators. The output voltage of each channel of this dual-channel regulator is determined by the ratio of resistances in its feedback circuit. This is a simplified diagram of the chip:

The output voltage is determined by the formula V = 0.9 × (1 + R1/R2). We can increase it by reducing the R2 resistance (as marked in the diagram above).

Having found the necessary resistor on the PCB, I soldered an additional 4.7KOhm resistor in parallel with it. It is marked with orange arrows in the snapshot:

As a result, the voltage of the I/O circuitry increased from 2.53V to 2.88V.
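The same divider math applies to the VDDQ regulator, only with a 0.9V reference. As a sketch, here is the inverse problem: which parallel resistor is needed for a given target voltage. Again, the R1/R2 values are hypothetical, picked only to reproduce the 2.53V and 2.88V figures; the 4.7KOhm resistor is the one from the actual mod:

```python
# VDDQ regulator (ISL6225CA): Vout = 0.9 * (1 + R1 / R2), R2 being the resistor
# we reduce by paralleling another one with it.
V_REF = 0.9
R1 = 1_830.0        # Ohm, hypothetical
R2 = 1_010.0        # Ohm, hypothetical

def vout(r2_eff: float) -> float:
    return V_REF * (1 + R1 / r2_eff)

def parallel_resistor_for(v_target: float) -> float:
    """Resistor to solder in parallel with R2 to reach the target output voltage."""
    r2_needed = R1 / (v_target / V_REF - 1)     # effective bottom resistance required
    return 1 / (1 / r2_needed - 1 / R2)         # value that yields it when paralleled with R2

print(f"stock VDDQ:                 {vout(R2):.2f} V")                          # ~2.53 V
print(f"with 4.7 kOhm in parallel:  {vout(R2 * 4_700 / (R2 + 4_700)):.2f} V")   # ~2.88 V
print(f"resistor needed for 2.88 V: {parallel_resistor_for(2.88):.0f} Ohm")     # ~4.7 kOhm
```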

Extreme Overclocking Experience Step 3: Cooling System Modification

The standard cooling system is not enough to handle the increased heat dissipation of the GPU and memory working at higher voltages.

Relying on additional ventilation, I didn’t replace the standard heatsinks on the memory chips. As for the GPU, I used the Aquarius II water cooling solution from Thermaltake instead of the standard cooler.

That’s what the graphics card looked like with the water-cooling system installed:

 

You can see a pair of additional 80mm fans that blow air at the memory chip heatsinks:

After the modification, the card worked stably at frequencies up to 600MHz/1100MHz (550MHz DDR).

The GPU frequency gain was smaller than the one I got during extreme overclocking of the NVIDIA GeForce FX 5900 Ultra reference card. There are two reasons for that. First, the GeForce FX 5950 Ultra GPU doesn’t differ physically from the GeForce FX 5900 Ultra: the former chips are simply binned for their ability to work at higher frequencies, so they have less headroom left. Second, when overclocking the GeForce FX 5900 Ultra, I removed the protective cover from the graphics core and installed the water block right onto the die; I didn’t “strip” the core naked in today’s tests. Considering these two points, the 600MHz achieved by the GeForce FX 5950 Ultra is an excellent result.

Graphics memory overclocking was less enjoyable. The memory of EVGA e-GeForce FX 5950 Ultra worked at a higher frequency than in my previous overclocking tests, but 1100MHz is not much above the nominal (950MHz).

On the other hand, the GeForce FX 5950 Ultra has colossal memory bandwidth, and graphics memory overclocking influences the card’s performance less than GPU overclocking, especially if the tests use DirectX 9 pixel shaders and the inherent disadvantages of the NV35 architecture show up. So overclocking GeForce FX 5950 Ultra may help it to beat RADEON 9800 XT in those tests where RADEON used to be unrivalled.

The RADEON 9800 XT VPU came to our test lab on a graphics card from PowerColor.

Closer Look: PowerColor RADEON 9800 XT

The retail box of the PowerColor card displays the company’s logo against the background of a shield and crossed pole-axes.

The accessories bundle includes a coupon for Half-Life 2, CDs with drivers and utilities, a CD with demo versions of some computer games, and CDs with two full games (Tomb Raider: Angel of Darkness and Big Mutha Truckers). Besides that, we’ve got a heap of cables and adapters: S-Video, RCA and power cables plus S-Video-to-RCA and DVI-I-to-D-Sub adapters.

The graphics card complies with the RADEON 9800 XT reference design and doesn’t differ in exterior from the reference card.

 

The card comes with DVI-I, D-Sub and TV-Out connectors. There is a connector at the other end of the PCB to attach additional power:

 

The card carries an ATI RADEON 9800 XT VPU and 256MB of DDR SDRAM in chips from Hynix with a cycle time of 2.5ns:

 

The nominal clock rate of the graphics chip is 412MHz. When the Overdrive function is enabled, it rises a little, but I didn’t use this mode so that the results would be reproducible. The graphics memory on the PowerColor card works at 730MHz (365MHz DDR).

I think we’ve got two worthy competitors; let’s now test them against each other.

Testbed and Methods

The testbed was configured as follows:

We used the following software:

The cards were tested using 4x full-screen anti-aliasing (FSAA) and 8x anisotropic filtering (AF). I enabled AF both in the “quality” mode (the “Quality” setting for both cards) and in the “fast” mode (“Performance” for the PowerColor RADEON 9800 XT and “High Performance” for the EVGA e-GeForce FX 5950 Ultra).

Performance in Gaming Benchmarks: Unreal Tournament 2003

When benchmarking the cards in Unreal Tournament, I used 32-bit color depth and maximum graphics quality settings (Texture Detail - Highest, World Detail - Highest, Character Detail - Highest, Physics Detail - High, Character Shadows - ON, Dynamic Lighting - ON, Detail Textures - ON, Projectors - ON, Trilinear Filtering - ON, Decals - ON, Coronas - ON, Decals Stay - High, Foliage - ON, Use Blob Shadows - OFF). 8x anisotropic filtering was not only forced in the drivers but also enabled by editing the game’s INI file.

The tests are based on my own demo record on the DM-Inferno level:

As I have already mentioned above, I ran all the tests using FSAA and AF: these are the functions you actually buy cards like that for!

Both cards enjoy a certain performance gain (25-30%) when we use “fast” rather than “quality” AF. The graphics memory bandwidth influences the results in higher resolutions more, so we have a smaller gain there, about 20-25%.

Our extreme overclocking of the EVGA e-GeForce FX 5950 Ultra provides a nice additional performance gain of 15-20%. As a result, this card is the winner in this test, although when working at nominal frequencies it goes neck and neck with the RADEON.

Unreal Tournament 2003 is a rather simple trial for a modern graphics card. Let’s see what we have in games using shader techniques.

Performance in Gaming Benchmarks: Tron 2.0

Tron 2.0 uses DirectX 8 pixel shaders, mostly in “haloes” around the brightest objects in the scene. I chose a script scene on the City Hub level for my testing purposes.

I also pushed all of the game’s graphics quality settings to their maximum. To measure the speed in frames per second, I used the Fraps utility. The scene was played from the beginning to the end in different resolutions; the measurement error, according to my own experience, is no more than 1-2%.

The graphics card from EVGA loses to the RADEON 9800 XT in this test, although the game itself was developed under NVIDIA’s slogan “The way it’s meant to be played” (i.e. the game is optimized specifically for NVIDIA’s GPUs). There is no great speed difference between the “quality” and “fast” anisotropic filtering modes, as there are no large amounts of “heavy” textures: unlike Unreal Tournament 2003, the game doesn’t require high texturing speed.

As a result, extreme overclocking of the EVGA e-GeForce FX 5950 Ultra provides a heftier bonus than “fast” AF.

Performance in Gaming Benchmarks: Tomb Raider: Angel of Darkness

Tomb Raider: Angel of Darkness is a thorn in NVIDIA’s side, since GeForce FX GPUs run this game with less brilliance than ATI’s solutions. The game makes active use of quite “heavy” DirectX 9 pixel shaders. However, the results may vary greatly depending on the scene you use to benchmark graphics cards: sometimes GeForce FX chips suffer a total defeat, and sometimes they win over graphics cards on R3xx VPUs. That’s why I usually test graphics cards in Tomb Raider using two scenes: the “harder” paris5_4 and the “easier” paris2_3.

The graphics quality settings in the game were default for the “PS 2.0” mode with one exception – I disabled “Pixel Shader 2.0 Shadows”.

The harder scene, paris5_4, comes first:

Besides calculating the Depth of Field effect (“blurring” objects outside the camera’s focus), the graphics card has to render the effect of light refraction in jets of hot air and in liquid. Both effects are implemented through DirectX 9 pixel shaders.

The EVGA e-GeForce FX 5950 Ultra cannot be proud of its results, as it loses to the PowerColor RADEON 9800 XT in every mode. Anisotropic filtering and texturing speed are not decisive factors in this test, so we see little benefit from switching between “quality” and “fast” AF. Even extreme overclocking cannot make up for the lower pixel shader processing speed and pull the GeForce FX up to the level of the RADEON.

The other scene, paris2_3, has fewer pixel shaders:

This scene seems to be using only the Depth of Field effect:

That’s what I was talking about: the EVGA e-GeForce FX 5950 Ultra handles the “easier” scene faster than the PowerColor RADEON 9800 XT, and extreme overclocking makes the gap even wider. Switching from “quality” to “fast” anisotropic filtering brings only a slight performance advantage: DirectX 9 pixel shader processing speed is the most important thing in this game.

Performance in Gaming Benchmarks: Lock-On: Modern Air Combat

“Lock-On: Modern Air Combat” continues the glorious series of flight simulators from the Eagle Dynamics team. The new game is based on “Flanker 2.5” with some major improvements. I’d say the graphics of this game match and even surpass those of the outstanding “Il-2 Sturmovik: Forgotten Battles”.

The graphics engine creates weather effects, shadow and lighting effects, realistic water surfaces, explosions, clouds and the cloud deck, and effects of light refraction in the exhaust jets from the engines. Add to that realistic models of military hardware and a highly detailed landscape. I may have forgotten something; feel free to make your corrections any time :).

Lock-On allows playing back recorded tracks, so I had no problems during the tests. I used two demo records included in the game distribution as published by 1C:Games: Demo-Su27-Aerobatic and Demo-Mig29-Intercept. Both records are quite long, so I played only 3 minutes of each demo, benchmarking the cards with the Fraps utility.

Lock-On puts the entire system under a serious workload, including the central processor and the graphics card. So I preferred to give up resolutions above 1024x768. As a result, I tested the graphics cards in the most popular mode: 1024x768 with FSAA and AF enabled. I also used three graphics quality presets offered by the game: “High”, “Middle” and “Low”.

The first demo is Demo-Su27-Aerobatic:

This screenshot displays the effect of light refraction in the jet of hot air. The effect is implemented with the help of DirectX 8 pixel shaders. Overall, Demo-Su27-Aerobatic is an “easy” record, and both graphics cards provide sufficient gaming performance under the highest graphics quality settings:

The results of the overclocked EVGA e-GeForce FX 5950 Ultra and of the card working at its nominal frequencies don’t differ much. The central processor acts as the limiting factor with all three graphics quality presets, so the increase in graphics card performance didn’t affect the results of the test that much. Switching between “quality” and “fast” AF doesn’t bring any significant advantages to either card.

As for the ranking, GeForce FX 5950 Ultra is somewhat faster than RADEON 9800 XT in Demo-Su27-Aerobatic.

The next demo record is Demo-Mig29-Intercept:

This is a harder demo as it features weather effects like rain and clouds. Instead of a demo flight in a cloudless sky, we have a real air fight here.

The graphics cards show much lower results in this test. We can also note that EVGA e-GeForce FX 5950 Ultra receives a considerable performance boost from overclocking (i.e. it is the graphics card that bears the main workload in this test) and that there is a significant difference between “quality” and “fast” AF modes.

The reason for such performance of the tested graphics cards lies in the demo record itself: most of the action takes place above the cloud deck, which puts a heavy load on the graphics card.

Let’s see how the game engine draws the clouds. When the plane is just a little above the clouds, you can take a peep behind the scenes (take a closer look at the outlined rectangular area):

The screenshot suggests that the cloud deck in Lock-On is modeled in a way similar to the forests of “Il-2 Sturmovik”: it is a row of parallel planes that carry textures with “sections” of the clouds. This greatly reduces the rendering speed. First, high-resolution “translucent” textures are used for the sections of the clouds. Second, a number of planes with these sections lie at a sharp angle to the line of sight, which makes the adaptive anisotropic filtering algorithms employed in graphics chips from ATI and NVIDIA use the highest, most resource-consuming anisotropy levels on parts of each of the parallel planes.
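To give a feel for why those tilted planes are so expensive, here is a crude back-of-the-envelope model (not the actual ATI or NVIDIA heuristic): the texture footprint of a flat surface stretches roughly as 1/cos of the angle between the view direction and the surface normal, so an adaptive algorithm picks ever higher anisotropy levels as a plane approaches edge-on:

```python
import math

def approx_aniso_level(tilt_deg: float, max_level: int = 8) -> int:
    """Anisotropy ratio a simplistic adaptive filter might pick for a flat plane.

    tilt_deg is the angle between the view direction and the plane's normal:
    0 means the plane is seen head-on, 90 means it is seen edge-on.
    """
    stretch = 1.0 / max(math.cos(math.radians(tilt_deg)), 1e-6)
    return min(max_level, max(1, round(stretch)))

for tilt in (0, 45, 60, 75, 85):
    print(f"plane tilted {tilt:2d} deg -> ~{approx_aniso_level(tilt)}x AF")
# Cloud-section planes seen almost edge-on end up filtered at the full 8x level,
# which is why the cloud deck costs so much with anisotropic filtering enabled.
```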

As a result, the workload increases dramatically and the graphics cards show much lower results than in the first demo. Note also that the EVGA e-GeForce FX 5950 Ultra receives a performance boost when switching from “quality” to “fast” AF, while the PowerColor RADEON 9800 XT gets nothing. I suspect this is the well-known “bug” in ATI’s driver: when “quality” AF is on, trilinear filtering is only enabled for the first texture. In other words, the PowerColor RADEON 9800 XT always uses the “fast” variant of anisotropic filtering in this test, without trilinear filtering.

Anyway, EVGA e-GeForce FX 5950 Ultra looks better than PowerColor RADEON 9800 XT in this test.

Conclusion

So we have just tested two off-the-shelf graphics cards based on the top-end graphics processors from ATI and NVIDIA. What do they offer the user?

EVGA e-GeForce FX 5950 Ultra boasts the VIVO function and a nice set of accessories. The card follows the reference design, without any new-fangled LEDs, shiny fans or thermal pipes, but leaves an impression of a highly reliable product. The cooling solution used on the card occupies the PCI slot next to the AGP, but this is its only disadvantage. In spite of my apprehensions, it produced very little noise.

As for the performance level, the EVGA e-GeForce FX 5950 Ultra is a good rival to the ATI RADEON 9800 XT in modern games. However, as more games using DirectX 9 pixel shaders appear, the EVGA card, as well as any other GeForce FX 5950 Ultra-based card, may find itself lagging behind the RADEON 9800 XT.

NVIDIA seems to have squeezed the GeForce FX to the last drop in releasing the GeForce FX 5950 Ultra, so overclockers have little room to work with. As our tests show, extreme overclocking of this GPU provides a maximum performance gain of 20%. Yes, that’s a nice extra, but I don’t think it is worth the trouble of taking a soldering iron into your hands and losing the warranty.

I think NVIDIA will try to minimize the risk of the GeForce FX 5950 Ultra falling far behind the RADEON 9800 XT in upcoming games that use DirectX 9 pixel shaders. They will cooperate with the developers or optimize their drivers for specific games; this would bring a better effect than extreme overclocking. These application-specific “optimizations” are a controversial topic, and you may have your own opinion about them. Anyway, I am sure that NVIDIA will do its utmost to avoid any performance failures with the GeForce FX 5950 Ultra.

As for games we will play in a more distant future, there will be other graphics processors (based on NV40 and R420), while GeForce FX 5950 Ultra and RADEON 9800 XT will both become obsolete by then.

The PowerColor RADEON 9800 XT graphics card left an agreeable impression, too. Like the EVGA card, it closely follows the reference design from ATI. It features a “flat” copper cooling system with thermal pipes that occupies only the AGP slot. Tellingly, this card is also quite quiet. It looks like the manufacturers have set about improving the acoustic characteristics of their graphics cards: even the most powerful products cannot be heard over the other sounds from the system case nowadays.

The PowerColor RADEON 9800 XT performs on a level with the GeForce FX 5950 Ultra in gaming tests, but is better in games that use “heavy” pixel shaders: the advantages of the VPU architecture show at their best in such tests. That’s why I don’t worry about the future of the RADEON 9800 XT. Until next-generation graphics processors come into mass production, the RADEON 9800 XT will remain an indisputable leader in gaming 3D graphics.

So, the answer to the traditional question “What should I buy?” lies in the irrational sphere of loyalty to one of the two brands. Price and availability may turn out to be the deciding factors, too, considering that the NVIDIA GeForce FX 5950 Ultra and RADEON 9800 XT provide the same level of performance.

Watch the prices and stay tuned!