Positioning GeForce 9800 GTX in Nvidia’s Single-Chip Line-Up: Chaintech Apogee GeForce 9800 GTX Review

We will review and test three graphics cards from two vendors and introduce some new benchmarks. Read our review to find out more about the GeForce 9800 GTX from Chaintech.

by Sergey Lepilov
07/01/2008 | 02:30 PM

The recent announcement of graphics cards based on Nvidia’s new GT200 chip is surely the most momentous event in the PC hardware field since the beginning of 2008. The new flagship card will be pitted against its opponents on our website soon – we received a sample of the GeForce GTX 280 just a few days ago.


Today, we are offering you a review and test results of a GeForce 9800 GTX card based on the G92 chip. With the GT200 coming up, we will try to pinpoint the place of the GeForce 9800 GTX in Nvidia’s line-up of single-chip solutions and see whether this card deserves a place in that line-up at all, from both the regular user’s and the overclocker’s point of view.

Closer Look at Chaintech GeForce 9800 GTX 512MB (GAE98GTX)

Package and Accessories

The box with the graphics card from Walton Chaintech is huge. It comes in a large glossy cardboard cover with a sunlit, futuristic-looking brunette gracing the front panel.

The text and labels on the box indicate the card’s support for Nvidia PureVideo HD and SLI technologies and tell you the memory type and memory bus width. The memory amount, however, is indicated wrongly: it is not 1 gigabyte, since all GeForce 9800 GTX cards come with 512 megabytes of memory. If you take a closer look, you can see that the “GTX” part of the product name is simply glued onto the box – there must be “GX2” underneath. Together with the wrong memory amount, this suggests the box was originally designed for GeForce 9800 GX2 series cards and was later adapted, more or less successfully, for the GeForce 9800 GTX.

Inside the colorful cover there is a main box made from thick cardboard. Its compartments contain the graphics card and accessories. The accessories are rather scanty:

While you can get along without free games on DVDs, you may want a cable for connecting additional power to the card, especially as the GeForce 9800 GTX needs not one but two such cables.

PCB Design and Functionality

Following the unpleasant tradition of the GeForce 8800 GTX/Ultra, the card is as long as 270mm (40mm longer than the GeForce 8800 GT/GTS) and will not fit into some system cases. A picture of the girl from the box cover can be seen on the face side of the card’s plastic casing.

The Chaintech GeForce 9800 GTX is equipped with a PCI Express x16 interface, two dual-link DVI ports, and an S-Video output:

There is nothing particularly exciting about the reverse side of the PCB:

On the top side of the PCB, near the video ports, there are two onboard MIO connectors for uniting three such cards into a 3-way SLI subsystem.

It will be costly, yet quite fast.

The GeForce 9800 GTX is 32mm thick and blocks the neighboring slot on the mainboard:

There are two additional power connectors on the right:

These are 6-pin connectors, so you won’t have to look for adapters to 8-pin connectors or for a PSU with such plugs. According to the specs, the peak power consumption of the GeForce 9800 GTX is 156W. A 450W power supply with a current of 24A or higher on the +12V rail is recommended for it.
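
As a quick sanity check of these recommendations: 24A on the +12V rail corresponds to 24 x 12 = 288W available from that rail, which leaves a fair margin above the card’s 156W peak even after the rest of the system takes its share. Here is a minimal Python sketch of the arithmetic; the figure for the rest of the system’s +12V load is an assumption for illustration, not a measured value.

    # Rough +12V budget check for the GeForce 9800 GTX
    CARD_PEAK_W = 156                          # peak draw of the card, from the specs
    PSU_12V_CURRENT_A = 24                     # recommended minimum +12V current
    PSU_12V_BUDGET_W = PSU_12V_CURRENT_A * 12  # 288W available on the +12V rail

    OTHER_12V_LOAD_W = 110                     # assumed CPU/drives/fans load (illustrative)

    headroom = PSU_12V_BUDGET_W - CARD_PEAK_W - OTHER_12V_LOAD_W
    print(f"+12V budget: {PSU_12V_BUDGET_W}W, headroom left: {headroom}W")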

The card uses a 12-layer PCB, all the main components residing on the top side:

In the left part of the card there is an LED reporting the card’s status.

The power section incorporates a lot of components, some of which contact the cooler through soaked thermal pads.

This G92, a 65nm revision A2 chip, was manufactured in Taiwan during the third week of 2008.

The GPU’s ROPs are clocked at 675MHz. The frequency complies with the GeForce 9800 GTX specs in both 3D and 2D modes, being a mere 25MHz higher than the frequency of the GeForce 8800 GTS 512MB. The shader domain is clocked at 1688MHz, complying with the specs, too. The GPU incorporates 128 unified shader processors and 64 texture modules. The die measures 330 sq. mm. There is no protective frame on the core.

Eight GDDR3 chips are placed in a semicircle around the GPU. The Chaintech GeForce 9800 GTX comes with Samsung’s 0.83ns memory with a rated frequency of 2400MHz.

The chips are marked as K4J52324QE-BJ08. The real memory frequency is only 2200MHz (256MHz higher than that of the GeForce 8800 GTS 512MB). With a 256-bit memory bus, this ensures a memory bandwidth of 70.4GBps. The GeForce 9800 GTX carries a total of 512 megabytes of memory. Alas, Nvidia did not equip its ex-flagship with 1024 megabytes of memory, obviously to reduce the cost of the new card.
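
The 70.4GBps figure follows directly from the effective memory frequency and the bus width. Here is the standard calculation, sketched in Python with the numbers from the paragraph above:

    # Memory bandwidth = effective data rate * bus width / 8 bits per byte
    effective_hz = 2200e6        # actual effective GDDR3 frequency, 2200MHz
    bus_width_bits = 256         # memory bus width of the GeForce 9800 GTX

    bandwidth_gb_s = effective_hz * bus_width_bits / 8 / 1e9
    print(f"{bandwidth_gb_s:.1f} GB/s")                       # -> 70.4 GB/s

    # At the chips' rated 2400MHz the same bus would provide:
    print(f"{2400e6 * bus_width_bits / 8 / 1e9:.1f} GB/s")    # -> 76.8 GB/s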

So, the Chaintech GeForce 9800 GTX has the same characteristics as the reference sample:

Cooling System

Taking off the card’s cooler, we found that it has a copper core that contacts the GPU and an aluminum base for the memory chips.

Soaked thermal pads ensure heat transfer from the memory chips and power elements. Thick gray thermal grease serves as the thermal interface between the cooler and the GPU.

We can have a better view of the cooler with the plastic casing removed:

The heatsink consists of thin aluminum plates pierced by three copper heat pipes, 6mm in diameter:

The pipes come out of the base, which betrays a rather mediocre finish:

The heatsink is cooled by a blower with slightly curved blades.


This 97x94x33mm blower comes from Delta Electronics’ BFB1012L series. According to the specs, the fan speed is automatically adjusted from 400 to 2700rpm depending on the GPU load, producing a maximum airflow of 19.07CFM at a maximum noise of 45.0dBA. The hot air is exhausted from the system case through the grid in the card’s mounting bracket.

The back part of the cooler above the power elements is shaped like a needle heatsink.

I tested the performance of this cooler by running the Firefly Forest test from 3DMark06 at 1920x1200 with 4x FSAA and 16x AF. The test was performed in a closed system case (ASUS Ascot 6AR2-B; you can find its fan configuration in the description of the testbed below). The room temperature was 21°C. The GPU temperature proved to be as follows during that test (the graphics card was working at its default clock rates):

The temperature of 77°C under peak load is quite comfortable for such a high-frequency chip. You may wonder where the data on the ambient temperature and the fan speed are. Well, the manufacturer seems to have saved on these sensors: the latest available version of RivaTuner (version 2.09, to be exact) could not find them and thus could not provide the corresponding information. That’s a strange decision considering that such sensors are installed even on the cheaper GeForce 8800 GTS 512MB, as you’ll see shortly.

The fan speed regulation can be seen in the graph. When the load is removed and the GPU temperature goes down, the fan slows down and the GPU temperature climbs back up by 6°C. It is 62-63°C in 2D mode. By the way, the speed can easily be regulated by means of RivaTuner, so I could set the maximum speed and rerun the test:

As you can see, the GPU temperature is 9°C lower under load and equals 46-47°C in 2D mode.

Overclocking Potential

As for the overclockability of the card, I didn’t have an alternative cooler for the GeForce 9800 GTX at my disposal (I’m still waiting for an Arctic Cooling Accelero Xtreme 9800 to be delivered), so I overclocked the card with its standard cooler set at the maximum speed. As a result, the card proved to be stable at 783MHz GPU and 2520MHz memory frequencies. I guess that’s just excellent for a GeForce 9800 GTX.

The card could pass some tests even at a GPU frequency of 810MHz but I found 783MHz to be the absolutely stable maximum.
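
For reference, these frequencies translate into roughly a 16% gain on the GPU and a 14.5% gain on the memory over the card’s stock clocks. A trivial Python check using the numbers above:

    # Overclocking gain relative to the reference clocks of the GeForce 9800 GTX
    stock = {"GPU": 675, "memory": 2200}         # MHz
    overclocked = {"GPU": 783, "memory": 2520}   # MHz, stable frequencies achieved

    for domain, base in stock.items():
        gain = (overclocked[domain] / base - 1) * 100
        print(f"{domain}: {base} -> {overclocked[domain]}MHz (+{gain:.1f}%)")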

I checked out the card’s temperature at the overclocked frequencies (with the fan set at the maximum speed):

The GPU temperature is higher by 5°C when the card is overclocked, yet it is still within a safe range.

Specifications, Recommended Price, Power Consumption

The following table lists the specs of the graphics cards whose test results will be published below.

I performed power consumption measurements using a multifunctional ZM-MFC2 panel from Zalman. This panel measures the power consumption of the whole system, without the monitor, rather than of a specific component (the exact configuration of the testbed is listed in the Testbed and Methods section).

The measurements were performed in 2D mode (Word and Web surfing) and in 3D mode (3DMark06’s Firefly Forest test run five times at 1920x1200 with 4x FSAA and 16x AF). The 3D load was high enough that the other system components, the CPU in the first place, would not affect the results much. Compared with a few games and another synthetic benchmark (listed in the Testbed and Methods section), the Firefly Forest test raised the system’s power consumption the highest. For example, the system consumed 35-40W less in Crysis than in Firefly Forest.
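
Since the panel reports the draw of the whole system, it is the difference between two otherwise identical configurations, rather than the absolute reading, that characterizes a particular graphics card. Here is a small sketch of how such deltas are derived; the wattage values below are placeholders for illustration, not our actual measurements:

    # The ZM-MFC2 reports whole-system consumption, so a card is characterized
    # by the delta between two otherwise identical configurations.
    # Placeholder readings for illustration only:
    readings_3d_w = {
        "GeForce 8800 GTS 512MB": 280,
        "GeForce 9800 GTX 512MB": 325,
    }

    baseline = readings_3d_w["GeForce 8800 GTS 512MB"]
    for card, watts in readings_3d_w.items():
        print(f"{card}: {watts}W ({watts - baseline:+d}W vs 8800 GTS 512MB)")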

The results of the power consumption measurements are shown in the next diagram:

The CPU’s power-saving features were enabled, which is why the system’s consumption sinks suddenly in idle mode. The results are overall logical, except for the comparison between the systems with the GeForce 8800 GTS 512MB and the GeForce 8800 GT 512MB. Despite the lower frequencies and the cut-down GPU of the latter card, the system consumed more with it than with the former. Perhaps it is because the Sysconn GeForce 8800 GT uses a unique PCB whereas the Sysconn GeForce 8800 GTS 512MB is a copy of the reference card.

Comparing the GeForce 8800 GTS 512MB and GeForce 9800 GTX 512MB at the reference frequencies, the system with the more expensive card consumes 45W more on average. Going for the cheaper card may help you save a small fortune on your electricity bill. :)
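
That “small fortune” is, of course, ironic. Under some purely illustrative assumptions – four hours of 3D load a day and an electricity price of $0.10 per kWh, neither of which comes from our measurements – the 45W difference works out to only a few dollars a year:

    # Yearly cost of the 45W difference under assumed usage and price
    DELTA_W = 45                 # measured average difference between the two systems
    HOURS_PER_DAY = 4            # assumed daily 3D load (illustrative)
    PRICE_PER_KWH = 0.10         # assumed electricity price, USD (illustrative)

    kwh_per_year = DELTA_W / 1000 * HOURS_PER_DAY * 365
    print(f"{kwh_per_year:.0f} kWh per year, about ${kwh_per_year * PRICE_PER_KWH:.2f}")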

Testbed and Methods

The graphics cards were benchmarked in a closed system case with the following configuration:

To minimize the CPU’s influence on the graphics cards’ performance, I overclocked the CPU to 4.04GHz at 1.6V during the tests.

The system memory worked at a frequency of 1077MHz with 5-5-5-12 timings and 2.1V voltage.

The tests were run under Windows Vista Ultimate Edition x64 preSP1. I installed Intel Chipset Drivers version 9.0.0.1007, DirectX 9.0c (dated March 2008), and Nvidia ForceWare 175.16.

The graphics cards were tested at three resolutions: 1280x1024, 1680x1050 and 1920x1200. The graphics card driver was set at High Quality (i.e. driver optimizations were all disabled). I turned full-screen antialiasing and anisotropic filtering on from the menu of each game. If the game didn’t provide such options, I enabled FSAA and AF from the ForceWare control panel. The Transparency Antialiasing (multisampling) option was turned on.

The cards were benchmarked in the following games and applications:

I added the results of an Nvidia GeForce 9600 GT to this review for the sake of comparison. I guess this card is the required minimum for today’s gamer. Sysconn’s version of the GeForce 9600 GT with 512MB of memory was benchmarked at its default (650/1625/1800MHz) as well as overclocked (738/1902/2106MHz) frequencies. Cheaper and slower graphics cards cannot ensure a comfortable frame rate without lowering the graphics quality settings in games, which is why I didn’t include them in this review.

Performance

The results of the GeForce 9600 GT are marked with dark blue in the diagram. Both cards from the GeForce 8800 series are marked with teal and the GeForce 9800 GTX, with purple.

The cards were tested in Futuremark’s benchmarks at the default settings only.

3DMark 2006

The first number in the diagram is the graphics card’s result in the HDR/SM3.0 tests whereas the second number, in bold, is the total score. It is clear that the GeForce 9600 GT is the only card that falls behind. The GeForce 8800 GT is not far behind the leaders and even overtakes them when overclocked. The GeForce 8800 GTS and 9800 GTX are very close to each other. Interestingly, the overclocked GeForce 8800 GTS/512 has the same score as the GeForce 9800 GTX overclocked to a higher memory frequency. Take note of this fact – the GeForce 8800 GTS will have an advantage over the 9800 GTX in some other tests, too.

3DMark Vantage

The picture is similar to 3DMark06 overall. The GeForce 9600 GT lags behind even when overclocked. The GeForce 8800 GT is close behind the leaders, whereas the GeForce 8800 GTS/512 comes out on top at the overclocked frequencies. The 10MHz advantage the GeForce 8800 GTS has in GPU frequency cannot be the reason it beats the GeForce 9800 GTX, which has higher-frequency memory. The 9800 GTX loses at the overclocked frequencies and wins by a very small margin in the default mode, and this must be due to its looser memory timings. ForceWare optimizations are disabled and cannot affect these cards’ performance.

S.T.A.L.K.E.R. - Shadow of Chernobyl (Direct3D 9)

Every graphics card ensures a comfortable frame rate at each of the three display resolutions in this game. I wouldn’t say the gaps between the cards are large. As a rule, an overclocked junior card is as fast as or faster than a higher-class card working at its default frequencies. Thus, the GeForce 9600 GT at 738/2106MHz is as fast as the GeForce 8800 GT at the default clock rates. The latter, in its turn, is better when overclocked than the GeForce 8800 GTS/512, which is equal to or even somewhat faster than the GeForce 9800 GTX.

Call of Juarez Bench (Direct3D 10)

This benchmark ran at the following graphics quality settings:

The display resolution and the FSAA level varied depending on the test mode. Anisotropic filtering was enabled from the ForceWare control panel. Here are the results:

Alas, the frame rate is always too low to play Call of Juarez normally. It’s hard to recommend any of the tested cards for playing this game. The GeForce 9600 GT is the slowest, especially in terms of minimum speed. The GeForce 8800 GT is somewhere in between. And there is no significant difference between the GeForce 8800 GTS/512 and GeForce 9800 GTX.

World in Conflict (Direct3D 10)

The benchmark from World in Conflict ranks the graphics cards in the same manner as the previous tests did. Once again I’d like to note the negligible difference between the GeForce 9800 GTX and GeForce 8800 GTS/512. It is only in the easiest mode that the 9800 GTX is faster. Elsewhere the two cards are equals.

Enemy Territory: Quake Wars (OpenGL 2.0)

Enemy Territory: Quake Wars doesn’t have anything new to tell us. The GeForce 9800 GTX is finally ahead of the GeForce 8800 GTS but only at the two highest resolutions of the heaviest visual mode. It should be noted that the frame rate is always comfortable even on the GeForce 9600 GT.

Call of Duty 4: Modern Warfare MP (Direct3D 9)

Call of Duty 4: Modern Warfare, being no less popular a shooter than Enemy Territory: Quake Wars, draws the same picture of comparative performance of the tested graphics cards.

Unreal Tournament 3 (Direct3D 9)

Unreal Tournament 3 agrees with the two previous tests. Still, a flyby is not a real game battle with shooting and explosions; the load on the graphics subsystem will be higher in the latter case. Subjectively, the in-game frame rate is 20-30fps lower depending on the complexity of the scene, so I wouldn’t be sure that every tested card is capable of delivering comfortable performance in the heaviest scenes of Unreal Tournament 3.

Crysis (Direct3D 10)

The cards were benchmarked with the following settings in this game:

The results are not very bright, just as you might have expected.

Unfortunately, the previous generation of Nvidia’s cards (the release of the GT200 has made them a previous generation) cannot deliver good performance in Crysis in DirectX 10 mode. It is only at 1280x1024 that the two fastest cards of today’s test session, the GeForce 8800 GTS/512 and GeForce 9800 GTX, make the game more or less playable. The results the cards show with FSAA and AF enabled are simply depressing.

Devil May Cry 4 (Direct3D 10)

The fresh benchmark from the upcoming Devil May Cry 4 ran with the following graphics quality settings:

The resolution and FSAA level were varied depending on the test mode.

The benchmark consists of four scenes, each very detailed and beautiful.


Without delving deep into the gameplay, I can tell you Devil May Cry 4 is surely a success from the visual standpoint. Moreover, the game is not too heavy for the graphics subsystem, as opposed to Crysis, for example. The frame rate is higher than 40fps in the hardest mode of the most complex scene. It was difficult to choose one scene, so I will publish the average frame rates across all four scenes. For detailed results in each scene, refer to the summary table.

Let’s see what we have in the new benchmark:

It is similar to the previous tests except that the cards draw closer to each other when FSAA and AF are used. This must be a consequence of averaging the results across the four test scenes. Perhaps one scene should be used instead – the second scene, showing a battle in a sunlit forest, must be the heaviest.

GeForce 9800 GTX Performance Dependence on Resolution

The following table and diagram show the performance drop of the GeForce 9800 GTX 512MB (at the default frequencies) when switching to higher display resolutions. This information may be useful for some gamers. Here is the table:

And here is the diagram:

When switching from the still-popular 1280x1024 to the widescreen 1680x1050 mode, the GeForce 9800 GTX slows down by 22.3% on average. When switching to 1920x1200, it slows down by 36.3%. The difference between 1680x1050 and 1920x1200 is about 18.6%. However, the card’s performance may drop by as much as 70% in certain tests (Crysis) and display modes. Of course, it is more correct to analyze the results of each individual game and display mode, and the table above will help you do that.
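
The percentages above are computed in the usual way, as the relative frame-rate loss when moving to the higher resolution. A minimal Python example; the frame rates are made up, chosen only to reproduce the 22.3% average drop quoted above:

    def drop_percent(fps_low_res: float, fps_high_res: float) -> float:
        # Relative frame-rate loss when moving to the higher resolution, in percent
        return (1 - fps_high_res / fps_low_res) * 100

    # Illustrative frame rates, not taken from the table above:
    print(f"1280x1024 -> 1680x1050: -{drop_percent(60.0, 46.6):.1f}%")   # ~22.3%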

Conclusion

Getting back to the main question of this review, posed at the beginning: having seen the test results, the conclusion about the positioning of the GeForce 9800 GTX in Nvidia’s single-chip line-up is obvious, and it is not in the card’s favor. Compared with the performance, price, power consumption and even dimensions of the GeForce 8800 GTS 512MB, the GeForce 9800 GTX makes little sense. The difference between the two cards in terms of performance is negligible, often within the benchmarks’ measurement error (1-1.5%), and the GeForce 8800 GTS/512 is even occasionally faster thanks to its more aggressive memory timings.

So, this difference can hardly be worth the extra $50-100 (judging by the recommended prices). The power consumption of a system with a GeForce 9800 GTX is about 45W higher than with a GeForce 8800 GTS/512, and the former card is also too long to fit into some system cases. The GeForce 9800 GTX might have had a more substantial advantage over the GeForce 8800 GTS/512 if it came with 1024 megabytes of local memory, but we guess even that wouldn’t justify the higher number in the model name.