by Alexey Stepin, Yaroslav Lyssenko, Anton Shilov
04/12/2007 | 11:42 AM
Flagship graphics cards from the top price category serve first and foremost as a public demonstration of the developer’s technological achievements. Such solutions are deservedly popular among wealthy gamers, but they never make up a substantial share of overall sales.
After all, there are few people ready to shell out some $600 – the price of the most expensive modern game console! – for a graphics card, and it is the less expensive but more widespread graphics cards that earn most of the money for AMD/ATI and Nvidia.
On November 9, 2006, Nvidia announced the world’s first consumer graphics processor with a unified architecture and support for DirectX 10. Scrutinized in our Directly Unified: Nvidia GeForce 8800 Architecture Review, the new GPU became the heart of two graphics cards, the GeForce 8800 GTX and the GeForce 8800 GTS. The senior model performed brilliantly in our tests and is surely the best choice for a gamer who does not care about the money factor (for details see our article called 25 Signs of Perfection: Nvidia GeForce 8800 GTX in 25 Benchmarks). The junior model took its place in the price range from $350 to $500.
$449 is not big money for a new-generation product with full DirectX 10 support that is capable of the highest performance in modern games. However, Nvidia didn’t stop there: on February 12, 2007, it introduced the more affordable GeForce 8800 GTS 320MB with an official price tag of $299, strengthening its position in this price sector even further.
These two cards are the subject of today’s review. We’ll also find out how critical the amount of graphics memory is for the GeForce 8800 series.
To see what the two GeForce 8800 GTS models can do, we should first examine the characteristics of the GeForce 8800 series at large.
All three graphics card models in this series are based on the 681-million-transistor G80 graphics core and utilize an additional NVIO chip which incorporates TMDS transmitters, RAMDACs, etc. Using such a complex chip across multiple graphics card models belonging to different price categories is not optimal in terms of manufacturing cost, yet it is not an altogether bad solution: Nvidia can sell off G80 samples that do not qualify for the GeForce 8800 GTX (chips that can’t work at the required frequency and/or have a few defective subunits), and the manufacturing cost is not that critical for graphics cards selling at $250 and higher. Nvidia and ATI have both used this policy before. You can recall the G71 chip that could be found on the mass-market, inexpensive GeForce 7900 GS as well as on the top-end dual-chip monster GeForce 7950 GX2.
So, this is how the GeForce 8800 GTS was built. As the table shows, this graphics card differs greatly from its elder brother in technical characteristics. It has lower frequencies and some of its stream processors are disabled. It also has less graphics memory, a narrower memory bus, and a few inactive TMUs and ROPs.
The GeForce 8800 GTS features 6 groups of stream processors, 16 ALUs each, for a total of 96 ALUs. This card’s main market opponent, the AMD Radeon X1950 XTX, has 48 pixel processors, each of which consists of 2 vector and 2 scalar ALUs, for a total of 192 ALUs.
The GeForce 8800 GTS seems to be weaker than the Radeon X1950 XTX in pure computing power, but there are some factors to be accounted for. First of all, the GeForce 8800 GTS’ stream processors, like the ALUs in the Intel NetBurst micro-architecture, are clocked at a much higher frequency than the other subunits of the graphics core: 1200MHz as opposed to 500MHz. This means a considerable performance boost. Another factor comes from the peculiarities of the R580 architecture. Theoretically, each of its 48 pixel shader processors is capable of executing 4 instructions per clock cycle, not counting a branch instruction, but only two of those instructions can be of the ADD/MUL/MADD type; the other two are always ADD instructions with modifiers. As a result, the R580’s pixel processors cannot deliver their maximum efficiency all the time. As opposed to that, the G80’s stream processors have a fully scalar architecture, and each of them can execute two scalar operations, e.g. MAD+MUL, per clock cycle. Although we still don’t have precise data on the architecture of Nvidia’s stream processors, we’ll check out in this article how the new unified GeForce 8800 architecture compares with the Radeon X1900 architecture in games.
As for performance of texturing and rasterization subunits, the GeForce 8800 GTS has more such units (24 TMUs and 20 ROPs against the Radeon X1950 XTX’s 16 TMUs and 16 ROPs), but clocks them at a lower frequency (500MHz against 650MHz). So, neither party is handicapped in today’s tests, and the cards’ relative performance in games will depend exclusively on differences in their micro-architectures rather than on a superiority in numbers.
By a curious coincidence, the GeForce 8800 GTS has the same 64GB/s memory bandwidth as the Radeon X1950 XTX, but uses a 320-bit memory bus to access its GDDR3 memory clocked at 1600MHz. The Radeon X1950 XTX makes use of 2GHz GDDR3 memory with 256-bit access. Considering that AMD advertises an advanced memory controller with a ring-bus topology, it’s interesting to see if the Radeon will have any advantage over its opponent at high resolutions with enabled FSAA, just as it had over the GeForce 7.
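The bandwidth parity is easy to verify: peak bandwidth is just the effective memory clock multiplied by the bus width in bytes. A quick sketch of the arithmetic (the helper function is ours, purely for illustration):

```python
# Peak memory bandwidth = effective memory clock (transfers/s) x bus width (bytes).
# The card figures below come from this review; the helper itself is illustrative.

def peak_bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# GeForce 8800 GTS: 1600MHz effective GDDR3 on a 320-bit bus
print(peak_bandwidth_gbs(1600, 320))  # 64.0 GB/s
# Radeon X1950 XTX: 2000MHz effective GDDR3 on a 256-bit bus
print(peak_bandwidth_gbs(2000, 256))  # 64.0 GB/s
```

The wider, slower bus and the narrower, faster bus land on exactly the same number, which is why neither card has a raw bandwidth edge here.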
The less expensive GeForce 8800 GTS 320MB was announced on February 12, 2007. It is expected to replace the GeForce 7950 GT in the performance-mainstream class and differs from the regular GeForce 8800 GTS in the amount of graphics memory only. In fact, Nvidia created this card by simply replacing the latter’s 512Mb memory chips with 256Mb ones. This solution helped Nvidia establish its technical superiority in the popular $299 category. We’ll see shortly how the reduced memory amount affects the card’s performance and whether it makes sense to pay an extra $150 for the 640MB model.
The GeForce 8800 GTS 640MB model is represented in this review by the MSI NX8800GTS-T2D640E-HD-OC graphics card. Let’s learn more about this product.
We’ve got a retail version of the product, packed into a colorful box together with all the accessories. The box is rather small, especially in comparison with the box of the MSI NX6800 GT that used to rival the huge packages of ASUS cards. Notwithstanding its modest size, the box has a traditional handle for carrying it.
The box is painted in a placid mixture of white and blue. The face side is embellished with a picture of a cute angel girl – there are no aggressive motifs of the kind so popular among graphics card makers. Three stickers tell the user that the card is pre-overclocked by its manufacturer, supports HDCP, and comes with a full version of Company of Heroes. The back side of the box shows information about Nvidia SLI and MSI D.O.T. Express technologies. The latter is a dynamic overclocking technology that increases the graphics card’s performance by 2 to 10% depending on the overclocking profile employed.
We found the following in the box, besides the graphics card:
Both manuals are designed as posters. Both are rather simple and contain only basic information. The brief user manual is written in as many as 26 languages, but offers nothing beyond basic instructions on installing the card into your system. More detailed manuals would be of more help to inexperienced users.
The driver CD contains an outdated version of ForceWare (97.29) and a number of exclusive utilities, particularly MSI DualCoreCenter and MSI Live Update 3. The former is a unified control center for overclocking both the graphics card and the CPU, but for the program to have its full functionality you must have an MSI mainboard with a CoreCell chip. So, this utility is going to be of little use for owners of other mainboards. MSI Live Update 3 is meant for keeping track of driver and BIOS updates and downloading them from the Internet. This is a handy tool for users who don’t want to delve into the intricacies of the manual BIOS update process.
Our thanks go to MSI for including a full version of the popular tactical RTS Company of Heroes into the box. It is a top-notch title, with superb visuals and exciting gameplay. Many players regard it as the best game of the genre which is also confirmed by numerous awards, particularly Best PC Strategy Game of E3 2006. As we wrote in our earlier reviews, Company of Heroes features visuals like those of a good first-person shooter and thus suits perfectly for demonstrating everything the GeForce 8800 GTS can do. Besides Company of Heroes, the discs contain a demo version of Warhammer 40000: Dawn of War – Dark Crusade.
The accessories to the MSI NX8800GTS-T2D640E-HD-OC are good overall, including a full version of the highly popular tactical RTS Company of Heroes and functional software from MSI.
Nvidia developed another PCB for the GeForce 8800 GTS that is more compact than the PCB of the GeForce 8800 GTX. All GeForce 8800 cards are shipped to Nvidia’s partners ready-made, so nearly everything we say below about the MSI NX8800GTS applies to every other GeForce 8800 GTS as well, be it a version with 640 or 320 megabytes of graphics memory.
The PCB of the GeForce 8800 GTS is shorter than that of the GeForce 8800 GTX: 22.8 centimeters against nearly 28 centimeters. The GeForce 8800 GTS is about the same size as the Radeon X1950 XTX, or even smaller, because its cooler does not stick out beyond the dimensions of the PCB.
Our sample of the MSI NX8800GTS has a green PCB but the company’s website shows a photo with a black PCB. Currently, GeForce 8800 GTX and GTS are selling in both black and green varieties. Notwithstanding all the rumors spreading over the Web, there is no difference, except for the color, between such cards, which is confirmed by Nvidia’s official website.
The black coating is rumored to be more toxic than the traditional green one. Some people also say the black coating is more expensive or harder to work with. We don’t agree with any of these suppositions. Solder masks of different colors cost the same, so particular colors should pose no manufacturing problems. We guess the explanation is simple: cards of different colors are made by different contract manufacturers, Foxconn and Flextronics. Moreover, Foxconn seems to be using both colors of coating, because we have seen both black and green cards from that manufacturer.
The power circuit of the GeForce 8800 GTS is almost as complex as that of the GeForce 8800 GTX and even includes more electrolytic capacitors. It is packed more densely, however, and has only one additional power connector, which helped make the PCB shorter. The GPU power supply is controlled by a Primarion PX3540 digital PWM controller, the same as on the GeForce 8800 GTX. The memory power supply is controlled by an Intersil ISL6549 controller, which is absent on the GeForce 8800 GTX as that card has a different memory power circuit.
The left part of the PCB with the main components of the card (GPU, NVIO, memory) is in fact identical to that of the GeForce 8800 GTX, which is reasonable because developing a new PCB from scratch would require more money, time, and effort. Moreover, it would hardly have been possible to simplify the PCB for the GeForce 8800 GTS even when developing it from scratch, because of the necessity to use the same G80 & NVIO combination as on the flagship model. The only visible difference from the GeForce 8800 GTX is the lack of a second MIO interface. Instead, there is a seat for a technological connector with fixing locks that perhaps serves the same purpose, but it is not soldered on the card. Even the 384-bit wiring of the memory bus is retained. The reduction of the bus to the necessary width is achieved by simply installing 10 GDDR3 chips instead of 12. Since each chip has a 32-bit bus, 10 of them give a total of 320 bits. Theoretically, there is no obstacle to creating a GeForce 8800 GTS with a 384-bit memory bus, but we doubt such a card will ever appear. We are more likely to see a full-featured GeForce 8800 GTX with reduced frequencies instead.
The MSI NX8800GTS-T2D640E-HD-OC carries ten GDDR3 memory chips from Samsung (K4J52324QE-BC12, 512Mb capacity, 1.8V voltage, a rating frequency of 800 (1600) MHz). According to Nvidia’s official GeForce 8800 GTS specification, this card’s memory should be clocked at 800 (1600) MHz, but the discussed version of the MSI NX8800GTS has the letters OC in its name. They mean that the card is pre-overclocked by the manufacturer and the memory chips are clocked at 850 (1700) MHz, increasing the bandwidth from 64 to 68GB/s.
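The numbers above follow directly from the chip configuration: each GDDR3 chip contributes a 32-bit interface, and bandwidth scales linearly with the effective clock. A small illustrative sketch (the function names are ours):

```python
# Each GDDR3 chip has a 32-bit interface, so bus width = chip count x 32 bits.
# Bandwidth then scales linearly with the effective memory clock.
CHIP_BUS_BITS = 32

def bus_width_bits(num_chips: int) -> int:
    return num_chips * CHIP_BUS_BITS

def bandwidth_gbs(effective_clock_mhz: float, width_bits: int) -> float:
    return effective_clock_mhz * 1e6 * width_bits / 8 / 1e9

width = bus_width_bits(10)          # 10 chips on the GTS -> 320 bits
print(width)
print(bandwidth_gbs(1600, width))   # reference 800 (1600) MHz: 64.0 GB/s
print(bandwidth_gbs(1700, width))   # MSI's 850 (1700) MHz: 68.0 GB/s
```

The same arithmetic explains why the full 12-chip GeForce 8800 GTX gets its 384-bit bus.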
The only difference between the GeForce 8800 GTS 320MB and the ordinary model is that it carries half the amount of graphics memory. This card comes with 256Mb chips such as Samsung’s K4J55323QC/QI series or Hynix’s HY5RS573225AFP. Otherwise the two GeForce 8800 GTS models are identical to the smallest detail.
The marking of the graphics card’s GPU (G80-100-K0-A2) differs from the marking of the GPU on the reference GeForce 8800 GTX (G80-300-A2). We know that the GeForce 8800 GTS may be manufactured using G80 chips that have defective units and/or are unable to work at the frequency of the 8800 GTX. Perhaps these things are reflected in the chip’s marking.
The 8800 GTS GPU has 96 active stream processors (out of 128), 24 active TMUs (out of 32), and 20 active ROPs (out of 24). The reference GeForce 8800 GTS has a basic frequency of 500MHz (a real frequency of 513MHz) and a shader processor frequency of 1200MHz (a real frequency of 1188MHz), but the MSI NX8800GTS-T2D640E-HD-OC works at GPU frequencies of 576MHz and 1350MHz, which correspond to the frequencies of the GeForce 8800 GTX. We’ll see in the Tests section what effect this frequency increase has on the graphics card’s performance.
The NX8800GTS has a standard configuration of output connectors: two DVI-I connectors capable of working in dual-channel mode, and a universal 7-pin mini-DIN connector to connect to HDTV devices via YPbPr and to SDTV devices via S-Video or Composite interfaces. Both DVI connectors of the MSI card are covered with rubber caps for protection.
The cooling system installed on the MSI NX8800GTS, as on the majority of GeForce 8800 GTS cards from other graphics card suppliers, is a shortened version of the GeForce 8800 GTX cooler we described in our earlier article called Directly Unified: Nvidia GeForce 8800 Architecture Review.
The heatsink and the heat pipe that goes out of the copper sole are both shortened. The flat U-shaped heat pipe pressed into the base for uniform heat distribution is positioned in a different way, too. The aluminum frame that holds all the cooler’s components has numerous projections where it contacts the memory chips, the voltage regulator’s power transistors, and the NVIO chip. Traditional pads made of non-organic fiber soaked in white thermal grease provide the necessary thermal contact. Dark gray thermal grease is used as the thermal interface for the GPU.
There are rather few copper elements in the cooler, making it light enough to do without a back-plate. The eight spring-loaded screws that fasten the cooler right to the PCB are quite enough. The GPU die is protected by a heat-spreading cap and surrounded with a wide metallic frame that helps avoid misaligning the cooler.
The heatsink is cooled by a 75mm blower that has the same electrical parameters as the fan in the GeForce 8800 GTX cooler (0.48A/12V). It uses a 4-pin connection to the PCB. The cooler is covered with a translucent plastic casing in such a way that the hot air is exhausted outside through the slits in the card’s mounting bracket.
This cooler design is reliable and time-tested, almost silent at work, and highly efficient. There’s no sense in replacing it with anything else. MSI only replaced Nvidia’s sticker on the cooler’s casing with its own one that copies the picture on the box and also put another sticker, with its own logo, on the fan.
We measured the level of noise produced by the MSI NX8800GTS’s cooler with a Velleman DVM1326 digital sound-level meter (0.1dB resolution) using A-curve weighting. At the time of our tests the level of ambient noise in our lab was 36dBA, and the level of noise at a distance of 1 meter from a working testbed with a passively cooled graphics card inside was 40dBA.
The cooling system of the NX8800GTS (like that of any other GeForce 8800 GTS) is no noisier than the cooler of the GeForce 8800 GTX. It is actually very quiet in any mode and even surpasses the excellent GeForce 7900 GTX cooler that used to be considered the best in its class. Achieving total silence at the same cooling efficiency would require installing a water cooling system, especially if you are into overclocking.
Reference samples of the GeForce 8800 GTX refused to run on our testbed for measuring power consumption, but newer cards from the GeForce 8800 series, and the MSI NX8800GTS-T2D640E-HD-OC too, work correctly in this system, which has the following configuration:
The mainboard in this testbed was specially modified: we connected measurement shunts into the power lines of the PCI Express x16 slot and equipped them with connectors to attach measuring instruments. We also added such a shunt to a 2xMolex → PCI Express adapter. The measurements were performed with a Velleman DVM850BL multimeter (0.5% accuracy).
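For readers curious how such shunt-based measurements translate into watts: the current on a power line is the voltage drop across the shunt divided by its resistance, and the power is that current multiplied by the rail voltage. The sketch below uses made-up resistance and voltage-drop figures purely for illustration; they are not the actual values of our shunts:

```python
# A shunt is a low-value resistor inserted into a supply line. By Ohm's law,
# the line current is I = V_shunt / R_shunt, and the power drawn on the line
# is P = I x V_rail. The numeric values below are illustrative placeholders,
# not the actual figures from our testbed.

def line_power_w(shunt_drop_v: float, shunt_ohms: float, rail_v: float) -> float:
    current_a = shunt_drop_v / shunt_ohms   # Ohm's law
    return current_a * rail_v               # power drawn on this supply line

# e.g. a hypothetical 25mV drop across a 5-milliohm shunt on the 12V line:
print(line_power_w(0.025, 0.005, 12.0))  # about 60W (5A x 12V)
```

Summing such per-line figures over the PCI Express slot and the external connectors gives the card’s total power draw.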
We loaded the GPU by launching the first SM3.0/HDR graphics test from 3DMark06 and running it in a loop at 1600x1200 resolution and with enabled 16x anisotropic filtering. The Peak 2D load was created by means of the 2D Transparent Windows test from Futuremark’s PCMark05 benchmarking suite. The results follow below:
Thus we’ve got power consumption data on the MSI NX8800GTS-T2D640E-HD-OC as well as on the whole Nvidia GeForce 8800 family.
Power Consumption Details
The GeForce 8800 GTX indeed surpasses the previous “leader”, the Radeon X1950 XTX, but only by 7 watts. A power draw of 131.5W in 3D mode is a good result considering the complexity of the G80 chip. The two additional power connectors of the GeForce 8800 GTX each take in about the same amount of power, no more than 45W even in the hardest operation mode. Although the PCB design of the GeForce 8800 GTX provides for the installation of one 8-pin power connector instead of a 6-pin one, this will hardly be necessary even if the GPU and memory frequencies are increased greatly. Nvidia’s new flagship is not very economical in Idle mode, but this is to be expected from a 681-million-transistor chip with a very high shader processor frequency. The high idle consumption also comes from the fact that GeForce 8800 series cards do not reduce their frequencies when idle.
Both versions of the GeForce 8800 GTS consume less power, but cannot match the results of Nvidia’s G71-based graphics cards. The single power connector of these devices bears a high load of 70W and more. The 640MB and 320MB versions of the GeForce 8800 GTS do not differ much from each other in terms of power consumption, because the amount of memory is the single difference between them. MSI’s pre-overclocked product has a higher power draw than the reference GeForce 8800 GTS – about 116W under load in 3D mode – which is still lower than the corresponding result of the Radeon X1950 XTX. The AMD card is much more economical in 2D mode, but graphics cards of this class are bought for playing 3D games, so their consumption in 2D is not a critical factor.
Overclocking GeForce 8800 series graphics cards involves some uncommon peculiarities we want to discuss here. As you may remember, early products from the GeForce 7 series on the 0.11-micron G70 core could only increase the frequency of the ROPs and pixel processors in steps of 27MHz. Nvidia then returned to the standard 1MHz frequency step in G71-based cards, but the GeForce 8 series has a discrete frequency step again.
Shader processors in the G80 chip are clocked at a higher frequency than the rest of the GPU subunits; the frequency ratio is about 2.3 to 1. Although the basic core frequency can be changed in steps smaller than 27MHz, the frequency of the shader processors always changes in steps of 54MHz (2x27MHz). This complicates overclocking because all the overclocking tools deal with the basic frequency rather than with the shader domain clock rate. There is a simple formula to calculate the frequency of the GeForce 8800’s stream processors during overclocking with high enough precision:
OC shader clk = Default shader clk / Default core clk * OC core clk,
where OC shader clk is the (approximate) resulting frequency, Default shader clk is the initial shader processor frequency, Default core clk is the initial core frequency, and OC core clk is the frequency of the overclocked core.
Let’s now see how the MSI NX8800GTS-T2D640E-HD-OC behaves when you try to overclock it with RivaTuner2 FR. This program can show the real frequencies of different areas, or domains, of the G80 GPU. The MSI card has the same GPU frequencies (576/1350MHz) as the GeForce 8800 GTX, so the information below applies to the flagship card as well. We were increasing the basic GPU frequency in 5MHz steps. This is a small enough step, and it is not a divisor of 27MHz.
So our practical check shows that the basic core frequency can indeed be changed with a variable step: 9, 18 or 27MHz. We could not spot any regularity in the change of the step, though. The frequency of shader processors is always changed in 54MHz steps. That’s why some frequencies of the main domain of the G80 chip prove to be practically useless at overclocking. Using them only heats the GPU up. There is no sense in overclocking the main core frequency to 621MHz, for example, because the shader block will still be clocked at 1458MHz. So, you should overclock your GeForce 8800 with some caution, using the formula above and looking up the monitoring data of RivaTuner or some other similar program.
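The formula and the 54MHz quantization described above can be combined into a small calculator. Snapping to the nearest 54MHz step is our assumption about the hardware’s behavior, but it reproduces the frequencies we observed (for example, a 621MHz core gives a 1458MHz shader domain on a card with default 576/1350MHz clocks):

```python
# Shader clock estimate for an overclocked GeForce 8800, per the formula
# OC shader clk = Default shader clk / Default core clk * OC core clk,
# with the result snapped to the 54MHz shader-domain grid. The snap-to-nearest
# rounding is our assumption, checked against the frequencies reported above.
SHADER_STEP_MHZ = 54

def oc_shader_clk(default_shader: float, default_core: float, oc_core: float) -> float:
    """Approximate shader clock straight from the formula."""
    return default_shader / default_core * oc_core

def actual_shader_clk(default_shader: float, default_core: float, oc_core: float) -> int:
    """Shader clock quantized to 54MHz steps relative to the default."""
    approx = oc_shader_clk(default_shader, default_core, oc_core)
    steps = round((approx - default_shader) / SHADER_STEP_MHZ)
    return int(default_shader + steps * SHADER_STEP_MHZ)

# MSI NX8800GTS OC (576/1350MHz defaults) with the core pushed to 621MHz:
print(actual_shader_clk(1350, 576, 621))  # -> 1458
```

This makes it easy to spot the “useless” core frequencies: any core clock that lands on the same 54MHz shader step as a lower one only heats the GPU up.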
We had not expected anything exceptional from the pre-overclocked NX8800GTS, but the card surprised us with its high GPU frequency potential. We managed to speed the GPU up from its default 576/1350MHz to 675/1566MHz and the NX8800GTS passed a few test cycles in 3DMark06 without additional cooling. The GPU temperature was 70°C as reported by RivaTuner.
We were less lucky with the memory chips whose default frequency of 850 (1700) MHz had already been higher than their rating frequency of 800 (1600) MHz. We stopped at 900 (1800) MHz. At higher memory frequencies the card would hang up or issue a driver error message.
Thus, the graphics card has good GPU frequency potential, but its memory chips are not that fast: reaching the level of the GeForce 8800 GTX is already an achievement for them. Still, coupled with the 320-bit memory bus, this ensures a considerable advantage over the Radeon X1950 XTX in terms of memory bandwidth: 72GB/s against 64GB/s. The overclocking results will vary depending on the particular sample of the MSI NX8800GTS OC Edition and on whether you use additional means such as volt-modding or a water cooling system.
During our comparative testing of the GeForce 8800 graphics cards we used the following hardware platforms:
Since we believe that the use of tri-linear and anisotropic filtering optimizations is not justified in this case, the graphics card drivers were set up in the standard way to provide the highest possible quality of texture filtering.
We selected the highest possible graphics quality level in each game. We didn’t modify the games’ configuration files. Performance was measured with the games’ own tools or, if they were not available, manually with the Fraps utility. We also measured the minimum speed of the cards where possible.
We tested the cards in three standard resolutions according to our testing methodology: 1280x1024, 1600x1200 and 1920x1200. One of the goals of this test session was to evaluate the effect of the amount of GeForce 8800 GTS graphics memory on performance. Besides, the technical specifications and cost of both graphics adapter modifications allow us to expect pretty high performance in contemporary games with enabled 4x FSAA. Therefore, the “eye candy” mode was enabled wherever possible.
We enabled FSAA and anisotropic filtering from the game’s menu. If this was not possible, we forced them using the appropriate driver settings of ATI Catalyst and Nvidia ForceWare. We ran the tests with disabled FSAA only for those games that do not support it due to the specifics of their engine or that use FP HDR, as the GeForce 7 family cannot perform FSAA together with floating-point HDR.
Since our goal was to compare the performance of two graphics cards that only differed from one another by the amount of onboard memory, MSI NX8800GTS-T2D640E-HD-OC was tested twice: at its default frequencies and at clock frequencies reduced to the level of the reference Nvidia GeForce 8800 GTS - 513/1188/800 (1600) MHz.
Besides the MSI graphics card our today’s test session also included the reference Nvidia GeForce 8800 GTS 320MB and the following other testing participants:
For our tests we used the following games and benchmarks:
First-Person 3D Shooters
Third-Person 3D Shooters:
We can’t see any big difference between the two versions of GeForce 8800 GTS that have different amounts of graphics memory on board until the resolution of 1920x1200 pixels, although the junior model is about 4-5fps slower than the senior one at 1600x1200, both delivering comfortable performance. The GeForce 8800 GTS 320MB slows down suddenly at 1920x1200, being far behind in both average frame rate and minimum speed. Moreover, it loses to the previous-generation cards, too. It’s clear the card lacks graphics memory or cannot manage it effectively.
The MSI NX8800GTS OC Edition is far faster than the reference model beginning with the resolution of 1600x1200 pixels. It cannot catch up with the GeForce 8800 GTX, but comes very close to it at 1920x1200. Obviously, the difference in the memory bus width between the GeForce 8800 GTS and GTX is not important here.
Both GeForce 8800 GTS have similar speeds in all resolutions, including 1920x1200 pixels. This is normal, considering that we benchmark the cards with enabled HDR and without FSAA. Working at the reference frequencies the cards are slower than the GeForce 7950 GX2.
MSI’s overclocked version of the card overtakes its opponent in high resolutions which, however, should not be used even on a GeForce 8800 GTX. For example, the average frame rate of Nvidia’s flagship product is only 40fps at 1600x1200 with slowdowns to 21fps in graphically complex scenes. Such numbers can hardly be considered sufficient for playing a first-person shooter with comfort.
This is a rather old game and it isn’t well suited to benchmarking today’s top-end graphics solutions. Despite the use of full-screen antialiasing, we can only see significant differences at a resolution of 1920x1200 pixels. The GeForce 8800 GTS 320MB then feels the lack of graphics memory and finds itself about 12% behind the model with 640MB of memory. You’ll have a comfortable frame rate anyway, though.
The MSI NX8800GTS OC Edition goes neck and neck with the GeForce 8800 GTX: the computing power of the latter card is not called for in Far Cry.
The demo recorded on the Research map is different and shows somewhat more interesting results. The different members of the GeForce 8800 family behave differently even at a resolution of 1600x1200. And the 320MB version is already lagging behind although the action goes on in the limited space of an underground cave. The gap between the MSI and the GeForce 8800 GTX is bigger here at 1920x1200 than in the previous case because the GPU’s pixel shader performance is much more important now.
The GeForce 8800 GTS 320MB doesn’t suffer from having less memory than its elder brother in the FP HDR mode and provides a comfortable speed in every resolution as well. The MSI version ensures a 15% speed boost, yet even the standard version of the card is fast enough for 1920x1200, whereas the GeForce 8800 GTX would surely make the resolution of 2560x1600 pixels playable, too.
This game’s visual luxuries require a powerful graphics card, and the GeForce 8800 GTS 320MB is 5% behind even at 1280x1024. And then the gap suddenly grows to 40% at 1600x1200.
There’s no clear benefit from overclocking the GeForce 8800 GTS here: the reference and the pre-overclocked cards suit equally well for playing at 1600x1200. And the overclocking gain is too small to make the next resolution playable with comfort. This comfort is only provided by the GeForce 8800 GTX with its 128 active shader processors and a 384-bit memory subsystem.
Using the deferred rendering technique, this game is incompatible with FSAA. We’ll publish the anisotropic filtering results only.
The MSI NX8800GTS OC Edition is getting ever farther away from the reference card as the display resolution grows, the gap amounting to 19% at 1920x1200. This 19% helps achieve an average frame rate of 55fps which is quite a comfortable level.
This test shows no difference between the two versions of GeForce 8800 GTS that have different amounts of graphics memory.
Our testbed’s CPU proves to be the bottleneck at the resolution of 1280x1024 pixels: all the graphic cards deliver the same speed. There are some differences at 1600x1200, but they are not critical, at least for the three versions of GeForce 8800 GTS, each of which provides a comfortable speed. The same is true for the resolution of 1920x1200 pixels. Despite its high-quality visuals, the game does not need a lot of graphics memory. The GeForce 8800 GTS 320MB is a mere 5% behind the senior (and much more expensive) model that has 640MB of onboard memory. The pre-overclocked GeForce 8800 GTS from MSI takes second place behind the GeForce 8800 GTX.
Although the GeForce 7950 GX2 has better results than the GeForce 8800 GTS at 1600x1200, you should not forget about problems you may encounter with this graphics card, which is in fact a SLI tandem, and about the much lower quality of anisotropic filtering the GeForce 7 series provides. The new solution from Nvidia has driver-related issues, too, but its problems are more likely to be corrected than those of the GeForce 7950 GX2.
The GeForce 8800 GTS 640MB is no faster than the GeForce 8800 GTS 320MB, perhaps because the game runs on a modified Doom 3 engine and is not very hungry for graphics memory. Just like in Ghost Recon Advanced Warfighter, the increased performance of the NX8800GTS OC Edition makes the resolution of 1920x1200 quite playable. For comparison, the ordinary GeForce 8800 GTS offers the same speed at 1600x1200 only. The flagship of the series, the GeForce 8800 GTX, is beyond competition.
The project from the Croatian developer Croteam demands that the graphics card have 512 megabytes of memory. If the card doesn’t meet this requirement, its performance is going to be horribly low. The amount of memory installed on the inexpensive version of the GeForce 8800 GTS proves insufficient to satisfy the game’s appetite: it yields only 30fps at 1280x1024, whereas the 640MB version of the card delivers twice that speed!
For an unknown reason the minimum speed of every GeForce 8800 is very low in Serious Sam 2. This may be somehow related to an architectural feature of the series – the unified architecture without the division into vertex and pixel processors – or it may be some flaw in the ForceWare driver. Whatever the reason, you can’t get full comfort from your GeForce 8800 in this game as yet.
The long-awaited GSC Game World project has finally arrived after 6 or even 7 years of development. The game turned out controversial, but at the same time too diverse to be described in just a few phrases. We would only like to note that the gaming engine has been improved significantly compared with the earlier version we discussed. The game supports a number of contemporary technologies including Shader Model 3.0, HDR, parallax mapping, etc.; however, it can still run in a simple mode with a static lighting model, providing excellent performance on not very powerful platforms.
Since we are shooting for maximum image quality, we tested this game with full dynamic lighting and the maximum level of detail. Among other things this mode implies the use of HDR, so there is no FSAA support, at least in the current version of S.T.A.L.K.E.R. Since the game becomes considerably less attractive with the static lighting model and DirectX 8 effects, we decided to stick to anisotropic filtering only.
The game is actually not modest at all when it comes to resources: even the GeForce 8800 GTX cannot ensure 60fps at 1280x1024 with the maximum level of detail. Note, however, that at lower resolutions performance is limited by the CPU: the cards differ little from each other and their average results are quite close.
Nevertheless, the GeForce 8800 GTS 320MB already falls a little behind its elder brother here, and the higher the resolution, the bigger the gap. At 1920x1200 the junior GeForce 8800 simply runs out of graphics memory, which is no wonder since the game scenes are very large and rich in special effects.
All in all, the GeForce 8800 GTX doesn’t provide any significant performance advantage in S.T.A.L.K.E.R. over the GeForce 8800 GTS, and the Radeon X1950 XTX looks as good as the GeForce 8800 GTS 320MB. The AMD solution is even somewhat better than the Nvidia one because it works at 1920x1200; however, using that mode is not practically justified since the average performance is only around 30-35fps. The same is true for the GeForce 7950 GX2, which is actually a little faster than its direct competitor and the junior new-generation solution.
We wrote in earlier reviews that graphics cards with 512MB of memory have a certain advantage in high resolutions of Hitman: Blood Money . 320 megabytes seems to be enough, too, because the GeForce 8800 GTS 320MB is not much slower than the ordinary GeForce 8800 GTS irrespective of the display resolution. The difference is never bigger than 5% between them.
Both these cards and the pre-overclocked version of GeForce 8800 GTS offered by MSI make all the resolutions playable while the GeForce 8800 GTX even allows using FSAA modes higher than the common 4x MSAA – its reserve of speed is more than enough for that.
Even at the maximum graphics quality settings the GeForce 8800 GTS 320MB copes with the game just as successfully as the ordinary GeForce 8800 GTS does. Both cards allow playing comfortably at 1920x1200 in the eye candy mode. The MSI NX8800GTS OC Edition is faster than both reference cards in terms of average frame rate but has the same minimum speed, and the minimum speed of the GeForce 8800 GTX is on the same level too. This must have something to do with the game’s engine.
The current version of Gothic 3 does not support FSAA so we benchmarked the cards using anisotropic filtering only.
Despite the lack of FSAA, the GeForce 8800 GTS 320MB is far slower not only than the ordinary GeForce 8800 GTS, but also than the Radeon X1950 XTX, being only a little ahead of the GeForce 7950 GX2. Delivering a frame rate of 26-27fps at 1280x1024, this card is not well suited for playing Gothic 3 .
Note that the GeForce 8800 GTX is ahead of the GeForce 8800 GTS by 20% at best. This game probably cannot fully utilize all the resources provided by Nvidia’s flagship product. The small difference between the ordinary and overclocked versions of the GeForce 8800 GTS is indicative of the same fact.
Starting with version 1.04 this game allows using FSAA, but HDR is still not supported. So, we benchmarked the cards in the eye candy mode here.
You want to have a frame rate of higher than 15fps to play Neverwinter Nights 2 with comfort, and the GeForce 8800 GTS 320MB barely reaches this point even at 1600x1200. The 640MB version of the card is never slower than 15fps.
This game loses much of its visual appeal without HDR (although some gamers argue this point), so we ran it with FP HDR enabled.
Performance of the GeForce 8800 GTS 320MB depends heavily on the display resolution: the card can challenge top-end products of the previous generation at 1280x1024, but loses to them at 1600x1200 and even more so at 1920x1200. Being up to 10% slower than the Radeon X1950 XTX and up to 25% slower than the GeForce 7950 GX2, it still performs quite well for a product with an official price of only $299.
The ordinary GeForce 8800 GTS and its pre-overclocked version from MSI feel at ease in this test and ensure a comfortable frame rate in every resolution.
Comparing two versions of GeForce 7950 GT that differed in the amount of onboard memory (for details see our article called GeForce 7950 GT: 256MB or 512MB? Foxconn and Gigabyte Graphics Cards Review) we could not spot any great difference between them in TES IV , but it is not so with the two versions of the GeForce 8800 GTS. They do behave alike at 1280x1024, but the 320MB version is half as fast as the 640MB one at 1600x1200 and sinks to the level of the Radeon X1650 XT at 1920x1200 (for details see our article called ATI Radeon X1650 XT Graphics Card Review)! The problem is clearly not in the amount of graphics memory as such but in the way this memory is managed by the driver, so it may well be corrected in a driver update. We’ll check this out as soon as the next version of ForceWare is released.
The GeForce 8800 GTS and the MSI NX8800GTS OC Edition provide a high level of comfort even in open scenes of the Oblivion world, although not 60fps as in closed environments. Top-end solutions from the previous generation are no match for the new cards.
Every card from the GeForce 8800 series delivers a high average speed, but the minimum speed is still rather low, which calls for driver improvements. The GeForce 8800 GTS 320MB has the same results as the GeForce 8800 GTS 640MB.
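The gap between average and minimum speed is the key point here: a run that averages well can still stutter on individual frames. Here is a small illustration (with hypothetical frame times, not our measured data) of how the two figures relate:

```python
def fps_stats(frame_times_ms):
    """Compute average and minimum fps from per-frame render times in ms."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_ms = max(frame_times_ms)  # the slowest frame sets the minimum fps
    return 1000.0 / avg_ms, 1000.0 / worst_ms

# One 100ms spike among otherwise fast frames: the average stays near 35fps,
# but the minimum drops to 10fps, which the player feels as a stutter.
avg_fps, min_fps = fps_stats([10, 12, 11, 100, 10])
```

This is why a high average frame rate alone does not guarantee comfortable play, and why we report the minimum separately.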
I am sure everyone who is fond of computer games is familiar with the Command & Conquer real-time strategy series. Electronic Arts has recently released a sequel that takes the player back to the well-known confrontation between the GDI and the Brotherhood of Nod. This time, however, a third force gets involved: alien invaders. The game engine is very up-to-date and uses a lot of advanced special effects. Moreover, it has a distinguishing feature: the frame rate is capped at 30fps. Perhaps this was done to limit the load from the AI and thus avoid unfair competition with the player. Since there are no built-in means to disable this cap, we tested the game with it enabled and therefore paid special attention to the minimum fps rates.
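A fixed cap like this is usually implemented as a frame limiter in the engine’s main loop: after rendering, the game sleeps away whatever is left of the 33ms frame budget. A minimal sketch of the idea (in Python for illustration only; the game’s actual implementation is not public):

```python
import time

TARGET_FPS = 30
FRAME_TIME = 1.0 / TARGET_FPS  # ~33.3ms budget per frame

def run_capped(render_frame, num_frames):
    """Render num_frames, never letting the average rate exceed TARGET_FPS."""
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()  # however fast rendering is, the sleep enforces the cap
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_TIME:
            time.sleep(FRAME_TIME - elapsed)
```

With such a cap in place the average fps stops differentiating the cards once they all reach 30, so only the minimum fps remains informative.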
Almost all the tested cards can provide 30fps at every resolution, except for the GeForce 7950 GX2, which suffered from SLI issues. It looks like the driver simply lacks the appropriate support: the official Nvidia ForceWare driver for the GeForce 7 family under Windows XP was last updated over half a year ago.
As for the two GeForce 8800 GTS models, they demonstrate a similar minimum fps rate and hence ensure equal gaming comfort. Although the 320MB model yields to the one with more onboard memory at 1920x1200, a difference of 2fps is hardly critical, especially since with a similar minimum speed it doesn’t really affect the gaming process. Only the GeForce 8800 GTX can provide perfectly smooth control of the battle units, as its instantaneous fps never drops below 25.
We tested the game in the pure speed mode, with anisotropic filtering only, because the game has problems when you turn FSAA on.
This is another game where the GeForce 8800 GTS 320MB is slower than previous-generation solutions with a non-unified architecture. Nvidia’s $299 card can only be used to play at resolutions not higher than 1280x1024 pixels even with disabled FSAA whereas the $449 model with more graphics memory allows playing normally even at 1920x1200. The AMD Radeon X1950 XTX makes the latter resolution playable too, though.
Supreme Commander does not need as much graphics memory as Company of Heroes , and the GeForce 8800 GTS 320MB turns in as high a result as the GeForce 8800 GTS does. An additional performance increase can be achieved through overclocking as is shown by the MSI card, yet it still cannot overtake the GeForce 8800 GTX. On the other hand, the game is perfectly playable even at 1920x1200 and the minimum of speed doesn’t differ much from the average.
Since 3DMark05 defaults to 1024x768 without full-screen antialiasing, the GeForce 8800 GTS 320MB predictably turns in the same overall score as the version of the card with 640MB of graphics memory. The overclocked GeForce 8800 GTS supplied by MSI boasts a pretty-looking round score of 13,800 points.
As opposed to the overall scores we obtained in the benchmark’s default mode, we ran each separate test in the eye candy mode. This does not affect the performance of the GeForce 8800 GTS 320MB, though: it is not much slower than the GeForce 8800 GTS even in the hardest third test. The MSI NX8800GTS OC Edition is everywhere second best behind the GeForce 8800 GTX, confirming the overall scores.
Both versions of GeForce 8800 GTS behave just like in the previous test, but 3DMark06 uses more complex graphics. Combined with 4x FSAA in some tests, this should produce a different picture of performance. Let’s check it out.
The separate groups of tests bring expected results. The SM3.0/HDR group makes use of a number of more complex shaders, so the advantage of the GeForce 8800 GTX is outlined more definitely there than in the SM2.0 tests. The AMD Radeon X1950 XTX looks good when the application uses Shader Model 3.0 and HDR whereas the GeForce 7950 GX2 looks better in the SM2.0 tests.
When we turn on FSAA, the GeForce 8800 GTS 320MB falls behind the GeForce 8800 GTS 640MB at 1600x1200 and cannot complete the tests at 1920x1200 at all due to the lack of memory. The speed of the 320MB version is almost two times lower in both SM2.0 tests, although the two tests are quite different in terms of scene design.
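It is easy to see why multisampling at 1920x1200 exhausts a 320MB card. A rough estimate of the render-target footprint can be made as follows (a deliberately simplified model: the three-buffer count — double-buffered color plus a depth/stencil buffer — is our assumption, and compression and driver overhead are ignored):

```python
def framebuffer_mb(width, height, msaa_samples=1, bytes_per_pixel=4, buffers=3):
    """Approximate render-target memory in MB: color (front + back) plus
    depth/stencil, each stored once per MSAA sample."""
    return width * height * bytes_per_pixel * buffers * msaa_samples / (1024 ** 2)

# Roughly 105MB at 1920x1200 with 4x MSAA for the render targets alone,
# leaving only about 200MB of a 320MB card for textures and geometry.
footprint = framebuffer_mb(1920, 1200, msaa_samples=4)
```

Even under these generous simplifications, a third of the card’s memory goes to the framebuffer before a single texture is loaded.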
The amount of graphics memory affects the card’s performance even at 1280x1024 in the first SM3.0/HDR test: the junior GeForce 8800 GTS is about 33% slower than the senior one there, and this gap grows to 50% at 1600x1200. The second test, with a smaller and less complex scene, does not need as much graphics memory, and the gap between the two cards is 5% and 20% at 1280x1024 and 1600x1200, respectively.
It is time for us to do some summarizing. We have tested two models of Nvidia GeForce 8800 GTS, one of which is a direct opponent to the AMD Radeon X1950 XTX, while the other is meant for a performance-mainstream sector and is priced at $299. So what do the test results have to tell us?
The senior model with an official price of $449 has proved good in terms of performance. The GeForce 8800 GTS outperformed the AMD Radeon X1950 XTX in most of our tests. There are only a few applications in which it provides the same performance as the AMD card or is slower than the dual-chip GeForce 7950 GX2. Considering the high performance of the GeForce 8800 GTS 640MB, we would not compare it directly with last-generation products because the latter do not support DirectX 10. Moreover, the GeForce 7950 GX2 offers a lower quality of anisotropic filtering and may be incompatible with some games.
The GeForce 8800 GTS 640MB is surely the best solution in the $449-499 price range, yet we want to warn you that the new-generation products from Nvidia are still not perfect. There are flickering shadows in Call of Juarez . And you have to resort to special tricks to launch Splinter Cell: Double Agent on ForceWare 97.94. So, although the GeForce 8800 GTS will deservedly hold the title of the best $449 graphics card until the arrival of AMD’s new-generation products, we would advise you to make sure before the purchase that it is compatible with your favorite games.
The new GeForce 8800 GTS 320MB is a good buy for its official price of $299. Among its advantages are support of DirectX 10, high-quality anisotropic filtering and good performance in common display resolutions. So, if you are going to play at 1280x1024 or 1600x1200, the GeForce 8800 GTS 320MB will be an excellent choice.
Unfortunately, this technically promising graphics card, which differs from the more expensive version in the amount of memory only, is sometimes much slower than the GeForce 8800 GTS 640MB, and not only in games that demand a lot of graphics memory (e.g. Serious Sam 2 ), but also in applications that never revealed any difference between 512MB and 256MB graphics cards before, particularly TES IV: Oblivion, Neverwinter Nights 2, and F.E.A.R. Extraction Point . Since 320MB is considerably more than 256MB, this must be a memory management problem, probably a driver issue. Anyway, even with the mentioned drawbacks, the GeForce 8800 GTS 320MB looks much more appealing than the GeForce 7950 GT or Radeon X1950 XT, even though the latter are going to become cheaper now.
The MSI NX8800GTS-T2D640E-HD-OC is a product with a good accessory bundle whose difference from Nvidia’s reference card goes beyond the box design, accessories and the sticker on the cooler. This graphics card is pre-overclocked by the manufacturer and ensures a considerable performance boost over the reference GeForce 8800 GTS 640MB in most games. Of course, it cannot match the GeForce 8800 GTX, but a few extra frames per second never hurt. These cards seem to be carefully selected and tested for their ability to work at increased frequencies, because our sample did quite well during overclocking; it is possible that many samples of the NX8800GTS OC Edition can be overclocked even above their factory frequencies.
We should give MSI special credit for accompanying its product with a two-disc edition of Company of Heroes which is considered the best strategy game of the year by many game reviewers. If you are indeed going to buy a GeForce 8800 GTS, we strongly recommend that you check out this product from MSI.