by Alexey Stepin , Yaroslav Lyssenko, Anton Shilov
04/18/2007 | 06:08 AM
The Nvidia GeForce 8800-series has managed to take the market of high-end graphics accelerators by storm with three models. Nonetheless, not a lot of gamers can afford graphics boards that cost $299 and above, which is why the bulk of graphics chip developers’ revenue comes from much more affordable offerings.
According to Jon Peddie Research, around 75% of add-in-board graphics market revenue comes from sales of graphics cards priced at $150 - $229 at retail. While Nvidia has launched a version of its GeForce 8800 GTS with 320MB of memory that carries a manufacturer suggested retail price (MSRP) of $299, the board is unlikely to shed $50 or so quickly: even large distributors in the U.S. offer such cards to their clients for approximately $290 - $320 or even more depending on manufacturer, bundle and clock-speeds.
Even though Nvidia’s GeForce 7 lineup can serve the market of graphics cards priced between $150 and $229 pretty well and the firm hardly needs to update its current lineup from a financial or market share perspective, Nvidia has strategic reasons to release a cost-effective lineup of DirectX 10-compliant chips.
It is no news that while discrete graphics processors from ATI, Nvidia or S3 Graphics can all perform certain DirectX functions, they do so in different ways. For example, Nvidia’s DirectX 9.0 hardware could run faster with partial precision, whereas ATI’s DirectX 9.0 hardware always used full precision and could not benefit from partial precision even when game developers forced it for Nvidia hardware. Besides, there are a number of other implementation peculiarities that software makers need to take into account either to gain a performance boost or to avoid speed drops.
However, game developers will hardly tweak their titles for high-end hardware owned by 5% of their customers. That said, Nvidia needs to have a broad family of DirectX 10-compliant graphics processing units at different price points on the market so as to encourage programmers to tune their game engines for the new architecture. Since Nvidia’s arch-rival ATI, the graphics product group of Advanced Micro Devices, was, according to some sources, holding the release of the code-named R600 graphics chip until the more affordable RV610 and RV630 are available for shipping, mid-April seems to be just the right time for Nvidia to unveil the new family: game developers who already use the GeForce 8800-series will focus their efforts entirely on the GeForce 8, and AMD risks losing the initial DirectX 10 real-world benchmarks, which are supposed to be available in the second half of the year.
So, today Nvidia is releasing its GeForce 8600 GT and GTS graphics products as well as relatively low-cost GeForce 8500 GT card, which are supposed to serve the $100 - $229 market going forward. Let’s take a closer look and find out what the GeForce 8600/8500 and G84 actually are.
For years Nvidia has been developing graphics architectures that allow easy scaling upwards and downwards: after creating a flagship graphics processor, lower-end chips can be designed without many problems from the building blocks of the higher-end part. The GeForce 8-series is no exception and, in fact, the unified shader architecture takes this approach one step further.
The GeForce 8800 GTX consists of 8 “blocks”, each of which features 2 shader processors (one shader processor sports 8 stream processors, or ALUs), 4 texture modules and a shared L1 cache. Besides, each block has an array of general-purpose registers, an L2 cache and a 64-bit memory controller, which can be accessed by any other “block”. Such an approach allows Nvidia to easily disable defective parts of a particular graphics processor to create cut-down solutions, such as the GeForce 8800 GTS, based on the same chip, or to develop new processors without major architectural changes.
The newly released GeForce 8600-series graphics chip features two “blocks” with 32 stream processors (which work at 1.45GHz), 16 texture modules, 8 raster operation units (ROPs) and a 128-bit memory controller. In fact, the configuration of the 8600-series seems rather strange for Nvidia: back in the day the company used to offer performance-mainstream GPUs with half the number of execution units of its high-end/premium parts (e.g., the GeForce 6600 featured 8 pixel processors, whereas the GeForce 6800 GT/Ultra sported 16 pixel pipes), but the GeForce 8600 only has one fourth of the GeForce 8800’s processing power and half of its texture units.
The general architecture of the G84 is similar to that of the G80, but there are some differences too. Each stream processor (SP), or ALU, can perform two simultaneously issued scalar operations like MAD+MUL per cycle, just like a stream processor of the G80. To compensate for the relatively small number of stream processors, Nvidia pushed their clock-speed up to 1.45GHz on the GeForce 8600 GTS, and there are signs that the company may be working on a more powerful solution with an even higher shader processor clock-speed. Besides, with the G84 the developer decided to change the configuration of the texturing units: Nvidia claims that the GeForce 8600 has 16 texture address (TA) and 16 texture filtering (TF) units, whereas the GeForce 8800 supports 8 texture address and 16 texture filtering units. It is unclear why the firm thought that rapid texture fetch would be important on the performance-mainstream and mainstream parts and why it sacrificed “free” anisotropic filtering in favor of more TAs. We will come back to the claim about 16 texture units later in this review.
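The per-cycle figures above make it easy to estimate theoretical shader throughput. A quick back-of-the-envelope sketch, assuming (as the text states) that every SP can retire one MAD (2 FLOPs) plus one MUL (1 FLOP) per cycle:

```python
# Rough theoretical shader throughput estimate (an illustration based on
# the per-cycle figures above, not an official Nvidia specification).
def shader_gflops(num_sps, clock_ghz, flops_per_cycle=3):
    """FLOPs/cycle = 3, assuming a dual-issued MAD (2 FLOPs) + MUL (1 FLOP)."""
    return num_sps * clock_ghz * flops_per_cycle

print(shader_gflops(32, 1.45))   # GeForce 8600 GTS: ~139 GFLOPS
print(shader_gflops(128, 1.35))  # GeForce 8800 GTX: ~518 GFLOPS
```

This simple ratio (roughly 3.7x) shows why the 8600 GTS’ one-fourth share of the 8800 GTX’s stream processors is partially offset by its higher shader clock.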
But while the GeForce 8600-series graphics processors do not feature as many execution units as the GeForce 8800-series, they still consist of 289 million transistors, a respectable number. The number has a logical explanation: the G84 graphics chip has built-in NV I/O logic, which alone accounts for around 60 million transistors and which had to be taken out of the G80, as making a chip of over 700 million transistors using TSMC’s 90nm process technology seemed too risky. Even though Nvidia’s original explanation for moving the NV I/O logic off-die was the extreme clock-speed of the SPs, some may now doubt whether this was actually the reason: the SPs of the G84 operate at 1.45GHz and the output logic is still inside the GPU itself.
Since there are no major architectural difference between the G80 (GeForce 8800) and G84 (GeForce 8600), all the 3D capabilities available on the high-end part are present in the mainstream chip as well. Therefore, end-users can expect the following features:
But besides some tweaks within the pixel pipeline, the new GeForce 8600 also features enhancements not available on the G80: we are talking about substantially improved video processing engine.
PureVideo technology was first introduced by Nvidia with the NV40 graphics processor, the heart of the high-performance GeForce 6800 family. This unit was disabled in the very first GPU batches because of a hardware defect, but even when this problem was solved it turned out that the first-generation video processor could not work with HD content. A fully-fledged version of PureVideo appeared only in the less powerful but more mainstream NV43 GPU, which was the basis for the GeForce 6600 family. After that the technology continued to exist without any significant hardware enhancements until the GeForce 8800 arrived.
This introduction will help you better understand what has really happened this time. History has a tendency to repeat itself, and it looks like the story of the GeForce 6800 and 6600 is about to replay with the GeForce 8800 and 8600. The thing is, despite all the claimed enhancements of the PureVideo technology and the addition of the “HD” abbreviation to the name, which stands for high-definition video support, all these enhancements remained purely software. PureVideo could perform motion compensation and post-processing on the hardware level, but that was about it: all advanced decoding stages were still the prerogative of the central processing unit.
Namely, this is true for the MPEG-4 AVC format, also known as H.264, which uses the Context-Adaptive Variable Length Coding (CAVLC) and Context-Adaptive Binary Arithmetic Coding (CABAC) entropy coding algorithms. These advanced H.264 techniques provide extremely high image quality at a good compression level; however, they require huge computational capacity from the decoder, a burden that falls onto the CPU’s shoulders even with a GeForce 8800 installed in the system.
Only the new Nvidia GPU family can perform real hardware H.264 decoding thanks to a dedicated unit. Moreover, the decoding process is now performed completely on the hardware level thanks to an additional AES128 unit responsible for decrypting protected video content.
Since the GeForce 8600 is a mainstream GPU, the situation is very similar to the one we have just described: this is the second time it is not the flagship Nvidia product that receives advanced video decoding capabilities, but the mainstream one, with lower performance in gaming applications.
According to Nvidia, the new PureVideo HD version can decrease the CPU workload far more than any technology existing in the market before the GeForce 8600. Practice will show whether this is true. As for image quality, nothing new has been introduced here: it is promised to be the same as on the GeForce 8800.
Being one of the first companies to release GeForce 8600-series graphics cards, Asustek Computer was able to supply us with a retail version of its graphics card, which will be available commercially shortly. To our great surprise, the Asus EN8600GTS/HTDP/256M graphics card does not come inside an enormously large box, but is put inside a moderately sized package. The design of the box is also not really pretentious: a lady in ancient Roman clothes on a green background as well as basic information regarding the graphics card’s capabilities and features.
In fact, there will be another version of the Asus EN8600 GTS available on the market (EN8600GTS/G/HTDP/256M), which will be bundled with the S.T.A.L.K.E.R.: Shadow of Chernobyl video game. The packaging of that graphics card will resemble the game DVD: the face of an irradiated monster with the Chernobyl nuclear power plant in the background.
As usual, the contents of Asustek’s boxes are carefully put in separate compartments and the card is fixed in a foam-rubber tray to avoid damage during transportation and storage. The new cardboard boxes do not allow seeing the board without opening the package; however, this is hardly a problem.
When the box is opened, you find the following accessories:
The product bundle is nothing special, to say the least: no free games, no interesting accessories. On the other hand, the contents of the box will allow anyone to use the graphics card with no problems and without paying for additional things that may not necessarily be needed. Those who still want to get something extra are welcome to acquire the EN8600GTS/G/HTDP/256M version of Asustek’s GeForce 8600 GTS, which comes with the S.T.A.L.K.E.R.: Shadow of Chernobyl first-person shooter.
Nevertheless, the not very rich product bundle of our version of the Asus EN8600 GTS does not mean that Asus’ customers receive the same kind of service as those who buy cheap boards from second-tier manufacturers. Besides renowned quality, Asus bundles several in-house developed programs, including Asus GameFace Messenger, Asus Video Security Online, Asus SmartDoc and Asus OnScreenDisplay (for Windows XP, XP x64 and Vista 32-bit). The graphics card comes with Nvidia ForceWare driver version 101.02, which could not be found at the nvidia.com web-site at press time. Since this driver is what early adopters will have to use, we decided to test the GeForce 8600 GTS with this version.
A first look at the newcomer makes it clear that its design is considerably simpler and more compact than that of the GeForce 8800 based solutions. The GeForce 8600 GTS PCB is slightly longer than the GeForce 7600 GT’s and close to the GeForce 7900 GT/GS/7950 GT’s: a little less than 20cm. This relatively small size will allow it to be installed easily into compact system cases, especially since the reference cooling system is a single-slot solution. The GeForce 8600 GTS sample we had at our disposal is based on a traditional green PCB.
The card boasts very modest looks, especially if you compare it to the monstrous GeForce 8800 GTX. Nevertheless, it features an additional power connector: the high GPU and memory frequencies of the GeForce 8600 GTS wouldn’t let it rely on the PCI Express slot’s power capacity alone.
The voltage regulator circuitry is pretty simple and doesn’t contain many power components or electrolytic capacitors. The control elements are all on the reverse side of the PCB. The card definitely boasts a relatively low level of power consumption, because none of the voltage regulator components is equipped with additional cooling: there are not even passive heatsinks on these elements, and some of them are not laid out at all.
Thanks to the 128-bit memory bus, the PCB layout is very simple and hence low-cost. From this standpoint the expensive GeForce 8800 GTX/GTS cannot compete with the newcomer. In fact, the biggest part of the PCB front is bare, except for the traces connecting the memory chips with the graphics core. There are four GDDR3 memory chips on the PCB, 512Mbit each. In our case these are Samsung K4J52324QE-BJ1A chips with a 900 (1800) MHz nominal frequency. The actual frequency they run at equals 1000 (2000) MHz. The memory frequency together with 128-bit access produces a pretty modest bandwidth of only 32GB/s; however, there may appear graphics card modifications using faster GDDR4 memory with up to 512MB total capacity, which will require 1Gbit chips. Anyway, we shouldn’t expect the GeForce 8600 GTS to work performance wonders in high resolutions with enabled FSAA.
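The 32GB/s figure follows directly from the bus width and the effective (double data rate) memory clock; a minimal sketch of the arithmetic:

```python
# Peak memory bandwidth = bytes per transfer * transfers per second.
# effective_mhz is the doubled "effective" DDR figure, e.g. 2000 for
# GDDR3 clocked at 1000MHz.
def bandwidth_gb_s(bus_width_bits, effective_mhz):
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gb_s(128, 2000))  # GeForce 8600 GTS: 32.0 GB/s
print(bandwidth_gb_s(320, 1600))  # GeForce 8800 GTS, for comparison: 64.0 GB/s
```

The comparison line shows why the 8800 GTS 320MB, despite its lower memory clock, has twice the bandwidth of the newcomer: its bus is 2.5 times wider.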
The G84 chip contains fewer transistors than the G80 and is logically smaller; its packaging has no protective frame, because the die is small and doesn’t require a massive cooling solution. The chip revision is 400-A2 and it supports HDCP. Our chip was manufactured in the sixth week of 2007. The graphics core contains 32 unified shader processors, 16 TMUs and 8 raster operation processors (ROPs). Just like the GeForce 8800 GTX/GTS, the GeForce 8600 GTS uses a discrete GPU clocking scheme with different main domain and shader domain frequencies. The main G84 domain in the top model of the family works at 675MHz, and the shader processor domain at 1450MHz.
The Asus EN8600 GTS is equipped with a standard set of connectors: two DVI-I ports supporting dual-link and HDCP, a universal S-Video/RCA/YPbPr video-out connector and the MIO interface pins for SLI configurations. We suspect there will also be GeForce 8600 GTS solutions with the VIVO feature, because there is an empty spot on the reverse side of the graphics card PCB, which is most likely meant to host a video decoder chip later on.
The cooling system of GeForce 8600 GTS is much simpler in design than that of the GeForce 8800 family, because G84 generates considerably less heat than G80.
The cooler looks like a variation of the GeForce 6800 cooling solution, although it is simpler and smaller. Nevertheless, it features a heatpipe that ensures efficient functioning of the aluminum heatsink with relatively large gaps between the ribs. The system doesn’t exhaust the warm air out of the system case, but it fits into a single slot together with the graphics card.
Unlike on the GeForce 7600/7900/7950, the fan of the GeForce 8600 GTS reference cooler uses a four-pin connector instead of the traditional two-pin one, which means PWM-based fan rotation speed control. The cooler is fastened to the PCB with four spring screws and cools not only the GPU, but also the memory chips working at high clock speeds.
Judging by the size and design of the GeForce 8600 GTS cooling system, this graphics adapter boasts relatively low heat dissipation and power consumption, however we still have to check it out in practical tests.
Generally speaking, power consumption of a high-end graphics card is hardly important for a hardcore gamer or performance enthusiast: they all use large computer cases with exceptionally powerful PSUs and a number of hard drives and optical drives; therefore, even the noise level of a graphics card’s cooler will hardly matter to them.
Things are a lot different in the performance-mainstream, mainstream and entry-level graphics card worlds. When it comes to affordable add-in boards we have not only power consumption constraints but also space constraints, as small computer cases may be too small for boards like the Radeon X1950 Pro, not to mention the Radeon X1950/X1900 XT that are heading towards the $229 price-point. Therefore, moderate power consumption is important for the GeForce 8600 GTS/GT.
As usual, we plugged the GeForce 8600 GTS into our special testbed equipped with connectors for measuring instruments to check out the new product’s power consumption. Our testbed configuration remained unchanged:
The measurements were performed with a Velleman DVM850BL multimeter (0.5% accuracy).
We loaded the GPU by launching the first SM3.0/HDR graphics test from 3DMark06 and running it in a loop at 1600x1200 resolution and with enabled 4x full-screen antialiasing and 16x anisotropic filtering. The Peak 2D load was created by means of the 2D Transparent Windows test from Futuremark’s PCMark05 benchmarking suite.
Power consumption of the Nvidia newcomer seems to be in line with that of the GeForce 7900 GS. Even though the GeForce 8600 GTS is made using 80nm process technology and sports 289 million transistors, whereas the predecessor features 278 million and is produced using a 90nm fabrication process, power consumption of the newcomer is a bit higher. Still, 47W is definitely not a lot, and considering that there are 32 stream processors working at 1.45GHz inside the 8600 GTS, power consumption that is 18W lower than the Radeon X1950 Pro’s may be considered remarkable.
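For readers curious how per-rail multimeter readings turn into a single wattage figure, the arithmetic is simply P = V x I summed over the board’s supply rails. The currents below are illustrative placeholders chosen to land near the 47W ballpark, not our actual raw readings:

```python
# Board power from per-rail current measurements: P = sum(V * I).
# A GeForce 8600 GTS class card draws from the external 12V connector
# plus the PCI Express slot's 12V and 3.3V rails. Currents here are
# hypothetical, for illustration only.
rails = {
    "12V external": (12.0, 2.6),  # (volts, amps)
    "12V slot":     (12.0, 1.0),
    "3.3V slot":    (3.3,  0.6),
}
total_w = sum(v * i for v, i in rails.values())
print(f"{total_w:.1f} W")  # ~45.2 W with these example currents
```

In practice the split between the external connector and the slot varies with load, which is why the testbed taps each rail separately.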
Traditionally, we decided to find out how loud the cooling system of the GeForce 8600 GTS is compared to its rivals. As always, we used a Velleman DVM1326 digital sound-level meter, which has a resolution of 0.1dB and allows measuring noise levels of up to 130dB with A or C weighting.
We minimize the influence of external factors by performing the measurements at night with the windows closed; the background noise level is about 36dBA then. The sound-level reading is about 41-42dBA at a distance of 1 meter from the test platform when a graphics card with passive cooling is installed in it (an increase from figures measured earlier, as we started to use a 1kW power supply unit, which is pretty noisy itself). These are the two reference numbers we base our judgments upon. The noise is measured with the system case open.
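The 36dBA background cannot simply be subtracted from a reading, because incoherent noise sources add as sound power, not as dB values. A small sketch of how two sources combine:

```python
import math

# Incoherent sound sources combine as sound power:
# L_total = 10 * log10(sum over sources of 10^(L_i / 10))
def combine_dba(*levels):
    return 10 * math.log10(sum(10 ** (lvl / 10) for lvl in levels))

# Example: the 36 dBA room background plus a hypothetical source that
# would read 40 dBA on its own gives roughly 41.5 dBA at the meter.
print(round(combine_dba(36, 40), 1))  # 41.5
```

This is why a quiet cooler barely raises the reading above the background, while the total can never drop below the louder of the two sources.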
Just like when measuring the maximum currents, we check out three operating modes – Idle, Peak 2D load, and Peak 3D load – which are described in the previous section.
The GeForce 8600 GTS with its 47W power consumption hardly requires robust cooling, but unfortunately our Asus EN8600 GTS turned out to be noisier than other graphics cards. The board maxes out its fan speed right after the computer starts, then reduces it and does not raise it again even under high load. Still, even at moderate speeds the novelty seems noisier than its predecessors. So, the cooling system of the 8600 GTS is not as good as the coolers employed by Nvidia in recent years; nevertheless, it cannot be called really noisy either.
Unfortunately, we had pretty limited time to prepare this review. As a result, we decided not to publish results of synthetic benchmarks and to skip the image quality and media playback investigation. A brief overview of image quality in several games showed that the GeForce 8600 GTS delivers image quality similar to the GeForce 8800 GTS/GTX, which means it is the best available today. However, this was only a brief investigation, and in future articles we may offer a more detailed image quality analysis.
As in our comparative testing of four GeForce 8800 GTX graphics cards, we used the following hardware platforms:
Since we believe that the use of tri-linear and anisotropic filtering optimizations is not justified in this case, the graphics card drivers were set up in standard way to provide the highest possible quality of texture filtering.
We selected the highest possible graphics quality level in each game. We didn’t modify the games’ configuration files. Performance was measured with the games’ own tools or, if not available, manually with Fraps utility. We also measured the minimum speed of the cards where possible.
Since the GeForce 8600 GTS will initially cost $199 - $229, we decided to test it with 4x FSAA and 16x anisotropic filtering enabled. We used 1280x1024/960, 1600x1200 and 1920x1200 resolutions, as wide-screen displays with 24” or 27” panels are becoming more and more affordable and widespread. We used the “eye candy” mode everywhere it was possible without disabling HDR or Shader Model 3.0 effects that improve image quality. We enabled FSAA and anisotropic filtering from the game’s menu; if this was not possible, we forced them using the appropriate driver settings of ATI Catalyst and Nvidia ForceWare. Since some games do not allow the simultaneous use of FSAA and HDR, we benchmarked those with HDR and without antialiasing.
Besides the Asus EN8600 GTS 256MB, the following graphics cards participated in our review:
For our tests we used the following games and benchmarks:
First Person 3D Shooters
Performance in Third-Person 3D Shooters
Performance in RPG
Performance in Simulators
Performance in Strategies
Performance in Synthetic Benchmarks
Given that Battlefield 2142 is a very demanding game, graphics cards that cost less than $229 can hardly boast good performance in it, and the GeForce 8600 GTS is no exception: it performs on the same level as the GeForce 7900 GS despite having lower memory bandwidth. The Radeon X1950 Pro is faster than both Nvidia solutions and, considering that the GeForce 8800 GTS 320MB will initially be only about $100 more expensive than the 8600 GTS, the latter’s price/performance ratio does not seem really good.
Call of Juarez is yet another demanding game that does not offer a smooth framerate even on the top-of-the-range GeForce 8800 GTX 768MB. Therefore, it is no surprise that the GeForce 8600 GTS cannot surpass even the 25fps mark here, just like the previous-generation GeForce 7900 GS. While this time the 8600 GTS is slightly faster than its rival Radeon X1950 Pro, even a 20% performance advantage in 1920x1200 will not help a gamer much, as we are talking about 12fps vs. 10fps.
The visually stunning world of Far Cry managed to do something no other game has done before: stay in our pool of games for more than three years. But while the game is pretty old, it does not mean that any graphics card can deliver a comfortable framerate at maximum quality.
The GeForce 8600 GTS cannot provide 60fps even on the “Pier” level in 1600x1200, not to mention a comfortable speed in 1920x1200, a resolution that is becoming more and more popular as full-HD TVs gain ground and 24” monitors drop in price. It is also somewhat alarming that the 16 texture units (at 675MHz) of the GeForce 8600 GTS cannot match the 20 of the GeForce 7900 GS and the 12 of the Radeon X1950 Pro on this texture-rich level.
The situation seems better on the pixel-shader-intensive “Research” level: 1600x1200 is playable even with maximum quality settings, and performance in 1920x1200 is considerably higher than on the “Pier” level. Still, the new 8600 GTS board delivers performance comparable only with the GeForce 7900 GS, which is far below that of the Radeon X1950 Pro.
High dynamic range lighting in Far Cry can bring almost any graphics card to its knees. It is quite satisfying to see that the novelty is consistently faster than its predecessor, the GeForce 7900 GS, across all resolutions. Nevertheless, the only resolution suitable for practical gaming is 1280x1024, as already in 1600x1200 the framerate drops to around 40fps, which is too low for a first-person shooter. As we can see from the graphs above, the difference between performance on the “Pier” and “Research” levels is negligible.
It is interesting to note that the 8 ROPs of the GeForce 8600 GTS can offer performance comparable with the 16 ROPs of the GeForce 7 series and the 12 of the Radeon X1950 Pro.
F.E.A.R. has indisputably deserved the title of a “true” next-generation video game. Besides visual appeal it has elements of a psychological thriller, something new to the video game world. Considering the availability of an expansion pack for F.E.A.R., Monolith/Sierra’s intention to release another one, and the similarities between F.E.A.R. and Condemned: Criminal Origins, this kind of video game is going to become really widespread in the coming years.
That said, it is truly regrettable that Nvidia’s new performance-mainstream part not only cannot provide comfortable performance in F.E.A.R.: Extraction Point, but is also slower than its predecessor, the GeForce 7900 GS, and its direct competitor, the Radeon X1950 Pro.
As usual, since the game uses deferred rendering, we cannot enable FSAA; thus, we only publish results with anisotropic filtering and HDR.
Even though the GeForce 8600 GTS belongs to a new generation of graphics cards and delivers an average framerate in line with its rivals, its minimum framerate is well below that of any other graphics card in this review. It is interesting to point out that while the new hardware showcased exceptional performance (for its class) in Far Cry with HDR, its speed in Ghost Recon Advanced Warfighter is not that good.
Despite radical architectural differences, the GeForce 8600 GTS doesn’t differ much from the GeForce 7900 GS or the Radeon X1950 Pro from the gaming perspective, as it offers the same level of performance. All three cards allow playing HL2: Episode One in resolutions up to 1600x1200 with the maximum level of detail and 4x FSAA enabled. However, there is hardly any performance reserve in this case, so we would still recommend playing in resolutions no higher than 1280x1024.
As we have already said, the new Nvidia architecture doesn’t yet perform as well in OpenGL applications as the older one. This is true for the GeForce 8600 GTS, which falls behind the GeForce 7900 GS throughout the entire test. In this case the situation is further aggravated by the performance of the TMUs: even though Nvidia declares 16 of them, the novelty cannot even match the performance of the Radeon X1950 Pro with its 12 TMUs at a lower clock-speed.
As a result, the newcomer performed worst of all the testing participants, giving us no hope for acceptable performance even in 1280x1024 with full-screen antialiasing enabled, at least with the current drivers.
Since we aim at maximum image quality, we tested this game with full dynamic lighting and the maximum level of detail. This test mode also implies the use of HDR, so there is no FSAA support, at least in the current S.T.A.L.K.E.R. version. Since the game doesn’t look nearly as appealing with static lighting and DirectX 8 effects, we only enabled anisotropic filtering here.
The GeForce 8600 GTS performed pretty modestly here, at about the same level as the Radeon X1950 Pro. In 1280x1024 the results are very unlikely to be much higher, so you will either have to reduce the level of detail or go with the static lighting model, which goes easier on the graphics subsystem and allows using full-screen anti-aliasing.
Our today’s hero acts not quite typically here, competing successfully with the GeForce 7900 GS in 1280x1024 and 1920x1200. However, for some reason it is not as fast in 1600x1200. If we look at the minimum framerates, the GeForce 8600 GTS looks more attractive than the Radeon X1950 Pro and is quite suitable for comfortable gaming at 1280x1024 with 4x FSAA.
The results of the GeForce 8600 GTS are not very high, and if we consider the average fps rate, our hero does not stand out against previous-generation solutions such as the Radeon X1950 Pro and GeForce 7900 GS. However, we would still like to point out that the newcomer demonstrated a better minimum fps rate, outperforming even the Radeon X1950 XT.
The current version of Gothic 3 doesn’t support FSAA, therefore we tested all the cards with anisotropic filtering only.
Since the game uses no anti-aliasing, it is easier for the GeForce 8600 GTS to run as fast as the Radeon X1950 Pro in all resolutions including 1920x1200. However, there is no gaming comfort whatsoever: only the owners of a Radeon X1900 XT can play Gothic 3 in 1280x1024 with the maximum level of detail.
The game allows using FSAA beginning with version 1.04. The HDR support, however, is still not finalized, so we ran the NWN 2 tests in “eye candy” mode only.
The newcomer feels quite at home in low resolutions: the GeForce 8600 GTS performs close to the Radeon X1900 XT in 1280x1024. Although we couldn’t get very precise control of the virtual troops, because the minimum framerate may sometimes drop below 15fps, Nvidia’s mainstream solution still looked quite OK in this test.
As the resolution increases, the GeForce 8600 GTS loses speed dramatically, dropping down to the level of the GeForce 7900 GS, which indicates that it lacks memory bandwidth. This is no big deal, because the GeForce 8600 GTS is not intended for gaming in high resolutions anyway; however, we still expected a new-generation solution to do better.
Without HDR the game becomes considerably less attractive, and although there are different opinions on the matter, we still performed our TES IV testing with FP HDR enabled.
In not very complex indoor scenes the GeForce 8600 GTS runs neck and neck with the Radeon X1950 Pro and yields to the GeForce 7950 GT as well as the Radeon X1900 XT. The results are quite OK for the price range of the new Nvidia solution; however, the use of high resolutions in this game is out of the question.
Despite our expectations, the GeForce 8600 GTS performed excellently in 1280x1024 in scenes with wide open spaces. It defeated even the Radeon X1900 XT and won even the minimum-performance test. The unified architecture worked exactly as it was supposed to; however, in 1600x1200 the newcomer lost its advantage because of the high workload put on its TMUs, although even in this case it was still faster than all other testing participants except the Radeon X1900 XT. Only in 1920x1200 did the average performance of the GeForce 8600 GTS drop down to the level of the GeForce 7950 GT, but this resolution is the privilege of higher-end graphics cards anyway.
All in all, the GeForce 8600 GTS is perfectly suitable for playing Oblivion in 1280x1024 with the maximum level of detail.
The X3: Reunion engine has always favored solutions from the former ATI Technologies, now AMD, and our today’s test session is no exception. The new GeForce 8600 GTS, despite its high clock frequencies, could only slightly outperform the GeForce 7900 GS. We believe it hardly makes sense to use FSAA on this graphics card even in 1280x1024, because its average performance is only 40fps and its minimum is 25-30fps, while precise control is very important for successful space combat in X3.
Since the game limits the fps rate, we should consider minimal performance, because it determines the smoothness of the virtual troops control.
However, this limitation doesn’t matter for GeForce 8600 GTS graphics card: it cannot hit 30fps in 1280x1024 with enabled FSAA 4x anyway. Here AMD Radeon X1950 Pro is way better, not to mention more powerful solutions such as GeForce 7950 GT or Radeon X1900 XT.
Since there are problems with enabling FSAA reliably in this game, we ran the tests only in pure speed mode with anisotropic filtering enabled.
The GeForce 8600 GTS, with its 16 TMUs, high clock-speed and new-generation micro-architecture, again cannot leave the GeForce 7900 GS and Radeon X1950 Pro behind. Its performance in Company of Heroes is far from high: its average result is even a little lower than that of the GeForce 7900 GS, and its minimal fps is the lowest of all!
If you want to play Company of Heroes on this graphics card, you will have to reduce the level of detail. Owners of the AMD Radeon X1950 Pro or GeForce 7900 GS who love this popular RTS have no reason to invest in the new GeForce 8600 GTS, since they would have to sacrifice just as much quality anyway. And you most certainly shouldn't do it if you have a GeForce 7950 GT or Radeon X1900 XT!
The GeForce 8600 GTS evidently lags behind the Radeon X1950 Pro. The lag is small but quite stable: a little less than 10% in standard resolutions and about 15% in the widescreen 1920x1200 mode. From the gamer's perspective, this card is not suitable for playing at high resolutions (just like the GeForce 7900 GS and the Radeon X1950 Pro). With FSAA 4x enabled, you can play Supreme Commander at 1280x1024 at most, and even then the performance reserve is minimal.
The total score of the GeForce 8600 GTS is quite high: it is close to that of the Radeon X1900 XT, which used to be one of the best high-end solutions. However, we are talking about the 1024x768 resolution here, which doesn't load the graphics accelerator enough to fully reveal the advantages and disadvantages of its architecture. The results of individual tests in the "eye candy" mode will give us a much better idea.
In the Game 1 test the GeForce 8600 GTS outperforms the Radeon X1950 Pro only at 1280x1024. At 1600x1200 the two cards level out: the 16 TMUs of Nvidia's newcomer working at 675MHz perform comparably to the 12 TMUs of the Radeon X1950 Pro working at 575MHz. As for the GeForce 7950 GT with its 24 TMUs and the Radeon X1900 XT with its 16 TMUs, our hero cannot compete with them at all, at least not in a benchmark where fillrate is crucial for the overall performance score.
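The parity at 1600x1200 is all the more telling because, on paper, the fillrate advantage belongs to the newcomer. Here is a back-of-envelope check (a sketch only, assuming one bilinear texel sampled per TMU per clock, which real workloads rarely sustain; the TMU counts and clocks are the ones quoted in this review):

```python
# Peak texel fillrate in Mtexels/s, assuming one texel per TMU per clock.
def texel_fillrate_mtexels(tmus: int, core_mhz: int) -> int:
    return tmus * core_mhz

# Figures quoted in the review:
geforce_8600_gts = texel_fillrate_mtexels(16, 675)  # 16 TMUs at 675MHz
radeon_x1950_pro = texel_fillrate_mtexels(12, 575)  # 12 TMUs at 575MHz
print(geforce_8600_gts, radeon_x1950_pro)  # 10800 vs 6900 Mtexels/s
```

Despite a roughly 1.5x theoretical advantage, the two cards finish level, which suggests the G84's TMUs deliver noticeably less than their peak rate in practice.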
The Game 2 test is not nearly as large-scale as the first one; it serves mainly to demonstrate GPU performance with geometry and scene lighting. The unified architecture of the GeForce 8600 GTS proved quite efficient here. However, the card still yielded to the Radeon X1900 XT and fell behind the GeForce 7950 GT and Radeon X1950 Pro at resolutions above 1280x1024. We would also like to stress that the G84 slows down as the resolution increases, despite the peculiarities of the Game 2 test from the 3DMark05 suite. The texturing and raster units of the new graphics processor are still a step backward compared with previous-generation solutions from AMD as well as from Nvidia itself.
The results of the Game 3 test confirm the obvious once again. They vividly reveal the drawbacks of a unified architecture with few execution units as well as the problems created by the relatively low-performance TMUs. Of course, the new Nvidia solution outperforms the GeForce 7900 GS, but it cannot get beyond the performance level of the Radeon X1950 Pro, especially at high screen resolutions.
The results obtained in 3DMark06 correspond better to the actual standing of the new GeForce 8600 GTS, so we are not talking about any rivalry with the Radeon X1900 XT here. Almost 5300 points is a pretty good achievement for a relatively inexpensive mainstream graphics card. However, let's take a closer look at somewhat harder working conditions before we draw any final conclusions.
At 1280x1024 the GeForce 8600 GTS is powerful enough to compete with the Radeon X1900 XT in the SM2.0 tests and with the Radeon X1950 Pro in the SM3.0/HDR tests. In the latter, the Radeon X1900 XT with its 48 pixel processors, each featuring 4 ALUs and working at a little over 600MHz, simply has no rivals whatsoever. Now let's make it more interesting by enabling FSAA 4x. Note that we didn't use resolutions above 1280x1024 in the individual tests because the mainstream graphics cards didn't run fast enough there. At frame rates below 10fps the measuring error increases, which makes a fair comparison very hard.
At least enabling anti-aliasing in the first test didn't hurt the standing of the GeForce 8600 GTS: the new Nvidia solution still competes successfully against the GeForce 7950 GT and is just a little slower than the Radeon X1900 XT with its 16 TMUs and ROPs. However, the second test revealed the same drawbacks as the Game 2 test in 3DMark05, although unlike the latter, the GeForce 8600 GTS loses even to the Radeon X1950 Pro here.
In the SM3.0/HDR tests the drawbacks of the GeForce 8600 GTS become evident: the card cannot compete even with the Radeon X1950 Pro. Nvidia's solution has only 32 shader processors at its disposal (although they work at a much higher frequency), while the AMD adapter boasts 36 dedicated pixel processors, each featuring 2 scalar and 2 vector ALUs, plus 8 dedicated vertex processors with 2 vector ALUs each. Moreover, it doesn't have to deal with dynamic workload distribution between the processors. From this perspective, the defeat the GeForce 8600 GTS suffers at the hands of the Radeon X1950 Pro in the first test of this suite is quite illustrative.
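Counting raw ALU issue slots makes the mismatch concrete. This is a crude sketch under stated assumptions: each ALU is counted as one issue slot per clock regardless of vector width, and co-issue rules and scheduling efficiency are ignored; the unit counts and clocks are those quoted above.

```python
# Peak ALU operations issued per second (in millions), counting each
# ALU as one issue slot per clock; vector width and co-issue ignored.
def alu_issue_mops(alus: int, clock_mhz: int) -> int:
    return alus * clock_mhz

# GeForce 8600 GTS: 32 scalar stream processors at 1450MHz
g84 = alu_issue_mops(32, 1450)               # 46400
# Radeon X1950 Pro: 36 pixel processors x 4 ALUs plus
# 8 vertex processors x 2 ALUs, at roughly 575MHz
rv570 = alu_issue_mops(36 * 4 + 8 * 2, 575)  # 92000
print(g84, rv570)
```

Even with its large clock-speed advantage, the G84 issues roughly half as many ALU slots per second under this simple count, which is consistent with its defeat in the shader-heavy tests.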
The newcomer’s results in the other SM3.0/HDR test are not that high either. The GeForce 8600 GTS doesn’t support single-component texture sampling acceleration, which gives AMD an advantage in the processing of dynamic shadows created with shadow maps. It is not quite clear why the 32 unified shader processors perform so modestly despite the huge peak power their 1.45GHz frequency implies. A logical explanation would be buggy drivers; however, it is hard to suspect ATI or Nvidia of being unable to finalize their drivers for 3DMark, the most popular GPU test in the industry these days.
Nvidia Corp. has once again left its rival ATI, the graphics product group of Advanced Micro Devices, behind in terms of time-to-market with its mainstream-class DirectX 10 graphics accelerators. But is the performance of the flagship option of the new series – the Nvidia GeForce 8600 GTS – truly impressive? Well, let’s sum everything up.
The GeForce 8600 GTS graphics card from Asustek Computer performs about as well as the GeForce 7900 GS, a product that has been available on the market for 8 months already. In most benchmarks the 8600 GTS is faster; in others (F.E.A.R., Ghost Recon Advanced Warfighter, Prey, Hitman) it is slower than the GeForce 7900 GS, and in the vast majority of real-world situations it lags behind the Radeon X1950 Pro. The good things about the GeForce 8600 GTS are that it offers better image quality than both of its rivals, supports DirectX 10 and has a significantly improved video processing engine. But those who already use a Radeon X1000-series card will hardly notice the quality improvements and will not be able to experience DirectX 10 applications for several months to come.
So those who already own a GeForce 7900 GS or a Radeon X1950 Pro will hardly find it useful to switch to the GeForce 8600 GTS just now: its performance in current games with current drivers does not really impress, and it remains to be seen whether the 8600 GTS is a good performer in DirectX 10-based games. Still, the new PureVideo HD engine may be just what the doctor ordered for video enthusiasts (keep in mind, however, that at press time Nvidia only guaranteed the new PureVideo HD features for the Windows Vista operating system).
Traditionally, successful performance-mainstream parts at the $199 price-point have offered a performance level similar to former flagship offerings released a year or a little more earlier. This was the case with the GeForce 6600 GT, which could easily outperform the Radeon 9800 XT; the same was true for the GeForce 7600 GT, which offered the performance of the GeForce 7800 GT at a much lower price-point; the Radeon X1950 Pro outperformed even the Radeon X1800 XT in certain cases, while the GeForce 7900 GS provided the same level of speed in games as the GeForce 7800 GTX. When it comes to the GeForce 8600 GTS, we cannot see it leaving even the GeForce 7950 GT behind, not to mention the more powerful GeForce 7900 GTX.
It should be kept in mind, however, that the Asus EN8600 GTS graphics card we tested operated at 675MHz/1.45GHz for the graphics core/unified shader processors and 2.0GHz for the memory, whereas there will be factory-overclocked graphics boards with clock-speeds boosted towards 720MHz/1560MHz for the GPU/SPs and 2.10GHz for the memory. Those graphics cards are likely to offer higher performance than today’s hero, though we would not expect tremendous gains. It is interesting to note that the Asus EN8600 GTS could not be overclocked at all with ForceWare 101.02, but once a beta version of ForceWare 158.16 was installed, the board could easily be pushed beyond 760MHz/2360MHz.
We do not know Nvidia’s exact reasons for cutting down the computing power and performance of the GeForce 8600 so significantly compared to its high-end brethren, but it would be logical to assume that the company wanted a very cost-efficient graphics processing unit (GPU) that could serve the performance-mainstream and mainstream markets for a while. In fact, we would assume that the GeForce 8600-series may share the destiny of the GeForce FX 5600, which was a mediocre DirectX 9 choice in spring/summer 2003 but was quickly replaced by the more powerful GeForce FX 5700 in fall 2003. Considering the current performance of the GeForce 8600 GTS and keeping in mind that its die size is already quite large, we would expect a more powerful performance-mainstream GPU based on the G80 architecture and made using 65nm process technology to arrive sometime in late Q3 or early Q4. That said, we would strongly suggest that end-users consider the GeForce 8800 GTS 320MB instead of the GeForce 8600 GTS 256MB, as for about a $75 - $100 premium you will be able to get performance that may be 100% higher.
Now, let’s summarize all the pros and cons of the Nvidia GeForce 8600 GTS as well as the Asus EN8600 GTS.