by Alexey Stepin, Yaroslav Lyssenko
11/08/2005 | 02:26 AM
The very successful GeForce 6800 graphics card, based on the 0.11-micron NV42 graphics processor, has a number of indisputable advantages, including a simple PCB design, support for advanced technologies like Shader Model 3.0 and HDR, and very low power consumption and heat dissipation, achieved thanks to the advanced tech process. Priced at about $199, this 12-pipelined card would be nearly ideal if it were not for its slow memory, clocked at 350 (700) MHz. This is a very low clock rate by today's standards.
So, despite the 256-bit memory bus, it is the memory that prevents the GeForce 6800 from showing its full potential, especially in high resolutions and with full-screen antialiasing enabled. For example, in our PowerColor X800 GT review the 12-pipelined GeForce 6800 was often slower than the 8-pipelined RADEON X800 GT in the "eye candy" test mode, i.e. with full-screen antialiasing and anisotropic filtering enabled, solely because of the sheer difference in memory bandwidth (32GB/s against 22.4GB/s).
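Those bandwidth figures follow directly from the bus width and the effective (doubled DDR) memory clock. A quick sketch of the arithmetic; the X800 GT's 500 (1000) MHz memory clock is inferred from the quoted 32GB/s figure rather than stated in the text:

```python
# Peak memory bandwidth = bus width in bytes x effective (DDR-doubled) clock.
# GPU specs use decimal units: 1 GB/s = 10^9 bytes per second.

def bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# GeForce 6800: 256-bit bus, 350 (700) MHz memory
print(bandwidth_gb_s(256, 700))   # 22.4 GB/s
# RADEON X800 GT: 256-bit bus, 500 (1000) MHz assumed from the 32GB/s figure
print(bandwidth_gb_s(256, 1000))  # 32.0 GB/s
```

The same formula shows why a 128-bit bus needs roughly double the memory clock to keep up with a 256-bit one.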
The ATI RADEON X800 GT is in fact an answer to the NVIDIA GeForce 6600 GT, while the GeForce 6800 is supposed to fight another market opponent, ATI's RADEON X800 GTO, which has 12 pixel pipelines and GDDR3 memory clocked at 1GHz. So, we suspect the X800 GTO will also have some advantage over the GeForce 6800 in high resolutions, especially with FSAA turned on.
Meanwhile, the 0.11-micron NV42 chip is known to have very high frequency potential and can be easily overclocked from 325 to 450MHz and higher, its power consumption still remaining low. It was also clear that NVIDIA did not have a product to oppose the yet-unavailable RADEON X1600 XT with. The GeForce 6800 doesn't suit this role for obvious reasons, and it also belongs to a lower price category ($199 against $249). The GeForce 6800 GT could serve that purpose, but NV40 and NV45 chips are manufactured at IBM facilities and have a high cost, so NVIDIA doesn't have much room for price adjustment with them.
Taken together, these facts called for an obvious solution. The potential of the inexpensive NV42 chip could be tapped to add more vigor to the GeForce 6800 by raising the GPU clock rate and equipping the card with 256MB of faster memory. That's the background behind NVIDIA's new product, the GeForce 6800 GS, which is expected to replace the more expensive GeForce 6800 GT and challenge the RADEON X1600 XT as well as the RADEON X800 XL. We'll examine this graphics card today to see how power-economical and fast it is in comparison with competing solutions.
Today's graphics processors are often pin-compatible with their predecessors so that developers don't have to spend time and money on designing new PCBs. An example of this approach is the RADEON X8 series from ATI Technologies: the entire series, from the X800 GT to the X850 XT Platinum Edition, uses unified PCBs with minor variations.
NVIDIA's GeForce 7800 and GeForce 6800 GPUs are also physically compatible, so the company didn't have to develop the PCB for the new product from scratch. It already had the GeForce 7800 GT printed circuit board. Simple (i.e. cheap) and rather compact, it comfortably powers a 20-pipelined GPU working at 400MHz and memory clocked at 1GHz. So, it was sure to suffice for the 12-pipelined NV42, even clocked at a high frequency. As you have already guessed, that PCB became the foundation of the GeForce 6800 GS graphics card:
If it were not for the cooler and the connectors (DVI-I and D-Sub), you couldn't tell the new card from a GeForce 7800 GT. NVIDIA abandoned the efficient but noisy cooling system of the GeForce 7800 GT and equipped the reference sample of the GeForce 6800 GS with the cooler from the GeForce 6800 GT. In brief, the cooler consists of three parts: a plastic casing with a blower, a GPU heatsink, and an L-shaped heatsink with a heat pipe for cooling the memory chips.
The L-shaped heatsink on our card was electrochemically blackened for better heat transfer, while on the original cooler it had just been painted black. The heat pipe transfers heat to the heatsink section blown by the fan, so the GDDR3 memory, clocked at 500 (1000) MHz, is cooled effectively. For comparison, the memory chips on RADEON X800 GTO and RADEON X1600 XT cards are not cooled at all.
The GPU heatsink is made of copper, although NVIDIA had earlier used all-aluminum heatsinks of the same shape. It was ASUS Computer that first employed a copper heatsink on a GeForce 6800 in its unique V9999 Gamer Edition graphics card, which actually had the same technical characteristics as today's GeForce 6800 GS but worked on the AGP platform. Generally speaking, copper has better heat conductivity but worse heat capacity than aluminum. In other words, a copper heatsink takes heat off the GPU faster and more effectively, but requires a stronger airflow. The blower installed in the GeForce 6800 GS cooling system can create this airflow, but its noise may be too much for a sensitive ear; we'll talk about that in the next section of the review. The main heatsink is fastened to the PCB with four spring-loaded screws, so proper contact with the GPU is guaranteed.
The fan speed control system lacks the feedback we find on graphics cards from ATI Technologies, but it can work in three fixed-speed modes: 2D, Low Power 3D and Performance 3D. The speeds in these modes can be adjusted to some extent with the RivaTuner utility, so you can reduce the noise from the card a little in a particular mode.
Typical dark-gray thermal paste with low thermal resistance is employed as the thermal interface between the heatsink's base and the GPU die. The memory chips touch the heatsink through cloth pads soaked in white thermal paste, like on other NVIDIA products. The cooling system is satisfactory overall. It used to cope nicely with 0.13-micron 16-pipelined NV40/45 chips, so it should handle the 0.11-micron 12-pipelined NV42 as well.
We removed the memory heatsink to check the marking. NVIDIA employed popular K4J55323QF-GC20 GDDR3 chips from Samsung here. They have a capacity of 256Mb, a voltage of 2.0V, and an access time of 2.0ns, which means they are rated to work at 500 (1000) MHz.
And then we took off the main heatsink to find an ordinary NV42 chip, which we described a while ago in our MSI NX6800 review. As you remember, this GPU has 12 pixel and 5 vertex processors and can work at frequencies over 400MHz. The GPU frequency is declared to be 425MHz in the GeForce 6800 GS specification, 100MHz above the frequency of the GeForce 6800. Combined with the 500 (1000) MHz memory frequency, this should ensure a tremendous performance gain: the GeForce 6800 GS has much more raw power than the GeForce 6800.
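That "raw power" claim can be sanity-checked with simple fill-rate arithmetic: theoretical pixel fill rate is the number of pixel pipelines times the core clock. The sketch below uses the clocks given in the text plus the GeForce 6800 GT's standard 350MHz core clock, which is an assumption not stated in this article:

```python
# Theoretical pixel fill rate = pixel pipelines x core clock (Mpixels/s).
def fill_rate_mpix(pipelines: int, core_mhz: int) -> int:
    return pipelines * core_mhz

gf6800    = fill_rate_mpix(12, 325)  # 3900 Mpix/s
gf6800_gs = fill_rate_mpix(12, 425)  # 5100 Mpix/s
gf6800_gt = fill_rate_mpix(16, 350)  # 5600 Mpix/s (standard GT clock assumed)

# The GS gains about 31% over the plain 6800...
print(round((gf6800_gs / gf6800 - 1) * 100))     # 31
# ...but still trails the 16-pipelined GT by roughly 10%.
print(round((gf6800_gt / gf6800_gs - 1) * 100))  # 10
```

That roughly 10% fill-rate deficit against the GT is worth keeping in mind when reading the game results below.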
The card is not equipped with a VIVO chip, although there is a place for it on the PCB. NVIDIA probably tried to reduce the cost of the product as much as possible, but some graphics card manufacturers will probably equip their versions of the GeForce 6800 GS with an appropriate chip from Philips.
The GeForce 6800 GS fully supports NVIDIA’s SLI technology. Unfortunately, we couldn’t test it in this mode since we had only one sample of the GeForce 6800 GS on our hands. But as soon as we get a second one, we’ll tell you how fast such SLI configurations may be.
Power consumption has become a crucial parameter of any modern graphics card, and we of course checked how much juice NVIDIA's new product needs. Officially, the GeForce 6800 GS can eat up to 70 watts at maximum, but this is a purely theoretical number, hardly ever reached under real conditions. We checked the power consumption of the card on a special testbed configured as follows:
The GPU was put to the test by running the third 3DMark05 subtest in a loop in 1600x1200 resolution with 4x FSAA and 16x AF enabled. We performed the measurements with a Velleman DVM850BL digital multimeter (its 0.5% accuracy is quite sufficient for our purposes). And here are the results:
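For reference, the total draw is computed from the per-rail current readings as the sum of voltage times current over the card's supply lines. The currents below are purely illustrative placeholders chosen to match the roughly 55W and 0.1W figures quoted later in this review, not our actual readings:

```python
# Card power = sum of (rail voltage x measured current) over all supply lines.
# Illustrative currents only; the real measurements are in the charts.
readings = {
    "12V (slot + external connector)": (12.0, 4.57),  # amps (placeholder)
    "3.3V (slot)":                     (3.3, 0.03),   # amps (placeholder)
}

total_w = sum(volts * amps for volts, amps in readings.values())
print(round(total_w, 1))  # ~54.9 W
```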
The high frequencies and the 256 megabytes of onboard memory must be fed amply: the peak power consumption of the GeForce 6800 GS is high, closely approaching that of the GeForce 6800 GT. Even the 0.11-micron tech process couldn't help the new graphics card much here. The RADEON X1600 XT consumes 13 watts less, but it has a 0.09-micron GPU, only 4 texture-mapping units, and 4 memory chips (the GeForce 6800 GS carries eight). Curiously, the new card barely uses the 3.3V power line, consuming a mere 0.1W from it. This is a characteristic feature of the PCB and power circuit of the GeForce 7800 GT that we mentioned in our NVIDIA GeForce 7800 GT review.
Overall, our expectations about the power consumption of the GeForce 6800 GS came true. The card requires about 55W under load, i.e. about the same amount as the GeForce 6800 GT needs. The new card is just a little worse than the RADEON X800 XL in this parameter, and noticeably worse than the newer RADEON X1600 XT. On the other hand, the power consumption remains at a reasonable level, not exceeding even 60 watts.
Theoretically speaking, the card could do without an external power source and feed from the PCI Express slot alone, as the ATI RADEON X800 XL with its slightly lower power consumption does, for example. Most likely, this was not implemented for technical reasons: the power circuit borrowed from the GeForce 7800 GT probably requires an external power source. This should positively affect the stability and overclockability of the GeForce 6800 GS, by the way.
Our GeForce 6800 GS being an engineering sample, its cooler could only work in two modes. The fan rotates at full speed, rather loudly, until the OS and the ForceWare driver are loaded. After that the fan speed goes down and the noise diminishes, though it does not vanish completely. The fact is that the fan-control system on our sample was programmed so that the fan speed was the same in all three modes (2D, Low Power 3D and Performance 3D).
RivaTuner agrees with us: the sliders stand on 53% for all three modes. Of course, the speed-control system will be set up properly in the final revision of the card, and its acoustic characteristics will be like those of the GeForce 6800 GT which is not silent, but is not annoyingly loud, either. Moreover, some graphics card makers will surely equip their versions of the GeForce 6800 GS with quieter coolers.
We were much pleased with the overclockability of our sample of the card. We easily overclocked the GPU from the default 425MHz to 500MHz. The cooling system probably prevented it from going higher (the GPU ran even at 540MHz, but with visual artifacts on the screen). The real surprise, though, was that the memory could work at 680 (1360) MHz, a fantastic achievement for 2.0ns chips! We know of many cases when Samsung's K4J55323QF-GC20 could work at 550-600 (1100-1200) MHz, but no higher. Could NVIDIA have hand-picked chips especially for the engineering sample of the GeForce 6800 GS, or maybe they just increased the memory voltage? Either is possible, but we could not have been mistaken: RivaTuner's monitoring module reported the frequency gain, the gain translated into a corresponding performance gain in tests, and the card was absolutely stable throughout.
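The relative frequency gains behind that stability check are easy to put in numbers (clocks as reported above):

```python
# Relative overclocking gain in percent.
def gain_pct(new_mhz: float, old_mhz: float) -> float:
    return (new_mhz / old_mhz - 1) * 100

core = gain_pct(500, 425)    # core: 425 -> 500 MHz
mem  = gain_pct(1360, 1000)  # memory: 1000 -> 1360 MHz effective

print(f"core +{core:.1f}%, memory +{mem:.1f}%")  # core +17.6%, memory +36.0%
```

With both the core and the memory sped up by double-digit percentages, in-game gains in the 20-30% range are plausible wherever the graphics card, not the CPU, is the bottleneck.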
You can rarely meet a graphics card today that would provide other than excellent image quality in 2D applications. The new product from NVIDIA was not an exception, giving out a crystal-sharp picture in all display modes supported by our monitors. We observed no fuzziness or ghosting or any other undesired effect. The next section of the review deals with the performance of the new graphics card in games.
We installed the NVIDIA GeForce 6800 GS graphics card into our testbed computer:
We set up the ATI and NVIDIA drivers in the following way:
ATI CATALYST 5.9:
NVIDIA ForceWare 81.87:
We select the highest graphics quality settings in each game, identical for graphics cards from ATI and NVIDIA. If possible, we use the games' integrated benchmarking tools (to record and reproduce a demo and measure the playback speed in frames per second). Otherwise we measure the frame rate with the FRAPS utility. Where possible, we report minimum as well as average fps rates to give you a fuller picture.
We turn on 4x full-screen antialiasing and 16x anisotropic filtering in the “eye candy” test mode from the game’s own menu if possible. Otherwise we force the necessary mode from the driver. We don’t test the “eye candy” mode if the game engine doesn’t support FSAA.
Besides the NVIDIA GeForce 6800 GS, the following graphics cards took part in this test session:
These games and applications were used as benchmarks:
First-Person 3D Shooters:
Third-Person 3D Shooters:
The GeForce 6800 GS is generally as fast as the GeForce 6800 GT (maybe a little slower in high resolutions without FSAA since the new product’s fill rate is lower). The RADEON X1600 XT is slower than the GeForce 6800 GS in almost every resolution, being limited in speed by its four TMUs. The overclocking gain we managed to get from our GeForce 6800 GS is proportional to the impressive frequency gain and amounts to 30% in some cases.
The GeForce 6800 GS is absolutely identical to the GeForce 6800 GT in the "pure speed" mode, but falls 10-12% behind it at the "eye candy" settings. It's only at the overclocked frequencies that the GeForce 6800 GS can compete with the GeForce 6800 GT in high resolutions.
NVIDIA’s graphics cards are victorious in this test since the engine of the game uses OpenGL functions and stencil shadows.
It is all much like the previous test, because this shooter is technically very similar to The Chronicles of Riddick.
The drawbacks of the RADEON X1600 XT architecture become apparent again from 1280x1024 on, whereas the GeForce 6800 GS, GeForce 6800 GT and RADEON X800 XL hit the finish line at almost the same moment. Curiously enough, the GeForce 6800 GS is not slower than the GeForce 6800 GT in the "eye candy" mode, as it was in Doom 3 and The Chronicles of Riddick, but faster, by almost 20%! The RADEON X800 XL, however, feels even better with FSAA and anisotropic filtering enabled, so the GeForce 6800 GS can only outperform it by means of overclocking.
The Research map is less favorable for ATI Technologies’ solutions from the RADEON X850 and X800 families as they lack Shader Model 3.0 support. That’s why the RADEON X800 XL is slower than the GeForce 6800 GS in the “pure speed” mode and equals it at the “eye candy” settings. The impressive frequency gain of the GeForce 6800 GS at overclocking leads to an impressive speed bonus: up to 30% in 1600x1200 resolution!
Besides complex pixel shaders, F.E.A.R. has complex textures, and the RADEON X1600 XT with its four texture-mapping units is a loser again. The GeForce 6800 GS and the RADEON X800 XL ensure a playable frame rate in resolutions up to 1280x1024. The GeForce 6800 GS is a little slower than the GeForce 6800 GT across all the modes and resolutions, but the gap never exceeds 10%.
Half-Life 2 is not a very hard game by today’s standards and all the participating graphics cards, save for the GeForce 6800 and the RADEON X1600 XT, give you no less than 60fps even in 1600x1200 with enabled 4x FSAA and 16x AF. There is almost no difference between the GeForce 6800 GS and GeForce 6800 GT, but both cards are slower than the RADEON X800 XL in the “eye candy” mode.
This time the RADEON X800 XL is on top in the “pure speed” mode already, and only the overclocked GeForce 6800 GS can compete with it at the “eye candy” settings.
The RADEON X800 XL is at first about 8-9% faster than the GeForce 6800 GS, but the gap narrows in higher resolutions. The performance gain from overclocking the GeForce 6800 GS is near 20%. We don't publish "eye candy" results because the game does not support FSAA.
The GeForce 6800 GS is much better than the GeForce 6800 GT in this game, especially in higher resolutions. The newer card enjoys a 30% advantage over the older one in the "pure speed" mode and a 20% advantage in the "eye candy" mode. The RADEON X800 XL performs passably well, but is slower than the GeForce 6800 GS, while the RADEON X1600 XT fails again, sometimes even falling behind the GeForce 6800.
Although this game is based on the Doom 3 engine, it produces different numbers. Particularly, the ATI RADEON X800 XL runs it at a very good speed, despite ATI’s deficient support of the OpenGL API. This not-very-new graphics card is always a little ahead of the GeForce 6800 GS and GeForce 6800 GT which in their turn match each other’s performance. The results of the RADEON X1600 XT are a failure if compared with those of NVIDIA’s new product, which doesn’t suffer from the lack of TMUs.
The GeForce 6800 GS is ahead of the GeForce 6800 GT everywhere, especially in low resolutions with full-screen antialiasing enabled. The new graphics card simply has no rivals in this game: the RADEON X1600 XT is impeded by its four TMUs and the RADEON X800 XL doesn't support Shader Model 3.0. ATI's graphics cards in this test do not allow you to play comfortably even in 1024x768, while the GeForce 6800 GS gives you enough speed even in 1280x1024 or in anti-aliased 1024x768. Note that none of the cards has a speed reserve here: the minimum frame rate is not higher than 23fps.
The 1600x1200 resolution of the "pure speed" mode gives us some useful information: the RADEON X1600 XT is clearly on the losing side there. The game is not complex enough to fully load 16 pixel pipelines; it is quite satisfied with 12, but is sensitive to the clock rate. As a result, the GeForce 6800 GS surpasses the GeForce 6800 GT in the "eye candy" mode.
The new card shows its best qualities on the Metallurgy map: it delivers a frame rate of 80fps and higher even in 1600x1200 with enabled FSAA and anisotropic filtering.
The GeForce 6800 GS takes the first place and is followed by the GeForce 6800 GT. The GeForce 6800 and the RADEON X800 XL share the third place in 1024x768. All the graphics cards, including the RADEON X1600 XT, are sufficiently strong to run this game at a comfortable speed in all resolutions.
You should not compare the results of the RADEON X800 XL with those of the other cards since it is the only solution here that does not support Shader Model 3.0. The SM 3.0 rendering mode of this game involves large amounts of mathematical computations, and the number of pixel pipelines plays an important role. The GeForce 6800 GT has 16 such pipelines against the GeForce 6800 GS's 12, so the latter is about 10% slower than the former throughout the test.
This car simulator prefers ATI RADEON X850 and X800 graphics cards as they are more efficient at executing simple pixel shaders. The RADEON X800 XL is unrivalled in low resolutions, but is then overtaken by the GeForce 6800 GS in higher ones. In the latter case the gap between the two cards is huge, about 40%, even when the GeForce 6800 GS works at its default frequencies. So it seems this game values GPU frequency more than extra pixel pipelines.
The GeForce 6800 series is superior in Pacific Fighters. The slowest member of this family is always faster than the RADEON X800 XL, not to mention the RADEON X1600 XT. The latter gives you enough speed in the lowest resolution only. NVIDIA's triumph in this test has been repeatedly explained in our reviews: flight simulators developed by Maddox Games use OpenGL by default.
Just like in Splinter Cell: Chaos Theory, the ATI RADEON X800 XL works in the Shader Model 2.0 mode and delivers a lower-quality picture than graphics cards with Shader Model 3.0 support do. As for the rest of the devices, NVIDIA is victorious once again, while the RADEON X1600 XT, belonging to the new generation of graphics cards from ATI Technologies, does not allow you to play this game with comfort even in 1024x768. We could not test Age of Empires 3 in 1600x1200 with FSAA and anisotropic filtering enabled due to problems with the image quality.
And once again the ATI RADEON X1600 XT is the worst runner, while the GeForce 6800 GT and GS are in the lead thanks to their UltraShadow II technology. The GS card has a higher “pure speed”, while the NV45-based device does better when full-screen antialiasing and anisotropic filtering are turned on.
The GeForce 6800 GS is on the same level of performance with the RADEON X800 XL in the “pure speed” mode. The device from ATI Technologies looks better at the “eye candy” settings thanks to its more efficient memory controller. The GeForce 6800 GT is noticeably slower than the GeForce 6800 GS in the “eye candy” mode.
This is a rather CPU-dependent test, so all the graphics cards have similar speeds here. Only the RADEON X800 XL stands out – it lacks only 120 points to notch a score of 6,000.
The GeForce 6800 GS scores almost 1400 points more than the RADEON X1600 XT. It is a good result, comparable with the GeForce 6800 GT's score. Overclocking helps the new card overcome the 13,000-point mark, i.e. surpass the GeForce 6800 Ultra and closely approach the result of the GeForce 7800 GT! The overall scores are no surprise, since 3DMark03 runs well on GeForce 6/7 graphics cards, but let's take a more detailed look at the situation.
The GeForce 6800 GS and GT both have high pure speeds in the first test, but the former takes the lead in the “eye candy” mode. We’ve seen this happen throughout the current test session and we are not sure about the cause. It is possible the memory controller of the NV42 was improved, or maybe this test just requires high GPU frequency more than pixel pipelines.
There is a negligible difference between the GeForce 6800 GT and GS in all modes – the latter is a little better in high resolutions of the “eye candy” mode. As you know, the GeForce 6800 GT can process up to 32 Z-values per clock and the GeForce 6800 GS only 24, but makes up for that by its high frequency. As for the RADEON X1600 XT, it can only compete with the GeForce 6800 here.
The results of the third test are similar to those of the second one since they are analogous from the technical point of view. The geometry of the third test is just somewhat more complex.
The GeForce 6800 GS is again faster than its predecessor in the fourth and the most difficult test. It can even keep up with the RADEON X800 XL in the “eye candy” mode which is an achievement: the GeForce 6800 GT used to be slower than the ATI solution here. And this is the only test where the RADEON X1600 XT shows good results. At least, it is not too far behind the GeForce 6800 GT in the “pure speed” mode.
So, the GeForce 6800 GS has overall the same performance as the GeForce 6800 GT in 3DMark03 and is even better in the “eye candy” mode, at least in the first and fourth tests, i.e. the simplest and the most complex ones.
3DMark05 paints a different picture: the RADEON X1600 XT, not brilliant in the earlier tests, shows its best in this benchmark, where the graphics card has to process huge amounts of complex shaders. But the GeForce 6800 GS scores only 102 points less, so we can consider these two cards equals. Let's see if that holds in each of the tests.
The first test doesn't bring any surprises: the GeForce 6800 GS goes abreast with the RADEON X1600 XT, but the latter doesn't do so well in the "eye candy" mode. Despite its brand-new architecture and high GPU and memory frequencies, the RADEON X1600 XT only outperforms the previous-generation RADEON X800 XL there.
The GeForce 6800 GS, GeForce 6800 GT, RADEON X1600 XT and RADEON X800 XL have similar speeds in all resolutions, with minor deviations. The RADEON X1600 XT stands out in low resolutions – it is not limited by its four TMUs in this small-scale scene with numerous special effects.
The GeForce 6800 GS leaves its predecessor behind in higher resolutions of the third test at the “pure speed” settings. When FSAA and anisotropic filtering are enabled, the situation changes to the opposite in 1280x1024, but then the newer card is ahead again in 1600x1200. Both RADEONs can compete with the GeForce 6800 GT and GS in the “pure speed” mode only.
So, the resulting scores are appropriate: the GeForce 6800 GS is really better than the GeForce 6800 GT and can rival the RADEON X1600 XT in the “pure speed” mode. At the “eye candy” settings, however, the new graphics card from ATI loses its ground, limited by its four TMUs and, probably, non-optimized drivers.
So, is the new graphics card from NVIDIA a worthy successor to the passing-away GeForce 6800 GT, and is it a worthy rival to the new ATI RADEON X1600 XT? We can answer both questions in the affirmative, now that we've seen the GeForce 6800 GS in action. The new member of the GeForce 6800 family turned out to be faster than the older one, sometimes by as much as 30-40%! The injection of megahertz steroids into the GeForce 6800 architecture brings the desired effect. In most games and applications the GeForce 6800 GS leaves no chance to the RADEON X1600 XT, a not-yet-available product from ATI Technologies. In a few cases the two graphics cards have the same speed, and the RADEON X1600 XT won in 3DMark05, but this is hardly an achievement.
Considering the identical price of these two graphics cards (about $250), the GeForce 6800 GS enjoys a complete victory, making the RADEON X1600 XT a much less appealing product for gamers. It is yet too early to make any final verdicts, since the latter device obviously suffers from insufficient driver optimization, but the general trend is unfavorable for ATI Technologies. The RV530 graphics processor has a feature that hurts performance: it has only four texture-mapping units, which automatically become a bottleneck in games that operate with high-resolution textures. Another drawback is the 128-bit memory bus. Working at 690 (1380) MHz, it has a maximum bandwidth of 22.1GB/s, while the 256-bit memory bus of the GeForce 6800 GS, clocked at 500 (1000) MHz, easily provides 32GB/s and positively affects the performance of the card in high resolutions and/or with full-screen antialiasing enabled.
If the price of the RADEON X1600 XT remains the same, it will not stand a chance against the GeForce 6800 GS, except in applications where low power consumption and advanced video support are crucial. Note also that the GeForce 6800 GS is already shipping while the RADEON X1600 XT will be coming to market in mass quantities only starting from November 30. This is a definite advantage for NVIDIA before the imminent Christmas sales season.
As for the ATI RADEON X800 XL, the GeForce 6800 GS is slower in some cases, mostly when FSAA and anisotropic filtering are turned on. But each time the new card from NVIDIA could overtake the X800 XL through overclocking. Moreover, the RADEON X800 XL does not support Shader Model 3.0 and HDR which may be important features for PC gamers.
The excellent overclockability of the GeForce 6800 GS needs to be mentioned, too. The combination of the GeForce 7800 GT printed circuit board with the 0.11-micron NV42 chip proved to be very overclocker-friendly. We managed to speed up our sample of the card from the default 425/1000MHz to an impressive 500/1360MHz. The ensuing performance gain lifted the speed of the card to the level of the more expensive GeForce 6800 Ultra, and this is probably not the limit. We could have achieved higher frequencies if we had replaced the cooling system with something more efficient.
Thus, the GeForce 6800 GS is not just a high-performance solution at a relatively low price. It may become a sensation among overclockers! It's not certain yet how much room for price adjustment NVIDIA has: the GeForce 6800 GS PCB is obviously cheaper than the one employed in the GeForce 6800 GT, but more expensive than that of the GeForce 6800. Theoretically speaking, the GeForce 6800 GS has some reserve for further price reduction, and considering that the NV42 is produced in mass quantities and the chip yield is high, the price of GeForce 6800 GS graphics cards may go down by a few dozen dollars within a few weeks of the release. On the other hand, NVIDIA may not want to reduce the price because of the virtual lack of competition.
So, a highly appealing product has emerged in between the GeForce 6 and GeForce 7 series. It is probably not destined to live long because NVIDIA is already preparing the G72 GPU, a mass-user version of the G70, but the GeForce 6800 GS will surely fulfill its purpose – to be the killer product in NVIDIA’s lineup during the Christmas sales season.