The King Is Dead, Long Live the King: EVGA GeForce GTX 295+ Graphics Card Review

After a long spell on the defensive, Nvidia finally launches a counteroffensive. The first sign of it was the successful transition of the G200 architecture to 55nm technology. Now the company is trying to win back the title of 3D gaming graphics king by launching a dual-processor graphics card based on this architecture.

by Alexey Stepin, Yaroslav Lyssenko
01/25/2009 | 03:54 PM

Resting on one’s laurels is a pleasant but risky pursuit in the world of gaming graphics hardware, where the market situation can change diametrically in a moment. Nvidia learned this lesson well when it suffered the sudden all-out attack ATI launched with its new RV770 architecture. Nvidia should be given credit for not giving up the fight, though. The company’s first serious countermeasure was the transition of the G200 architecture to 55nm manufacturing process. The result was good: as we learned in our tests, the new version of the GeForce GTX 260 Core 216 not only beat the Radeon HD 4870 1GB across a majority of tests but also featured far lower power consumption. This opened the way to creating a solution that would be not only competitive with the Radeon HD 4870 X2, but better!

 

This solution had to be a dual-G200 graphics card. Of course, this is a departure from Nvidia’s principle of developing highest-performance single-chip cards, but in war every means is good if it leads to victory. A single G200 just would not be able to challenge the RV770 duo working in CrossFire mode, however high its frequency potential might be after the transition to the newer tech process. ATI’s development principle has proved its worth, so Nvidia has to use the same means. The company could not do so earlier because a dual-chip solution based on the 65nm G200 would have been too hot and uneconomical. The 55nm tech process made this effort feasible.

As a matter of fact, top-performance expensive graphics cards bring but small profits to their developers (it is the mainstream solutions priced at $200 and lower that account for the bulk of sales), yet they have an important role beyond being profitable products. One knows a squadron by its flagship, and premium graphics cards are a kind of visiting card of the developer, telling everyone, including the potential customer, of its technological prowess. This has an effect on the company’s market share, too. We can recall ATI’s position prior to the release of the Radeon HD 4000 series: ATI did offer interesting products in the mainstream sector, yet it was quickly losing its market position due to the lack of a competitive flagship.

Although the G200, even in its 55nm version, is not a very suitable graphics core for building a dual-chip graphics card, the release of the GeForce GTX 295 is a necessary countermeasure for Nvidia against the long-lasting superiority of the Radeon HD 4870 X2. Some reviewers suggest that Nvidia’s refusal to develop premium-class single-chip cards is only temporary, but we guess the company will stick to this strategy in the future. Our point is confirmed by the preliminary data we have about the new-generation graphics cores currently being developed by Nvidia.

So, in this review we will discuss the new flagship solution from Nvidia and pit it against the Radeon HD 4870 X2 in a number of popular games to see who the king of gaming 3D graphics is now.

Nvidia GeForce GTX 295 vs. ATI Radeon HD 4870 X2: Face to Face

Nvidia lost the previous round, in which its GeForce GTX 280 wrestled with ATI’s Radeon HD 4870 X2. The new contender sports more impressive parameters but faces the same dangerous opponent. Let’s compare their specifications:

So, both fighters are impressive but each in its own way. The Radeon HD 4870 X2 has a huge advantage in terms of computing resources (which is, however, partially negated by the peculiarities of the superscalar architecture of its shader processors) whereas the GeForce GTX 295 is theoretically superior in texture-mapping and rasterization operations. Considering the higher clock rate of the execution subunits, the new card has a high chance of winning under real-life conditions. The support for hardware PhysX acceleration is a nice bonus. The GeForce GTX 295 can either distribute the computing resources dynamically or assign the job of physics acceleration to one GPU and free the other’s resources for graphics acceleration.

There is one parameter in which the GeForce GTX 295 is inferior to the Radeon HD 4870 X2, though: the amount of local graphics memory available to 3D applications. It has only 896 megabytes as compared to the opponent’s 1024 megabytes. Theoretically, this should affect the new card’s performance at high resolutions with full-screen antialiasing, especially in newer games that demand lots of graphics memory. As for memory bandwidth, the GeForce GTX 295 is almost as good as the Radeon HD 4870 X2 thanks to its 448-bit memory buses. It also does not support DirectX 10.1 and has neither an integrated audio core nor a full-featured VC-1 decoder, but these drawbacks are insignificant. After all, the GeForce GTX 295 is meant to be the fastest gaming card rather than an HTPC-ready solution.

The GeForce GTX 295 has the same implementation of SLI technology as its predecessor GeForce 9800 GX2. You can read our There Will Be Speed: The Clash of Modern Multi-GPU Technologies review for details.

All in all, the newcomer is strong and prepared to push the Radeon HD 4870 X2 off the 3D king’s throne. Before we see if it succeeds, let’s take a closer look at one of its versions. It is the GeForce GTX 295+ graphics card from EVGA.

Package and Accessories

EVGA’s series of G200-based products comes in unified packaging that varies but slightly from model to model. The EVGA GeForce GTX 295+ arrives in shops in a standard black box of rather small dimensions, embellished with a bright stripe and sealed in plastic film.

There are but few differences from the packaging of the EVGA GeForce GTX 260 Core 216 Superclocked card we tested recently: the stripe crossing the box has become dark red and acquired a pattern of EVGA logos, and the letters are now silvery rather than gray, but the design has not lost its feel of restraint and austerity. Unfortunately, there are not one but two common mistakes on this box. Besides the wrong indication of the memory type (DDR3 instead of GDDR3), the amount of graphics memory is declared to be 1792 megabytes. However, 3D applications can access only half that amount, i.e. 896 megabytes, because data is duplicated for each GPU in modern multi-GPU solutions.

There is a window in the back of the box through which you can see a part of the PCB with a serial number sticker that ensures your warranty and makes you eligible for the EVGA Step-Up program. The latter option looks odd here because the EVGA GeForce GTX 295+ is the highest-performance gaming card today. We don’t think EVGA will offer anything better than the described card during the Step-Up period, which lasts 90 days from the date of purchase. Well, EVGA may introduce a GeForce GTX 295 with even higher overclocked frequencies, for example with the word Superclocked in its name, but our experience suggests that factory overclocking doesn’t produce such a tremendous performance breakthrough as to justify replacing your card with a pre-overclocked version of the same model.

The packaging has good protective properties. Instead of the plastic container used in the less expensive products from EVGA, there is a foam-rubber tray here with compartments for the card and its accessories. The accessories are rather scanty considering the recommended price of the product (about $500):

This is just the minimum you need to install the card and get it going in your gaming PC, and there are no extras like the free copy of Far Cry 2 we found included with the EVGA GeForce GTX 260 Core 216 Superclocked. The lack of a DVI-I → HDMI adapter may be explained by the fact that the card is equipped with a dedicated HDMI port, yet this adapter would come in handy due to the peculiarities of multi-monitor support in Nvidia’s dual-GPU solutions, which we will describe below.

Besides the driver and an electronic version of the user manual, the included disc contains two useful tools, Fraps and EVGA Precision. The latter is a program for overclocking the graphics card, controlling its fan speed and monitoring its temperatures.

Summing it up, the EVGA GeForce GTX 295+ kit should be praised for the design and protective properties of its packaging, but the accessories do not match the high status of the graphics card. We guess a copy of a popular game would be appropriate here, especially as one of the EVGA products we tested earlier came with a copy of Far Cry 2 despite its much lower price.

PCB Design

The representative of the third generation of Nvidia’s dual-processor graphics cards bears a striking resemblance to the second-generation GeForce 9800 GX2 in exterior and engineering solutions.

But while it was quite possible to make do with only one PCB for the GeForce 9800 GX2 (and ATI indeed employs one PCB for its dual-GPU solutions), the GeForce GTX 295 just wouldn’t fit onto one PCB. It would not be possible to accommodate two huge G200b chips and wire two 448-bit memory buses without making the PCB much longer, and 27 centimeters is already the maximum allowable length for most of today’s ATX system cases. Thus, the dual-PCB design is a necessity here rather than an engineering choice. Nvidia may create a simpler and cheaper design for dual-GPU graphics cards in the future when it transitions to GDDR5 memory.

Like on the GeForce 9800 GX2, the PCBs of the GeForce GTX 295 face each other and share a common cooler. This solution is questionable because the G200 chip is hot even in its 55nm version and the components of the two PCBs will heat each other up through the common heatsink. But as we said above, this component layout is the only way of creating a dual-GPU G200-based card within the existing height/length constraints. It is good that the card’s mounting bracket is not overcrowded with connectors as was the case with the GeForce 9800 GX2. There are a lot of slits at the top of the bracket for exhausting the hot air out of the system case. Some of the air is thrown into the system case, though.

As opposed to the GeForce 9800 GX2, it is not hard to take a GeForce GTX 295 apart. You have to remove the protective casing, unfasten the mounting bracket and all the screws that secure the PCBs on the cooler (the cooler is in fact the foundation of the whole arrangement). Then, you can carefully separate the thing into its parts, overcoming the resistance of the thermal grease.

 

 

The high component density of both PCBs supports our point that a single-PCB version of the GeForce GTX 295 is impossible, even though there is a figured cutout in each PCB for air intake. The PCBs communicate through two flexible cables connecting the headers located in the left part of each PCB.

Each PCB of the GeForce GTX 295 carries an independent four-phase GPU voltage regulator based on a Volterra VT1165MF PWM controller. The top PCB is powered only by an 8-pin PCI Express 2.0 plug (with a load capacity of 150W) whereas the bottom one receives some of its power from the power section of the PCI Express x16 slot. The APW7142 and AMS1117 chips seem to be responsible for powering the memory.

A 2-pin S/PDIF header is located next to the 6-pin power connector. The S/PDIF input is necessary to feed an external audio stream from the sound card into the HDMI output. This header is located on the bottom PCB because that PCB also carries the HDMI interface. Interestingly, the installation of the dedicated HDMI port required a second NVIO chip, so the GeForce GTX 295 has two of them. However, you can only use the card’s three interfaces simultaneously when not in SLI mode, which doesn’t make much sense because the GeForce GTX 295 then loses its main advantage: its high gaming performance. Support for dual-monitor configurations in SLI mode has been implemented since version 180 of the GeForce driver, but it is not as extensive as with ATI’s CrossFireX technology: the Slave monitor can be turned off if you launch a game in full-screen mode on the Master monitor.

An nForce 200 chip is employed as the bridge here. It can be found on some mainboards, where it implements Nvidia’s SLI technology. It is an intelligent PCI Express 2.0 switch that can operate with 48 PCI Express lanes and supports direct data exchange between the two GPUs.

Each of the card’s PCBs carries 14 GDDR3 memory chips from Hynix (H5RS5223CFR-N0C, 512Mb, 16Mb x 32, 2.05V, 1000 (2000) MHz). EVGA increased the memory frequency of its card relative to the reference sample: from 1000 (2000) MHz to 1026 (2052) MHz.

Of course, we would like to see the new flagship of the GeForce GTX 200 series equipped with two 1GB memory banks with 512-bit access, but this would make its design even more sophisticated. Therefore the developer endowed its product with two 896MB banks on 448-bit memory buses. Thus, the GeForce GTX 295 carries a total of 1792 megabytes of memory, of which 3D applications can access half. This should be enough even for 2560x1600, but an SLI configuration built out of two individual GeForce GTX 280 cards can theoretically be faster in some situations thanks to its larger amount of graphics memory. The peak bandwidth of the new card’s memory subsystem is 224GBps, but the EVGA version offers somewhat more, 229.8GBps, which is almost equal to that of the Radeon HD 4870 X2 (230.4GBps).
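As a cross-check, the bandwidth figures quoted above follow directly from bus width and effective memory clock. Here is a minimal sketch of the arithmetic; the Radeon HD 4870 X2’s 256-bit per-GPU bus and 3600MHz effective GDDR5 clock are taken from its published specifications rather than from this review:

```python
# Peak memory bandwidth: bytes per transfer multiplied by transfers per second.
def bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# Each dual-GPU card has two independent memory banks, hence the factor of 2.
print(2 * bandwidth_gbps(448, 2000))  # reference GTX 295:  224.0 GB/s
print(2 * bandwidth_gbps(448, 2052))  # EVGA GTX 295+:      229.8 GB/s
print(2 * bandwidth_gbps(256, 3600))  # Radeon HD 4870 X2:  230.4 GB/s

# Capacity per bank: 14 chips of 512 Mbit each = 7168 Mbit = 896 MB.
print(14 * 512 / 8)  # 896.0 MB
```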

The GPUs are marked as G200-400-B3, so they are a newer revision of the G200b chip than the G200-103-B2 revision installed on the GeForce GTX 260 Core 216. The GPUs were manufactured in the 49th week of 2008, i.e. from November 30 to December 6. The core configuration is atypical for G200-based solutions: although all of its 240 shader and 80 texture processors are active, some of the raster back-ends are turned off because the memory controller configuration is strictly linked to them. Seven out of the eight rasterization sections are active in each core, which is equivalent to 28 raster back-ends per core. So, each “half” of the GeForce GTX 295 is something in between the GeForce GTX 280 and the GeForce GTX 260 Core 216. The new card’s official clock rates correspond to the latter: 576MHz for the main domain and 1242MHz for the shader domain. EVGA’s version comes with pre-overclocked GPU frequencies: 594MHz and 1296MHz, respectively.
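The “seven out of eight sections” configuration is easier to grasp as arithmetic. The sketch below assumes the G200 layout implied above: four raster back-ends and a 64-bit memory channel per rasterization section.

```python
ROPS_PER_SECTION = 4       # raster back-ends in each rasterization section
BUS_BITS_PER_SECTION = 64  # memory-bus width tied to each section

def core_config(active_sections: int) -> tuple[int, int]:
    """Return (raster back-ends, memory bus width in bits) of a G200 core."""
    return (active_sections * ROPS_PER_SECTION,
            active_sections * BUS_BITS_PER_SECTION)

print(core_config(8))  # GeForce GTX 280:          (32, 512)
print(core_config(7))  # each half of the GTX 295: (28, 448)
```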

The card is equipped with three connectors for display devices: two DVI-I ports and one HDMI.

The former two are connected to the Master GPU and can be used simultaneously in SLI mode whereas the latter is connected to the Slave GPU and can only be used when not in SLI mode. This odd solution actually negates the value of the dedicated HDMI port. For comparison, this port of the GeForce 9800 GX2 was connected to the Master GPU together with one DVI-I.

There is a blue LED next to one of the DVI ports, indicating that this port must be used for the main monitor. The other LED, near the HDMI connector, reports power-related problems (it glows green then). Besides, there is a single MIO port on the bottom PCB that allows building a quad-SLI configuration out of two GeForce GTX 295 cards.

Cooling System

The cooling system of the GeForce GTX 295 is similar to the one installed on the GeForce 9800 GX2. Besides performing its main function, it also serves as the foundation of the whole graphics card because both PCBs are fastened to it. From a technical point of view, the cooler is like a double-sided sandwich with copper heat-exchangers on the exterior sides for cooling the GPUs and with protrusions for cooling other components. Inside the cooler there is a thin-ribbed aluminum heatsink connected to the heat-exchangers with flat heat pipes.

We did not risk taking the cooler apart because its parts are fastened with thermal glue, but the photographs show the cooler’s key features anyway. The heatsink ribs are placed at an angle to the mounting bracket, so only some of the hot air leaves the system case through the slits in the bracket while the rest goes into the system case through a hole in the casing. Still, the share of exhausted air is far higher than with the GeForce 9800 GX2.

As we noted above, the cooler’s aluminum bases have a number of protrusions for establishing thermal contact with the memory and NVIO chips as well as with the power elements of the GPU voltage regulators. Nvidia’s traditional fiber pads soaked in white thermal grease serve as the thermal interface for the memory and NVIO chips, while a very dense gray thermoplastic material is used for the power components. Besides the two main heat-exchangers responsible for the GPUs, there is a third, smaller one responsible for the PCI Express switch. Its thermal interface is the dark-gray dense thermal grease you can see in many modern graphics cards.

A 5.76W blower is installed at the back of the “sandwich” to blow air through the heatsink. It is connected to the top PCB with a 4-pin connector. The blower draws fresh air from above and below the card through the holes in the PCBs. Each PCB is secured on the cooler’s base with 13 spring-loaded screws. The cooler is covered from the top with a metallic casing that has a rubberized coating. At the bottom there is just a plastic plate with an EVGA logo that covers the ferrite cores of the voltage regulator chokes.

This design can hardly be called optimal considering that the GeForce GTX 295 is going to dissipate about 220-240 watts of heat. However, this cooler and the graphics card at large is a compromise the developer had to concede in order to stay within the required dimensions. The cooler seems to be able to cope with its job, but we don’t expect it to be exceptional in terms of cooling performance or noise parameters. We will check this out in the next section, though.

Power Consumption, Temperature, Overclockability and Noise

As opposed to its predecessors that used to be just temporary solutions, the GeForce GTX 295 is meant to be a flagship product showcasing Nvidia’s technological superiority. Therefore such characteristics as power consumption, temperature and noisiness of this card are very important. That’s why we performed our usual tests on a specially configured testbed:

Following our standard method, the 3D load was created by running the first SM3.0/HDR test from 3DMark06 in a loop at 1600x1200 with 4x FSAA and 16x AF. The Peak 2D mode was emulated by means of the 2D Transparent Windows test from PCMark05. This test is important as it simulates work with application windows in Windows Vista’s Aero interface, which makes use of 3D features.



The transition of the G200 chip to 55nm tech process has had a positive effect on its electrical characteristics. As a result, the peak power consumption of the GeForce GTX 295 is no higher than 215 watts, which is far lower than the peak consumption of the Radeon HD 4870 X2. Contrary to all predictions, the GeForce GTX 295 is not a fire-spitting monster. ATI now has a reason to think about the efficiency of its technologies: it turns out that a pair of RV770 chips consumes far more power than two G200b chips manufactured on the same tech process, even though the RV770 incorporates far fewer transistors!

As for the distribution of load, one of the PCBs of the GeForce GTX 295 is indeed powered by the 8-pin PCI Express 2.0 plug only, whereas the other uses the 6-pin PCI Express 1.0 connector and the power lines of the PCI Express x16 slot. The chokes of the card’s voltage regulators produced an audible squeaking sound under load. We don’t know whether this is normal for all GeForce GTX 295 cards or just a peculiarity of our sample. As for PSU requirements, Nvidia recommends using a 680W or higher power supply capable of delivering a combined current of 46A or more across its +12V lines. This recommendation seems downright overstated in view of the power consumption data we’ve obtained. We guess the GeForce GTX 295 can be safely used with a high-quality 500-550W power supply.
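To show why we consider the official requirement overstated, here is the back-of-the-envelope arithmetic; the 200W allowance for the rest of the system is our own rough assumption, not a measured figure:

```python
recommended_12v_w = 46 * 12  # Nvidia's 46A across the +12V lines = 552 W
card_peak_w = 215            # peak draw of the GeForce GTX 295 we measured
rest_of_system_w = 200       # assumed budget for CPU, mainboard, memory, drives

print(recommended_12v_w)               # 552 W demanded by the official spec
print(card_peak_w + rest_of_system_w)  # ~415 W of realistic peak load
```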

We already know that the 55nm version of the G200 core has substantially better overclocking potential than the older version, so we attempted to overclock our sample of the GeForce GTX 295. Although EVGA had already pre-overclocked it to 594/1296MHz core and 1026 (2052) MHz memory frequencies, we managed to increase them further to 650/1418MHz and 1200 (2400) MHz, respectively. That’s a good result for a card with two G200b chips cooled by a single, rather modest, heatsink. Unfortunately, we didn’t have enough time to benchmark the overclocked card in all of our tests and had to limit ourselves to Crysis, Far Cry 2 and 3DMark Vantage.
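Expressed as relative gains over EVGA’s factory frequencies, our overclocking results look as follows:

```python
# Relative frequency gains of our overclock over EVGA's factory settings.
for domain, factory_mhz, overclocked_mhz in [
    ("core", 594, 650),
    ("shader", 1296, 1418),
    ("memory", 1026, 1200),
]:
    gain = (overclocked_mhz / factory_mhz - 1) * 100
    print(f"{domain}: {factory_mhz} -> {overclocked_mhz} MHz (+{gain:.1f}%)")
# core: +9.4%, shader: +9.4%, memory: +17.0%
```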

We kept track of the card’s temperature at the overclocked frequencies and found them to be as follows:

Well, when two cores are cooled by one heatsink, they are bound to be hotter than a single such core. The card has higher GPU temperatures than single-chip G200-based solutions even in 2D mode, when the clock rates of both cores are dropped to 300/600MHz. The temperatures were high in 3D mode, too, but 82-86°C is nothing extraordinary for today’s top-performance graphics cards. Our only concern is that the card’s cooler does not exhaust all of the hot air out of the system case: some of it remains inside, so you should ensure that your gaming platform is properly ventilated if you want to install this card into it.

Despite the high component density and modest cooler, the GeForce GTX 295 has good noise characteristics for its class.

The reference cooler of the GeForce GTX 295 is not only quieter than the notorious cooler of the Radeon HD 4870 X2 but even quieter than the reference cooler of the GeForce GTX 280. The card is not silent, of course, but we could not make it increase its fan speed even during long tests. The spectrum of the noise is comfortable enough: it sounds like the hiss of airflow, whereas the incessant buzz of the fan motor in the noise of the Radeon HD 4870 X2 is really annoying. Thus, Nvidia is still superior when it comes to developing quiet and effective coolers for its graphics cards. Besides thinking about the disproportionately high power consumption of its RV770 core, ATI should also think about its coolers: the reference coolers the company currently offers are not quiet at all.

Testbed and Methods

We are going to investigate the performance of our EVGA GeForce GTX 295+ graphics card using the following testbed:

The graphics card drivers are configured in the same way as before: to provide the highest possible quality of texture filtering and to minimize the effect of default software optimizations. We enabled transparent texture filtering, and we used multisampling mode for both graphics architectures, because ATI solutions do not support supersampling for this function. As a result, our ATI and Nvidia driver settings looked as follows:

ATI Catalyst:

Nvidia GeForce:

We made a lot of changes to the list of games and benchmarks we normally use for our tests in order to meet the contemporary standards. As a result, it currently includes the following titles:

First-Person 3D Shooters

Third-Person 3D Shooters

RPG

Simulators

Strategies

Semi-synthetic Benchmarks

We selected the highest possible level of detail in each game using the standard tools available in the game’s own menus. The games’ configuration files weren’t modified in any way because an ordinary user cannot be expected to know how to do that. We made a few exceptions for selected games where necessary and will dwell on each such exception later in this article.

Besides the EVGA GeForce GTX 295+, the following graphics accelerators participated in our test session:

The first two cards from the list above were also tested in SLI configuration, and the Radeon HD 4870 X2 teamed up with a Radeon HD 4870 1GB to be tested as a 3-way Radeon HD 4870 CrossFireX configuration.

We ran our tests in the following resolutions: 1280x1024, 1680x1050, 1920x1200 and 2560x1600. Wherever possible, we added 4x MSAA to the standard 16x anisotropic filtering. We enabled antialiasing from the game’s menu; if that was not possible, we forced it using the appropriate settings of the ATI Catalyst and Nvidia ForceWare drivers. In response to numerous readers’ requests, some gaming tests were additionally performed with forced 16xQ CSAA for Nvidia and 8x CFAA + Edge-detect Filter for ATI solutions. Both modes use 8 color samples per pixel; however, Nvidia’s algorithm takes twice as many samples for the coverage grid, while ATI’s algorithm employs an additional Edge Detect filter, which makes it equivalent to 24x MSAA, according to the developers.

Performance was measured with the games’ own tools, and original demos were recorded wherever possible. Otherwise, performance was measured manually with the Fraps utility version 2.9.6. In the latter case we ran each test three times and took the average of the three for the performance charts. We measured not only the average but also the minimum speed of the cards where possible.
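Where Fraps was used, the aggregation amounts to simple averaging. Here is a minimal sketch of it with hypothetical per-run numbers; how the bottom speed is best aggregated across runs is our own choice, not part of the methodology described above:

```python
# (average fps, minimum fps) of three manual runs of one scene, as read from
# Fraps' benchmark logs; the values are hypothetical.
runs = [(61.2, 44.0), (59.8, 42.5), (60.4, 43.1)]

avg_fps = sum(avg for avg, _ in runs) / len(runs)  # value plotted on the charts
min_fps = min(low for _, low in runs)              # worst observed bottom speed
print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
```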

Performance in First-Person 3D Shooters

Call of Duty: World at War

With the exception of the GeForce GTX 280, every graphics card included in this test session is so fast that they all hit the performance ceiling in this game. It is only at 2560x1600 that we get some data for comparison: the Radeon HD 4870 X2 proves to be slower than its new dual-chip opponent. It would need one more RV770 core to compete successfully.

When we turn on extremely high levels of antialiasing, Nvidia’s solutions gain an advantage in the form of the less resource-hungry 16xQ CSAA algorithm, which needs 16 samples on the coverage grid but only 8 color samples, whereas the 8x CFAA + Edge-detect Filter algorithm employed by ATI makes the GPU perform more work to filter the edges. As a result, the Radeon HD 4870 X2 has the worst result among the tested cards in this mode. Moreover, the extreme antialiasing modes can only be used at 1280x1024, while the image quality improvements are barely perceptible.

Nvidia GeForce GTX 295


MSAA 4x


CSAA 16xQ

ATI Radeon HD 4870 X2


MSAA 4x


CFAA 8x + Edge-detect

We think the minimal improvement in the smoothing of small details is not worth such a terrible performance hit.

Crysis Warhead

Nvidia’s new dual-chip flagship is ahead of its opponent at every resolution, being just a little slower than the bulky and hot GeForce GTX 280 SLI tandem. However, our apprehensions come true at 2560x1600, where 896 megabytes of local memory is not enough for the GeForce GTX 295. That loss doesn’t mean much, though, because the frame rates are low overall.

Enemy Territory: Quake Wars

The frame rate is fixed at 30fps in this game as this is the rate at which all events are synchronized during networked play. We disabled this limit in the game console for the sake of comparing the cards. The game’s built-in benchmarking options do not provide information about the bottom speed, so there is no such info in the diagrams.

The GeForce GTX 295 is somewhat slower than the GeForce GTX 280 SLI tandem in this test, too. We can explain this by its lower memory bandwidth, given that the game uses large high-resolution textures. Anyway, it is ahead of the Radeon HD 4870 X2: the gap is as large as 22% at 2560x1600.

Nvidia’s cards fail, however, when we try to enable extremely high levels of antialiasing. And they fail in an OpenGL game, a field where Nvidia’s solutions have always been strong. At 1280x1024 the overall picture is similar to what we get with ordinary 4x MSAA, but ATI’s cards go ahead in the higher display modes. We can’t explain the odd behavior of the GeForce GTX 295 by a lack of graphics memory because its results are almost identical to those of the GeForce GTX 280 SLI tandem. As for image quality, you cannot see any difference at resolutions higher than 1680x1050, so there is in fact no good reason to enable the extreme antialiasing modes.

Far Cry 2

The GeForce GTX 295 behaves predictably in this test. Its performance is about as high as that of the GeForce GTX 260 Core 216 SLI configuration and somewhat lower than that of the GeForce GTX 280 SLI. It enjoys a 12-15% advantage over the Radeon HD 4870 X2.

The frame rates do not drop below a playable level when we enable the extreme antialiasing modes, but the bottom speeds are affected, and the resolution of 2560x1600 becomes unplayable.

Nvidia GeForce GTX 295


MSAA 4x


CSAA 16xQ

ATI Radeon HD 4870 X2


MSAA 4x


CFAA 8x + Edge-detect

As is the case with Call of Duty: World at War, the screenshots do not show any significant image quality improvements. There is a difference, but you have to search for it with special tools like The Compressonator; it cannot be spotted in a dynamic game. This is another argument for the point that the extreme antialiasing modes the GPU developers put so much emphasis on are more of a marketing trick than a real means of improving image quality in practical applications.

Left 4 Dead

The game runs on the Source engine and has an integrated benchmark, but the latter does not report the bottom speed information.

Thanks to the Source engine, the game has modest system requirements, and every tested card ensures a comfortable speed at resolutions up to and including 2560x1600. The single GeForce GTX 280 is the only card to fall behind. Note that the pre-overclocked frequencies of the GeForce GTX 295+ help it overtake the GeForce GTX 280 SLI tandem.

The results are more interesting when we use the high-quality antialiasing modes: ATI’s solutions lose ground here, while the GeForce GTX 295 falls behind the GeForce GTX 280 SLI at 2560x1600 because of the smaller amount of graphics memory available to 3D applications (896 against 1024 megabytes). The difference in image quality is even less conspicuous than in the previous games because Left 4 Dead is a survival shooter with lots of dark scenes.

S.T.A.L.K.E.R.: Clear Sky

To achieve a playable speed in this game we disabled FSAA and such resource-consuming options as Sun rays, Wet surfaces and Volumetric Smoke. We used the Enhanced full dynamic lighting (DX10) mode for our test and additionally enabled the DirectX 10.1 mode for the ATI cards.

Thanks to our relaxed settings, every graphics solution except the single GeForce GTX 280 copes with this test successfully. Nvidia’s cards boast higher minimum frame rates at resolutions up to and including 1920x1200, but ATI’s cards are ahead in this parameter at 2560x1600, where the 3-way Radeon HD 4870 CrossFireX configuration is over 25% faster than its opponents.

Performance in Third-Person 3D Shooters

Dead Space

Unlike the Radeon HD 4870 X2, the GeForce GTX 295 has no problems with the multi-GPU mode in this game, but it is not much faster than the single GeForce GTX 280, either. We hope ATI’s multi-GPU technology will eventually work in this game, too. On the other hand, ATI’s solutions deliver playable frame rates even at 2560x1600 anyway.

Devil May Cry 4

The multi-GPU solutions all show excellent scalability, and the 3-way subsystem from ATI becomes the leader, having more GPUs than the others. The GeForce GTX 295 outperforms the Radeon HD 4870 X2 by 11-26fps depending on the resolution, but this is a negligible difference considering that the speeds never drop below 70fps. So, both solutions provide the same level of gaming comfort.

The high-quality antialiasing modes increase the load on the graphics subsystem, but the Radeon HD 4870 X2 is the only solution to slow down much, perhaps because it has only 32 raster back-ends as compared with its opponent’s 56 such subunits. Anyway, ATI’s card maintains an acceptable frame rate at 2560x1600. The GeForce GTX 295 feels at ease thanks to its less resource-consuming antialiasing algorithm. The improvements in image quality can barely be spotted with a naked eye because the game is highly dynamic.

Performance in RPG

Fallout 3

ATI’s solutions go ahead at 1920x1200 and enjoy a definite advantage at 2560x1600. The GeForce GTX 295 performs well enough, too. It is just barely slower than the GeForce GTX 280 SLI tandem but surpasses it in other consumer qualities, including price.

Mass Effect

The GeForce GTX 295 is barely ahead of the Radeon HD 4870 X2 at 1280x1024, but the gap grows to 14% and then to 26% in the higher display modes. The gap then shrinks to zero at 2560x1600, where the 3-way Radeon HD 4870 CrossFireX subsystem takes the lead. None of the tested solutions can provide a playable frame rate at the highest resolution, though.

Performance in Simulators

Race Driver: GRID

ATI’s cards are superior in terms of average frame rate throughout this test, but their bottom speeds are almost the same as those of Nvidia’s solutions up to the resolution of 2560x1600. And we don’t think the gamer will feel a difference of 10-20fps when the cards are as fast as 98-140fps. The bottom speed of the multi-GPU solutions is never lower than 60fps even at 2560x1600, which is an excellent result, especially in comparison with the fastest single-chip card, the GeForce GTX 280. Multi-GPU solutions seem to have achieved absolute superiority in the premium product sector.

X³: Terran Conflict

This game is known to prefer ATI’s architectures to Nvidia’s. Unlike in many other tests, the single GeForce GTX 280 is almost no slower than the multi-GPU solutions here. The new GeForce GTX 295 and the GeForce GTX 280 SLI are the only solutions from Nvidia that can provide a playable bottom speed at 1680x1050.

Performance in Real-Time Strategy

Red Alert 3

The game has a built-in frame rate limiter set at 30fps.

The GeForce GTX 295 solves the problem of the low speed of Nvidia’s GPUs in Red Alert 3. Its frame rate is high enough for playing at resolutions up to 1920x1200 with 4x FSAA. Interestingly, the SLI tandems built out of discrete cards cannot deliver such performance although they exchange data through the same nForce 200 switch, the only difference being that the switch is installed on the mainboard.

World in Conflict

All dual-core solutions from Nvidia deliver the same performance and are much faster than their opponents from ATI. The 3-way Radeon HD 4870 CrossFireX subsystem only goes ahead at 2560x1600, being the only solution capable of maintaining a playable bottom speed in that display mode.

Performance in Semi-Synthetic Benchmarks

Futuremark 3DMark06

3DMark06 is not very informative when it comes to benchmarking premium-class graphics products. This benchmarking suite defaults to 1280x1024 without antialiasing and thus does not use all of a modern card’s resources, so it is difficult to predict how the various factors will affect the final result. For example, we can’t explain the performance of the 3-way Radeon HD 4870 CrossFireX subsystem, which was indeed slower than the single Radeon HD 4870 X2. As for the GeForce GTX 295, it is about as fast as the GeForce GTX 260 Core 216 SLI subsystem in this benchmark, which agrees with the results of the gaming tests.

Futuremark 3DMark Vantage

We minimize the CPU’s influence by using the Extreme profile (1920x1200, 4x FSAA and anisotropic filtering). We also publish the results of the individual tests across all display resolutions to provide a full picture.

As expected, the GeForce GTX 295 does not have a record-breaking overall score. Although it is far ahead of the Radeon HD 4870 X2, the SLI configuration with two GeForce GTX 280 cards is faster still (this gap disappears when the new card works at the pre-overclocked frequencies, though).

When clocked at its official frequencies, the GeForce GTX 295 is considerably faster than the GeForce GTX 260 Core 216 SLI configuration but somewhat slower than the GeForce GTX 280 SLI at every resolution save for 2560x1600, where the gap disappears. When the card works at its pre-overclocked frequencies, it takes second place behind the 3-way Radeon HD 4870 configuration, which has far more impressive computing capabilities.

The mentioned solution from ATI is in the lead in the second test, though not in all of the display modes. Moreover, the Radeon HD 4870 X2 gets closer to its opponent here. The special effects created by ray tracing seem to require tremendous computing power, and ATI still has an advantage in this respect. The GeForce GTX 295 is again in between the GeForce GTX 260 Core 216 SLI and the GeForce GTX 280 SLI. The gap to the latter is small and can easily be closed by overclocking.

Conclusion

Summing everything up, we can say that Nvidia has come up not just with a competitive dual-GPU graphics card but with the best-in-class solution, far superior to its AMD counterpart in performance in modern games as well as in such other consumer properties as power consumption, noisiness and heat dissipation. Nvidia’s programmers did a good job, too: it is the first time in our tests that a multi-GPU card from Nvidia has been actually free from compatibility and performance problems, which makes it superior to the Radeon HD 4870 X2 in this respect as well.

Well, this just proves the point we made at the beginning of this review: resting on one’s laurels is dangerous, especially if you work in the graphics hardware market, where everything can change in a moment and you must be prepared to repel your opponent’s attack. So, after a long series of defeats Nvidia is finally victorious and can claim technological superiority. This should have a positive effect on the company’s image and the popularity of its products.

Let’s now discuss the performance of the GeForce GTX 295 card in more detail:

The new flagship of the GeForce GTX 200 series is superior even at 1280x1024, losing to the former leader Radeon HD 4870 X2 in only two tests, Race Driver: GRID and X3: Terran Conflict, but delivering a playable speed in both these games anyway. The average advantage of the GeForce GTX 295 over the Radeon HD 4870 X2 amounts to 19%. Perhaps this is not much, but ATI’s solution is also worse in terms of power consumption, heat dissipation and noisiness. The GeForce GTX 295 is also 7 to 68% ahead of the GeForce GTX 280 (38% on average). In fact, this signals the end of the era of top-performance single-chip graphics cards: it is hard to imagine a monolithic GPU capable of rivalling the GeForce GTX 295.

We get a similar picture at 1680x1050. Despite its theoretically much lower computing power, the GeForce GTX 295 beats the Radeon HD 4870 X2 in most of our tests, again save for Race Driver: GRID and X3: Terran Conflict (the 2% gap in Fallout 3 is unimportant). Thus, the GeForce GTX 295 has an average advantage of 17%.

The transition to the resolution of 1920x1200 doesn’t change the overall picture much except that the GeForce GTX 295 closes the gap in Race Driver: GRID to 5% and falls 4% behind its opponent in Fallout 3. The new card has an average advantage of 20% over the Radeon HD 4870 X2 and 61% over the GeForce GTX 280. These are excellent results considering the good power consumption parameters of this monster.

It is at the resolution of 2560x1600 that the GeForce GTX 295 faces the predictable problem of the lack of local graphics memory. However, this only happened in the highly demanding Crysis Warhead, where even the best of today’s graphics cards can barely maintain a playable frame rate at 1280x1024. Anyway, 896 megabytes of graphics memory is a compromise the GeForce GTX 295 developers had to concede. Developing a similar card with two 1GB memory banks with 512-bit access might be possible, but the manufacturing cost would be too high for Nvidia: the 1.4-billion-transistor G200 chip is itself very costly to make. The new card has an average advantage of only 8% over the Radeon HD 4870 X2 at this resolution. But again, it is superior to the latter in terms of power consumption and noisiness.

Talking about the specific card, the EVGA GeForce GTX 295+ is a precise copy of the reference sample, and we are unlikely to see a GeForce GTX 295 with an original PCB design anyway. The only differences from Nvidia’s reference card are the EVGA stickers and the pre-overclocked frequencies that make it as fast as the GeForce GTX 280 SLI configuration in some games. The EVGA card boasts good overclockability but comes with scanty accessories that do not match the high status of the product. Paying about $500, the customer expects to find at least one good game in the box. On the other hand, this money buys you today’s fastest gaming graphics card with good noise and electrical characteristics, which is quite a lot already.

Highs:

Lows: