Nvidia GeForce GTX 560 Ti: The Hero of Our Times

The replacement of the GeForce 400 family with GeForce 500 models continues. Today we introduce a new mainstream solution from Nvidia aimed at demanding gamers and intended to replace the GeForce GTX 460. Please meet the new Nvidia GeForce GTX 560 Ti!

by Alexey Stepin, Yaroslav Lyssenko, Anton Shilov
01/25/2011 | 06:00 AM

If you’ve been following the recent history of 3D graphics hardware, you should be aware that the GeForce GTX 480 was not born easily. Nvidia’s first GPU with the Fermi architecture was released in a cut-down configuration, and it was only at the end of 2010 that the company offered the GF110, an improved version of the original GF100 processor. The new chip was what the GF100 should have been from the very beginning, and the GeForce GTX 580 and GTX 570 graphics cards based on it did very well in our gaming tests.


It was clear, however, that Nvidia wouldn’t stop there in replacing the GeForce 400 series with the GeForce 500 series. The GeForce GTX 460 was the next candidate for replacement. It must be noted here that GF104-based products had proved more successful than the senior GeForce 400 series models with the GF100 chip: the GF104 was simpler and cheaper to make, and it could filter FP16 textures at full speed.

As a result, the GF104-based GeForce GTX 460 card, in two varieties with 768 MB and 1 GB of onboard memory, enjoyed a happy life without any competition in the $199-229 category. The availability of numerous factory-overclocked versions was an indication of the high potential of the GF104 chip, which, by the way, had one secret: there were only seven active multiprocessors in it although it actually contained eight. That is, the GF104 had 336 active ALUs and 56 active TMUs although it physically incorporated 384 ALUs and 64 TMUs.

The GF104 doesn’t seem to have been plagued by the problems of the GF100, yet Nvidia must have wanted to bring the new chip to market as soon as possible and ensured a high chip yield by reducing the chip’s configuration. Many reviewers supposed that the cut-down GF104 would be followed by a full-featured version, yet this never happened during the life cycle of the GeForce 400 series.

It is only today, on January 25, that Nvidia unveils the successor to the GF104 together with a new performance-mainstream graphics card. The chip is called GF114. Nvidia refers to this class of affordable but high-performance solutions as “Hunter,” but we’d rather use a comparison with tanks. Flagship models priced at over $250 would be heavy tanks with the most powerful weapons which, however, cannot achieve overall victory due to their limited numbers. It is the numerous mass-produced machines that win the battle, as they combine simplicity with acceptable technical parameters.

The GeForce GTX 560 Ti is Nvidia’s new mainstream tank. We must confess the renewed use of suffixes in graphics card names is quite a surprise for us. “Ti” obviously stands for Titanium, suggesting superior consumer properties of the new product, but the use of various prefixes and suffixes brings us back to the far-away year of 2001, when the Ti suffix was first used in some of the GeForce 2 models. We guess the name “GeForce GTX 560” would have been clear and sufficient.

Anyway, the GeForce GTX 560 Ti has arrived, so we are going to test it and see what it can do on the battlefield.

Architecture

As with the GeForce GTX 580 and 570 cards, the GPU structure has not changed. The GF114 is in fact an all-active GF104 optimized for high clock rates at reasonable power consumption.

The two processing clusters include four multiprocessors each. Each multiprocessor consists of 48 stream cores, for a total of 384, and is serviced by eight texture-processing units, which makes 64 TMUs in the new chip. The TMU architecture has remained intact since the GF104: the units can perform full-speed filtering of FP16 as well as INT8 textures, while textures in FP32 format are processed at one fourth of the full speed. Like in the GF104, the L2 cache is 512 kilobytes large.
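For readers who like to check the arithmetic, here is a minimal sketch of ours (not Nvidia documentation) tallying the unit counts quoted above:

```python
# Tallying the GF114's functional units from the figures in the text.
CLUSTERS = 2         # graphics processing clusters
SMS_PER_CLUSTER = 4  # multiprocessors per cluster
CORES_PER_SM = 48    # stream cores per multiprocessor
TMUS_PER_SM = 8      # texture units servicing each multiprocessor

sms = CLUSTERS * SMS_PER_CLUSTER   # 8 multiprocessors in total
alus = sms * CORES_PER_SM          # 8 * 48 = 384 stream cores
tmus = sms * TMUS_PER_SM           # 8 * 8 = 64 TMUs
print(sms, alus, tmus)             # -> 8 384 64
```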

Each multiprocessor also incorporates a PolyMorph engine, making the GF114 superior to any AMD solution in terms of geometry processing and tessellation performance. Even the two third-generation tessellation units in AMD’s Cayman processor can hardly match the eight PolyMorph engines of the GF114. The new chip’s rasterization subsystem has remained the same with 32 raster back-ends. It is directly connected to the memory subsystem, which still includes four 64-bit controllers, so the memory bus connecting the GPU to the local graphics memory is 256 bits wide.

There are no notable innovations in the multimedia department, but they are hardly necessary. The GF104 could already do everything modern users might want, offering hardware support for HD video in H.264 and VC-1 formats and Protected Audio Path for outputting multichannel audio in Dolby TrueHD and DTS-HD Master Audio formats. The single advantage AMD can boast in this field is that their solutions support hardware decoding of DivX.

Overall, the GF114 looks like a well-balanced solution with reasonable functionality. It supports all modern visual and multimedia technologies but lacks rarely required capabilities such as the ability to connect six monitors simultaneously. The new mainstream GPU from Nvidia carries on the ideas found in its predecessor, the GF104. Let’s now see how the GF114-based graphics card is positioned among its predecessors, opponents and elder cousins.

Positioning

Just like the GF110 is the “corrected” version of the GF100, the new GF114 is what the GF104 should have been from the beginning. Nvidia’s developers have managed to make the new GPU stable in its full configuration with eight active multiprocessors for a total of 384 ALUs and at higher clock rates than those of the GF104. Let’s compare:


[Specifications comparison table; * since Catalyst 10.12]

Thus, the new GeForce GTX 560 Ti is to the GeForce GTX 460 1GB what the GeForce GTX 580/570 are to the GeForce GTX 480/470. There is nothing wrong with having a new version of an old GPU, though. The Fermi architecture is quite good, except for its texture-mapping subsystem, which is inferior to that of the AMD solutions.

So, the GPU clock rates have gone up: over 800 MHz for the main domain and over 1.6 GHz for the shader domain (on Fermi the shader domain runs at twice the frequency of the main domain). This is an achievement for Nvidia, considering the architectural features of its solutions: such frequencies used to be conquered only by individual factory-overclocked GeForce cards. AMD has something to worry about now because even the main domain of the GF114 works at a higher clock rate than the whole Radeon HD 6950 GPU. Besides, the latter has only 352 VLIW4 processors as opposed to the GF114’s 384 scalar ALUs.

AMD’s graphics architecture is also known not to deliver its maximum performance in all cases. Its new Cayman incarnation is also devoid of a dedicated ALU for complex instructions, so such instructions have to be performed by four simple ALUs, which is not efficient. Thus, in some cases the Radeon HD 6950 is going to be much slower than the GeForce GTX 560 Ti with the latter’s 384 stream processors clocked at over 1600 MHz. We’ll see in our practical tests whether this scenario is likely or not. As for the Radeon HD 6870, it is inferior to the GeForce GTX 560 Ti in every parameter, including texture-mapping performance.

The memory specs of the GeForce GTX 560 Ti have improved over its predecessor’s, yet the peak bandwidth of 128 GBps is far from impressive compared to that of AMD’s Cayman and even Barts solutions. It is unclear why Nvidia did not increase the memory frequency, at least to 1125 (4500) MHz. The GF110-based solutions have the excuse of their wider bus: it is not easy to ensure stable operation of memory chips at high frequencies when the memory bus is 320 or 384 bits wide. The GF114, on the contrary, has a 256-bit bus, so the GeForce GTX 560 Ti might have been equipped with faster GDDR5 chips. So, memory bandwidth is not a strong point of the new card: it is inferior in this respect even to the Radeon HD 6870, let alone the Radeon HD 6950.
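To show where the 128 GBps figure comes from, here is a quick sketch of the standard bandwidth formula (bus width in bytes multiplied by the effective data rate); the 4500 MHz line models the hypothetical faster chips mentioned above, not a shipping configuration:

```python
# Peak memory bandwidth = (bus width / 8 bits per byte) * effective data rate.
def bandwidth_gbps(bus_bits: int, effective_mhz: float) -> float:
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(bandwidth_gbps(256, 4008))  # ~128.3 GBps: GeForce GTX 560 Ti as shipped
print(bandwidth_gbps(256, 4500))  # ~144.0 GBps: with hypothetical 1125 MHz chips
```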

The TMU subsystem has ceased to be a bottleneck typical of the Fermi architecture thanks to the unlocking of the eighth multiprocessor and the increase of the GPU clock rate to 822 MHz. With 64 active TMUs, the peak texture-mapping performance is 52.6 gigatexels per second, which is even higher than that of the senior GeForce 500 series products. The AMD Cayman with 96 TMUs (88 in the Radeon HD 6950 model) can do even more, yet that’s hardly called for in today’s applications. Tests suggest that the GeForce GTX 580 and GTX 570 do not feel a lack of texture-mapping speed, and the new GeForce GTX 560 Ti is even better in this respect.

The rasterization performance of the new card is also high thanks to the increased GPU frequencies. The peak fill rate of the new GeForce GTX 560 Ti is slightly lower than that of the Radeon HD 6870, which has a core clock rate of 900 MHz, but higher than that of the architecturally more advanced Radeon HD 6950. The new card from Nvidia shouldn't feel a lack of rasterization speed even at high resolutions with full-screen antialiasing enabled.
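The texel and pixel rates discussed in the two paragraphs above follow from the same simple formula: number of units multiplied by the core clock. Here is a sketch of the arithmetic; note that the AMD cards' ROP counts (32 for both Barts and Cayman) are our assumption rather than figures from this article:

```python
# Theoretical throughput = number of units * core clock.
def gigatexels_per_s(tmus: int, mhz: int) -> float:
    return tmus * mhz / 1000.0

def gigapixels_per_s(rops: int, mhz: int) -> float:
    return rops * mhz / 1000.0

print(gigatexels_per_s(64, 822))  # ~52.6 GTex/s: GeForce GTX 560 Ti
print(gigapixels_per_s(32, 822))  # ~26.3 GPix/s: GeForce GTX 560 Ti
print(gigapixels_per_s(32, 900))  # ~28.8 GPix/s: Radeon HD 6870, slightly higher
print(gigapixels_per_s(32, 800))  # ~25.6 GPix/s: Radeon HD 6950, slightly lower
```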

The rest of the features have remained the same as in the new card's predecessor. Thanks to optimizations in the GF114 chip, its TDP is only 10 watts higher than that of the GF104. Again, the new card is actually an improved version of the GeForce GTX 460 1GB with eight stream multiprocessors and higher GPU and memory frequencies. The predecessor still remains a good product and an excellent choice for gamers who don't want to spend more than $250 on their graphics card. The successor has inherited its best features and offers them at only $20 more.

Coming at a recommended price of $249, the new GeForce GTX 560 Ti fits in between the Radeon HD 6870 and Radeon HD 6950 and we guess it has a chance to outperform both these opponents. Let's now take a look at the new card. We've got a reference GeForce GTX 560 Ti from Nvidia.

PCB Design and Specifications

The new GeForce GTX 560 Ti is about 2 centimeters longer than the reference version of its predecessor, the GeForce GTX 460 1GB. At 23 centimeters, it should fit into most system cases without much difficulty.

You may only have some problems installing it into a short system case because the power connectors are placed at the shorter side of the PCB. This was not a problem with the reference GeForce GTX 460 but the extra couple of centimeters may get in the way here. So, you may want to make sure the GeForce GTX 560 Ti will fit into your system case prior to purchasing it.

The cooling system is fastened with ordinary crosshead screws rather than with Torx T2 ones as in the GeForce GTX 570. Hopefully, Nvidia will keep on using the more popular type of screws.

The PCB layout of the new card hasn’t changed much compared to its predecessor. One of the memory chips is still situated to the left of the GPU whereas the other seven form a letter L on the right. The new PCB seems to have borrowed some parts of the reference PCB of the GeForce GTX 460. The power system follows a 4+1 design. The increased clock rates of the GF114 make the stronger GPU voltage regulator justifiable.

The GPU voltage regulator is still based on an NCP5388 controller from ON Semiconductor. The memory voltage regulator is based on a Richtek RT8101 chip located below the chokes of the main power system. Like its predecessor, the GeForce GTX 560 Ti has two 6-pin PCIe 1.0 connectors, each with a maximum load capacity of 75 watts. The reference PCB design does not provide for an 8-pin PCIe 2.0 power plug.

The new card is equipped with Samsung’s popular K4G10325FE GDDR5 chips, which can also be seen on the GeForce GTX 580 as well as many other cards. The chips have a capacity of 1 Gb (32 Mb x 32), and the HC04 suffix denotes a rated frequency of 1250 (5000) MHz. The memory is connected to the GPU with a 256-bit bus and has a default clock rate of 1002 (4008) MHz, delivering a peak bandwidth of 128.3 GBps. The card can lower its memory frequency in its two power-saving modes to 324 (1296) MHz or to 135 (540) MHz.
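As a quick cross-check of the memory configuration described above (our arithmetic, based on the chip specifications quoted in the text):

```python
# Eight 1 Gb (32M x 32) chips: total capacity and aggregate bus width.
chips = 8
capacity_gbytes = chips * 1 // 8  # 8 x 1 Gb = 8 Gb = 1 GB of memory
bus_bits = chips * 32             # 8 x 32-bit chip interfaces = 256-bit bus
print(f"{capacity_gbytes} GB, {bus_bits}-bit bus")  # -> 1 GB, 256-bit bus
```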

The new chip is externally no different from the GF104 and has the same dimensions. According to Nvidia, the die is 360 sq. mm, but we can’t check this because it is covered with a protective heat-spreader. The marking on the GPU indicates that this sample is revision A1. It was manufactured on the 46th week of 2010, in mid-November, when the GF114 was already in mass production. The middle number 400 suggests high frequency potential, just as expected, since the core of the GeForce GTX 560 Ti must be able to work at frequencies above 800/1600 MHz.

The latest version of GPU-Z doesn’t yet identify all the features of the GeForce GTX 560 Ti. It doesn’t report such parameters as tech process, die size, transistor count, and announcement date. The DirectX Support and Texture Fillrate fields are empty, too, and the bottom line erroneously says that the GPU doesn’t support PhysX. On the other hand, the key features like the number of ALUs and the clock rates are identified correctly. The 1 MHz difference from the official specs is unimportant.

We can also add that the GF114 has 64 texture processors capable of filtering FP16 textures at full speed because there are four filter units per texture address unit in each TMU; the GF104 had the same texture-processing capabilities, though. Like other solutions from Nvidia, the new card supports two power-saving modes: when decoding HD video, it lowers the GPU frequencies to 405/810 MHz, and when the GPU is idle, the frequencies drop to 51/101 MHz. Thus, the GeForce GTX 560 Ti should be as economical as the GeForce GTX 460 in these modes.

The reference design of the GeForce GTX 560 Ti doesn’t provide for a DisplayPort although the GPU itself supports that interface. The card has the same connectors as the senior models of the series: two DVI-I ports and one HDMI 1.4a connector. The top of the card's mounting bracket is occupied by a vent grid for exhausting the hot air from the cooler. There is only one MIO connector on board, so you can’t build an SLI configuration with more than two GeForce GTX 560 Ti cards; more advanced SLI systems can only be assembled out of senior GeForce 500 series products.

This selection of interfaces is not as gorgeous as with the Radeon HD 6000 series, but few users really need to connect six monitors simultaneously or link displays via DisplayPort in a daisy chain. Most gamers use one, occasionally two, monitors with a DVI interface, while large panels are connected via HDMI. Thus, the GeForce GTX 560 Ti offers the reasonable minimum in this respect. Its GPU supports DisplayPort 1.1, so some custom-designed GeForce GTX 560 Ti cards may come with an appropriate connector.

The reference GeForce GTX 560 Ti doesn’t use a vapor chamber in its cooling system although that solution has done well for the GeForce GTX 580 and 570. The new card uses a more classic cooler with a round central heatsink that resembles the boxed coolers from Intel. The central piece is connected to the two additional curved heatsinks with three heat pipes. The axial fan drives the air downwards as well as sideways to cool all the heatsinks. Some of the hot air is exhausted to the right, into the interior of the system case.

The cooler’s base is quite standard. The aluminum frame serves as a heat-spreader for memory chips and power circuit elements, taking the heat off them via elastic thermal pads. A layer of dense gray thermal grease is applied between the main heatsink and the GPU.

We find this cooler rather odd, especially as Nvidia has designed very efficient coolers for the senior GeForce 500 series products. The only reason not to use a vapor chamber here must have been the cost of such a cooler. But perhaps this solution is quite efficient, too. Let's check it out.

Power Consumption, Noise, Overclockability

Despite the increased clock rates of the GF114 chip, Nvidia claims its TDP has only increased from 160 to 170 watts compared to its predecessor. So, we can expect the GeForce GTX 560 Ti to have about the same level of power consumption, too. We couldn’t help checking this assumption by performing a standard series of electrical tests on our traditional testbed with the following configuration:

The new testbed for measuring electric characteristics of graphics cards uses a card designed by one of our engineers, Oleg Artamonov, and described in his article called PC Power Consumption: How Many Watts Do We Need?. As usual, we used the following benchmarks to load the graphics accelerators:

Except for the maximum load simulation with OCCT, we measured power consumption in each mode for 60 seconds. We limited the run time of OCCT: GPU to 10 seconds to avoid overloading the graphics card's power circuitry. Here are the obtained results:

Of course, the extra active subunits and the higher frequencies affect the power consumption of the new card: it needs 160 watts in 3D applications whereas the GeForce GTX 460 1GB needed only 140 watts. We guess that’s an acceptable price for the considerable improvements in performance. Moreover, the new card is even more economical in desktop mode. When processing video, the card doesn’t drop its frequencies immediately, so our testbed reports the peak power consumption; the average is lower, at 18-25 watts. If you connect two monitors with different resolutions simultaneously, the GeForce GTX 560 Ti will not switch into the 51/101 MHz mode, using the 405/810 MHz mode instead.

Interestingly, it is the bottom connector (12V 6-pin) that bears the highest load in the power-saving modes. In the standard mode, each connector has a load of 5.5 to 5.9 amperes, or roughly 66 to 71 watts, safely within the rating. Thus, there is indeed no need for 8-pin PCIe 2.0 power connectors.
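The per-connector wattage is simple arithmetic on the measured currents (P = U * I on the 12 V rail):

```python
# Converting the measured per-connector currents into watts.
RAIL_VOLTAGE = 12.0  # volts
for amps in (5.5, 5.9):
    watts = RAIL_VOLTAGE * amps
    print(f"{amps} A -> {watts:.0f} W")  # 66..71 W, within the 75 W rating
                                         # of a 6-pin PCIe power connector
```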

Like the senior models of the series, the GeForce GTX 560 Ti can monitor the electric current in its 12V power lines. If the load is too high, which is typical of such stress tests as OCCT: GPU or FurMark, the GPU is switched into a low-performance mode as a protective measure. This feature is optional and may be disabled in custom-designed versions of the GeForce GTX 560 Ti. The OCCT: GPU diagram above illustrates the protective mechanism: the power consumption graph is jagged, similar to the graphs of the GeForce GTX 580 and GTX 570.

We guess such protection is appropriate because unrealistic loads like FurMark may damage the graphics card. If you want to run stress tests anyway, you can use special options, e.g. in GPU-Z, whose developers will surely implement such an option for the GeForce GTX 560 Ti.

Overall, the new card from Nvidia is quite competitive in terms of power consumption. Although not as economical as the Radeon HD 6870, it is going to be faster in games. It looks good compared to the Radeon HD 6950, too. The GeForce GTX 560 Ti carries on the good tradition of power efficiency started by the GeForce GTX 580.

To check out the temperature of the card we used a second sample of the GeForce GTX 560 Ti with its original thermal interface. At a room temperature of 25-27°C, the card's GPU was as hot as 78°C. This is a very good result that testifies to the efficiency of the cooler.

As for noise, the card is about equally quiet in 2D and 3D modes because the fan works at 40% in the former mode and at 45% in the latter, despite our running Crysis Warhead to load the card. The noise level meter could barely spot any difference at a distance of 5 centimeters from the testbed. At a distance of 1 meter, the level of noise was only 1 dBA higher than the base level of 38 dBA. In other words, the graphics card was not audible amidst the noise from the other system components. We must confess that our testbed is far from quiet, yet the GeForce GTX 560 Ti is not a loud card anyway.

Summing up this section of the review, we can say that the GeForce GTX 560 Ti has a well-balanced combination of electrical, thermal and acoustic characteristics. Now let’s see how it performs in games.

Nvidia GeForce GTX 560 Ti and High-Definition Video

When Nvidia designed the GeForce GTX 500 series (Fermi 2.0) graphics processors, they tried to lower power consumption and increase computational performance rather than add more functionality. As a result, the GeForce GTX 560 Ti (GF114) has the same PureVideo unit as its predecessor. Therefore, today’s test session will look at the performance of the new drivers rather than at new hardware.

As we know, the latest version of Nvidia PureVideo supports all contemporary video formats, such as MPEG2-HD, MPEG4, MPEG4-AVC, MPEG4-MVC, VC-1, WMV-HD, Adobe Flash 10.1, etc., as well as lossless audio bitstreaming for decoding in an external receiver. Unlike the contemporary Radeon HD 6800 solutions, the new GeForce GTX 560 Ti doesn’t support hardware DivX/XviD decoding or entropy decoding for MPEG2, but this is hardly a serious issue in 2011.

The size and power consumption of the new Nvidia card will hardly allow it to be considered a solution for HTPCs. Nevertheless, it fits perfectly fine inside our Antec Fusion HTPC case, so the GeForce GTX 560 Ti can actually serve in a computer system meant for gaming as well as video playback.

Let’s see how well the GeForce GTX 560 Ti can play back Blu-ray content and how much load it can take off the CPU during HD video decoding.

Video Playback Benchmarking Testbed and Methods

We are going to investigate the decoding performance and playback quality of the Nvidia GeForce GTX 560 Ti and the other participants of today’s test session on the following platform:

The following graphics cards and integrated graphics processors took part in our tests:

We used the following tools to estimate the video playback quality in standard (SD) and high-definition (HD) resolutions:

The driver settings remained the same. However, according to the HQV suite requirements, the noise suppression and detail levels in the drivers were increased to medium (50%-60%), which didn’t affect the results in the multi-cadence tests.

Since the owners of high-end sound systems will be extremely interested in the results of lossless audio playback, we also enabled DTS-HD Master Audio and Dolby TrueHD tracks (where available) in order to increase the CPU load in all played movie fragments.

Keeping in mind that all tests are run under Windows 7 OS without disabling background services, the CPU utilization peaks shouldn’t be regarded as critical. It is much more important how much time it takes the CPU on average to complete the task. Note that 1%-2% difference is not indicative of any advantage of a certain graphics accelerator over the competitor.
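Below is a minimal sketch of how we treat such utilization logs; the sample values are hypothetical placeholders, not measured data. Peaks are ignored, means are compared, and a 1-2 percentage-point gap counts as parity:

```python
# Comparing two cards by average CPU utilization during playback.
TOLERANCE = 2.0  # percentage points treated as "no real difference"

def mean_load(samples):
    return sum(samples) / len(samples)

card_a = mean_load([12.0, 15.5, 9.8, 11.2])  # hypothetical samples, %
card_b = mean_load([13.1, 14.0, 10.9, 12.4])
verdict = "on par" if abs(card_a - card_b) <= TOLERANCE else "a real gap"
print(f"{card_a:.1f}% vs {card_b:.1f}%: {verdict}")
```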

To estimate the CPU utilization during full-HD video playback (1920x1080) and full-HD video with enabled “picture-in-picture” (Bonus View) feature we used the following movies:

We didn’t use any free content for this test session.

Video Playback Quality

The HQV 2.0 test suite is a means to subjectively evaluate the quality of some video processing operations performed by a graphics card. As we wrote in our earlier reviews, this suite is very detailed and focuses on comparing Blu-ray and DVD players, which are based on specialized video processors. Therefore, today's GPUs do not always score the highest marks in it.

HQV 2.0 DVD

Today, few people watch regular DVD movies on TVs and monitors at the native resolution of DVD content; most users prefer larger screens with full-HD resolution (1920x1080). So, the primary goal of any video processor is not just to properly display video content, but to upscale the image, perform motion correction, reduce noise, improve detail, etc. The video fragments in the HQV 2.0 DVD tests are selected in such a way as to demonstrate how good today’s video processors are at each of the mentioned operations.

As we expected, the newcomer performed comparably to the GeForce GTX 460. You can disregard the slight score difference: some tests played back with subjectively poorer quality and some with subjectively better quality than with earlier driver versions. In any case, we can’t recommend watching 480x320 video on monitors supporting 1920x1080/1920x1200 resolutions. At the same time, a good DVD will play just fine on the higher-end GeForce GTX 560 Ti.

HQV 2.0 Blu-ray

Similar to HQV 2.0 DVD, the HQV 2.0 Blu-ray test suite allows us to subjectively evaluate a video processor at high display resolutions.

Just like in the previous case, we do not see any serious differences from the predecessors, which is good overall. Although the ATI Radeon HD competitors are a little ahead of our today’s hero, this hardly implies that Blu-ray movies will play back with lower quality.

When analyzing the results of the HQV tests, keep in mind that the scoring method is highly subjective, so a small difference in the total scores of different graphics cards can hardly be considered critical.

Blu-ray Playback

Let’s see how well the hardware decoding units can free the CPU from video playback processing and how much this helps lower the system’s power consumption.

The GeForce GTX 560 Ti shows even better results than its predecessor, which could come from driver improvements as well as from the higher clock frequency of the chip. However, the average CPU utilization is so low and so close between the 460 and 560 models that we can hardly talk about any noticeable difference.

During MPEG4-AVC/H.264 playback the newcomer utilizes the CPU just as much as GeForce GTX 460 does.

When it comes to MPEG2-HD content, which has become almost completely outdated by now, the GeForce GTX 560 Ti also performs pretty well. The slightly higher maximum CPU utilization in this case is determined mostly by software rather than hardware reasons.

Summary

Being a slightly improved version of the GF104, the GeForce GTX 560 Ti demonstrates similar playback quality and similar performance during video decoding. If you already have a GeForce GTX 460 inside your HTPC system, the new Nvidia solution will not deliver any advantages besides higher energy efficiency in idle mode.

Just like its predecessors and competitors, the Nvidia GeForce GTX 560 Ti supports hardware decoding of almost all popular formats including Blu-ray 3D, high-definition audio bitstreaming via HDMI 1.4a, and other contemporary functionality. The new Nvidia card doesn’t support hardware DivX decoding, typically scores a little lower in HQV tests than its competitors, and requires special middleware from Nvidia in order to play back movies and games on a stereo 3D HDTV. Nevertheless, the GeForce GTX 560 Ti will be a good choice for a multimedia PC.

Being a pretty fast gaming graphics accelerator, the GeForce GTX 560 Ti consumes up to 160 watts of power and is quite bulky. Of course, the GeForce GTX 560 Ti is not positioned by its developer as an HTPC solution. Therefore, if you decide to use it inside an HTPC, you will have to ensure proper cooling inside the case in order to avoid possible overheating.

Testbed and Methods

We are going to investigate the gaming performance of Nvidia GeForce GTX 560 Ti using the following universal testbed:

We used the following ATI Catalyst and Nvidia GeForce drivers:

The ATI Catalyst and Nvidia GeForce graphics card drivers were configured in the following way:

ATI Catalyst:

Nvidia GeForce:

Below is the list of games and test applications we used during this test session:

First-Person 3D Shooters

Third-Person 3D Shooters

RPG

Simulators

Strategies

Semi-synthetic and synthetic benchmarks

We selected the highest possible level of detail in each game. If the application supported tessellation, we enabled it for the test session.

For settings adjustment, we used the standard tools provided by the game in its menu; the games' configuration files weren’t modified in any way because the ordinary user shouldn’t have to know how to do that. We ran our tests in the following resolutions: 1600x900, 1920x1080 and 2560x1600. Unless stated otherwise, wherever possible we added MSAA 4x antialiasing to the standard 16x anisotropic filtering. We enabled antialiasing from the game’s menu; if this was not possible, we forced it using the appropriate settings of the ATI Catalyst and Nvidia GeForce drivers.

Besides GeForce GTX 560 Ti, we also tested the following solutions:

Performance was measured with the games’ own tools, and original demos were recorded where possible. We measured not only the average but also the minimum speed of the cards where possible. Otherwise, performance was measured manually with the Fraps utility version 3.2.7. In the latter case we ran each test three times and took the average of the three for the performance charts.
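For the manual Fraps runs the aggregation is straightforward; here is a sketch with placeholder numbers (reporting the worst minimum of the three runs is our assumption about the conservative way to do it, not a statement from the methodology above):

```python
# Aggregating three manual Fraps passes: mean of the averages,
# worst of the minimums. The numbers below are placeholders.
runs = [
    {"avg": 61.2, "min": 42.0},
    {"avg": 59.8, "min": 40.5},
    {"avg": 60.7, "min": 41.1},
]
avg_fps = sum(r["avg"] for r in runs) / len(runs)
min_fps = min(r["min"] for r in runs)
print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
```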

Performance in First-Person 3D Shooters

Aliens vs. Predator

The newcomer starts by leaving the Radeon HD 6870 behind in low resolutions while yielding only about 5% to the Radeon HD 6950. In 1920x1080 the Radeon HD 6950 increases the gap to 14%, while the average performance of the GeForce GTX 560 Ti almost matches that of the Radeon HD 6870. However, this is where Nvidia fans will discover a nice peculiarity: the minimum performance of the new GeForce GTX 560 Ti never drops below 24 fps, so you will be able to enjoy the same gaming experience in Full HD mode as with the more expensive Radeon HD 6950, but at a much lower price.

Battlefield: Bad Company 2

In our second benchmark GeForce GTX 560 Ti competes successfully against Radeon HD 6950. It reaches parity in 2560x1600, and in lower resolutions it even outperforms the heavier-weight rival. Most importantly, it maintains comfortable gaming performance in all test modes.

Call of Duty: Black Ops

It is interesting that with the new AMD Catalyst driver version the results of the Radeon HD 6950 are not that much higher than those of the Radeon HD 6870, which can partially be explained by the lower efficiency of the Cayman shader processors with their VLIW4 architecture. Either way, the GeForce GTX 560 Ti can easily challenge both AMD competitors. It only yields to them in the minimum fps rate, although not dramatically: 66 fps will be more than enough even for the most picky user.

Crysis Warhead

And here is the first disappointment. We can’t really call it a defeat, especially keeping in mind the price difference between the GeForce GTX 560 Ti and Radeon HD 6950, but the fact is undeniable: the new GeForce GTX 560 Ti no longer delivers acceptable minimum performance in 1920x1080, although it is still faster than the Radeon HD 6870 in this respect. However, unlike the latter, our newcomer looks great in 1600x900; only the more expensive Radeon HD 6950 can compete with it in this resolution.

Metro 2033

The game was run with tessellation enabled.

Eight tessellation units do not help the GeForce GTX 560 Ti avoid defeat in the minimum fps comparison, while its average performance is just a tiny bit lower than that of the Radeon HD 6950 and much better than that of the Radeon HD 6870. Its advantage is most significant in 1600x900, but the AMD solution narrows the gap to a minimum in the more resource-consuming modes, which must be due to its higher-performance memory subsystem.

S.T.A.L.K.E.R.: Call of Pripyat

The game was run with tessellation enabled.

Theoretically, the GeForce GTX 560 Ti has an advantage in tessellation processing, but at least in this benchmark the two next-generation tessellation units of the Radeon HD 6950 perform just as well. However, do not forget about the price difference between the two: the new Nvidia solution, priced about $50 lower, competes successfully against the Radeon HD 6950 in 1600x900 and delivers comparable gaming comfort (estimated subjectively) in 1920x1080. The GeForce GTX 560 Ti justifies its price in full.

Performance in Third-Person 3D Shooters

Just Cause 2

There is again minimal performance difference between the two generations of Radeon HD 6000 solutions, and the GeForce GTX 560 Ti competes successfully against both: it delivers a comfortable gaming experience in resolutions up to 1920x1080 or 1920x1200. However, if you intend to play in 2560x1600, you will need to go as high as the GeForce GTX 580, which comes with a completely different price tag.

Lost Planet 2

The Radeon HD 6950 is completely defeated by the cheaper GeForce GTX 560 Ti. While in 2560x1600 the AMD card can rely on its stronger texturing and memory subsystems to withstand the opponent, in modes less critical to these parameters the newcomer gets as far ahead as 30-36%! Most importantly, the performance of the GeForce GTX 560 Ti is high enough for 1920x1080 despite the resource appetites of this game.

Performance in RPG

Fallout: New Vegas

It is hard to say anything about GeForce GTX 560 Ti performance in Fallout: New Vegas. This game is way too simple for any relatively powerful contemporary graphics accelerator. I would only like to point out a slight lag behind Radeon HD 6950 and a slight advantage over Radeon HD 6870 in 2560x1600.

Mass Effect 2

We enforced full-screen antialiasing using the method described in our special Mass Effect 2 review.

The newcomer’s victory in this test is not as convincing as in Lost Planet 2, but the GeForce GTX 560 Ti is ahead of both Radeon HD 6000 models in 1600x900. In higher resolutions, our hero is at least as fast as the more expensive Radeon HD 6950, so AMD should really be worried at this point and think about lowering the prices of their Cayman based solutions.

Performance in Simulators

F1 2010

Well, you can’t win everywhere, and the GeForce GTX 560 Ti confirms that by competing successfully only against the Radeon HD 6870 here. But, frankly speaking, firstly, the new Nvidia solution was designed as a competitor to the Radeon HD 6870 right from the start; secondly, we see a noticeable lag only in 2560x1600; and thirdly, even in this resolution the performance is sufficient for relatively comfortable gaming, although fans of the car racing genre may not be satisfied with only 30 fps.

Performance in Strategies

BattleForge

The new member of the GeForce GTX 500 family continues to deliver high minimum performance. Although the GeForce GTX 560 Ti doesn’t do as well in 2560x1600 as the GeForce GTX 570, even 18 fps looks really great against the background of the Radeon HD 6870 or Radeon HD 6950, especially since this game seems to be quite playable even at a low minimum fps rate.

StarCraft II: Wings of Liberty

The new Nvidia card failed to cope with the 2560x1600 resolution in StarCraft II, but it nevertheless managed to outrun its direct competitor, the Radeon HD 6870, in average performance. The Radeon HD 6950 doesn’t work any wonders in this mode either, but the subjective level of gaming comfort it provides is considerably higher, although the price of this comfort is quite high, too. The GeForce GTX 560 Ti did really well in the lower resolutions. If your monitor supports resolutions up to 1920x1200 and you like StarCraft II, it doesn’t make sense to overpay for a Radeon HD 6950.

Performance in Synthetic and Semi-Synthetic Benchmarks

Futuremark 3DMark Vantage

We minimize the CPU’s influence by using the Extreme profile (1920x1200, 4x FSAA and anisotropic filtering). We also publish the results of the individual tests across all resolutions.

As we expected, the GeForce GTX 560 Ti outperformed the Radeon HD 6870, but it needed another 1045 points to catch up with the Radeon HD 6950. The score of 9519 points is a great result for the price range where the new Nvidia card is positioned.

The individual tests clearly show that even where the new GeForce GTX 560 Ti yields to the Radeon HD 6950, the difference is minimal. However, as the resolution increases, the latter wins its leadership back due to its faster texture processors and video memory. The GeForce GTX 560 Ti defeated the Radeon HD 6870 across all resolutions.

Futuremark 3DMark 11

Here we also use the Extreme profile. As opposed to 3DMark Vantage, the 3DMark 11 Extreme profile runs at 1920x1080.

We see almost the same picture in 3DMark 11, although the actual numbers are different. The GeForce GTX 560 Ti didn’t lose to its primary rival in this test, and as for the Radeon HD 6950, our hero isn’t obliged to catch up with that more expensive product.

Final Fantasy XIV: Official Benchmark

This benchmark only supports 1280x720 and 1920x1080 resolutions.

Radeon HD 6950 suddenly demonstrated very low results in 1280x720. It could be caused by the new shader processor architecture as well as some issues in the drivers. GeForce GTX 560 Ti performs well in both resolutions, running pretty much as fast as Radeon HD 6950.

Tom Clancy’s H.A.W.X. 2 Preview Benchmark

This test uses tessellation to create the ground surface. The number of polygons per frame may reach 1.5 million.

This is another benchmark where the Fermi architecture feels quite at home. Only the higher-end members of the same family can compete with the GeForce GTX 560 Ti, while neither the Radeon HD 6950 nor even the Radeon HD 6970 is fast enough. Maybe the upcoming Radeon HD 6990 will be able to change something here, but in any case it would be a truly Pyrrhic victory, as the price difference between the two is going to be enormous.

Unigine Heaven Benchmark

We use “Normal” tessellation mode in this game.

Our hero performs as fast as the Radeon HD 6950 – an excellent result for the GeForce GTX 560 Ti, especially keeping in mind its price range. However, we did expect the GF114, armed with eight PolyMorph engines, to do even better in this tessellation-heavy test.

Conclusion

The new GeForce GTX 560 Ti didn’t disappoint: it is a really worthy addition to the GeForce 500 family. Moreover, we can definitely call it a true “weapon of mass destruction”. And in fact, there is nothing surprising about it: while the recommended retail price of the new GeForce GTX 560 Ti is only $249, it competes successfully not only against its direct rival - Radeon HD 6870, but also against a more powerful and more expensive Radeon HD 6950.

In fact, the new GeForce GTX 560 Ti did to the GeForce GTX 470 the same thing the GeForce GTX 570 had earlier done to the GeForce GTX 480. At this point it is hard to tell what will happen to the GeForce GTX 460 1GB. It may be replaced with a new GF114 based solution with a cut-down configuration and without the “Ti” suffix in the model name, while a GeForce GTX 550 on the GF116 core comes to replace the GeForce GTX 460 768MB.

If we take a closer look at the performance of our today’s hero, we can’t say that the GeForce GTX 560 Ti outperforms the Radeon HD 6950 in all tests: the Cayman core was designed for higher-end solutions, after all. However, the obtained results are truly impressive.

For example, the GeForce GTX 560 Ti defeated its higher-end rival in 10 benchmarks out of 18 in 1600x900 and showed a 20% average performance advantage over the Radeon HD 6870. And although graphics cards of this class are usually purchased for use in higher resolutions, this is a very good start.

The 1920x1080 resolution is one of the most popular today thanks to the Full HD format. Here it is harder for the GeForce GTX 560 Ti to compete against the Radeon HD 6950 because the latter is equipped with faster memory and boasts 88 texture units. However, our newcomer doesn’t give in that easily: it wins or reaches a tie in 8 gaming tests, while losing badly in only 4: Aliens vs. Predator, S.T.A.L.K.E.R.: Call of Pripyat, Mass Effect 2 and F1 2010. As for the final score against the Radeon HD 6870, it could have been 100% if it hadn’t been for F1 2010. The GeForce GTX 560 Ti outperforms its primary competitor by 15% on average, and in some cases gets up to 60% faster!

As a rule, cards like the GeForce GTX 560 Ti or Radeon HD 6870, priced at $250 or less, aren’t used at 2560x1600: this is normally where $300+ solutions step in. So, it is quite logical that the GeForce GTX 560 Ti loses to the Radeon HD 6950 in most tests here. Nevertheless, it retains its lead over the Radeon HD 6870, although its performance advantage shrinks to 12-13%.

Summing everything up, we can conclude that Nvidia’s new mainstream solution has turned out a great success. Due to its outstanding features and a price tag only $10 above that of the AMD Radeon HD 6870, it rips its competitor to shreds. No wonder AMD’s graphics division is seriously concerned about this aggressive move on Nvidia’s part and is planning to strike back by lowering Radeon HD 6870 prices and releasing a cheaper modification of the Radeon HD 6950 equipped with 1 GB of local video memory. There is no other way to compete against the GeForce GTX 560 Ti: it is undoubtedly better priced than the Radeon HD 6950 2GB, but is not always slower than the latter in real games.

Keeping in mind the 160 W power consumption of the new GeForce GTX 560 Ti, we wouldn’t recommend this graphics card as a solution for home theater PCs, unless you need high gaming performance at any cost. In other words, if the acoustics and the size of your HTPC system are secondary to you, then the GeForce GTX 560 Ti may be worth considering.

As of today, the GeForce GTX 560 Ti has every right to be considered the best gaming graphics accelerator in the sub-$250 price range. It delivers well for its price, and there is hardly any other choice for those who are not ready to invest in a GeForce GTX 570 or Radeon HD 6970. Time will show how competitive the upcoming lower-cost Radeon HD 6950 turns out to be.

Highs:

Lows: