by Alexey Stepin, Yaroslav Lyssenko, Anton Shilov
09/28/2007 | 10:56 PM
AMD’s mainstream graphics processors RV630 and RV610 were announced along with the top-performance R600, as we reported in our review of the Radeon HD 2000 series, but it was only on the 11th of June that AMD began to ship them in large quantities. This handicapped AMD even further in its competition with Nvidia, which had begun mass shipments of its new mainstream processors, the G84 and G86, in mid-April. With such a large head start, Nvidia’s partners had managed to fill the market with graphics cards based on those GPUs.
AMD’s graphics department, the former ATI Technologies, found itself lagging behind again, although its mainstream GPUs can be considered more advanced than Nvidia’s G84 and G86 from a technical standpoint. As opposed to the senior R600, the new GPUs are both manufactured on a thinner 65nm tech process, and the RV630 supports modern graphics memory of both types, GDDR3 and GDDR4, just like the senior GPU does. Interestingly, AMD/ATI didn’t position the Radeon HD 2600 XT GDDR4, the senior model of the new series, as an opponent to the GeForce 8600 GTS but targeted it at the cheaper niche occupied by the Nvidia GeForce 8600 GT, which has much weaker specs. As a result, the pricing of the new mainstream cards from AMD now looks as follows:
Interestingly, AMD wants to counter the GeForce 8600 GTS with the Radeon HD 2600 XT Gemini, a dual-chip solution with two RV630 GPUs working in CrossFire mode. We have seen such solutions before, from ASUS and Gigabyte and later from Nvidia, yet none of these attempts was much of a success. Nvidia’s GeForce 7950 GX2 was in fact forgotten by its own developer after the release of the GeForce 8800 series. AMD has no choice, however. The manufacturing cost of R600-based products can hardly be reduced enough to fit into the $189-249 category in the near future, whereas the RV630 has too few texture processors and rasterization subunits to challenge the G84 in popular applications.
As for ordinary, single-chip, cards, the Radeon HD 2600 XT GDDR4 looks a worthy opponent to the GeForce 8600 GT. Let’s compare their technical specs first.
The modular concept that ATI Technologies sticks to helps create GPUs with varying performance by employing different combinations of functional subunits.
The 120 shader processors of the RV630 chip are joined into blocks of 5 ALUs, one processor in each block being capable of executing transcendental instructions like SIN, COS, LOG, etc. The core incorporates a total of 24 such blocks organized into 3 large computing modules with 8 blocks, or 40 ALUs, in each. It means we can expect higher computing performance from the RV630 than from the Nvidia G84, which has only 32 shader processors. And this may be the decisive factor in games that focus on special effects with complex mathematics as well as in DirectX 10 games.
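The ALU arithmetic above can be sketched as follows (our own illustration; the unit counts and 800MHz clock are from this review, while the peak-GFLOPS figure assumes each ALU can issue one multiply-add per clock, which this review does not state explicitly):

```python
# RV630 shader-ALU organization as described in the text (a sketch, not
# vendor data): 24 blocks of 5 ALUs, grouped into 3 large modules.
BLOCKS = 24            # 5-wide superscalar blocks
ALUS_PER_BLOCK = 5
MODULES = 3            # large computing modules
CORE_CLOCK_GHZ = 0.8   # 800 MHz

total_alus = BLOCKS * ALUS_PER_BLOCK        # 120 shader processors
blocks_per_module = BLOCKS // MODULES       # 8 blocks = 40 ALUs per module
alus_per_module = blocks_per_module * ALUS_PER_BLOCK

# Assumed peak throughput: one multiply-add (2 flops) per ALU per clock.
peak_gflops = total_alus * 2 * CORE_CLOCK_GHZ

print(total_alus, alus_per_module, peak_gflops)  # 120 40 192.0
```

On that assumption the RV630 peaks at 192 GFLOPS, which illustrates why the chip can outrun the 32-processor G84 in shader-heavy scenes.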
On the other hand, the RV630 is far inferior to Nvidia’s GPU in terms of texture-mapping and rasterization performance as it has only two large texture processors roughly equivalent to 8 TMUs and one render back-end equivalent to 4 ordinary raster operators. This is going to have a negative effect on the performance of the Radeon HD 2600 when processing large amounts of high-resolution textures.
AMD must have chosen this number of texture-mapping and rasterization subunits because the chip was already too complex. Moreover, the texture cache is reduced from 256 to 128KB and the ring-bus width from 512 to 256 bits in the RV630 chip.
As you see, even configured like this, the RV630 has 101 million transistors more than the Nvidia G84 has. If the RV630 had more texture and raster processors, the chip just wouldn’t have fitted into the desired price category.
There is one more important difference from the R600: the RV630 and RV610 chips incorporate a hardware video-processor based on the ATI Xilleon architecture and capable of decoding HD video in H.264 and VC-1 formats off the CPU. For comparison, the G84 and G86 offer full hardware decoding only for H.264 while the first stage of VC-1 decoding is done on the CPU.
The existing versions of the ATI Radeon HD 2600 XT differ in memory type and frequency. The low memory frequency of the Radeon HD 2600 XT GDDR3 must be due to the manufacturer’s desire to cut costs by installing slower chips, although there are no fundamental obstacles to clocking such memory at 1000 (2000) MHz or higher. Anyway, the price difference isn’t big at only $20. The Radeon HD 2600 XT GDDR3 shouldn’t be far slower than the Radeon HD 2600 XT GDDR4 because each card is going to be limited by the performance of its TMUs and ROPs rather than by the memory subsystem. We’ll check this out in the Tests section.
We’ll discuss the design of the Radeon HD 2600 XT using its GDDR4 version.
Despite its rather modest specs and positioning in the $129-149 price category, the Radeon HD 2600 XT is about as large as a Radeon X1950 Pro and has much more complex wiring than the GeForce 8600 GT/GTS. The PCB owes its large size not so much to the area occupied by the power circuit as to the position of the GPU.
Although the card doesn’t have a 6-pin PCI Express connector, its power circuit is rather complex. Some of its elements are not installed, though. There is a seat for the mentioned connector, too, but there is an electrolytic capacitor there instead. The circuit is controlled by a chip marked as “uP6201AQ”. We couldn’t find a description of it.
Following the modular design principle ATI had used before, the power circuit occupies almost the entire back part of the PCB. It is sharply separated from the rest of the PCB, which helps cut the cost of developing new PCBs. For example, for a card with a higher level of power consumption it is only necessary to change the design of the power circuit, leaving the rest of the PCB intact, and this will be cheaper than developing a completely new PCB.
The power circuit of the Radeon HD 2600 XT seems to dissipate quite a lot of heat, as its load-bearing components are provided with cooling, unlike on GeForce 8600 GT and GTS cards. This raises concerns about how much power the new card needs. It cannot require more than 75W, which is the load capacity of the PCI Express slot, but it may need more than the GeForce 8600 GTS requires.
The graphics core is placed rather far from the DVI connectors, and this area of the PCB is almost empty except for a seat for a VIVO chip, which is not installed.
Although the RV630 chip incorporates 101 million transistors more than the G84, its die area is smaller thanks to the thinner 65nm tech process. The die is square and installed in a compact package without a protective frame. The marking on the chip doesn’t indicate that it’s an RV630. Our sample is dated the 19th week of 2007 (mid-May). Judging by the marking, this is a fifth revision of the RV630 core.
The GPU is not divided into domains working at different frequencies. All of its subunits are clocked at 800MHz. The GPU has 120 unified shader processors grouped into 3 large computing modules, 2 large texture processors roughly equivalent to 8 ordinary TMUs, and one rasterization module equivalent to 4 classic ROPs.
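A quick estimate of the theoretical fillrate implied by those TMU and ROP equivalents (our own back-of-the-envelope calculation from the figures above, not a vendor specification):

```python
# Theoretical fillrate of the RV630 at 800 MHz, using the classic-unit
# equivalents cited above (2 large texture processors ~ 8 TMUs, 1 render
# back-end ~ 4 ROPs). A sketch for illustration only.
CORE_MHZ = 800
TMU_EQUIV = 8
ROP_EQUIV = 4

texel_fill_mtexels = CORE_MHZ * TMU_EQUIV  # 6400 Mtexels/s
pixel_fill_mpixels = CORE_MHZ * ROP_EQUIV  # 3200 Mpixels/s

print(texel_fill_mtexels, pixel_fill_mpixels)  # 6400 3200
```

Even with its high 800MHz clock, the chip’s fillrate stays modest, which is consistent with the FSAA weakness seen throughout the Tests section.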
The Radeon HD 2600 XT should be able to carry 512 megabytes of graphics memory, so there are 8 seats for memory chips on the PCB, 4 of which are rather unusually on the reverse side of it. The discussed version comes with GDDR4 memory and has four Samsung K5U52324QE-BC09 chips on the face side of the PCB. Each chip has a capacity of 512Mb (16Mbx32), 1.8V voltage for all the circuitry, and a rated frequency of 1100 (2200) MHz. This is indeed the frequency the memory is clocked at by the card. The four chips provide a total of 256MB of graphics memory accessed across a 128-bit bus. The 512MB version of the card has a 128-bit memory bus as well.
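The resulting memory bandwidth follows directly from the clock rate and bus width quoted above (our own calculation; the GDDR3 figure assumes a 2000 MT/s version for comparison):

```python
# Peak memory bandwidth of the GDDR4 card: 1100 MHz base clock means
# 2200 mega-transfers/s effective across a 128-bit bus.
EFFECTIVE_MTPS = 2200   # mega-transfers per second
BUS_WIDTH_BITS = 128

bandwidth_gb_s = EFFECTIVE_MTPS * BUS_WIDTH_BITS / 8 / 1000  # GB/s

# Hypothetical GDDR3 version at 2000 MT/s for comparison.
gddr3_gb_s = 2000 * BUS_WIDTH_BITS / 8 / 1000

print(bandwidth_gb_s, gddr3_gb_s)  # 35.2 32.0
```

The roughly 10% bandwidth gap between the two versions supports the point made earlier that the card is limited by its TMUs and ROPs rather than by memory.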
Like the R600, the RV630 chip has a full-featured frame compositing engine, making the Radeon HD 2600 XT capable of working in a CrossFire tandem. The card has two standard CrossFire connectors for that purpose. The Radeon HD 2600 XT, the senior model of the series, can output audio over the HDMI interface via an appropriate adapter (included with the card). The integrated audio solution supports PCM (16 bits, 32/44.1/48kHz) and AC3 (Dolby Digital and DTS) formats.
The overall PCB design of the Radeon HD 2600 XT isn’t ideal in our eyes. The graphics card is rather too big for its class. It may not fit into some compact system cases.
The Radeon HD 2600 XT features a rather simple single-slot cooler that resembles the reference cooler of the Radeon X1950 Pro.
As you see, they are indeed very similar, but the cooler of the Radeon HD 2600 XT has a smaller heatsink, which lacks traditional ribbing, and an aluminum frame. This should suffice to cool a 65nm RV630 chip. The heatsink consists of thin copper plates that form tubular cells with a square section and has a copper sole for contact with the GPU die. Traditional dark-gray thermal paste is used as a thermal interface.
The aluminum base does double duty as a heat-spreader for the memory chips and the load-bearing elements of the power circuit by means of special juts. Elastic thermal pads ensure proper contact.
The cooler is equipped with a 3.6W blower (12V, 0.3A) with a 4-pin connection that implies PWM-based regulation of speed. A similar fan, but with a 3-pin connection, was employed in the Radeon X1950 Pro cooler where it did well enough. We can expect good noise characteristics from the cooler of the Radeon HD 2600 XT, too, if the speed management system is set up properly.
The cooler of the Radeon HD 2600 XT is simple and won’t set any cooling records, but it isn’t actually meant to. Although the Radeon HD 2600 XT may run hotter than the GeForce 8600 GT or GTS, the cooler should be able to cope. The only thing left for us to find out is how noisy it is.
We performed our Radeon HD 2600 XT and Radeon HD 2400 XT power consumption tests on a special testbed configured as follows.
The testbed’s mainboard is modified to have measuring shunts and connectors for instruments in the power lines of the PCI Express x16 slot. The measurements are performed with a Velleman DVM850BL multimeter (0.5% accuracy).
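The principle behind shunt-based measurement is simple: the voltage drop across a known small resistance gives the current, and current times rail voltage gives power. A minimal sketch (the shunt resistance and readings below are made-up examples, not our lab data):

```python
# Shunt-based power measurement: V = I * R, so the current through a power
# line equals the voltage drop across the shunt divided by its resistance.
def rail_power_w(v_rail, v_shunt_mv, r_shunt_ohm=0.01):
    """Power drawn on one rail, from rail voltage and shunt voltage drop."""
    current_a = (v_shunt_mv / 1000.0) / r_shunt_ohm
    return v_rail * current_a

# Hypothetical example: a 40 mV drop across a 10 mOhm shunt on the +12V
# slot line corresponds to 4 A, i.e. 48 W on that rail.
example_w = rail_power_w(12.0, 40.0)
print(example_w)  # 48.0
```

Summing the figures from each monitored rail (+12V and +3.3V of the slot, plus any auxiliary connector) yields the card’s total draw.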
We loaded the GPU by launching the first SM3.0/HDR graphics test from 3DMark06 and running it in a loop at 1600x1200 resolution with 4x full-screen antialiasing and 16x anisotropic filtering. The Peak 2D load was created by means of the 2D Transparent Windows test from Futuremark’s PCMark05 benchmarking suite. Here are the results:
Earlier products from ATI Technologies did not boast low power consumption, and the new Radeon HD 2900 XT is something of a record-holder among single-chip graphics cards, but the Radeon HD 2600/2400 series is quite different thanks to the new tech process. Although the GPU of the Radeon HD 2600 XT GDDR4 incorporates as many as 390 million transistors, like the top-end solutions of the past, its peak consumption is only 48.6W, about the same as the GeForce 8600 GTS. So, our apprehensions prove unfounded. The Radeon HD 2600 XT GDDR4 would be an ideal choice for a multimedia system if it were not for its overly large size.
The Radeon HD 2400 XT has an even more moderate appetite. Its peak consumption is as low as 19W. In fact, this card could do without a fan and is likely to be shipped with a passive cooler, which will make it appealing for those who value compactness and silence.
The new graphics cards from AMD do not have an additional power connector, so it is the +12V line of the PCI Express slot that bears the biggest load. The load on the +3.3V line is the same everywhere, about 1.6W.
In order to determine the noise level of the ATI Radeon HD 2600 XT GDDR4, we measured the noise it produced with a Velleman DVM1326 digital sound-level meter using A-weighting. At the time of our tests, the level of ambient noise in our lab was 36dBA, and the level of noise at a distance of 1 meter from a working testbed with a passively cooled graphics card inside was 43dBA. We got the following results:
As could be expected, the reference ATI Radeon HD 2600 XT is relatively quiet, no louder than the ATI Radeon X1950 Pro. We could not hear the board distinctly inside our testbed with its Enermax 1kW power supply and Scythe Ninja CPU cooler. Hence, it would hardly add much noise to a mainstream personal computer.
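Because decibel levels add logarithmically, a component’s own contribution can be estimated by power-subtracting the ambient level from the total reading. A sketch using the standard formula (the inputs are this review’s 36dBA ambient and 43dBA testbed figures; the result is our own derivation, not a measured value):

```python
import math

# dBA levels combine as sound powers, not linearly: to isolate a source's
# own level, subtract the ambient contribution in the power domain.
def subtract_db(total_db, ambient_db):
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (ambient_db / 10))

# 43 dBA measured with 36 dBA ambient -> the testbed itself produces ~42 dBA.
testbed_own_noise = subtract_db(43.0, 36.0)
print(round(testbed_own_noise, 1))  # 42.0
```

This is also why a card only a few dBA above the ambient floor is effectively inaudible: its contribution to the combined reading is tiny.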
To test the performance of ATI Radeon HD 2600 XT GDDR4 we assembled the following standard test platform:
Since we believe that the use of texture filtering optimizations is not justified in this case, the AMD and Nvidia graphics card drivers were set up to provide the highest possible quality of tri-linear and anisotropic texture filtering. We have also enabled transparent texture filtering. The Nvidia ForceWare and AMD Catalyst settings looked as follows:
We selected the highest possible graphics quality level in each game using the standard tools provided by the game itself. The games’ configuration files weren’t modified in any way. Performance was measured with the games’ own tools or, if those were not available, manually with the Fraps utility (version 2.8.2). We also measured the minimum speed of the cards where possible.
We performed our tests at 1280x1024/960 and 1600x1200. We used the “eye candy” mode wherever it was possible without disabling HDR or Shader Model 3.0; namely, we ran the tests with anisotropic filtering and 4x MSAA enabled from the game’s menu. If this was not possible, we forced them via the appropriate ForceWare or Catalyst driver settings.
Besides the Radeon HD 2600 XT we also tested the following solutions:
For our tests we used the following games and benchmarks:
First-Person 3D Shooters
Third-Person 3D Shooters
The RV630 chip has only one raster and two texture processors, which are roughly equivalent to 8 ordinary TMUs and 4 ROPs. Of course, this affects the performance of the Radeon HD 2600 XT negatively, especially when we enable full-screen antialiasing.
We can see that the Radeon HD 2600 XT cannot deliver comfortable performance in Battlefield 2142 even at 1280x1024, let alone higher resolutions, when we enable 4x FSAA. On the other hand, the ATI Radeon X1950 Pro and Nvidia GeForce 8600 GTS do not offer an acceptable speed, either.
The large advantage in computing power over the GeForce 8600 GTS doesn’t save the AMD solution. It is also slower than its market rival, the GeForce 8600 GT, at low resolutions.
From a technical standpoint, Call of Juarez uses a more advanced engine that does not depend much on texturing speed but focuses on the GPU’s computing capabilities instead. Moreover, this game employs HDR but doesn’t support FSAA.
It’s the opposite of what we’ve seen in Battlefield 2142: the Radeon HD 2600 XT is confidently ahead of the GeForce 8600 GT as well as GeForce 8600 GTS and, despite having fewer TMUs and ROPs, rivals the Radeon X1950 Pro.
There is, however, no difference for the gamer: the average speed of 24fps with slowdowns to 15fps doesn’t allow playing normally, especially when everything depends on your shooting accuracy. The game needs a more advanced graphics card like a Radeon HD 2900 XT for example.
We have already used the Call of Juarez DirectX 10 tech demo in our earlier tests.
You can see that the computing power is critical for the new-generation DirectX API: the ATI Radeon HD 2600 XT GDDR4 is twice as fast as the Nvidia GeForce 8600 GTS.
But we have to face the truth: mainstream graphics cards are not fast enough for this application. The victory of the AMD card has no practical value. It is only an indication of the efficiency of the new ATI Radeon architecture.
The Radeon HD 2600 XT fails this test in the ordinary mode although Far Cry is a 2004 title and shouldn’t be a problem for a mainstream graphics card dated 2007, as the GeForce 8600 GT proves. The problem probably lies not only in the cut-down texture-mapping and rasterization section of the RV630 chip but also in imperfect drivers.
The new card is somewhat more confident in the FP HDR mode. Its performance is still far from ideal but you can use a Radeon HD 2600 XT to play Far Cry, even though with occasional slowdowns. We’re talking about the 1280x1024 resolution here because the average speed of the Radeon HD 2600 XT is lower even than 40fps in the other resolutions.
The Radeon HD 2600 XT isn’t brilliant in F.E.A.R., either, being slower than the Radeon X1950 Pro and the GeForce 8600 GTS in average performance and far slower than the GeForce 8600 GT in terms of minimum speed. The best it can do is 26fps with a minimum of only 7fps at 1280x1024. So, if you have at least a Radeon X1650 XT in your system, don’t hurry to replace it with a Radeon HD 2600 XT.
Using the deferred rendering technique, this game is incompatible with full-screen antialiasing. That’s why there are only anisotropic filtering results here.
Although the game doesn’t use FSAA, the Radeon HD 2600 XT is a loser in this competition: it is unable to provide even an average speed of 30-35fps. The GeForce 8600 GT isn’t much better, but anyway, the ATI Radeon HD 2600 XT GDDR4 is nothing exceptional in comparison with the previous generation of mainstream graphics cards.
The ATI Radeon HD 2600 XT GDDR4 not only doesn’t yield to the GeForce 8600 GTS at 1280x1024 but is even a little faster than the Nvidia solution despite 4x multisampling. With average performance close to 60fps, we can already speak of acceptable gaming comfort. Unfortunately, the ATI solution gives in very quickly: at 1600x1200 it can only compete with the GeForce 8600 GT, while the ATI Radeon X1950 Pro and Nvidia GeForce 8600 GTS remain pretty fast at this resolution. Anyway, owners of mainstream graphics cards should be careful with resolutions above 1280x1024, especially when FSAA is enabled.
The Radeon HD 2600 XT loses to the GeForce 8600 GTS just as it did in the previous tests, except for Call of Juarez. And like in Call of Juarez neither card can ensure comfortable gaming conditions at 1280x1024 if you select the highest level of detail.
You can try to achieve an acceptable speed by disabling FSAA, lowering the level of detail or switching into a lower resolution, but the game visuals will suffer as a consequence. If you don’t want this kind of compromise, you have to buy a more advanced graphics card. The same is true for the GeForce 8600 GT which performs even slower than the Radeon HD 2600 XT.
The game doesn’t support FSAA when you enable the dynamic lighting model, but loses much of its visual appeal with the static model. This is the reason why we benchmarked the cards in S.T.A.L.K.E.R. using anisotropic filtering only.
ATI Radeon HD 2000 series cards behave rather oddly in this popular game. When tested in a settlement, a not very demanding part of the game, the new cards show very low performance for their class, sometimes even falling behind ATI’s previous-generation products.
This is what we see here: the ATI Radeon HD 2600 XT GDDR4 is far slower than the GeForce 8600 GTS and, at 1280x1024, than the GeForce 8600 GT as well. All the cards are dramatically slower than the ATI Radeon X1950 Pro, which is not itself a champion when it comes to running S.T.A.L.K.E.R.
Shooters with third-person view do not require as high a frame rate as first-person shooters do. As a rule, 35-45fps should suffice for comfortable play.
Unfortunately, the ATI Radeon HD 2600 XT GDDR4 cannot yield even 30fps in this game. The GeForce 8600 GT cannot do that, either. The latter card has a higher minimum speed at low resolutions, but this doesn’t make it suitable for playing this game.
As for the GeForce 8600 GTS and the old Radeon X1950 Pro, both deliver comfortable frame rates at 1280x1024.
A very interesting benchmark, Lost Planet uses Microsoft’s new API and can show what to expect from the ATI Radeon HD 2600 XT GDDR4 and the Nvidia GeForce 8600 GTS/GT in upcoming games.
As we wrote in our earlier review, the GeForce 8600 GTS has problems with this game, yet the speeds of new-generation mainstream cards are too low in it anyway.
The Tomb Raider results are similar to those we had in Hitman: Blood Money. If we enable 4x FSAA and the Next Generation Content mode for better visuals that are created by means of advanced special effects, the Radeon HD 2600 XT loses to the GeForce 8600 GT and even to the Radeon X1650 XT although the latter was benchmarked by us on a less advanced testbed! Well anyway, the GeForce 8600 GT cannot ensure an acceptable speed even in the lowest of the tested resolutions.
We try to get the highest possible image quality from each game, but Splinter Cell: Double Agent cannot use FSAA and FP HDR simultaneously. That’s why the results refer to the HDR + 16x AF mode only.
The developers of the Splinter Cell series have always focused on the number and complexity of mathematics-heavy special effects rather than on texture quality. Double Agent carries this tradition on, and the ATI Radeon HD 2600 XT GDDR4 manages to outperform the GeForce 8600 GTS here.
The AMD card isn’t far ahead, though. The gap is only 5% in the playable 1280x1024 resolution, but we shouldn’t forget that the Radeon HD 2600 XT costs less than the GeForce 8600 GTS and is not in fact its opponent on the market.
The current version of the game doesn’t support FSAA, so we performed the test with anisotropic filtering only.
Since the game uses neither HDR nor FSAA, the Radeon HD 2600 XT proves competitive against the more expensive GeForce 8600 GTS and faster than the GeForce 8600 GT. The problem is that none of these cards, nor the Radeon X1950 Pro, can ensure comfortable performance. Only graphics cards of the GeForce 8800 GTS 320MB class and higher are capable of that.
The game loses much of its visual appeal without HDR. Although some gamers argue that point, we think TES IV looks best with enabled FP HDR and test it in this mode.
When you are playing indoors, the Radeon HD 2600 XT has good results, outperforming the GeForce 8600 GTS in terms of minimum speed thanks to its 120 shader processors. The game is playable with comfort, at least at 1280x1024.
The texture load is higher in open scenes and the Radeon HD 2600 XT falls behind the GeForce 8600 GTS. The gap grows up to 8%. And still the average speed remains rather high, and the gaming comfort is higher than with the Radeon X1950 Pro and somewhat higher than with the Nvidia GeForce 8600 GT, the market opponent to the new card from AMD.
The Radeon HD 2600 XT has an average performance like that of the Radeon X1950 Pro at first, but its small number of texture and raster processors is felt even at 1280x1024, where the minimum speed of the new card is much lower. The same problem affects its average speed in higher resolutions. Anyway, you can play at 1280x1024 as comfortably as on a GeForce 8600 GTS, which is a good result for such an inexpensive graphics card as the Radeon HD 2600 XT.
Since this game has a frame rate limiter, you should compare the minimum frame rates in the first place, because it is the minimum speed that determines your playing comfort in Command & Conquer 3.
With 4x FSAA, transparent texture antialiasing and the maximum level of detail enabled, the Radeon HD 2600 XT doesn’t provide acceptable performance in any of today’s standard resolutions. The same is true for the GeForce 8600 GT, although it boasts a slightly better result, especially in minimum frame rates. Perhaps turning FSAA and adaptive antialiasing off will improve the speed, yet we are not sure the gameplay will be enjoyable after that.
We don’t use the eye candy mode in this game due to its having problems with FSAA. We test it with anisotropic filtering only.
Despite the rather good average speed at 1280x1024, the Radeon HD 2600 XT isn’t quite suitable for playing this game because its minimum speed is only 19fps. Theoretically, it is possible to play at this performance level because Company of Heroes is not a first-person 3D shooter, but in some cases the performance drops may seriously affect the precision of troop control and hence the overall campaign success. As for the GeForce 8600 GT, its speed never falls below 28fps while the average is as high as 73fps. This is enough for comfortable play.
The Radeon HD 2600 XT has a speed of only 17fps at 1280x1024, which is even lower than the speed of the GeForce 8600 GT. We should make allowances for the enabled 4x multisampling, though. Since the RV630 has only one raster processor, disabling FSAA may bring about a considerable performance gain, but it can hardly be high enough to allow playing Supreme Commander at 1280x1024 at the highest level of detail.
When tested at the default settings the Radeon HD 2600 XT has about as high a result as the Radeon X1950 Pro that has 12 TMUs, 12 ROPs and a 256-bit memory bus. It is also not much worse than the GeForce 8600 GTS – the difference is only 703 points. But what about individual tests which we run with enabled 4x multisampling? Let’s see.
There’s nothing surprising about the failure of the Radeon HD 2600 XT in the first test, which demands a high texturing speed from the graphics card. The new card from AMD isn’t brilliant in the other two tests, though. Its failure in the second test, for which a high fill rate is not at all necessary, is especially surprising. This must be related to the specifics of the transparency antialiasing algorithm employed by AMD cards, which is also indicated by the modest results of the Radeon X1950 Pro.
So, we again see that the overall score issued by 3DMark cannot be viewed as an accurate measure of a graphics card’s performance. This overall score should be considered together with the results of the individual tests that are obtained at different resolutions and/or with enabled full-screen antialiasing.
The Radeon HD 2600 XT seems to perform well here, but we’ve already found out in 3DMark05 that the overall score has little to do with the actual capabilities of a graphics card.
The summarized results of the SM2.0 and SM3.0/HDR tests give us some more food for thought. The modest result of the Radeon HD 2600 XT, like that of the GeForce 8600 GT, is natural in the first group since its tests correspond to the first and second graphics tests of 3DMark05. The second group of tests depends more on the computing power of the GPU, and the RV630 with its 120 shader processors beats the Radeon X1950 Pro as well as the GeForce 8600 GTS, which seems to be limited by its 32 shader processors. These results were obtained without FSAA, so let’s check out the individual tests.
As you might expect, the Radeon HD 2600 XT has very low results in both tests of the SM2.0 group. It is slower than the GeForce 8600 GT. It is somewhat ahead of the Radeon X1950 Pro in the second test because the latter is limited by its 8 special-purpose vertex processors, while the unified Radeon HD and GeForce 8 architectures do not have this kind of limitation.
The super-scalar Radeon HD 2600 XT architecture shows its best in the SM3.0/HDR tests. The graphics card is only inferior to the GeForce 8800 GTS 320 and by a very small margin, especially in the second test.
We have tested AMD’s new mainstream graphics card with the Radeon HD 2000 architecture, and its performance is neither impressive nor depressing.
Despite the similarity of technical characteristics, we did not expect the Radeon HD 2600 XT GDDR4 to match the Nvidia GeForce 8600 GTS in terms of performance; the different pricing of the two cards would not make such a comparison correct. However, the new card didn’t perform all that well even against the modest GeForce 8600 GT, which has a lower core frequency and lower memory bandwidth than the AMD card.
Yes, the developers from the former ATI Technologies created an inexpensive mainstream 65nm GPU that can work at 800MHz and has a serious computing capability, but this was done by means of cutting down the texture and raster processors of the chip. The chip has only 8 TMUs and 4 ROPs as a result. Perhaps this was necessary to keep the amount of transistors, the level of heat dissipation and power consumption of the RV630 within reasonable limits, but this compromise affected the performance of the Radeon HD 2600 XT even in its GDDR4 version.
As a consequence, the Radeon HD 2600 XT GDDR4 is slower than the GeForce 8600 GT in a significant number of the 20 tests. In about 7 tests (depending on how you count; see the table below), not including the overall 3DMark scores, it beats its opponent, and in some other tests, counting Far Cry in HDR mode as a separate test, the two match each other’s performance. This is not impressive for a 2007 product with a unified architecture, DirectX 10 support, and GDDR4 memory clocked at over 1GHz. We should acknowledge that the Radeon HD 2600 XT performs well in applications that focus on mathematics-heavy special effects and can even challenge the GeForce 8600 GTS, which then feels a lack of computing power. Call of Juarez and Splinter Cell: Double Agent are examples of such games.
The ATI Radeon HD 2600 XT GDDR4 does somewhat better in DirectX 10 applications that require a lot of computing power, but its speed is so low that it doesn’t really matter that the opponents are even slower.
Generally speaking, the performance level of the class the Radeon HD 2600 XT and GeForce 8600 GT belong to is low. Earlier we recommended that owners of the Radeon X1950 Pro and GeForce 8600 GTS disable FSAA and transparency antialiasing, and now we recommend the same to owners of Radeon HD 2600 XT GDDR4 or GeForce 8600 GT cards. In some games you may even have to reduce the level of detail or switch to 1024x768.
On the other hand, if you are seeking a graphics card for multimedia applications, the Radeon HD 2600 XT will be the best choice due to its full hardware support for decoding both H.264 and VC-1 as well as its audio-over-HDMI feature. Nvidia’s solutions can’t offer this.