Multi-GPU Technology: Improvements and Drawbacks of the Concept
Creating multi-GPU products seems a simple and elegant way to boost performance when the GPU developer doesn’t have a new chip capable of doing it alone. This solution has its downsides, though. We don’t even mean technical problems such as game compatibility. Those matter too, but we want to highlight the economic aspect: multi-GPU solutions limit the manufacturer’s freedom of price maneuvering, which is bad for the end user, who has to pay more for a slower product than he otherwise could. We’ll explain this now.
The benefits of the classic approach can be illustrated with historical examples, one of the most striking being the R300 core. It was employed across a whole family of graphics cards, from mainstream to top-performance. ATI Technologies could put defective cores to use by disabling faulty subunits, creating fairly inexpensive graphics cards that delivered about half the performance of the full-featured product (Radeon 9700). On the other hand, once this core was perfected, it could be clocked higher, which raised its performance by 33% (Radeon 9800 XT). The same core, in cut-down form, was installed on a number of inexpensive cards: the Radeon 9550, 9600, X300, X550, X600, and X1050. Thus the same GPU design, with minor changes, covered every price segment from $150 to $500.
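The economics of salvaging defective cores can be sketched with a simple yield model. The Poisson defect model below is a standard textbook approximation; the defect density, die area, and four-block split are purely illustrative assumptions, not actual R300 figures.

```python
import math

def clean_probability(defect_density, area):
    # Poisson yield model: probability that a region of `area` cm^2
    # contains no defects at the given defects/cm^2 density.
    return math.exp(-defect_density * area)

def yields(defect_density, die_area, blocks=4):
    # Split the die into `blocks` equal subunits (e.g. quads of pipelines).
    # A die is "full" if every block is clean; it is still "salvageable"
    # as a cut-down part if exactly one block is defective and is disabled.
    p_block = clean_probability(defect_density, die_area / blocks)
    full = p_block ** blocks
    one_bad = blocks * (1 - p_block) * p_block ** (blocks - 1)
    return full, full + one_bad

# Hypothetical numbers: 0.5 defects/cm^2, a 2 cm^2 die, 4 disable-able blocks.
full, salvageable = yields(0.5, 2.0, 4)
```

Under these made-up inputs, only about 37% of dies come out fully functional, but over 78% become sellable once single-defect dies are salvaged as cut-down parts, which is exactly why one design can feed several price segments.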
ATI’s R420/423 core can boast just as impressive a history. Originally used in the Radeon X800 series, it was later slightly redesigned into the faster and more expensive R480/R481 (Radeon X850 XT) and, after the transition to the 110nm process, into the affordable R430 (Radeon X800 XL). A cut-down version of the core, the RV410 chip, powered the Radeon X700. In other words, ATI again managed to fill the entire price range with what was essentially the same GPU design.
The introduction of a modular architecture made the process even simpler: the next graphics core, the R520, gave birth to a large number of products, from the cheap Radeon X1300 (RV515) to the DirectX 9 king, the Radeon X1950 XTX (R580+).
The company cut its manufacturing costs while the customer was offered a broader choice. Such price maneuvering would hardly have been possible if ATI Technologies had switched to multi-GPU technologies instead of modifying and perfecting a single chip, and it wouldn’t be good for the customer, who would have to pay more for lower-performing products. Thus, dual-chip graphics cards can only be a tactical move to buy the time needed to develop a next-generation core that ensures a new level of performance. It doesn’t matter whether that core turns out to be a single chip or a multi-chip module (MCM). An MCM is similar to Intel’s quad-core Core 2 Quad processors, which are two individual dual-core dies in a single package. ATI is expected to use the MCM concept in its next-generation GPU, codenamed R700.
So far, the RV670 core is the best the former ATI Technologies can offer. It is a good core, yet inferior to Nvidia’s G92 in performance. Combining two such cores on a single PCB to create a new high-performance graphics card is largely a forced solution, as AMD’s graphics department hasn’t offered anything competitive in the $300-400 category for a long time. The Radeon HD 3870 X2 is meant to break this trend and show that the company can challenge its opponent in the premier league of single graphics cards, where the Nvidia GeForce 8800 Ultra has reigned for what seems like ages. So let’s see what AMD/ATI can offer the demanding gamer now.