It is no secret that ATI, the graphics products group of Advanced Micro Devices, and Nvidia Corp. take different approaches to designing graphics cards for the video game enthusiast market. However, it is not completely clear whose method is more efficient: ATI’s reliance on multi-chip graphics solutions or Nvidia’s mega-chip design approach.
“We took two chips and put it on one board (X2). By doing that we have a smaller chip that is much more power efficient. We believe this is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market. Scaling that large chip down into the performance segment doesn’t make sense – because of the power and because of the size,” said Matt Skynner, vice president of marketing for the graphics products group at AMD, in an interview with the News.com web site.
Currently AMD’s flagship graphics product is the ATI Radeon HD 3870 X2, which carries two chips code-named ATI RV670. While in many cases the model 3870 X2 is much faster than the single-chip model 3870, there is a huge gap between the pricing and performance of the dual-chip and single-chip solutions, which retail for $400 and $189, respectively. According to unofficial information, ATI is also preparing Radeon HD 4870 and 4870 X2 products for a summer launch.
Nvidia takes a different approach to designing high-end solutions: the company creates large graphics processing units (GPUs) that may deliver higher performance than AMD’s multi-chip designs in a larger number of applications, since they do not rely on software-based multi-GPU technologies like ATI CrossFire or Nvidia SLI. Nevertheless, Nvidia does develop dual-chip graphics cards as well: the company’s current flagship offering is the GeForce 9800 GX2, which carries two Nvidia G92 processors with 128 unified shader processors each.
“If you take two chips and put them together, you then have to add a bridge chip that allows the two chips to talk to each other. And you can’t gang the memory together. So when you add it all up, you now have the power of two GPUs, the power of the bridge chip, and the power that all of that additional memory consumes. That is why it’s too simplistic of an argument to say that two smaller chips is always more efficient,” said Ujesh Desai, general manager for GeForce products at Nvidia.
According to preliminary information from unofficial sources, Nvidia is preparing a single-chip product code-named G200 that has from 192 to 240 unified shader processors and delivers substantially higher performance than the existing GeForce 8800 Ultra or GeForce 9800 GTX.
Still, Nvidia did not miss the opportunity to point to AMD’s financial position and claim that it takes a whopping $500 million to design “a new enthusiast-level GPU”. The claim is not entirely correct, as the half a billion dollars is spent on designing technologies that enable an entire product lineup covering graphics solutions in the $49 - $849 price range.
“They don’t have the money to invest in high-end GPUs anymore. At the high end, there is no prize for second place. If you’re going to invest a half-billion dollars – which is what it takes to develop a new enthusiast-level GPU – you have to know you’re going to win. You either do it to win, or you don’t invest the money,” added Mr. Desai.
While it is true that developing smaller graphics chips is considerably less expensive than creating a “mega-chip”, the gap between the price and performance of single-chip and dual-chip graphics cards is very wide. Moreover, as multi-GPU technologies become more complex, they also require substantial investment. According to previously released information, the ATI Radeon HD 4870 X2 (ATI R700) graphics card will utilize a special chip-to-chip interface that allows multi-chip designs to work more efficiently.