
After being late to market with high-performance graphics offerings a number of times, ATI, the graphics product group of Advanced Micro Devices, is reportedly considering high-end graphics solutions that utilize two or, perhaps, even more physical dice. The approach has been used successfully by Intel Corp., but will it be feasible for graphics processors too?

As Graphics Chips Become More Complex…

The ATI Radeon HD 2900 (R600) graphics chip, which contains about 700 million transistors, had a power consumption of 160W or even more, but still did not manage to demonstrate performance on par with the Nvidia GeForce 8800 GTX, a solution that also demands a lot of power and is rather expensive to manufacture. The ATI Radeon HD 3800 (RV670) graphics processing unit (GPU), which is made using 55nm process technology, has the same amount of horsepower as the R600, but is cheaper to build and consumes less energy.

While two such ATI RV670 chips would still consume quite a lot of power, they would be able to offer performance and features that were not available before, without the necessity of developing a chip with about 1.3 billion transistors – an amount of elements that would require a very thin process technology, so that the GPU would stay cheap enough to manufacture, and quite a lot of time to design and verify.

It is projected that the ATI Radeon HD 3800 X2 – a graphics board running two ATI RV670 processors – will be announced at the Consumer Electronics Show, or at another time early next year. While this graphics card will be the first multi-GPU consumer board developed by the former ATI Technologies in years, it seems that multi-GPU is the future, at least when it comes to AMD's graphics product group (GPG).

…Multi-Chip GPUs May Be the Future

Little is known about products code-named ATI R700 today, but, according to an article on the PC Watch web-site, the next generation of graphics solutions from AMD may utilize the multi-chip module (MCM) concept instead of the multi-GPU concept, at least in the high-end. Even though both approaches have drawbacks compared to single-chip solutions, in the case of a homogeneous MCM some issues are easier to solve.

Intel Corp., the world's largest maker of chips, puts two physical dice – each containing two processing engines – onto a single piece of substrate to create quad-core central processing units. This allows Intel to boost its yields, as a monolithic quad-core microprocessor would have a larger die size and would be more expensive to manufacture, according to Intel. It is rumored that AMD's GPG may think the same way and cease developing large GPUs, concentrating instead on making smaller chips that work efficiently together.
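A quick worked example makes the yield argument concrete. The calculation below uses a standard Poisson defect model; the defect density, die areas and wafer size are illustrative assumptions, not figures from Intel or AMD:

```python
import math

# Why two small dice can beat one big die on cost: die yield falls roughly
# exponentially with area under a Poisson defect model, and small dice can
# be tested individually before packaging, so a bad die is discarded without
# sacrificing a good partner die. All numbers below are illustrative guesses.

DEFECTS_PER_CM2 = 0.5    # assumed defect density
WAFER_AREA_CM2 = 700.0   # roughly the usable area of a 300mm wafer

def poisson_yield(area_cm2: float) -> float:
    """Fraction of dice with zero defects under a Poisson defect model."""
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

BIG_DIE, SMALL_DIE = 4.0, 2.0   # cm^2: one monolithic die vs. a half-size die

good_big = (WAFER_AREA_CM2 / BIG_DIE) * poisson_yield(BIG_DIE)
good_small = (WAFER_AREA_CM2 / SMALL_DIE) * poisson_yield(SMALL_DIE)

# Two good small dice are needed per MCM product, but they are paired after
# test, so the number of sellable MCM packages is simply good_small / 2.
print(f"good monolithic dice per wafer: {good_big:.0f}")        # ~24
print(f"good MCM pairs per wafer:       {good_small / 2:.0f}")  # ~64
```

Under these assumptions a wafer yields roughly two and a half times as many sellable dual-die packages as monolithic dice, which is the economics behind Intel's approach.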

But ATI/AMD MCM graphics solutions will not be similar to Intel's MCM CPUs. Instead of using an external bus to connect the two dice, a special chip-to-chip interface (or high-speed link) is expected to be used, which should improve performance when the dice work together. Besides, it is reported that AMD's GPG will attempt to use shared memory on its multi-die ATI R700 graphics solutions and rely on the link between GPUs to organize access to each other's memory pools. The GPU dice are projected to be able to enter an idle mode when their computing power is not needed, thus reducing power consumption.
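As a rough sketch of the trade-off such a shared-memory scheme has to manage: accesses that cross the chip-to-chip link are bounded by the link's bandwidth rather than by the local memory pool's. The toy model below assumes transfer times simply add up; all bandwidth figures are illustrative, since nothing about R700 is confirmed:

```python
# Toy model of a two-die GPU with shared memory: each die has a local pool,
# and accesses to the partner die's pool must cross the chip-to-chip link.
# Bandwidth figures are illustrative assumptions, not R700 specifications.

LOCAL_BW_GBPS = 100.0  # bandwidth of a die's local memory pool
LINK_BW_GBPS = 80.0    # bandwidth of the chip-to-chip link

def effective_bandwidth(remote_fraction: float) -> float:
    """Effective read bandwidth when a fraction of accesses are remote.

    Transfer times add up, so the blend is a weighted harmonic mean:
    moving one GB costs (1 - f) / local + f / link seconds.
    """
    seconds_per_gb = ((1 - remote_fraction) / LOCAL_BW_GBPS
                      + remote_fraction / LINK_BW_GBPS)
    return 1.0 / seconds_per_gb

for f in (0.0, 0.25, 0.5):
    print(f"{f:.0%} remote accesses -> {effective_bandwidth(f):.1f} GB/s effective")
```

The faster the link, the smaller the penalty for placing data in the remote pool – which is presumably why a high-speed interface, rather than an external bus, matters so much here.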

At present, graphics processors in a multi-GPU configuration, or on a multi-chip graphics board, communicate using special multi-GPU interfaces – dubbed ATI CrossFire and Nvidia SLI – or via the PCI Express bus. While the bandwidth of CrossFire and SLI is believed to be relatively low, the bandwidth of the PCI Express 2.0 x16 bus is 8GB/s in each direction, still well below graphics cards' memory bandwidth, which can be over 100GB/s. However, chip-to-chip interfaces like Rambus' FlexIO can provide speeds of 76.8GB/s (32GB/s read and 44.8GB/s write) and beyond; therefore, the problem of the chip-to-chip interface can be solved.
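These figures are easy to sanity-check. The PCI Express 2.0 numbers follow directly from its published signaling rate and 8b/10b line coding; the FlexIO figures are simply those quoted above:

```python
# Back-of-the-envelope check of the interconnect bandwidths cited above.

PCIE2_GTPS_PER_LANE = 5.0     # PCI Express 2.0 signaling: 5.0 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding: 8 data bits per 10 bits on the wire
LANES = 16

# Per-direction bandwidth of a PCIe 2.0 x16 link, converted from Gbit/s to GB/s
pcie2_x16 = PCIE2_GTPS_PER_LANE * ENCODING_EFFICIENCY * LANES / 8
print(f"PCIe 2.0 x16, each direction: {pcie2_x16:.1f} GB/s")        # 8.0 GB/s

flexio_read, flexio_write = 32.0, 44.8   # GB/s, as quoted for Rambus' FlexIO
flexio_total = flexio_read + flexio_write
print(f"FlexIO aggregate: {flexio_total:.1f} GB/s")                 # 76.8 GB/s
print(f"FlexIO vs. PCIe 2.0 x16: {flexio_total / pcie2_x16:.1f}x")  # 9.6x
```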

But an obvious problem with homogeneous multi-die GPUs is that a high-end GPU consisting of two dice will nearly always be about two times faster than a performance-mainstream GPU with one die. Of course, AMD's GPG will still be able to sell lower-clocked dual-die GPUs, but it is not obvious that such a solution would be viable from a financial standpoint.

Therefore, with a large gap between the price and performance of single-chip and dual-chip products, it may be hard to form a comprehensive graphics card lineup that covers all the price and performance segments. If AMD decides to install four homogeneous chips/dice on a high-end graphics card, three on a performance-mainstream board and two on a mainstream solution, leaving one to serve the low-end, then its driver team will have to spend a substantial amount of time tweaking each video game for single-, dual-, triple- and quad-GPU/die graphics sub-systems – a task that neither ATI nor Nvidia has so far been truly successful at.

Performance of all modern [homogeneous] multi-GPU solutions depends on drivers, and if the driver does not recognize an application, the performance of a dual-, triple- or quad-GPU graphics solution may be similar to that of a single-chip graphics card. Theoretically, ATI Catalyst driver developers could force the so-called alternate frame rendering (AFR) multi-GPU technology for all unknown applications in CrossFire configurations, but this may add lag effects in numerous games.
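To see why blindly forcing AFR adds lag, consider a toy round-robin dispatcher (hypothetical code, not ATI's Catalyst driver): each GPU renders every Nth frame, so more frames sit in flight between the moment input is sampled and the moment the frame reaches the screen:

```python
from collections import deque

class AFRScheduler:
    """Toy alternate-frame-rendering dispatcher: frame N goes to GPU N % count.

    More GPUs means more frames in flight at once, which raises throughput
    but also deepens the queue between input sampling and display -- the
    lag effect mentioned above.
    """

    def __init__(self, gpu_count: int):
        self.gpu_count = gpu_count
        self.in_flight = deque()  # frames submitted but not yet displayed
        self.next_frame = 0

    def submit(self) -> int:
        gpu = self.next_frame % self.gpu_count  # round-robin assignment
        self.in_flight.append((self.next_frame, gpu))
        self.next_frame += 1
        return gpu

    def present(self):
        # Frames complete in submission order; queue depth approximates the
        # extra frames of input lag a deeper AFR pipeline introduces.
        return self.in_flight.popleft() if self.in_flight else None

scheduler = AFRScheduler(gpu_count=4)
for _ in range(4):
    scheduler.submit()  # fill the pipeline: one frame per GPU
print(f"frames in flight before the first present: {len(scheduler.in_flight)}")
```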

Or May Not

While homogeneous graphics solutions have been widely discussed in recent years, the first successful consumer 3D graphics accelerators, as well as professional 3D boards for workstations until recently, utilized a heterogeneous multi-chip architecture, where all (or nearly all) the chips onboard had different functionality. Maybe this is the way to go?

Nowadays, different units within a graphics processor have different memory bandwidth requirements; moreover, certain parts of the chip do not communicate with others at all. As a result, heterogeneous multi-GPU boards, or heterogeneous multi-die GPUs, may become feasible alternatives to ultra-complex single-chip GPUs and to triple-/quad-die homogeneous multi-GPU/multi-die solutions.

Unfortunately, heterogeneous multi-GPU/multi-die GPU solutions will almost certainly not be viable for mainstream and entry-level graphics cards, where price matters quite a lot. Thus, ATI/AMD would have to develop single-chip solutions for the segments of the market where price matters and create heterogeneous multi-chip/multi-die graphics products for those who demand ultimate performance. In both cases, the gap between the price and performance of mainstream and high-end graphics sub-systems is likely to be fairly wide.

In the end, both homogeneous and heterogeneous multi-GPU/multi-die GPUs have their advantages and disadvantages. So maybe a single-chip high-performance graphics card still has a reason to live?

Officials for ATI, the graphics product group of AMD, did not comment on the news story.

Discussion

Comments currently: 14
Discussion started: 12/05/07 11:01:07 PM
Latest comment: 12/10/07 09:13:35 AM

1. 
Just another concept that's gonna be paper launched... with a load of crap performance...

Just hot air blowing out of their a$$
[Date: 12/05/07 11:01:07 PM]

 
Couldn't have said it better myself, although I'm no fanboy. What is going to happen in the end is what has happened in the past 3 years. AMD is going to make a chip that is power hungry, hot, expensive, big, ugly, underperforming, and useless.
[Date: 12/06/07 10:17:41 AM]
 
Do the world a favor and shut the hell up. Not only did your post reek of fanboyism, it also proved that you do not have a clue what you are talking about. Perhaps you have forgotten about the Pentium 4 vs. Athlon 64? Maybe you are keeping away the painful memories of when AMD CPUs could match the performance, in games, of Intel CPUs that were clocked up to a gigahertz faster and had twice the amount of L2 cache?

Don't bother posting here again, you retarded shill.

[Date: 12/06/07 01:08:55 PM]
 
Relax, nitwit, and check what I said.

AMD (ATI) graphics chips have been the most power-hungry, underperforming pieces of junk in recent years, and they were always late to market, from the X800 to the 2900. Along with the new AMD CPUs, which are nothing more than a VERY LATE embarrassment, AMD has nothing to offer.
It's not fanboyism, it's fact, you shokolat.
[Date: 12/06/07 05:26:14 PM]
 
My apologies, Mr. BonBon, I jumped to the conclusion that you were just ranting about CPUs when the topic was GPUs. It pains me that AMD bought ATI because of ATI's extremely poor track record with timetables and the power management/performance of their X1000 and X2000. I guess I subconsciously refuse to label ATI as AMD because I wish AMD had been smart enough to buy Nvidia instead.

My one experience with ATI left me with bad feelings about them. I bought a 9800 Pro with 128MB of 256-bit RAM and it kept crashing in my favorite game of that time, UT2004. ATI must be run by idiots for not having public driver betas like Nvidia does. The piss-poor drivers are, in my opinion, the biggest thing holding ATI back, along with terrible power drain and lateness.
[Date: 12/06/07 06:02:16 PM]

2. 
I think the R700 will beat Nvidia 8800 GTX performance ; )
[Date: 12/06/07 06:13:42 AM]

3. 
The simplest solution for AMD to increase performance would be to just use more TMUs and ROPs!!

That being said, I do believe that such a solution could be possible now, but only WITH AN EFFICIENT HIGH-SPEED INTERCONNECT. I say this because of the way the ATI GPUs have their shaders arranged. If the schedulers can communicate properly, it would effectively be like having one GPU with 640 shaders. You would have one scheduler making all the decisions and passing information to the second GPU. Even though that would increase shader power, I could see the communication needed for the texture units to work together as if they were on the same GPU and form one image QUICKLY being the main bottleneck. Oh well, I guess we'll see.
[Date: 12/06/07 10:54:52 AM]

