

The head of software developer relations at Advanced Micro Devices said that the company's Fusion concept makes a lot of sense for next-generation game consoles, as it offers a number of capabilities that enable unique advantages.

"I think a Fusion-based system makes a huge amount of sense for next-generation consoles. If you are looking at a system that can provide a great deal of horsepower, the Fusion architecture certainly makes sense. With the processing power on its CPU in addition to just general graphics performance, I think it is really interesting because it gives a bit of headroom," said Neal Robison, senior director of content and application support, in an interview with X-bit labs that was updated late on Sunday.

It is interesting to note that current-generation video game consoles already use multi-core microprocessors, and the Sony PlayStation 3 even uses the Cell heterogeneous multi-core microprocessor. AMD's Fusion concept is indeed a heterogeneous multi-core chip, as it contains x86 processing cores as well as Radeon stream computing elements. It looks pretty natural for AMD to offer either a Fusion-based system-on-chip (SoC), or a combination of a heterogeneous multi-core accelerated processing unit with a high-speed stream processor inside along with a discrete graphics chip with fixed-function hardware. The first scenario seems more likely.

"I see the Fusion architecture as capable of scaling both up and down. We’ve already talked in the past about the role of the Fusion architecture in areas such as server, and we think that our architecture is strong enough to be able to scale to many different usage scenarios," said Mr. Robison.

It is noteworthy that Nvidia Corp. also pins a lot of hope on its Project Denver, which is Nvidia's approach to fusing CPUs and GPUs.

"Project Denver will support a range of systems from laptops to supercomputers. It is still a product in development, so I can't provide any more detail about potential platforms than that," said Ken Brown, a spokesman for Nvidia, in a conversation with X-bit labs.

Nvidia will integrate general-purpose ARM processing core(s) into a chip that belongs to the Maxwell family of graphics processing units (GPUs). The Maxwell-generation chip will be the first commercial physical implementation of Nvidia's Project Denver. Nvidia Maxwell will be launched in 2013, as revealed at Nvidia's GPU Technology Conference in September 2010. Given the timeframe, it is logical to expect 20nm process technology to be used to manufacture Maxwell. The architecture, due almost three years from now, will offer a whopping 14 - 16GFLOPS of double-precision performance per watt, a massive improvement over current-generation hardware.
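To put the per-watt figure in concrete terms, here is a rough back-of-the-envelope sketch; the TDP values below are illustrative assumptions, not figures from Nvidia:

```python
# What the cited 14-16 GFLOPS/W of double-precision performance would
# mean in absolute terms at a few typical GPU power budgets.
# The TDP values are illustrative assumptions, not Nvidia figures.
for tdp_watts in (100, 150, 250):
    low, high = 14 * tdp_watts, 16 * tdp_watts
    print(f"{tdp_watts} W part: ~{low / 1000:.1f}-{high / 1000:.1f} TFLOPS DP")
```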

Tags: AMD, Fusion, Radeon, ATI, Nvidia, Project Denver, Maxwell, Geforce, 20nm, Echelon


Comments currently: 16
Discussion started: 03/28/11 07:30:47 AM
Latest comment: 03/30/11 07:44:59 PM


Two issues that are overlooked

1) Console makers don't want x86, because the temptation to hack the system to run all that x86 software is huge; keeping it PPC tempers some of that desire.

2) Backwards compatibility can be affected; while not as big a deal, if the GPU is from a different vendor it introduces more work.

Sony and Microsoft definitely will not use x86; Nintendo will probably move towards ARM, if not PPC.
0 0 [Posted by: Professor Freeze  | Date: 03/28/11 07:30:47 AM]

Hacking consoles to run general x86 software doesn't make much sense, because you don't gain much, and your console will be banned from online services.

Using hacked consoles to run cracked games is an existing problem, though, even though current consoles don't use x86 processors.

Backward compatibility can be provided; it's not a real issue, though it is a problem to solve.
0 0 [Posted by: Martian  | Date: 03/28/11 09:10:50 AM]

Console manufacturers usually subsidize their consoles, selling them at a loss to build up the installed base quickly. They make most of their money from game royalties.

The issue here, then, is that if, for example, a Fusion-based console could be hacked to run x86 code, and thus basically be a PC, people will just buy it to use as a PC, because it will most probably be cheaper than a similarly specced PC. The original Xbox was capable of running Windows, IIRC.
0 0 [Posted by: aegisofrime  | Date: 03/29/11 06:46:44 AM]

The Xbox 360 already has a multi-core PowerPC CPU and an ATI GPU on one chip. Fusion is just the same thing, only it has an x86 CPU and a newer GPU. To the console makers this is hardly an exciting new thing.

My bet for the future is:-
Nintendo: First to come out with something, probably some super-charged tablet SoC. I bet they use a quad-core ARM + PowerVR graphics.

MS: Arm cpu + ati gpu.

Sony: Some special custom version of nvidia's project denver offshoot developed just for them.
0 0 [Posted by: Dribble  | Date: 03/28/11 09:53:07 AM]

Actually, Fusion is not the same thing: from a design perspective, the GPU and CPU will be on a single package, meaning lower power and more form factors. Being Fusion means that the CPU will have access to high and wide GPU memory bandwidth. An old Intel Core 2 Duo along with a 5770-series GPU will outperform both PowerVR's and ARM's current offerings!
0 1 [Posted by: redeemer  | Date: 03/28/11 02:12:10 PM]
"Being fusion means that the cpu will have access to high and wide gpu memory bandwidth."

It's the other way around in PCs (the only current implementation of Fusion) - the GPU only gets access to the low and slow CPU memory bandwidth.
0 0 [Posted by: Dribble  | Date: 03/29/11 01:39:35 AM]
Actually, future Fusion APUs, as well as Ivy Bridge, will use silicon-stacked GPU memory. I didn't say the memory will be faster; since it will be GPU memory, it will have huge bandwidth. Of course, next-gen consoles are not due for a couple more years.
0 1 [Posted by: redeemer  | Date: 03/29/11 05:34:18 AM]
The Emotion Engine used in the PS2 was the first such technology, but the 360 is just like Intel's Core i series, where the GPU and CPU are just two separate cores.
0 0 [Posted by: PFX  | Date: 03/28/11 03:19:21 PM]
"The Emotion Engine used in the PS2 was first such technology but the 360 is just like Intel's core i series where the GPU and CPU are just two separate cores."

Not really; the Emotion Engine is the first "hybrid" architecture design, which combines a traditional CPU with vector SIMD execution units (VU0 and VU1 in the Emotion Engine, which process all the physics and even T&L). The idea was further extended in the later Cell BE, and developed into today's AMD Fusion/IBM POWER7 heterogeneous concepts.

Anyway, the first combined GPU + CPU was still from the Sony camp. It should be the EE+GS chip for the PS2 slim, which combined the CPU and GPU of the PS2 into the same die. But as you mentioned, it is more like an Intel Core i series, which just puts two separate dies into the same package.
0 0 [Posted by: RoyalHorse  | Date: 03/30/11 07:44:59 PM]


80 stream units in the current AMD Fusion might get you Wii-like performance. Even the 400-stream version for high-performance desktops, available later this year, is similar to current consoles. They need 4x that performance, with a 150W TDP discrete GPU, for next-gen consoles: 1080p60, twin-display and 3D capability. They should use similar CPU and GPU technology to help backward compatibility (shared libraries).
0 0 [Posted by: tygrus  | Date: 03/29/11 01:19:37 AM]

"Even the 400 stream version for high performance desktops available later this year is similar to current consoles"

I'd have to disagree. The PS3 is using something close to an NV 7800 GT, and the Xbox 360 something close to a Radeon X1800; the difference is that developers can maximize their efficiency and work around their weaknesses because it is a closed platform.
0 0 [Posted by: goury  | Date: 03/29/11 03:51:56 AM]
I would tend to agree with this assessment. Working in closed systems means higher optimization. On the other hand, as we often see lately, this will be bad for PC gamers, as all AAA games will be optimized for consoles and the PC gaming experience will suffer. (Or the rollout will be delayed considerably.)
0 0 [Posted by: solearis  | Date: 03/29/11 04:33:41 AM]
Right, GPUs used in current consoles are painfully outdated.
0 0 [Posted by: Martian  | Date: 03/29/11 01:12:54 PM]
I agree with you. The API and OS are much more lightweight and efficient than on a PC. BTW, one of the key differences between console and PC is bandwidth.

Besides local memory, the PS3's GPU has a "turbo-cache"-like feature to let it access the high-speed Rambus XDR main memory shared by the CPU, effectively doubling its memory bandwidth. The Cell CPU and GPU are linked by the Rambus FlexIO bus, which is 10x faster than the PCI-E x16 used in PCs!

In the X360, 10MB of ultra-high-speed eDRAM is used for the GPU, which acts like a large L2 cache dedicated to the GPU! That's why the X360 is very good at AA!
0 0 [Posted by: RoyalHorse  | Date: 03/30/11 12:21:39 AM]

"The architecture due in almost three years from now will offer whopping 14 - 16GFLOPS of double-precision performance per watt, a massive improvement over current-generation hardware."

Are you kidding? Have you received incorrect information from Nvidia? Just 16GFLOPS by 2014? The original 2006 PS3 Cell BE already offered 185GFLOPS in single precision and 12GFLOPS in double precision. IBM announced an improved Cell in 2008 focused on tweaking double-precision performance, reaching 102GFLOPS in double precision and powering several world-class supercomputers, including the world's fastest one in 2008. All of that was already 3-5 years ago!

Actually, IBM blade servers powered by Cell BE were well distributed before the launch of the PS3! My university already had research code running on IBM Cell blades dating back to late 2004 (already reaching over 80% of the claimed performance level by that time)! Don't tell me that by 2014 a Project Denver APU will still be slower than the IBM Cell of 2004?

By the way, the Cell SDK and compiler supplied by IBM are highly fine-tuned. It's very easy to achieve over 90% of the proclaimed performance on Cell. Some well-designed code can even hit an efficiency rate of 98%! That's something ATI and Nvidia still need to work hard on!
0 0 [Posted by: RoyalHorse  | Date: 03/30/11 12:05:22 AM]
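As a quick sanity check of the efficiency rates cited in the comment above, applying them to the quoted ~102GFLOPS peak gives (a sketch using only the numbers from the comment):

```python
# Achieved double-precision throughput implied by the efficiency rates
# cited above, applied to the quoted ~102 GFLOPS peak of the 2008 Cell.
peak_dp_gflops = 102.0
for efficiency in (0.80, 0.90, 0.98):
    achieved = peak_dp_gflops * efficiency
    print(f"{efficiency:.0%} efficiency -> ~{achieved:.1f} GFLOPS DP")
```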

