News
 

UPDATE: Clarified Intel Corp.'s position and technology implementation.

Advanced Micro Devices said that making special-purpose GPU-based accelerators compatible with CPU sockets makes no sense. The approach advocated by AMD is vastly different from the one allegedly pursued by its arch-rival, Intel Corp., which recently demonstrated its Knights Corner accelerator in the LGA form-factor used for microprocessors.

In a conversation with X-bit labs' Anna Filatova, AMD's leading software expert Neal Robison said that the Fusion architecture - which integrates general-purpose [x86] processing cores with the highly-parallel stream processors of Radeon GPUs - is a better solution for high-performance computing than installing special-purpose accelerators into CPU sockets. According to AMD, "it makes more sense from the software developers' standpoint". Besides, the investment into tools "has already been made so we might as well use it". It looks like the once-proposed Torrenza platform is no longer even considered viable.

"APU is a better and cleaner solution than sticking a GPU in the same socket," said Neal Robison.

 

AMD once - in the mid-2000s - proposed a solution, Torrenza, that allowed special-purpose accelerators to be installed into the same sockets as its Opteron microprocessors for servers. Although no HPC supercomputer projects based on Torrenza were ever launched, the idea lives on: Intel reportedly wants its Knights Corner highly-parallel accelerator to be installable into the sockets used by Xeon server chips.

While graphics processing units (GPUs) provide extremely high theoretical compute performance, their real-world speed is limited by a number of factors. The main factor is software, which is not efficient enough to take full advantage of what the GPU has to offer and also cannot use features already available in x86 microprocessors; another factor is the PCI Express bus that connects CPUs and GPUs and imposes bandwidth and latency limitations; yet another is performance per watt, as GPUs offer greatly improved performance per watt compared to CPUs. Accelerated processing units, or APUs, largely eliminate every drawback of the CPU+GPU structure except the software factor.
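A back-of-envelope model makes the interconnect drawback concrete. All numbers below are assumptions chosen for illustration (roughly PCIe 2.0 x16 bandwidth and plausible sustained compute rates of the era), not measurements:

    /* Sketch: when does GPU offload pay off once PCIe transfers are
       counted? All figures are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        const double bytes          = 256e6; /* 256 MB working set         */
        const double pcie_bps       = 8e9;   /* ~8 GB/s, PCIe 2.0 x16      */
        const double gpu_flops      = 500e9; /* assumed sustained GPU rate */
        const double cpu_flops      = 50e9;  /* assumed sustained CPU rate */
        const double flops_per_byte = 4.0;   /* arithmetic intensity       */

        double work  = bytes * flops_per_byte;
        double t_cpu = work / cpu_flops;
        double t_gpu = work / gpu_flops          /* compute              */
                     + 2.0 * bytes / pcie_bps;   /* copy in and back out */

        printf("CPU: %.3f s  GPU incl. transfers: %.3f s\n", t_cpu, t_gpu);
        return 0;
    }

With these figures the transfers (about 64 ms) dwarf the GPU's own compute time (about 2 ms) and make the offload slower than simply using the CPU; the break-even point under these assumptions is roughly 14 FLOPs per byte of data moved. An APU with a shared memory pool removes the transfer term entirely, which is exactly the drawback described above.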

The current model of high-performance computing, as well as that of typical PCs, involves numerous technologies, including various interconnects. Only time will tell which approach will prove the most efficient.

Tags: AMD, Torrenza, Opteron, Xeon, Nvidia, Larrabee, FireStream

Discussion

Comments currently: 15
Discussion started: 12/11/11 09:52:06 PM
Latest comment: 12/19/11 01:18:59 AM


1. 
I agree with AMD here, but I want to add the point of targeting regular consumers as the starting point for every new technology. Just take a look at current successful products such as Core iX, Radeon HD4xxx and GeForce 200+... even though these products provide a lot of capability for non-consumer markets like servers or HPC, their success started with regular consumers... this point is what is missing from AMD's x86 CPUs. AMD designed their CPUs (after the Athlon 64) with much more consideration for the server/WS side than for the regular consumer... and that is where they fail to impress/compete against rival products that targeted regular consumers first and were then optimized/modified for server/WS customers...

I know designing special-purpose silicon or improving IPC is hard and requires a lot of R&D resources, but this is what drove the success of the Core iX family... they added IPC and that special media-accelerating silicon on the CPU (encoding)... the IPC part is very important and benefits both regular and server/WS customers... and even the hardware media acceleration can work in some WS cases... and if they designed it well, with some flexibility, they could even use the same logic for other WS-intensive work or even server workloads (like image manipulation/video processing for server-side web-hosting services)... I know they won't go for high-performance CPUs now but will concentrate on their APUs... but they can use their APU success to drive themselves (after some time, when they can stand up again) into high-end CPUs later with better targets...
0 0 [Posted by: Xajel  | Date: 12/11/11 09:52:06 PM]

 
That's what I thought was the biggest mistake of AMD: their mobile chips sucked. The Phenoms and the Bulldozer might not be impressive, the Athlons were OK, but that's not the big deal!
In times when desktop share was constantly going down and most people and companies adopted mobile devices (I don't mean tablets here) as their everyday devices, AMD had nothing to offer. Their desktop CPUs might not be impressive, but they seem good enough and offer good price/performance; their mobile CPUs in the last 10 years, however, were abysmal.
We haven't bought a new desktop since probably 1998. We use primarily notebooks. I have personally tried to buy AMD-based notebooks, but I returned them every single time. Intel-based machines offered much better performance and battery life.
And being bad, AMD CPUs usually go into notebooks of poor quality, thus not worth buying even though they are cheap.
I have no idea if these new APUs are any better, but my last three personal notebooks featured Intel CPUs and ATI GPUs and I was very satisfied with this combination.
0 1 [Posted by: Zingam  | Date: 12/12/11 02:09:51 AM]
 
Now AMD has the same overall power consumption. In graphics-intensive workloads it's actually better, sometimes by a long shot.

But pay attention to the capacity of the battery and use it to compare laptops; sometimes a system consumes less but also has a smaller battery.
0 0 [Posted by: sirroman  | Date: 12/13/11 06:21:50 AM]

2. 
How about completely integrating the GPU into the CPU? Is that possible? Wouldn't it be better if the separation between GPU and CPU completely disappeared? Currently the GPU can do some calculations, but they are still very simple compared to what the CPU can do.
Such integration would obviously completely change the PC architecture. Then, instead of changing your GPU, you could just add another processor. And the PC would be more like modern supercomputers or blade servers.
Or maybe what I'm talking about is a better Cell processor.

Do you think that such integration could happen soon?

I have read that initially the CPU was very simple but gradually it swallowed lots of external components that were separate chips before. Even sound processing is done on the CPU these days.
1 1 [Posted by: Zingam  | Date: 12/12/11 01:55:47 AM]

 
That's the AMD plan. Search for slides of the AMD APU Fusion roadmap, where they clearly show an image in which the CPU and GPU are completely merged. You cannot tell which part is CPU and which is GPU. Complete fusion.
They are predicting that product to launch in 3 years.
2 0 [Posted by: MySchizoBuddy  | Date: 12/12/11 06:59:20 AM]

3. 
yet another thing is performance per watt as GPUs offer highly improved performance per watt compared to GPUs.


Which is which?
2 0 [Posted by: Zingam  | Date: 12/12/11 01:59:14 AM]

4. 
Only time will tell which will be the most efficient one.

Indeed!
1 0 [Posted by: Pouria  | Date: 12/12/11 02:58:53 AM]

5. 
AMD's statement makes no sense. A GPGPU in another CPU socket is a great idea. There is higher data transfer and lower latency than between a CPU and a PCIe GPGPU, or even between the CPU and GPU on a Fusion die. A separate GPGPU may be bigger and more performant, and may also be easily replaceable with another specialized coprocessor or a normal CPU, depending on demands.
AMD spreads stupid comments about everything they cannot design and produce, while praising their primitive products.
1 2 [Posted by: Tristan  | Date: 12/12/11 06:04:55 AM]

 
What are you going on about? If you put a GPGPU in a CPU socket, then it needs to communicate with another GPGPU or CPU via a socket-to-socket interconnect bus, the same bus that multi-socket motherboards use, which is a lot slower than putting the GPU on the die with the CPU.

Nothing can be faster than a GPU and CPU on the same die.
2 0 [Posted by: MySchizoBuddy  | Date: 12/12/11 07:04:03 AM]
 
'Nothing can be faster than a GPU and CPU on the same die' - yes, if the CPU and GPU are well integrated. This is not the case for Llano: the CPU and GPU communicate via... main memory.
http://hothardware.com/Ne...os-Bandwidth-Sensitivity/
0 0 [Posted by: Tristan  | Date: 12/12/11 07:38:06 AM]
 
And who said that Llano is the last iteration of APUs?

Moreover, it doesn't make sense to build a special communication path between the CPU and GPU when later on you won't need one, since the CPU and GPU will be one. Do you see now what AMD did there? ;D

Bobcat actually has better integration between CPU and GPU, AFAIK. But both (Bobcat and Llano) are the "firsts" of their kind.
0 0 [Posted by: sirroman  | Date: 12/13/11 06:29:51 AM]
 
Putting a GPU in a socket on the motherboard makes no sense. GPUs use different memory than CPUs: they use GDDR5, which is different from the DDR3 used by CPUs.

So you would not only need an empty socket but also extra RAM banks for new memory sticks that don't even exist yet.
0 0 [Posted by: sector7  | Date: 12/19/11 01:18:59 AM]

6. 
adding anything other than a PCIe interconnect is verboten?

yet AMD is totally happy making a really well integrated Fusion product line that has GPU and CPU on the same chips, sharing memory?

it kind of makes sense: don't use proprietary CPU interfaces when PCIe works just fine. i'd love to find some hype cycles about HT-powered Radeons; i think that was a thing.

my skepticism is thus: it's true CPUs are boasting more and more PCIe bandwidth directly, but it seems odd to say that the graphics card, the highest-bandwidth device of all, cannot play in the system's fabric and has to stick around as a PCIe client of some particular socket.

i wonder if this will impact Cray's Ares/Cascade planning; it sounded like they wanted to promote the GPU to being directly on fabric.
0 0 [Posted by: rektide  | Date: 12/12/11 08:56:31 AM]

7. 
Good news post, but could benefit from proof reeding.
0 1 [Posted by: Prosthetic_Head  | Date: 12/12/11 09:18:05 AM]

 
That's "proof reading", genius.
1 1 [Posted by: HumanSmoke  | Date: 12/13/11 02:21:42 PM]

