The Mix Between CPUs and GPUs in Servers Will Change – AMD

AMD Expects GPUs to Play More Important Role in Servers

by Anton Shilov
06/25/2010 | 02:00 PM

Graphics processing units (GPUs) are gaining market acceptance in the high-performance computing (HPC) space and are also gaining hardware features specifically designed to improve their compute performance. While central processing units (CPUs) will remain in use in the long term, the mix between them and GPUs in servers will change, according to AMD.

“I think what CPUs and GPUs can do is change the mix [of their existence inside servers]. The x86 CPU architecture is on the curve towards the end of Moore’s law due to power limitations… It is hard to build an exascale server using purely x86 technology because that server is going to be too big and will consume too much power. There are already scalability limitations and some of them are pretty thoroughly explored. So, the mix between CPUs and GPUs will, I think, potentially, change,” said Patricia Harrell, director of Stream computing at AMD, in a conversation with X-bit labs.

Some fifteen years ago, all supercomputers were based on various proprietary chips; today, the vast majority of HPC systems are powered by x86 chips. Graphics chips are now moving into that space and, going forward, have a chance to displace conventional CPUs in that market. They are unlikely to replace microprocessors completely, however: at the end of the day, it is still impossible to run an operating system on a graphics chip, and some code simply has to run on a central processing unit. What is coming, in general, is the dawn of heterogeneous computing, not the dawn of all-around GPU computing.

“There is a market trend towards heterogeneous computing and what we call Fusion architecture. There is tremendous potential for that kind of technology in the HPC space as well. On the software side there are research projects investigating how game developers can use all of the resources available on a system. So, they do not have to think exclusively what architecture to use for their application,” said Ms. Harrell.

Even Intel Corp. admits that the mix between central processors and graphics/stream processing chips inside servers is set to change, as evidenced by its announcements of Larrabee and, eventually, the MIC architecture. Still, there will be applications that rely on pure x86 products, according to AMD.

“I think there will always be applications that run on one architecture better than on another. I believe there will always be a space for discrete GPUs and there will always be a space for x86 technology that can run traditional applications. So, there will always be a place for both architectures, and the question for an application is what the mix of CPUs and GPUs is and how it is architected,” said the director of Stream computing at AMD.

At present, AMD offers FireStream compute accelerators for HPC servers based on ATI graphics chips. In the future, the company plans to integrate CPUs and GPUs onto the same piece of silicon for both client and server markets.

It is interesting to note that one of the sessions at the Hot Chips 2010 symposium, which is dedicated to various processing and server technologies, is entitled “Surviving the End of Scaling of Traditional Microprocessors in HPC”. Specialists from Schlumberger and Stanford University are going to discuss the limitations of today’s microprocessors when it comes to HPC servers.