AMD Talks Fusion: Vision, Solutions, Software

AMD's Fusion technology is finally here. At present, AMD Fusion platforms only power low-end personal computers, but in the coming months AMD will introduce Fusion chips for mainstream PCs. What about the future of the Fusion program, though? Will it power high-end desktops? Maybe next-generation game consoles? What advantages can Fusion bring to end users? Neal Robison, the head of AMD's software developer relations department, answers these questions here and now.

by Anton Shilov
03/18/2011 | 08:05 PM

UPDATE: Comments regarding next-generation game consoles have been added (see "Fusion for Next-Generation Consoles?").

 

Advanced Micro Devices this month announced that 50 applications can now be accelerated by the company's Fusion accelerated processing units (APUs), such as Ontario and Zacate, as well as Llano and other chips that will come out later. The number marks a definite success for AMD in promoting its APU technology, as well as GPGPU [general-purpose computing on graphics processing units], among software developers. But naturally, 50 applications are not a revolution. Therefore, we decided to ask AMD about the company's future plans.

As we know, AMD Fusion is not a single project; it may be called a program, a global plan, or AMD's vision of its long-term future. That future involves many pieces of a puzzle: software applications that run efficiently on heterogeneous multi-core microprocessors, hardware designs that provide decent performance in real applications, business approaches that make those programs and hardware solutions widely available, and a general vision of the future of the industry.

Today we are talking with Neal Robison from AMD about the company's software efforts, future hardware as well as his vision of the industry in the coming years.

X-bit labs: Hello Neal, please introduce yourself to our readers and tell us a little more about yourself and your daily operations.

Neal Robison: My name is Neal Robison. I've been working in the software and gaming industry for over twenty years - so I've seen some incredible innovation throughout my career. For the last 6 years, I've worked at AMD, leading the developer relations team. My official title is Senior Director of Content and Application Support. My team oversees the technical, business and marketing relationships between AMD and software developers all over the world.  A huge part of that work is focused on game developers. We help teams deliver the best experiences on AMD hardware.

What Neal does to ensure high performance of AMD's Fusion accelerated processing units (APUs), Phenom/Athlon central processing units (CPUs) and Radeon graphics processing units (GPUs) is crucially important for AMD and its success in the short-term and mid-term future. But what Mr. Robison tries to achieve while evangelizing APUs, GPGPU, heterogeneous multi-core compute technology and so on is vital not only for AMD, but also for the industry at large. If a software designer knows how to better utilize a heterogeneous multi-core chip with x86 and Radeon stream compute cores, he or she will be able to tune applications for other types of heterogeneous solutions. For example, many believe that exascale supercomputers will be hybrid and will utilize different types of computing cores (e.g., combine CPUs with GPUs), and hence Mr. Robison's work essentially affects much more than just desktop or laptop personal computers.

Software for Fusion

One of the problems with software that can take advantage of the stream compute capabilities of APUs and GPUs is that it is pretty hard to find. Some applications support such acceleration, some do not, and end users therefore have to search for the ones that do. Perhaps it makes sense to create a special APU/GPGPU app store that will sell programs tailored for new classes of compute hardware.

Application Store? Or Maybe Not?

X-bit labs: 50 applications is a significant number. Maybe it is time to think about your own APU/GPGPU app store? None of your partners have created such a store so far, and even you have not yet been very active in advertising those APU/GPGPU apps via your drivers.

Neal Robison: We are moving forward. We talked about this at CES [with industry journalists] earlier this year. We are marching towards it... but we do not have any dates. [...] Consumers need a good central place to be able to [locate] isolated applications for Fusion; many of those applications are OpenCL- or [Direct]Compute-based. I think it will be a good service for consumers to have them all in one location.

X-bit labs: Certainly, it is easy to grab programs from one place. But there is no such place for APU/GPU-accelerated software...

Neal Robison: Those who acquire Fusion-based PCs really, really want to get applications that are going to make their devices perform really well. That would be a reason for having one central place. That [place] would not prevent our partners among software developers from showcasing [their programs] on their own web-sites or in other locations. We just want to make sure that consumers have at least one place where they can get all the information they want about the applications which perform the best on their particular device.

There are plenty of non-APU/GPU-accelerated applications that will perform very well on Fusion, so we have to look at how we will define what will be in that app store, if you will.

X-bit labs: So, would you consider creating a store that will sell programs that run best on AMD-based PCs? Or maybe just a web-site that could pinpoint appropriate programs for users' needs after detecting what hardware the consumer uses?

Neal Robison: While we have considered the possibility of creating an AMD app store, we are more interested in helping consumers gain access to innovative applications, and not necessarily focused on creating a commercial app store. We do see a need for a centralized, one-stop location where users can easily find applications that are accelerated on AMD Fusion APU-powered PCs. And we believe there’s an important educational component that comes with introducing a new technology such as AMD Fusion. Information that helps users identify which applications and programs are most compatible with their usage needs would be a natural extension of the AMD Vision approach, which helps consumers select their PCs based on how they use them rather than speeds and feeds.

Encouraging Developers

People are lazy, and so are software developers. Moreover, software developers have to care about stability, quality and backwards compatibility. As a result, many are reluctant to make use of brand-new technologies like GPGPU in order to sustain quality. So, software developers need some kind of backing from hardware designers.

X-bit labs: How do you plan to encourage the usage of GPGPU among software developers? Your technical and marketing support has brought in 50 applications, not a bad number to say the least. But maybe you plan new approaches; maybe you have learnt something from your work.

Neal Robison: This is something we address every single day. With software development, there has always been a learning curve every time something new emerges.

I think of Fusion as a real opportunity for a lot of developers to improve their applications and increase their performance. Traditionally, most developers relied on CPU performance increases: chips became bigger and faster and gained cores. But I think a lot of consumers nowadays want great performance on mobile devices, e.g., notebooks, and they do not have desktop systems with a sixteen-core CPU inside. [So, they need GPU-accelerated applications.]

For this reason, we have worked with these developers [to popularize high-performance GPU-computing technology]. We have found that using OpenCL, making sure that we support the industry standard, was a really good lesson; that is one of the first things we learnt. Simply making them [develop] using a proprietary or closed API is very unnatural for developers. I think I've been around long enough to learn that when you try to make developers do something unnatural, your success will not be very good. Making OpenCL available whether on a PC or a Macintosh, whether it comes from AMD or Intel or Nvidia or Microsoft, [wide] industry support [for the API] has definitely made a world of difference.

Another thing that we do is continue to improve our software development kit (SDK) that [helps] developers take advantage of OpenCL on our platform. We keep it [up to date] and publish a very regular release schedule so that developers can count on lots of new features being enabled with each release of the SDK.

At the same time, we have released the OpenCL university kit; we have several universities, mostly in Europe and North America [...], that are able to teach OpenCL. Some already have parallel programming courses on the curriculum, and the students really want to learn more about OpenCL because they know it is an open standard that will be available across platforms. We have released a course kit that enables professors to put together an OpenCL semester course quite easily.
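
(Editor's note: Mr. Robison's cross-vendor point is easy to see in code. Below is a minimal sketch of an OpenCL host program, written for this article rather than taken from AMD's SDK, that enumerates every OpenCL platform and device on a system; the same source builds and runs unchanged whether the driver underneath comes from AMD, Intel or Nvidia.)

```c
/* enum_cl.c - editorial sketch: list all OpenCL platforms and devices.
 * Build against any vendor's OpenCL ICD, e.g.: gcc enum_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint np = 0;
    clGetPlatformIDs(8, platforms, &np);      /* AMD, Intel, Nvidia... */

    for (cl_uint p = 0; p < np; p++) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        printf("Platform: %s\n", name);

        cl_device_id devs[8];
        cl_uint nd = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &nd);
        for (cl_uint d = 0; d < nd; d++) {
            char dev[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dev, dev, NULL);
            printf("  Device: %s\n", dev);    /* CPU, GPU or APU core */
        }
    }
    return 0;
}
```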

No Games with Hardware-Accelerated Physics

Video games have always been among the most demanding applications around, and such software has every chance to be among the first to get GPGPU acceleration. But will it? It looks like there is not a lot of enthusiasm even for computing physics effects on GPUs. Fortunately, GPGPU is not limited to physics.

X-bit labs: You have been talking about a number of improvements that GPU compute capabilities can bring to PC video games. But can you describe how compute shaders have truly improved actual titles?

Neal Robison: I think most developers started to utilize GPU compute with DirectCompute, a part of the DirectX API. The majority of developers began to use DirectCompute for post-processing of video frames in real time. We have also seen GPU compute used for lighting computation; as you can imagine, in every scene of a video game lighting has become a much more important part of the look and feel of the game, to make it as realistic as possible. Being able to include a large number of light sources and calculate where they are going and how they behave on different surfaces is a really good use for GPU compute. Certainly, we have also used GPU compute for physics. In fact, this month at the Game Developers Conference (GDC) Autodesk released a beta version of a plug-in for Maya that allows developers to preview the graphics [of physics effects computed by a GPU] that will be in a game built using the Maya environment. Those are the primary uses of GPU compute so far.
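
(Editor's note: to make the many-lights workload Mr. Robison describes concrete, here is a hypothetical OpenCL kernel, ours rather than one from any shipping title, that accumulates a simple Lambertian diffuse term from an arbitrary number of point lights, one work-item per pixel. A real engine would do considerably more, but the structure, an independent loop over lights for every pixel, is exactly what maps well onto GPU compute.)

```c
/* lighting.cl - editorial sketch of per-pixel many-light shading.
 * One work-item per pixel; each accumulates all light contributions. */
__kernel void accumulate_lights(__global const float4 *position, /* world-space */
                                __global const float4 *normal,
                                __global const float4 *light_pos,
                                const int num_lights,
                                __global float4 *out_color)
{
    int gid = get_global_id(0);
    float3 n = normalize(normal[gid].xyz);
    float3 acc = (float3)(0.0f);

    for (int i = 0; i < num_lights; i++) {
        float3 l = normalize(light_pos[i].xyz - position[gid].xyz);
        float ndotl = fmax(dot(n, l), 0.0f);  /* Lambertian diffuse term */
        acc += (float3)(ndotl);               /* white lights, no falloff */
    }
    out_color[gid] = (float4)(acc, 1.0f);
}
```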

X-bit labs: When do you expect this Bullet physics Maya plug-in to be finalized?

Neal Robison: The beta plug-in is available for preview by game developers right now. I do not think Autodesk has published a release schedule for the plug-in yet, so the release date is unknown.

X-bit labs: Which developers are previewing the Bullet physics plug-in for Maya?

Neal Robison: The plug-in was just announced earlier this month. I do not have a list of the developers who use it and will only know once Autodesk actually distributes it.

X-bit labs: What about the Bullet physics engine in general? Which developers use it?

Neal Robison: Right now a very small number of PC titles [designed by PlayStation developers] use it. But I know that since Bullet is open-source, studios like DICE have used portions of that library to complement their own physics implementations.

X-bit labs: Should we expect games with OpenCL GPU hardware-accelerated physics in 2011 or 2012?

Neal Robison: I don't think there will be a large number of such games this year. Hardware-based physics does not seem to be a huge priority for software developers. We want to make it available as we have the technology, but it seems like a lot of developers are still choosing to use their own physics implementations simply because they want to make sure that performance and gameplay are consistent for all of their customers. A large number [of game developers] are just really concerned about making sure that the experience the consumer has is consistent no matter what graphics card they have in their system, or whether they have a very powerful discrete GPU or not. The technology is there; some developers will take advantage of it...

We Support Open Standards!

X-bit labs: So, can we say that right now you are more concerned about promoting OpenCL in general rather than OpenCL-based physics in particular?

Neal Robison: I think, in terms of both gaming and non-gaming developers, it certainly makes a lot of sense to support an open standard. We found that [multimedia software developers] chose to use DirectCompute for Blu-ray playback. The bottom line for us is that we support open standards, such as OpenCL and DirectCompute; we feel this is the way to move the whole industry forward.

AMD-Branded Apps

X-bit labs: You have an application that allows users to compress and decompress video. Nice one. Do you plan to create more AMD-branded applications?

Neal Robison: Traditionally, we rely on our partners to create applications that take advantage of our technology. I think that application was a rare instance of an AMD-branded app. We do not want to compete with our own partners. We feel that enabling our partners to make that technology available makes much more sense, as they have a lot more expertise in distributing applications and making them commercially available. You will see us enabling our partners to develop those new unique applications rather than releasing AMD-branded programs.

X-bit labs: How does AMD see the market in the next year or two? What kinds of programs do you expect to become GPGPU-optimized?

Neal Robison: The easiest ones to see will be games and multimedia playback and creation applications. Certainly, we want to take this much further and much broader. For example, the Internet Explorer 9 browser can take advantage of the DirectX interface to accelerate 2D graphics, which is a huge step towards acceleration of everything that we all use every single day. The forthcoming Firefox 4 and Chrome browsers will also take advantage of the GPU. In fact, moving [certain] tasks away from the CPU to the GPU, which is more efficient, prolongs battery life on mobile devices.

Tablet Plans: Yes to Windows, Android, MeeGo Under Consideration

X-bit labs: At present there are loads of interesting opportunities, particularly in the tablet PC market. What are you going to do? How do you plan to address the Android market, for example? Or do you see Windows as the primary opportunity? And what is your stance on the MeeGo operating system?

Neal Robison: OpenGL ES 2.0 and OpenGL ES 3.0 are APIs built specifically for the mobile market, and we support them in a very strong way with our drivers and our development tools. The effort that we have put into optimizing compilers and our APIs will, I think, continue to help in the mobile market.

This quarter we are shipping a tablet built on a Fusion chip. It will only be the first one, and many others based on the Fusion architecture will come out. We believe that Fusion can bring very strong graphics performance to the tablet, or the mobile segment in general. Having a DirectX 11-capable GPU in a tablet would open up huge opportunities for developers who know how to use this API. It is going to require cooperation with Microsoft, obviously; at CES they announced their intention to better support the tablet market, which is something we are working on together with them.

As we look at the open-standards market, Android certainly makes a tremendous amount of sense. That is something we will be investigating as we take our Fusion architecture [into new markets] and create versions of the architecture for lower-power environments that would work quite well for, perhaps, a tablet running that operating system.

MeeGo is pretty interesting from a Linux perspective, but I think we need to see a little bit more market acceptance. We are going to be driven by the folks who actually make the computing devices, e.g., OEMs and hardware partners. [Once they release devices and show their interest], we are going to prioritize our R&D efforts accordingly.

Hardware Plans: APUs for Everyone!

X-bit labs: What prospects do you see for GPGPU in general versus CPUs in different markets? For example, in the case of the Llano and Zacate/Ontario chips you definitely put an accent on stream processing. But what about next-generation desktop and server chips? Will you place a similar accent there?

Neal Robison: The short answer is yes, absolutely. We see that the Fusion architecture, which is more than just GPGPU, certainly lends itself to a pretty interesting performance scaling curve. As you pointed out with Llano and Ontario, the architecture can definitely scale up to next-generation desktops and servers and beyond, and that is definitely something we are planning to do eventually. We have announced that intention at various financial briefings to show that the architecture is strong enough and powerful enough to power desktop machines. We can certainly scale it even further to power higher-end desktops, with the addition of discrete GPUs to make sure it has enough horsepower for every kind of graphics workload that might be thrown at it.

x86 Coming to Tablets, Smartphones

X-bit labs: How do you think you can address emerging low-power markets with your current offerings in general? Intel integrates baseband capabilities into its SoCs; Nvidia uses ARM and believes its exceptional performance will help it gain market share. Is there a place for AMD in smartphones?

Neal Robison: As you might remember, ATI Technologies actually went down this road, so the technology and skills to create system-on-chip devices are present. With our Fusion architecture, I see no problems addressing low-power markets, starting with the tablet strategy. I can see in the future, if smartphones make a lot of sense from our business point of view, whether it is margins or availability, that is something the architecture could certainly scale to.

X-bit labs: Do you believe that x86 is a good solution for smartphones?

Neal Robison: I think there is a definite place for it. Obviously, you have to make some concessions, but I think it is far easier to take x86 and scale it down alongside a good graphics core than it would be for somebody in the ARM camp to try to scale up by adding graphics. We do know that GPUs are very difficult to create with the performance necessary to run the most demanding programs today.

X-bit labs: At present there are no mobile applications that rely on x86. Maybe you should create some software development tools targeted specifically at that market?

Neal Robison: For low-power devices, tools are going to be a critical part of the strategy, something that we [will have to] look at. But right now there is such a wealth of tools available for x86... compilers, as well as just "general" industry knowledge, that I think will definitely help software development for any ultra-low-power device.

Fusion for Next-Generation Consoles?

X-bit labs: What do you think about next-gen consoles and GPGPU? For example, Nvidia believes in its ARM-based SoCs, and Intel is working on x86-based offerings...

Neal Robison: I think a Fusion-based system makes a huge amount of sense for next-generation consoles. If you are looking at a system that can provide a great deal of horsepower, the Fusion architecture certainly makes sense. With the processing power of its CPU in addition to its general graphics performance, I think it is really interesting because it gives a bit of headroom... In a typical console lifecycle you basically try to take advantage of the chipset as best you can at the introduction, and over the lifecycle of the product developers tend to refine and tweak [their titles] in order to take advantage of more power. I think that [cycle] would happen as you see a very good balance between the GPU core and the CPU core. [...] It makes a lot of sense for the next-generation consoles.

X-bit labs: Traditionally, game consoles used custom processors based on various micro-architectures, most recently Power, MIPS, Cell (heterogeneous multi-core Power, if you wish), not x86.

Neal Robison: The original Microsoft Xbox did use x86, and the Xbox has proven that there is great performance to be had with the x86 architecture. [Everything] comes down to tools, again; that is one of the most important aspects of it, as well as being able to really harness the processing power of all the chips involved. Look at the PlayStation 3. I think they thought that the Cell processor was going to be everything that they would need and that it would handle both the traditional CPU tasks and the GPU tasks. It turned out that it simply did not have the horsepower necessary for GPU tasks, so they actually added a GPU to the system quite late in the design of the entire console. I think there is an opportunity for a very good, well-balanced multi-core processor arrangement inside the next-generation consoles.

X-bit labs: Current-generation consoles already use multi-core microprocessors; the PS3 even uses a heterogeneous multi-core microprocessor. Maybe you foresee a console with a heterogeneous multi-core CPU with some high-speed stream processor inside, along with a discrete graphics chip with fixed-function hardware, etc.? Or do you insist that Fusion can be scaled up so high that its CPU and GPU will be able to handle next-generation games at a more or less high, but not extreme, cost?

Neal Robison: Definitely. I see the Fusion architecture as capable of scaling both up and down. We’ve already talked in the past about the role of the Fusion architecture in areas such as servers, and we think that our architecture is strong enough to scale to many different usage scenarios.

What the Future May Look Like... or May Not

Stream Processing to Replace FPU?

X-bit labs: Given your accent on GPU computing, do you foresee a major shift from x86 computing to stream computing?

Neal Robison: In the area of high-performance computing, with some of the supercomputers around the globe, we are seeing a major shift to stream computing from the traditional x86 computing environment. For AMD, when we look out into the future, it is about a good balance between those two worlds. There are tasks that make absolute sense for serialized x86 cores, and there are a lot of tasks that make absolute sense on a parallel-oriented GPU-compute or stream computing core. What we want to have is a well-balanced system that accommodates all of those tasks and takes advantage of all the processing power that is available. So, I don't see one of those things taking over from the other, but we want to make sure the Fusion architecture is there to provide the balance.
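
(Editor's note: the balance Mr. Robison describes is visible in how a typical OpenCL application is structured. The sketch below is our own illustration, with a made-up kernel and sizes: the data-parallel map runs on the GPU, falling back to the CPU device if no GPU is present, while the dependent, serial reduction stays on the x86 core. Error handling is omitted for brevity.)

```c
/* balance.c - editorial sketch: parallel map on the GPU, serial
 * reduction on the CPU. Build: gcc balance.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void square(__global const float *in, __global float *out) {"
    "    size_t i = get_global_id(0);"
    "    out[i] = in[i] * in[i];"  /* parallel part: one work-item per element */
    "}";

int main(void)
{
    enum { N = 1 << 20 };
    static float in[N], out[N];
    for (int i = 0; i < N; i++) in[i] = (float)i;

    cl_platform_id plat; clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;
    /* prefer the GPU, fall back to the CPU device; same kernel either way */
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "square", NULL);

    cl_mem bin  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 sizeof in, in, NULL);
    cl_mem bout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);
    clSetKernelArg(k, 0, sizeof bin, &bin);
    clSetKernelArg(k, 1, sizeof bout, &bout);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bout, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);

    /* serial part: a dependent reduction is left to the x86 core */
    double sum = 0.0;
    for (int i = 0; i < N; i++) sum += out[i];
    printf("sum of squares: %.0f\n", sum);
    return 0;
}
```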


[Image: AMD A-series "Llano" processor. Image by Tech.Icrontic.com]

X-bit labs: Maybe you think it is time to scrap x87, and the FPU in general, and use multi-stream Radeon GPUs instead?

Neal Robison: I think that you cannot have a GPU-only system. We emphasize that the GPU is great for a lot of tasks, but there is certainly a definite role for the x86/x87 multi-core inside a Fusion chip because of the distribution of different tasks. So I would not say that a multi-stream Radeon GPU is going to solve all the world's problems; it was never designed that way. Running an operating system on top of the GPU simply does not make a lot of sense at this time. We want to achieve a good balance of serialized tasks and parallel tasks.

X-bit labs: Don't you think that Intel will be against the shift to GPU computing, given the fact that it does not have a fast enough GPU? Intel [at present] would rather prefer a fat x86 multi-core to something like Fusion.

Neal Robison: Perhaps they would be against the shift. Yet they seem to be supporting our efforts as far as the Fusion type of architecture is concerned; I mean, this is really what Sandy Bridge and the next iterations of it are designed to be. I think that they tried really hard to develop a GPU that would be able to handle the intensive tasks that consumers want today, and they failed. It is not the first time they have failed with a GPU, either. I believe that they are under the notion that all tasks should still be focused on an x86 CPU, and we simply do not see that as the future.

X-bit labs: What do you think about the creation of a GPU based on very small x86 cores?

Neal Robison: I don't think that is going to work. [...] There are parts of the GPU that have been developed and matured over the last twenty years that just make so much more sense for the majority of the tasks they are asked to do. I think that the best of both worlds would be x86 cores and a traditional GPU; that would let developers take advantage of all the power that is there, not just completely rely on one side or the other.

Fusion Opens New Doors

X-bit labs: What types of new applications do you think GPGPU and Fusion computing may create?

Neal Robison: A lot of tasks that we currently do on the CPU will certainly be optimized and accelerated. But the question seems to be about new types of experiences.

We addressed a little bit of that earlier this month with an application like wireless display: you have a notebook and you want to put the "screen" of your notebook onto a larger display at a better resolution, e.g., a TV. GPU computing capability can accelerate the encoding of that video signal and then move it across Wi-Fi so that it appears wirelessly on that screen. This is a very interesting experience! And it could be enabled by something like the Fusion architecture.

Another interesting area is security. Right now, the amount of processing power that is required for some kinds of virus checking, or even for higher-level behaviour analysis, is a little bit limited on lower-power devices (notebooks, etc.). A Fusion-based architecture with GPU compute and OpenCL will unlock a lot of security features that I think will be of interest to both commercial and consumer clients.
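
(Editor's note: virus scanning parallelizes naturally because each buffer offset can be tested independently. The hypothetical kernel below is a naive illustration of ours, not how any real scanner works: it checks one offset per work-item against a fixed byte signature and counts the matches.)

```c
/* sigscan.cl - editorial sketch of embarrassingly parallel signature
 * matching: work-item gid tests whether the signature occurs at offset gid. */
__kernel void match_signature(__global const uchar *data,
                              const uint data_len,
                              __constant uchar *sig,
                              const uint sig_len,
                              __global volatile uint *hits)
{
    uint gid = get_global_id(0);
    if (gid + sig_len > data_len)
        return;                      /* signature would run off the end */

    for (uint i = 0; i < sig_len; i++)
        if (data[gid + i] != sig[i])
            return;                  /* mismatch: this offset is clean */

    atomic_inc(hits);                /* found a match; count it */
}
```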

I see a lot more [opportunities] with multimedia creation and manipulation applications; for example, consumers will get enough horsepower to do pro-like things with audio, video or even stereo-3D objects.

X-bit labs: Do you think that natural user interfaces like Microsoft Kinect will require such large amounts of compute horsepower?

Neal Robison: I do. This is another interesting thing. In the Xbox 360 right now, a lot of the processing needed to interpret what the Kinect cameras see in the 3D environment is performed on the ATI Xenos GPU! I see that unlocked even further on a consumer PC. When it comes to changing the human-computer interface, whether it is motion control or different types of devices used as the interface [that require compute horsepower], the Fusion architecture with GPU compute would absolutely unlock a lot of new experiences.

APUs Set to Challenge CPUs

X-bit labs: A general, even stupid, question that you have received a million times, but a little bit paraphrased: do you think that APUs will eventually challenge standalone CPUs and GPUs?

Neal Robison: I think APUs will definitely challenge standalone CPUs. I believe that the future of consumer as well as commercial computing environments is characterized by the ability to present a compelling visual experience. Taking a GPU core and a CPU core and using them together on one chip will definitely challenge standalone CPUs.

I do not think that APUs will challenge discrete GPUs in anything but the lowest-end systems. When you look at adding a discrete GPU that enhances performance on the graphics side, it makes a huge amount of sense, as it scales [performance] across a wide range of applications because of the rich visual experience that everybody now expects when actually using their computing device.

X-bit labs: But do you think there is a place for standalone CPUs for consumers in the next five years?

Neal Robison: Definitely! In the short-term future there will be standalone CPUs without graphics cores. [...] [It is in our interest] that the transition from [CPUs to APUs] happens as quickly as possible because of the obvious benefits of combining a CPU and GPU together.

X-bit labs: Thank you for your answers, Neal!