Central Processing Units: More Functions, Lower Power
Microprocessors as we know them today are likely to disappear within ten years, perhaps a little more. Already in 2010 we have central processing units with integrated graphics and other features; a decade from now, the solutions will be far more integrated.
At present central processing units are essentially doing the same things that they have been doing for decades. But ten years from now they are more than likely to have transformed rather dramatically. There are several possible paths for such a transformation:
- CPUs absorb graphics engines, but logically they remain separate, and different hardware is used for different tasks. This promises very high performance, since dedicated hardware naturally accomplishes a task faster than universal hardware. Essentially, it means a ~64-core CPU paired with a fixed-function graphics engine with tens of thousands of stream processors.
- CPUs and GPUs converge, so that the same floating-point units process both "serial" data and "parallel" graphics workloads. Multi-core processors are developing rather quickly: the first dual-core x86 chips emerged in 2005, and today we have eight- and twelve-core microprocessors from Intel and AMD. As a result, in ten years it may make a lot of sense to unify the floating-point units of CPUs and GPUs to get better utilization of resources and ensure maximum programmability for graphics applications. The main question, however, is how much programmability graphics engines will need in ten years.
Both approaches naturally have their pros and cons, but the general direction is rather clear at this point: microprocessors and graphics processors are set to become one chip. Of course, there will be standalone graphics cards as well as central processing units for specific tasks that will not be able to process graphics, yet will probably inherit certain stream-processing capabilities to speed up certain operations.
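The split between dedicated parallel hardware and serial cores can be illustrated with a toy sketch. This is purely illustrative, not real driver or hardware code; the `shade` function and the worker counts are invented for the example. The point is that independent per-pixel work divides across many simple workers, while a dependency chain forces one core to walk it in order.

```python
# Toy illustration (not real hardware behavior): data-parallel work such
# as shading pixels splits across many workers, while a serial
# dependency chain cannot be split.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # Hypothetical per-pixel operation -- each call is independent.
    return pixel * 2 + 1

pixels = list(range(8))

# "GPU-style": every pixel is independent, so any number of simple
# workers can process them concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    shaded = list(pool.map(shade, pixels))

# "CPU-style": each step depends on the previous result, so a single
# fast core must execute the chain in order.
acc = 0
for _ in pixels:
    acc = shade(acc)

print(shaded)  # -> [1, 3, 5, 7, 9, 11, 13, 15]
print(acc)     # -> 255
```

The first workload is why fixed-function engines with thousands of stream processors make sense; the second is why "fat" serial cores are unlikely to disappear.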
Intel SCC chip
Apart from the trend towards merging microprocessors with graphics chips, a naturally interesting thing is happening with the micro-architectures of those microprocessors. At present we see Intel taking three distinct routes (and AMD two) with its CPU architectures:
- Generic multi-core CPUs with "fat" cores capable of everything. Such processors are good for client systems and servers since they can handle any workload and do not sacrifice reliability for lower power consumption.
- Many-core CPUs (like SCC or Knights Ferry/Knights Corner) with relatively simplistic cores that have limited performance and feature support by themselves. Such chips are especially good for high-performance computing applications and cloud data centers that require maximum performance per watt. Given that the transition from 32-bit to 64-bit computing has taken about seven years so far and is still not complete, it is unlikely that many-core chips will become the primary CPUs for general servers or clients.
- Low-power Atom CPUs that feature neither "fat" x86 cores like general-purpose multi-core chips nor loads of x86 cores like many-core processors. The chips and systems-on-chips based on them are particularly good for handheld devices.
Given that Intel is more than likely trying to compensate for the lack of a graphics architecture suitable for computing with its MIC (Many Integrated Core) micro-architecture, it is probable that the world's largest maker of chips will eventually try to wed the two designs somehow. Still, generic multi-core CPUs will continue to prevail, as software will hardly be able to benefit from MIC-like architectures.
"The software suppliers (ISVs) will continue to lag behind the hardware and disappoint us with their lack of engagement and exploitation of the hardware as they drive for the lowest common denominator. There will of course be exceptions, but for the most part ISVs have slowed down the industry," said Jon Peddie, the head of the Jon Peddie Research analyst firm.
AMD Orochi chip based on Bulldozer architecture
At present hardly anyone knows for sure what future CPUs will be like, and many important trends that can be extrapolated to the year 2020 will become apparent with the release of Haswell and post-Haswell microprocessors in 2013 and beyond. In fact, the example of AMD's Bulldozer design - where the processor consists of modules and two integer cores share one FPU per module - shows that companies are trying hard to pack more execution units into their chips and are eager to cut down on other things. In general, Bulldozer's "hybrid" core approach may be a glimpse of the longer-term future of CPUs.
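Bulldozer's shared-FPU arrangement can be sketched as a toy model - an assumption-laden illustration, not a simulation of the actual chip. Here two "cores" run integer work independently, while floating-point work must pass through a single shared resource, modeled as a lock:

```python
# Toy model of a Bulldozer-style module: two integer "cores" run
# freely, but floating-point work goes through one shared FPU,
# modeled here as a lock. Workload sizes are arbitrary.
import threading

fpu_lock = threading.Lock()   # the single FPU shared by both cores
results = {}

def core(name, int_work, fp_work):
    total = 0
    for i in range(int_work):      # integer pipeline: no sharing needed
        total += i
    with fpu_lock:                 # FP ops serialize on the shared FPU
        total += sum(x * 0.5 for x in range(fp_work))
    results[name] = total

t0 = threading.Thread(target=core, args=("core0", 100, 10))
t1 = threading.Thread(target=core, args=("core1", 100, 10))
t0.start(); t1.start()
t0.join(); t1.join()
print(results)   # both cores: 4950 + 22.5 = 4972.5
```

The trade-off the design bets on is visible even in this sketch: integer-heavy threads scale cleanly across the module, while FP-heavy threads contend for the shared unit.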
Longer-term roadmaps from both AMD and Intel clearly show that the two leading designers of processors will continue to develop low-power architectures to compete against ARM-based designs. At present everything suggests that x86 and ARM will compete directly in the same markets, with x86 retaining high-performance systems while ARM stays the king of the extreme low-power market.
Let us try to summarize our guesses:
- The absolute majority of client central processing units in 2020 will feature integrated memory controllers and graphics cores that will be used for graphics processing and speeding up general consumer applications. Many thin-clients will rely on ARM-based SoCs.
- Server processors will continue to feature "fat" x86 cores with high single-thread performance. However, HPC servers will slowly but surely migrate to the MIC architecture or highly parallel compute processors (FireStream, Tesla, etc.) in either discrete or integrated form.
- Low-power handheld devices will use specific x86 micro-architectures (Atom, Bobcat, etc) or ARM architecture.
"ARM and x86 continue to fight and co-exist. ARM is too popular and too many companies have too much invested in that architecture to change. Intel will be the challenger and continue an aggressive technology and marketing effort to gain market share," explained Mr. Peddie.
It is also highly likely that architectures like Power and SPARC will go extinct by 2020 for economic and technological reasons.
"They cannot gain economy of scale and they cannot support the R&D needed to stay current and meaningful," said Jon Peddie.