Forty years ago this week, Intel Corp. introduced the world's first commercially available microprocessor – the Intel 4004 – triggering the start of the digital revolution. Today, the vast majority of consumer, enterprise, professional and other electronics contain one or more microprocessors. The central processing unit is indisputably one of the key inventions of the 20th century. The main question now is: what will CPUs bring in the next forty years?
“The sheer number of advances in the next 40 years will equal or surpass all of the innovative activity that has taken place over the last 10,000 years of human history,” said Justin Rattner, Intel chief technology officer.
The proliferation of microprocessors is due in large part to Intel's and other companies' relentless pursuit of Moore's Law, a forecast for the pace of silicon technology development which states that the transistor density of semiconductors doubles roughly every two years, increasing functionality and performance while decreasing costs. Moore's Law has served as the basic business model of the semiconductor industry for more than 40 years.
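To make the doubling rule concrete, here is a minimal back-of-envelope sketch (not from the article) that projects a transistor count under Moore's Law, starting from the 4004's widely cited figure of roughly 2,300 transistors in 1971:

```python
# A minimal sketch of Moore's Law as stated above: transistor density
# doubling roughly every two years. The 2,300-transistor figure for the
# 4004 is a widely cited number, not taken from this article.

def moores_law(transistors_start, year_start, year, doubling_years=2):
    """Projected transistor count after (year - year_start) years."""
    return transistors_start * 2 ** ((year - year_start) / doubling_years)

# 4004 (1971) projected to 2011: 2,300 * 2^20 ~= 2.4 billion.
print(f"{moores_law(2300, 1971, 2011):,.0f}")
```

Forty years at a two-year doubling period is 20 doublings – a factor of about a million – which lands in the right order of magnitude for the transistor budget of a 2011 high-end CPU.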
For example, compared to the Intel 4004, today's Core i-series "Sandy Bridge" processors deliver more than 350,000 times the performance, and each of their transistors uses about 5,000 times less energy. Over the same period, the price of a transistor has dropped by a factor of about 50,000.
The Intel 4004 microprocessor ran at 740 kHz (0.74 MHz), while current Intel Core i-series processors achieve almost 4 GHz. If car speeds had increased at the same pace as processor performance since 1971 – taking Intel's figure of a roughly 350,000-fold gain, a 1971 car speed of 60 miles per hour, and a San Francisco-to-New York distance of about 3,000 miles – the drive from San Francisco to New York (roughly the distance from Lisbon, Portugal to Moscow, Russia) would take about one second.
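As a sanity check on the car analogy, this small sketch (with the assumptions the article states: a 60 mph car in 1971, a 3,000-mile trip, and the roughly 350,000x performance figure quoted above) computes the scaled trip time:

```python
# Back-of-envelope check of the car analogy.
# Assumptions from the article: a 1971 car at 60 mph, a 3,000-mile
# San Francisco-to-New York trip, and a ~350,000x performance gain.

trip_miles = 3_000
speed_1971_mph = 60
perf_gain = 350_000  # Intel's figure for 4004 -> Sandy Bridge

trip_1971_s = trip_miles / speed_1971_mph * 3600   # 50 hours = 180,000 s
trip_today_s = trip_1971_s / perf_gain             # ~0.51 s

print(f"1971 trip: {trip_1971_s:,.0f} s")
print(f"Scaled by performance gain: {trip_today_s:.2f} s")
```

A 50-hour drive divided by 350,000 comes out to about half a second, which is where the article's "about one second" figure comes from.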
Such dramatic increases in performance, together with decreases in cost and power consumption, have unlocked an unprecedented range of businesses, industries and opportunities.
Future microprocessors built on Intel's next-generation 22nm manufacturing process are due in systems starting next year and will deliver even more energy-efficient performance thanks to the company's breakthrough 3D tri-gate transistors, a new transistor structure. These novel transistors usher in the next era of Moore's Law and make possible a new generation of innovations across a broad spectrum of devices.
Looking back at how much has changed since the microprocessor's introduction, it is astounding to think about the future and how the digital revolution will continue at a rapid pace as microprocessor technology evolves.
Such advances in chip technology are paving the way for an age when computing systems will be aware of what is happening around them and anticipate people's needs. This capability is poised to fundamentally change how people interact with and relate to information devices and the services they provide. Future context-aware devices, ranging from PCs and smartphones to automobiles and televisions, will be able to advise people and guide them through their day more like a personal assistant than a traditional computer – at least, in Intel's view.
Tags: Intel, Sandy Bridge, Core, Semiconductor
Comments currently: 2
Discussion started: 11/20/11 01:06:51 AM
Latest comment: 11/20/11 07:57:29 PM
Processors are languishing compared to the early years. The next 40 years won't show nearly the same level of improvement as the last 40.
Most importantly, we're getting to the point where shrinking will become impossible, since we're approaching the atomic scale. You can't get smaller than that.
Also, we started hitting the performance wall with the Pentium Pro and each succeeding generation, except for the horrible Pentium 4. When you went up a generation before that, there were huge improvements that would make the Sandy Bridge improvement seem like statistical scatter. The 8086 to 80286 added enormous functionality (protected memory, virtual memory, enormously more addressable memory, etc.) and ran roughly 3x faster per clock cycle (the 80186 was made at the same time as the 80286 but was designed for a different target market). The 386 kind of sucked, but still added a lot, including Virtual 86 mode and 386 protected mode. This allowed for 32-bit operation, an enormous increase in addressable memory and a flat memory space. The 486 was roughly twice as fast as the 386, while bringing cache and floating point onto the processor die itself. The Pentium was again twice as fast as the 486.
Now if we get a 15% improvement from one generation to the next, we're ecstatic. Also, all the numbers I gave were clock-normalized. In reality, the 8086 topped out at 10 MHz, the 286 at 12.5 MHz, the 386 at 33 MHz, the 486 at 100 MHz, and the Pentium at 233 MHz. These are all Intel numbers; AMD sold higher-clocked versions of the 286, 386, and 486.
Processor development has slowed down dramatically, and will continue to slow until it hits an even harder wall, when shrinking lithographies becomes impossible. Expect fewer, not more, advances in hardware over the next 40 years. Of course, there's an enormous amount of performance to be gained from more efficient software.
11/20/11 01:06:51 AM
There are still optical CPUs. Photons travel much faster than the electrons used in today's transistors. Sure, there are shortcomings to optical CPUs, but they should be addressed by the time transistors reach the atomic scale.
11/20/11 07:57:29 PM