2005 - Dual-Core Microprocessors: Core Wars Initiated
For decades, the performance of microprocessors was determined by their clock speeds as well as by micro-architectural improvements. But rapidly increasing power consumption, driven by leakage current and other factors, made it clear that further rapid increases in processor frequency were impossible. As a result, both AMD and Intel chose another way of boosting the performance of central processing units: increasing the number of cores.
The traditional usage model of personal computers under DOS (disk operating system) and early versions of Windows was limited to one task at a time. For example, in the early nineties it was impossible to run Excel, Word, and an antivirus program in the background simultaneously. As a result, it made great sense to improve single-threaded performance. Even after Windows gained proper multi-tasking, the performance demands of many tasks were so high that users disabled certain programs while running others. Moreover, seeing that the single-thread performance of client CPUs was rising rapidly, software designers continued to create applications that could consume the majority of available resources.
The natural consequence of the ever-increasing demand for single-thread performance was the creation of a micro-architecture that could quickly gain clock speed and could also utilize the unused resources of the chip by executing two threads of code in parallel. As a result, Intel developed its NetBurst micro-architecture with Hyper-Threading technology.
The main peculiarity of Intel's NetBurst micro-architecture was its very long pipeline: 20 stages for the code-named Willamette chip and 31 stages for the code-named Prescott processor, up considerably from the 10-stage pipeline of the Intel Pentium III central processing unit (CPU). On the one hand, long pipelines allow processors to run at extreme clock speeds; on the other hand, they increase branch mis-prediction penalties, which means that software has to be developed with the microprocessor's design in mind. Intel hoped that it could gradually amplify clock speeds and offer competitive performance no matter how competitive AMD became. Nonetheless, with the code-named Prescott core and its 31-stage pipeline, performance gains grew much more slowly than power consumption. It became apparent that Intel was unable to deliver not only 10GHz chips, but even 4GHz processors.
In early May 2004, Intel said that its longer-term future central processing units would not be based on NetBurst, but would instead be derivatives of the low-power Pentium M micro-architecture. Unofficial sources suggested at the time that Intel would release chips with multiple cores, and in October 2004 the company cancelled the 4.0GHz version of the Pentium 4 and announced plans to release dual-core chips.
Advanced Micro Devices did not concentrate on maximum clock speeds when designing its AMD64 architecture processors, and built in a number of capabilities that allowed multi-core processors to be created relatively easily. As a result, the company made it clear as early as September 2003 that it would release dual-core Opteron chips going forward. In April 2004, AMD officially confirmed plans to release a dual-core chip lineup in 2005.
Both AMD and Intel successfully released dual-core microprocessors for desktops and servers in mid-2005. The launch became an inflection point for many industries adjacent to client computers: the CPU industry changed its vector of development; the software industry changed dramatically, as programs that could not take advantage of multi-threading either ceased to exist or lost popularity; sales of servers and workstations with more than two sockets decreased considerably; and end-users no longer paid attention to clock speeds, but rather to core counts.
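The software shift described above meant restructuring programs so that independent pieces of work could run on separate cores at the same time. A minimal sketch of that data-parallel pattern in Python is shown below; the workload (summing squares) and the chunking scheme are hypothetical illustrations, not taken from any specific application of the era:

```python
# Illustrative sketch: splitting a CPU-bound task across cores,
# the kind of restructuring dual-core CPUs pushed software toward.
from multiprocessing import Pool


def sum_of_squares(chunk):
    # CPU-bound work performed independently on each chunk
    return sum(n * n for n in chunk)


def parallel_sum_of_squares(numbers, workers=2):
    # Split the input into one chunk per worker (mimicking a dual-core split)
    size = (len(numbers) + workers - 1) // workers
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # Each chunk is processed by a separate OS process, so on a
    # dual-core CPU the two halves can execute truly in parallel
    with Pool(processes=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))


if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_sum_of_squares(data, workers=2))
```

A single-threaded program, by contrast, leaves the second core idle no matter how the work is written, which is why clock-speed-hungry software of the NetBurst era had to be rethought.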
The emergence of dual-core chips paved the way for the long-term development of central processing units by exposing the weak spot of continuous clock-speed evolution: it is impossible to boost a single characteristic of a chip indefinitely and obtain a linear increase in performance. Going forward, both AMD and Intel plan to integrate graphics processing units (GPUs) into their CPUs in order to accelerate massively parallel applications, and also to build in special-purpose accelerators to improve performance in specific applications. Such an approach is called heterogeneous multi-core.