Not Survived: Failed Technologies of the Decade

High tech evolves at an unbelievable pace. What seemed innovative and forward-looking yesterday becomes outdated and obsolete today. In this editorial we take a look at technologies and product categories that failed to become mainstream, stopped developing, or simply became obsolete in the first ten years of the 21st century, and we attempt to understand the reasons for their failure.

by Anton Shilov
12/30/2009 | 06:13 AM

Intel NetBurst

For years the clock speed of a microprocessor was considered the most important indicator of its performance. Back in 2000 both Advanced Micro Devices and Intel Corp. conquered the 1GHz milestone, and towards the end of the year Intel launched the Pentium 4 processor at an unbelievable 1.40GHz. Intel believed that the NetBurst micro-architecture's ability to quickly gain clock speed would let the new chips quickly increase performance levels, but it eventually emerged that the approach was not feasible and that the whole clock-speed race would stop making sense at some point.


The main peculiarity of the Intel NetBurst micro-architecture was its very long pipeline: 20 stages for the code-named Willamette chip and 31 stages for the code-named Prescott processor, up considerably from the 10-stage pipeline of the Intel Pentium III central processing unit (CPU). On the one hand, long pipelines allow processors to run at extreme clock speeds; on the other hand, they increase branch mis-prediction penalties, which means that software has to be developed with the microprocessor design in mind. Intel claimed that the Pentium 4's branch prediction unit would reduce mis-predictions considerably compared to the Pentium III, and that the Rapid Execution Engine (which clocked the arithmetic logic units at twice the core frequency) would offset any performance decreases and boost the raw power of NetBurst CPUs to ultimate levels. In practice, NetBurst faced both performance and power-related problems.
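To get a feel for why pipeline depth matters, consider a simple cycles-per-instruction (CPI) model. The sketch below is purely illustrative: only the pipeline depths come from the figures above, while the branch frequency, the mis-prediction rate and the assumption that a mis-prediction flushes the whole pipeline are hypothetical values chosen for the example.

```python
# A rough, illustrative model: estimate how average cycles-per-instruction
# (CPI) grows when the branch mis-prediction penalty tracks pipeline depth.
# The 20% branch share and 5% mis-prediction rate are assumed, not measured.

def effective_cpi(base_cpi, branch_fraction, mispredict_rate, penalty_cycles):
    """Average CPI when a fraction of instructions are branches and a share
    of those branches is mis-predicted, flushing the pipeline."""
    return base_cpi + branch_fraction * mispredict_rate * penalty_cycles

for name, depth in (("Pentium III (10-stage)", 10),
                    ("Willamette (20-stage)", 20),
                    ("Prescott (31-stage)", 31)):
    cpi = effective_cpi(1.0, 0.20, 0.05, depth)
    print(f"{name}: effective CPI ~ {cpi:.2f}")
```

Under these assumptions the deeper Prescott pipeline turns the same mis-prediction rate into roughly three times the per-instruction penalty of the Pentium III, which is why NetBurst depended so heavily on accurate branch prediction.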

Since in the late nineties and early 2000s many considered clock speed the main measure of performance, 1.40GHz – 3.0GHz Pentium 4/Xeon chips were viewed as something extraordinary despite the fact that their performance was lower than their extreme frequencies suggested. With the emergence of the AMD Athlon 64/Opteron processors in 2003 everything changed drastically. Thanks to an improved micro-architecture with a relatively short pipeline, the AMD64 family of chips not only quickly left Intel's NetBurst-based chips behind in terms of performance, but also forced Intel to boost clock speeds even more aggressively than before.

It was relatively easy for Intel to push the clock speeds of Pentium 4 chips upwards once it introduced the Prescott core with its 31-stage pipeline in 2004, but the performance of such processors grew much more slowly than their power consumption. Although Intel initially wanted to reach 10GHz with the NetBurst micro-architecture, at around 3.80GHz (for both 90nm single-core and 65nm dual-core products) the thermal design power of such processors became too high – around 130W – despite relatively low performance.

Sometime in 2004 it became clear that the clock-speed war could no longer deliver tangible performance increases. The high-end Pentium 4 processors were both hot and slow. Even though their performance could be raised by introducing models with even higher clock speeds, their efficiency and power consumption were two obvious problems. Even though the performance of the AMD Athlon 64 was much higher back then, it was also clear that further frequency boosts would not lead to a substantial increase in actual performance. In order to continue rapid performance scaling, both AMD and Intel chose the multi-core route: properly written programs can run much faster on two slow cores than on one fast core.
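As a back-of-the-envelope illustration of that trade-off, the sketch below applies Amdahl's law to compare one faster core against two slower ones; the 30% clock advantage and the parallel fractions are assumptions chosen only for this example, not figures from any real product.

```python
# Amdahl's-law comparison: one core at a higher clock vs. two cores at a lower
# clock. All figures below are assumptions for illustration, not measured data.

def relative_speed(parallel_fraction, cores, clock_ratio):
    """Throughput relative to a 1.0x-clock single core for a workload whose
    parallel_fraction scales across 'cores'; clock_ratio scales everything."""
    serial = 1.0 - parallel_fraction
    return clock_ratio / (serial + parallel_fraction / cores)

one_fast = relative_speed(0.0, 1, 1.3)    # single core clocked 30% higher
for p in (0.5, 0.8, 0.95):
    two_slow = relative_speed(p, 2, 1.0)  # two cores at the base clock
    print(f"parallel fraction {p:.0%}: two slow cores {two_slow:.2f}x "
          f"vs. one fast core {one_fast:.2f}x")
```

With these assumptions, two slower cores already overtake one faster core once about half of the work can run in parallel, which is the "properly written programs" caveat in practice.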

In 2006 Intel introduced its Core 2 microprocessors based on a heavily tweaked Pentium III/P6 micro-architecture. The Core 2 Duo chips offered market-leading performance at just 65W thermal design power. The next year Intel retired the NetBurst micro-architecture from virtually all market segments.

Desktop PC Barebones

Barebones for small form-factor (SFF) desktop personal computers (PCs) started to gain mind share and market share back in 2002, and by 2005 there were tens, if not hundreds, of models of these half-built SFF desktops available on the market. But nowadays desktop PC barebones are barely in demand.

Shuttle Computer was the first to introduce a cube-like barebone, sometime in 2001, but only in 2002 – 2003 did SFF barebones get enough media exposure and, as a result, popularity among end-users. By the middle of 2005 there were countless barebone models, not only from Shuttle Computer, but also from companies like Iwill, MicroStar International, First International Computer and many others. Some of these systems could accommodate Extreme Edition processors aimed at enthusiasts as well as two graphics cards; some were very small and utterly quiet.

The reason for the success of desktop PC barebones was simple: at the time there were no truly compact – let alone compact and high-performance – PCs on the market, even as demand for efficient computing was rising.

But SFF PC barebones are hardly popular at all these days, and there are several reasons why.

Nowadays only Shuttle Computer is still trying to make SFF desktop barebones popular, and even the XPC pioneer now offers factory-built XPC systems in all regions. Unfortunately for Shuttle, it does not provide any services with its XPC systems, and as a result this business is hardly successful.

Rambus RDRAM

Direct Rambus DRAM was once the preferred memory standard of Intel Corp., the world's leading producer of microprocessors: it provided massive bandwidth years before industry-standard dynamic random access memory (DRAM) technologies could, and it was also claimed to be considerably more power efficient than its rivals. But now the only place for RDRAM is inside the PlayStation 2 and some other proprietary devices, whereas its successor, XDR, has essentially one design win: the PlayStation 3, the worst-selling of the new-generation video game systems. Licensing fees and high manufacturing costs played a cruel joke on RDRAM and on Rambus itself.

RDRAM memory from Rambus was indeed very advanced for its time. But apart from the technological advantages Rambus could offer Intel in exchange for support of the new memory type by its core-logic sets, Rambus also offered Intel one million shares at $10 per share in exchange for making RDRAM the primary industry-standard memory. There are smart people working at Intel, and they understood pretty well what would happen to RMBS stock once RDRAM was announced as the next memory standard. As a result, an agreement was signed between Intel and Rambus in 1996, and by 2003 RDRAM was supposed to have become the primary memory standard. The one thing Intel and Rambus forgot to do was persuade the manufacturers of DRAM that RDRAM – for which they would need to pay royalties – was the memory of the future.

Nevertheless, in order to popularize RDRAM, Intel made a $500 million equity investment in Micron Technology in 1998 and then paid $100 million to Samsung Electronics in 1999. Still, there was a massive number of Asia-based memory makers who did not receive incentives from Intel and were rather negative about the prospect of paying royalties to a technology company.

From a technology standpoint, RDRAM could bring a lot of benefits to the Intel Pentium 4 processor and the NetBurst micro-architecture; in fact, the giant microprocessor maker needed RDRAM to show off the bandwidth-hungry Pentium 4. But Intel started to popularize RDRAM earlier, with the launch of the Intel 820 and 840 core-logic sets for the Pentium III generation in late 1999.

RDRAM was considerably more expensive than SDR SDRAM back then, and it was obvious that many customers would not simply move to Rambus memory. As a result, Intel introduced the Intel 820 with a memory translator hub (MTH) that allowed the platform to support SDR SDRAM. Unfortunately for Intel, the MTH did not work correctly, which dramatically decreased the popularity of the platform; AMD released its successful Athlon processor, and Intel had to quickly encourage system makers and end-users to buy platforms based on chipsets from Via Technologies. To make matters worse, RDRAM did not provide any benefits for the Pentium III, and it turned out that the previous-generation Intel 440BX platform with SDRAM offered higher performance than the Intel 820 with RDRAM thanks to the considerably lower latency of SDR SDRAM compared to RDRAM: 7.5ns vs. 45ns. Despite very intensive promotion by the world's largest chipmaker, Rambus memory was a fiasco in 1999 – 2000, and all hopes rested on the Pentium 4 processor code-named Willamette, due in 2000.

The first Intel Pentium 4 processor was released in 2000 not only because Intel needed to push RDRAM into the marketplace, but also because it continued to face tough competition from AMD: the latter's code-named Thunderbird processor outperformed the Pentium III, and the launch of a next-generation processor was meant to return the lead to Intel.

Unfortunately for Intel, by late 2000 RDRAM was still expensive, and Intel had to bundle two RIMM modules with some of its Pentium 4 chips to keep the Intel 850-based platform cost-competitive. Sales of Pentium 4 chips were very slow throughout 2001, and it appears that Intel had anticipated such a scenario: in August 2001 it released the 845 core-logic with PC133 SDR SDRAM support to popularize the platform, and in 2002 it launched the B-step 845 core-logic with DDR SDRAM (PC1600 and PC2100) support.

Even in 2002, by which time Rambus memory was supposed to command a substantial part of the market, only a few manufacturers (albeit large ones, including Samsung Electronics, Micron and Elpida) produced RDRAM, and, according to market tracker iSuppli, its share was around 5%.

It was pretty clear that SDR- and DDR-powered platforms were quickly gaining share of the Pentium 4 market. Still, the Intel 850 remained the "preferred" core-logic set for the NetBurst chip, since its dual-channel PC600/PC800 RDRAM memory sub-system provided 2.40GB/s or even 3.20GB/s of peak bandwidth – just what the doctor ordered for the Pentium 4's 400MHz quad-pumped bus with its 3.20GB/s of bandwidth.
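The bandwidth match is easy to verify with back-of-the-envelope arithmetic. The sketch below simply recomputes the peak figures quoted in this article from the underlying bus parameters (16-bit RDRAM channels, a 64-bit quad-pumped front-side bus); the helper functions are illustrative, not part of any real tool.

```python
# Peak-bandwidth arithmetic behind the figures quoted in the article.

def rdram_bandwidth(data_rate_mhz, channels, bus_width_bits=16):
    """Peak bandwidth of a Direct RDRAM sub-system in GB/s (16-bit channels)."""
    return data_rate_mhz * 1e6 * (bus_width_bits / 8) * channels / 1e9

def fsb_bandwidth(base_clock_mhz, pumps=4, bus_width_bits=64):
    """Peak bandwidth of a quad-pumped front-side bus in GB/s."""
    return base_clock_mhz * 1e6 * pumps * (bus_width_bits / 8) / 1e9

print(rdram_bandwidth(600, 2))   # dual-channel PC600  -> ~2.4 GB/s
print(rdram_bandwidth(800, 2))   # dual-channel PC800  -> ~3.2 GB/s
print(fsb_bandwidth(100))        # "400MHz" QPB (100MHz x4) -> ~3.2 GB/s
print(fsb_bandwidth(200))        # "800MHz" QPB -> ~6.4 GB/s, which only
                                 # dual-channel DDR400 could match later on
```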

Under its contract with Rambus, Intel could not release a mainstream dual-channel DDR core-logic until 2003, as such a chipset would directly compete against Intel's Rambus platform. But by 2002 it was clear that Rambus memory was not the way to go: it was expensive, it had very high latencies, it was proprietary, and its performance benefits were not distinct enough. As a result, in late 2002 Intel released its last RDRAM-supporting chipset – the Intel 850E, which supported PC1066 RDRAM along with a 533MHz QPB – as well as the E7205, which supported the 533MHz bus and dual-channel PC2100 memory with ECC and much lower latencies than RDRAM, and was officially aimed only at workstations.

The share of Pentium 4-based machines with Rambus inside was dropping quickly, but the final nail was hammered into the coffin of RDRAM on the PC market in spring 2003, when Intel released its breed of 800-series chipsets supporting dual-channel DDR memory at up to 400MHz, providing up to 6.40GB/s of bandwidth. In fact, Intel wanted to forget RDRAM like a bad dream: at Intel Developer Forum Fall 2002 the company said it would rather not ratify 400MHz (PC3200) DDR memory, most probably because it did not need it, as it planned to introduce a 667MHz QPB with its new processors due in 2003. But since AMD's Athlon 64 was close, Intel decided to introduce chips with an 800MHz quad-pumped bus that, by coincidence, required memory bandwidth that even dual-channel PC1066 RDRAM simply could not provide at that time.

After Intel dropped RDRAM support, Silicon Integrated Systems wanted to capture that premium market and even introduced its R658 and R659 chipsets, which supported higher-speed RDRAM in up to quad-channel mode, but the market did not want to follow that memory standard, and by the time DDR2 emerged in 2004, Rambus had already been forgotten on the PC.

HD DVD

The specification of the HD DVD high-definition video format was final from day one in early 2006, when the format was launched commercially. But Blu-ray disc (BD), whose specification was finalized only about two years later, eventually won the format war. The reason? Blu-ray support in the Sony PlayStation 3 video game console, as well as incentives provided to Hollywood movie studios.

The HD DVD format, co-developed by Toshiba, NEC Electronics, Microsoft and others, featured interactive on-screen menus, a picture-in-picture function, downloadable content and other functionality, in addition to high-definition video, from the day it was launched. When Blu-ray was released several months later, it provided only high-definition video and lacked any interactive functionality. Blu-ray players were also substantially more expensive than HD DVD-supporting devices. Moreover, HD DVD could be replicated at the same factories that made DVDs, whereas Blu-ray required new production lines and was generally more expensive to make. In addition, HD DVD did not feature one of the most questionable technologies of DVD – region coding – which was fully supported by BD.

But Blu-ray was not completely hopeless: it supported a maximum capacity of 50GB on a single dual-layer disc, whereas HD DVD could only boast 30GB on a dual-layer disc. Another advantage Blu-ray had was support from more consumer electronics manufacturers: in addition to Sony, it was exclusively backed by companies like Panasonic, Philips, Sharp and others, whereas HD DVD hardware was only supported by Toshiba, Sanyo and NEC.

But Sony, which co-developed the Blu-ray disc format, integrated BD support into its PlayStation 3 game console. Owners of expensive video game consoles are hardly frequent movie watchers, but since the PS3 quickly built a user base, movie studios gained significant confidence in the future of the format. Moreover, Sony was able to persuade more major Hollywood studios to back BD by – it is widely believed – providing incentives.

Among the major Hollywood studios, HD DVD was exclusively supported by Universal Pictures, whereas Sony Pictures, Twentieth Century Fox and Walt Disney exclusively backed Blu-ray. Paramount Pictures and Warner Bros., as well as studios controlled by those two companies, initially remained format-neutral and released both BD and HD DVD.

However, in mid-2007, after it transpired that HD DVD was winning the game in Europe, Paramount decided to become HD DVD-exclusive (not for free, as they say), which gave the format a huge boost in the form of the highly popular "Transformers" movie. The war of incentives and formats was now unleashed for real.

While neither Toshiba nor Microsoft could provide enough incentives to both Fox and Warner, Sony and other Blu-ray backers could offer much more substantial ones, and in early 2008 Warner Bros. announced its intention to become Blu-ray-exclusive starting in May 2008, which left Toshiba with only Paramount and Universal backing its format. Several weeks after Warner's announcement, Toshiba itself said it would cease manufacturing HD DVD hardware, which marked the end of the format war as well as of HD DVD technology.


Transformers on HD DVD in 2007 and Transformers: Revenge of the Fallen on Blu-ray in 2009

In 2009 Toshiba unveiled its first Blu-ray disc players and BD-equipped notebooks.

Personal Digital Assistants

Electronic organizers, personal digital assistants and pocket translators were quite popular less than a decade ago; now all of them have been rendered obsolete by smartphones that are available starting at around $200 – $250.


Apple MessagePad. Image originally published by BrianMadden.com

The first personal digital assistant is generally considered to be the Apple MessagePad (based on the Newton platform), which was formally unveiled in 1992 (some believe the first PDA was the Casio PF-3000 released in 1983, but its functionality was just too limited: it could only store phone numbers, addresses and memos and had very little memory). For various reasons, however, the MessagePad never became a mass-market product. The company that actually brought PDAs to the masses was U.S. Robotics, with its Palm brand, in 1996.


U.S. Robotics Palm Pilot 5000. Image originally published at Flickr

Palm quickly became rather popular among business users, and in 2000 Microsoft Corp. introduced an operating system (OS) aimed at handhelds: Windows for Pocket PC. Alongside Microsoft, numerous makers of personal computers, including Compaq and Hewlett-Packard, introduced their own PDAs.

Neither Palm OS- nor Windows-based PDAs featured GSM connectivity, which required owners to carry both a mobile phone and a PDA.


Blackberry Bold 9000

But a new class of devices was born while PDAs were evolving: in 1996 Nokia released its first Communicator phone, which featured PDA functionality, and in 1999 Research in Motion introduced its BlackBerry (the first implementation resembled an Internet pager, much like ICQ).


HP iPaq 5550

By 2003 – 2004 there were numerous smartphones on the market competing against personal digital assistants. Although PDAs were bulky, at the time they had numerous advantages over smartphones: a Windows-based operating system, compatibility with many file types, support for both Bluetooth and Wi-Fi (it was even possible to make phone calls using Skype!), higher-performance processors, and better screens and audio output. However, by 2006 smartphones had evolved tremendously: they gained Wi-Fi support in addition to 3G basebands, and their multimedia capabilities were a far cry from what they had been just a few years earlier. As a result, in the 2005 – 2006 timeframe the popularity of PDAs among business users started to decrease, and at present almost nobody uses them for business purposes.


Apple iPhone 3GS

From a certain point of view, PDAs have not died: in many respects they evolved into smartphones. Even the Apple iPhone, a device that many consider the peak of that evolution, resembles the touch-screen PDAs that were available many years ago.

The main reasons personal digital assistants ceased to exist were their high price and their lack of GSM/GPRS/EDGE/3G support, which meant that cellular operators had no incentive to subsidize PDAs or sell them at discounted prices. Operators were naturally much more interested in selling smartphones, which eventually pushed PDAs out of the market.

Dual-Processor Platforms for Consumers

Like the AMD Athlon 64 FX or Intel Extreme Edition processors, 2-way platforms for performance enthusiasts were meant to establish a brand-new category: extreme computing platforms for enthusiasts with no budget constraints. But it appears that even hardcore performance freaks do not want to pay thousands to obtain the cream of the crop of computing.

Advanced Micro Devices first announced its 4x4 platform in April 2006 after realizing that its dual-core Athlon 64 X2 and FX chips would not be able to compete successfully against Intel Corp.'s Core 2 Duo central processing units (CPUs) in the performance segment. The company hoped to attract the attention of hardcore performance enthusiasts with four microprocessor cores and four Nvidia GeForce graphics processors in quad-SLI mode. But everything went wrong almost from the very start.

In July 2006 AMD announced the acquisition of ATI Technologies, and it became illogical for the firm to promote quad-SLI as a key part of 4x4; meanwhile, ATI did not have 4-way CrossFire back then. At some point an AMD public relations representative even said that instead of four Nvidia graphics processors end-users could put four hard disk drives into a 4x4 PC so it could still be called "four by four". Then the company simply renamed the platform Quad FX.

Since the platform used so-called non-uniform memory access (NUMA), memory latencies on Quad FX systems were higher than on existing single-socket Athlon 64 FX platforms, which reduced overall performance; in certain tasks AMD's 2-way platform was actually slower than the 1-way one.

Only Asustek Computer committed to releasing a dual-socket mainboard for enthusiasts, based on the Nvidia nForce 680a core-logic. Other manufacturers decided not to support AMD's Quad FX due to natural concerns about demand for the platform.

After the Quad FX finally hit the market, Intel introduced its quad-core Core 2 Quad processor – which consisted of two Core 2 Duo dies on the same substrate – for single-socket mainboards, and it proved faster than AMD's 2-socket solution for consumers. As a result, demand for the rather expensive and exotic Quad FX collapsed to nearly zero.

AMD's second attempt – the FASN8 (First AMD Silicon Next-gen 8-core) platform – was supposed to feature two quad-core AMD Phenom processors and was set to be released in late 2007; however, the company cancelled it due to the limited performance of Phenom in general and the TLB bug discovered in the first-generation chips in particular.

While AMD was basically forced to introduce a 2-way consumer platform in order to stay competitive in the performance segment, Intel Corp. was in a very comfortable market position when it first started to talk about its dual-processor platform for enthusiasts, code-named V8 and later renamed Skulltrail. The platform was meant to deliver absolutely extreme performance with the help of eight CPU cores, yet it never became popular even among performance enthusiasts.

Intel Skulltrail was expensive. Too expensive. The Intel D5400XS mainboard designed for the platform cost over $600 and required slow and costly FB-DIMM memory modules. Intel only offered the Core 2 Extreme QX9775 processor for Skulltrail, and a pair of them cost over $2000, making Skulltrail the most expensive type of gaming system around.

Back in early 2008 there were no video games that could take advantage of eight processing cores; moreover, in the absence of DirectX 11 and Microsoft Windows 7, only one CPU core was used for submitting graphics rendering work, which further reduced the advantages that Skulltrail could provide.

With limited performance, performance limitations set by software, and simply extraordinary prices, 2-way systems remained a quirky luxury rather than a popular option for performance enthusiasts. Neither AMD nor Intel is talking about next-generation dual-socket consumer platforms nowadays.

Ageia PhysX Processing Unit

Ageia was founded in 2002 with the aim of creating a special-purpose physics processing unit (PPU) to offload the processing of rich physics effects from the central processing unit (CPU). The firm acquired the PhysX middleware with the purchase of ETH Zurich spin-off NovodeX in 2004 and unveiled its PhysX PPU at the Game Developers Conference in March '04. Even though Ageia managed to launch its PhysX cards in March 2006, after numerous delays, the hardware never became popular.

Although the Ageia PhysX card cost several hundred dollars and few end-users wanted to buy it, the main problem was that PhysX hardware was not supported by game developers. In fact, Ageia was caught in a vicious circle: nobody wanted a piece of hardware that was not supported by software, and no one wanted to support hardware that was not widely installed.

As a result, rival middleware developers continued to make CPU-based physics engines more advanced, and game developers continued to use them. In fact, CPU-based physics engines have an important advantage over PPU- and GPU-based engines: they can process both effects physics and gameplay physics. While the former only adds visual eye-candy, the latter changes gameplay as a result of physical interactions. In practice this means that game designers have to ensure that the gameplay engine fully "understands" that certain objects may be destroyed, and such objects have to be processed on the CPU.
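A minimal, hypothetical sketch of that distinction (the class names and numbers are invented for illustration): effects physics can be computed anywhere, or even dropped, without changing the outcome of the game, whereas gameplay physics feeds back into game logic and therefore has to stay wherever that logic runs, traditionally the CPU.

```python
# Hypothetical illustration of "effects" vs. "gameplay" physics.

class DebrisParticle:
    """Effects physics: fire-and-forget eye-candy; game logic never reads it."""
    def __init__(self, height, velocity):
        self.height, self.velocity = height, velocity

    def update(self, dt):
        self.velocity -= 9.81 * dt      # gravity only, purely cosmetic
        self.height += self.velocity * dt

class DestructibleWall:
    """Gameplay physics: the result changes the game state."""
    def __init__(self, hit_points):
        self.hit_points = hit_points

    def apply_impact(self, damage):
        self.hit_points -= damage
        return self.hit_points <= 0     # True -> wall gone, a new path opens

# Cosmetic debris can be simulated (or skipped) without consequences...
debris = [DebrisParticle(height=2.0, velocity=1.0) for _ in range(100)]
for particle in debris:
    particle.update(dt=0.016)

# ...but the wall's fate must be visible to AI, pathfinding and scripting.
wall = DestructibleWall(hit_points=100)
if wall.apply_impact(damage=120):
    print("Wall destroyed: pathfinding must now use the new route.")
```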


Asustek Computer Ageia PhysX accelerator card

Ageia fully understood that the PPU would not become popular by itself and offered ways to process physics effects on various microprocessors, including those found inside today's video game consoles. The PhysX PPU could offer richer physics effects provided that game developers implemented them; however, the vast majority of software makers were not interested in spending money on effects that few people would ever notice.

Even though there were some games and demos that seriously took advantage of the PhysX PPU, the number of additional particles and objects generated by the chip was so high that graphics processors became the performance bottleneck. When this became apparent back in 2006, it was clear that Ageia would either have to work closely with both GPU designers and game developers to ensure smooth frame rates in PhysX-enabled titles, or become part of one of the GPU suppliers. Since Ageia was already in its fourth year and its investors were hardly happy with the lack of progress in popularizing the hardware, it was logical to sell the company off, which happened in early 2008 when Nvidia agreed to acquire Ageia. Since graphics processing units are highly parallel, it is possible to compute physics effects on GPUs, which is exactly what Nvidia is doing with GeForce and PhysX nowadays.

There were, in short, numerous reasons why Ageia PhysX failed to become a de-facto industry standard or to gain popularity: the high price of the add-in card, the lack of support from game developers, the limited visual payoff of effects-only physics, and the fact that the extra objects it generated turned the graphics card into a bottleneck.

Transmeta and Code Morphing Software

Transmeta Corp. was founded in 1995 by a group of enthusiasts with the aim of creating ultra-low-power microprocessors compatible with contemporary software designed to run on x86 chips. But while the concept sounded impressive, Transmeta eventually found itself in the wrong place at the wrong time, and with the wrong kind of hardware.

Transmeta's microprocessors were based on its own proprietary VLIW architecture, and their compatibility with x86 (or, in principle, other instruction sets) was maintained by so-called code morphing software (CMS), a layer that translated x86 instructions into native VLIW code on the fly. Theoretically, this allowed Transmeta to add new features to its central processing units simply by updating the software. However, it also meant that the performance of such chips would hardly be very high.
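For readers unfamiliar with the approach, the sketch below is a heavily simplified, hypothetical model of what a dynamic binary translator such as CMS does: translate a block of guest (x86) code into native code once, cache the result, and re-use it on later executions. The block boundaries, the cache and the "translation" step are illustrative stand-ins, not Transmeta's actual implementation.

```python
# Toy model of dynamic binary translation with a translation cache.
# Guest "x86" code blocks are plain strings here; translation is simulated.

translation_cache = {}   # guest block address -> "native" translated block

def translate_block(address, guest_code):
    """Pretend to translate one guest basic block into native VLIW code."""
    print(f"translating block at {address:#x} (slow, happens once)")
    return f"native[{guest_code}]"

def execute(address, guest_code):
    """Run a guest block: translate on first use, reuse the cache afterwards."""
    if address not in translation_cache:
        translation_cache[address] = translate_block(address, guest_code)
    print(f"executing {translation_cache[address]}")

# A hot loop: the first iteration pays the translation cost, later ones do not.
for _ in range(3):
    execute(0x1000, "mov eax, ebx; add eax, 4; jmp 0x1000")
```

The upside is flexibility, since a software update can change how code is translated; the downside is exactly what Crusoe demonstrated: code that is not executed repeatedly never amortizes the translation cost.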

Transmeta's first processor – Crusoe, based on a 128-bit VLIW architecture – was launched in 2000. Because of the code-morphing overhead it could not perform on the level of competing offerings from Intel. Moreover, it could not run certain industry-standard applications due to issues with CMS. To make matters worse, back then nobody had heard of netbooks or smartbooks, and notebooks were mostly designed for business users, who would hardly enjoy low-power notebooks with limited software compatibility based on unknown chips. The year 2000 was simply not the right time to introduce low-power microprocessors for ultra-thin laptops.

Transmeta's second attempt was called Efficeon (based on a 256-bit VLIW design) and was released in 2003 to compete against the Intel Centrino/Pentium M platform for a place inside premium notebooks. Not exactly a good place to compete, considering that the Pentium M was much faster and few end-users cared about very long battery life back then.

Due to low performance and issues with the code morphing software, Transmeta could not win many designs. As a consequence, the company posted massive quarterly losses, and in 2007 it announced plans to quit the hardware business. The firm was subsequently acquired by Novafora and ceased to exist; Novafora quickly sold off Transmeta's patents and itself vanished into oblivion.

Desktop BTX Form-Factor

Intel Corp. first presented its Balanced Technology eXtended (BTX) form-factor for mainboards and PC cases in September 2003, at Intel Developer Forum Fall 2003. The world's largest chipmaker pitched the form-factor as the future of personal computers and noted that it would solve many of the problems of Intel's own processors at the time, for example the need to cool down very hot Pentium 4 microprocessors. However, BTX failed to become popular, for a number of reasons.

Some believed that BTX would solve the cooling issues of power-hungry central processing units (CPUs) and graphics cards, since a special fan located right beside the CPU would suck cool air in from outside and blow it onto the microprocessor and towards the graphics card. While the efficiency of this approach is indisputable, the disadvantages of BTX led to a very slow start and its eventual fade into oblivion.


A demonstration of BTX case by Intel

The first problems emerged right after the formal roll-out. It turned out that the form-factor was poorly suited to AMD Athlon 64 processors because of their integrated memory controller. To equalize the length of the DRAM signal traces, the processor (or the memory controller hub) has to be centered in front of the memory slots, and with the BTX layout it was nearly impossible to place memory modules directly in front of or behind the processor. Obviously, even if Intel had managed to make BTX the default form-factor for its own platforms, it would have had to abandon it after the introduction of its own Core i7 microprocessors with integrated memory controllers.

The second and main problem for BTX was the huge popularity of ATX. Virtually all mainboards, PC cases, power supply units, graphics cards, CPU coolers and other components were designed for the ATX form-factor. While this might not have been a problem for large PC makers, smaller computer manufacturers and end-users could not transition to a new PC form-factor overnight. Since BTX components were not compatible with ATX components, it was impossible to re-use older parts in newer systems and… end-users as well as smaller PC manufacturers simply kept using ATX.

Then, in 2006, Intel itself released Core 2-series microprocessors with a thermal design power of just 65W for dual-core versions, making exotic cooling solutions unnecessary. Since there was no longer any need to cool down mainstream processors with a 103W TDP, Intel canned further development of its own BTX motherboards. Traditional mainboard vendors, who had never demonstrated much interest in the form-factor, followed. That was the end.

Losses of the Decade: AGP, CRT, Diskette, COM, LPT, PS/2 Ports

Apart from the technologies that were essentially born in the 2000s and died in the same decade, there are plenty of technologies that managed to enter the year 2000 but never made it into 2010.

Invented back in 1897 and commercialized in 1922, cathode ray tube (CRT) technology ceased to be used in mainstream computing in the middle of the first decade of the 21st century, after about thirty years of evolution within the PC industry. High definition, with its extreme resolutions, made CRT devices obsolete. In fact, the whole video sub-system changed significantly: PCI Express x16 replaced AGP, and DisplayPort and DVI replaced D-Sub in the vast majority of systems.


Evolution of diskettes: 1969 - 1982. Image by Wikipedia

Diskettes, which had been used to exchange information for over thirty years, also became outdated early this century following the rise of multimedia, the Internet and large files. Audio and video cassettes are gone as well, ousted by CD, DVD and Blu-ray.

The decade also killed off the traditional COM, LPT, PS/2, DIN and other time-honored ports. Printers and mice can now be connected to PCs wirelessly via Bluetooth, while scanners use USB or FireWire. Even hard disk drives got more convenient Serial ATA cables, which replaced Parallel ATA in all new PCs by the end of the decade.

It has been an interesting and very innovative decade. Many of the technologies we will use in 2010 will cease to exist or will be completely transformed by 2020. Isn't that why we are all passionate about technology?