2010 Year in Review: The Events That Made History

The year 2010 was hardly full of market-changing revolutions and revelations, but the smaller upheavals were exciting enough. As the year draws to a close, we look back at the news stories you read most in 2010 and try to single out the events, companies and consequences that shaped the year.

by Anton Shilov
12/29/2010 | 09:24 PM

After a relatively calm 2009, the year 2010 could have been called a storm. But only could, because 2010 turned out to be the calm before the storm. The year 2011 not only promises the industry's usual upheavals (acquisitions, bankruptcies, mergers and scandals), it also promises to finish what 2010 started: to introduce new realities and trends and to force companies to prove their plans.

 

A number of unprecedented things happened this year: Nvidia disclosed its short-term and long-term roadmaps and, while no details were revealed, the direction of the company became even more obvious; OCZ Technology decided to drop its business of inexpensive memory modules to concentrate on solid-state drives; Panasonic grabbed exclusive rights to the Blu-ray 3D version of the Avatar movie; and ex-Sony executives expressed doubts that the PlayStation 3 could win the war against the Xbox 360 and Wii. However, those events and claims are hardly the most important ones.

Some news stories were read by you, our readers, more than others. We picked those stories, analyzed them and chose the ones that actually introduced something truly important for the computing industry. Naturally, we combined several news stories into a single topic. This time we did not try to rank the events by importance; for us, they are all equally substantial.

This editorial is not about breaking news, but about recapping the events that most of you read about this year, the events that made 2010 different and, perhaps, will go down in history.

Intel Cans Larrabee Graphics, Ships 48-Core Microprocessor, Touts 1000-Core Chip

Designing a graphics processor is definitely not an easy task, a lesson Intel Corp.'s engineers learnt in the 2009 - 2010 period. First, Intel "delayed" the roll-out of its code-named Larrabee graphics chip; then the company scrapped the project altogether and said officially that it was unlikely to release it as a discrete graphics product. But while the GPU from Intel is essentially dead, the company outlined plans for many-core commercial accelerators for high-performance computing (HPC) systems and started to ship its single-chip cloud computer (SCC) processor with 48 cores to researchers.

“We will not bring a discrete graphics product to market, at least in the short-term. […] We are also executing on a business opportunity derived from the Larrabee program and Intel research in many-core chips. This server product line expansion is optimized for a broader range of highly parallel workloads in segments such as high performance computing. We will also continue with ongoing Intel architecture-based graphics and HPC-related R&D and proof of concepts,” said Bill Kircos, director of product and technology media relations at Intel.

Multi-Core Chips for HPC

HPC is a segment that will benefit tangibly from many-core architectures. This year IBM and other makers of HPC servers unveiled machines featuring Nvidia Corp.'s Tesla 2000-series many-core computing processors, which are designed specifically to rival x86 offerings from Advanced Micro Devices and Intel as well as non-x86 processors on the high-performance computing market. AMD also admits that accelerators like AMD FireStream or Nvidia Tesla will eventually rival traditional chips substantially.

But during the year Intel made it clear that it has plenty of trumps up its sleeve: it started to ship a system running a 48-core x86 microprocessor to select software developers in a bid to encourage interest in many-core x86 central processing units (CPUs) with the help of an actual personal computer and chip, and it also began to ship its code-named Knights Ferry software development platform to interested parties looking forward to creating applications compatible with Intel's dedicated many-core HPC accelerators based on the MIC (Many Integrated Core) architecture.

Knights Ferry

The Knights Ferry chip appears to be what had previously been known as the Larrabee GPU: it has 32 x86 cores clocked at 1.2GHz, each with four-way HyperThreading. The card, aimed at PCI Express 2.0 slots, carries up to 2GB of GDDR5 memory. The chip itself has 8MB of shared L2 cache, which is quite intriguing by itself since highly parallel applications do not usually require a large on-chip cache.

During SC10, Intel conducted demonstrations showcasing the real-world capabilities of Knights Ferry. These included using the Intel MIC architecture as a co-processor running financial derivative Monte Carlo demonstrations that boasted twice the performance of those conducted with prior-generation technologies. The Monte Carlo application for Intel MIC was generated using standard C++ code with an Intel MIC-enabled version of the Intel Parallel Studio XE 2011 software development tools, demonstrating how applications for standard Intel CPUs can scale to future Intel MIC products. Intel also showcased compressed medical imaging developed with the Mayo Clinic on Knights Ferry. The demonstration used compressed signals to rapidly create high-quality images, reducing the time a patient has to spend having an MRI. Even earlier, the company had demonstrated real-time ray-traced rendering on its MIC development platform.
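For readers curious what such code looks like, below is a minimal sketch of a data-parallel Monte Carlo pricing loop written in standard C++ and parallelized with OpenMP. This is our own illustration, not Intel's demo code; the point is simply that the hot loop is an ordinary C++ loop that scales with the number of cores or threads, which is exactly the property that lets the same source be retargeted at a many-core accelerator.

```cpp
// Minimal sketch (not Intel's demo code): Monte Carlo pricing of a European
// call option as a data-parallel loop. Build with: g++ -O2 -fopenmp mc.cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <omp.h>

int main() {
    const long   paths = 10000000;                  // simulated price paths
    const double S0 = 100.0, K = 105.0;             // spot and strike prices
    const double r = 0.02, sigma = 0.25, T = 1.0;   // rate, volatility, maturity

    double payoff_sum = 0.0;

    // Each thread simulates its share of the paths with its own RNG stream,
    // so the loop scales with the number of available cores.
    #pragma omp parallel reduction(+:payoff_sum)
    {
        std::mt19937_64 rng(42 + omp_get_thread_num());
        std::normal_distribution<double> gauss(0.0, 1.0);

        #pragma omp for
        for (long i = 0; i < paths; ++i) {
            double z  = gauss(rng);
            double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                      + sigma * std::sqrt(T) * z);
            payoff_sum += std::fmax(ST - K, 0.0);   // European call payoff
        }
    }

    double price = std::exp(-r * T) * payoff_sum / paths;
    std::printf("Estimated option price: %.4f\n", price);
    return 0;
}
```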


Intel Knights Ferry, image by ComputerBase.de

MIC may be exciting, but will it bring profits? In order to keep developing such architectures, it is necessary to sell them on a broad set of markets, while the HPC market still relies on CPUs, which is why graphics processor vendors cannot earn high revenue there. Luxuries come and go; a long-term-oriented strategy stays.

"I do not see the economic model [with MIC]. We are able to produce those [Tesla] GPGPUs because the ultimately there is one GPU for GeForce consumer graphics, Quadro professional business. It costs $500 million to $1 billion to develop those new products every year, it is a hyge investment. Unless you have that [consumer and professional] economic engine in the background, I cannot imagine how one could make a GPU without having a graphics business," said said Sumit Gupta, product manager at Nvidia's Tesla business unit, in an interview with X-bit labs.

Single-Chip Cloud Computer

Intel's MIC architecture will indisputably find its place under the sun in two or three years from now, and many of its elements will be akin to those of today's microprocessors and graphics cards. But the SCC chip, which is actually a small breakthrough of 2010, seems to carry a number of concepts that will become parts of both short-term and long-term processors.

The SCC prototype chip contains 24 tiles with two x86 cores each, for a total of 48 cores – the largest number Intel has ever placed on a single piece of silicon. Each core can run a separate OS and software stack and act like an individual compute node that communicates with other compute nodes over a packet-based network. Every core has its own L2 cache, and each tile includes router logic that allows tiles to communicate with each other over a 24-router mesh network with 256GB/s of bisection bandwidth. There is no hardware cache coherence among the cores, a choice made to simplify the design, reduce power consumption and encourage the exploration of datacenter-style distributed-memory software models on-chip. Each tile (2 cores) can run at its own frequency, and groupings of four tiles (8 cores) can each run at their own voltage. The processor sports four integrated DDR3 memory controllers, or one controller per twelve cores.
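To illustrate the programming model the SCC encourages, here is a minimal, purely hypothetical C++ sketch in which "cores" exchange explicit messages instead of relying on hardware-coherent shared memory. It runs as ordinary threads on a host PC purely for illustration and is not the SCC's actual software stack or message-passing library.

```cpp
// Hypothetical illustration of the SCC-style programming model: "cores"
// communicate through explicit messages rather than coherent shared memory.
// Ordinary threaded host code, not the SCC's real software stack.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Mailbox {                         // stands in for a tile's message buffer
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;

    void send(int msg) {
        std::lock_guard<std::mutex> lk(m);
        q.push(msg);
        cv.notify_one();
    }
    int recv() {                         // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        int msg = q.front();
        q.pop();
        return msg;
    }
};

int main() {
    const int cores = 4;                 // 48 on the real SCC
    std::vector<Mailbox> mailbox(cores);
    std::vector<std::thread> core_threads;

    for (int id = 0; id < cores; ++id) {
        core_threads.emplace_back([&, id] {
            // Simple ring: each "core" passes a token to its right neighbour.
            if (id == 0) {
                mailbox[1].send(100);
            } else {
                int token = mailbox[id].recv();
                std::printf("core %d received %d\n", id, token);
                if (id + 1 < cores) mailbox[id + 1].send(token + 1);
            }
        });
    }
    for (auto& t : core_threads) t.join();
    return 0;
}
```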

Intel calls the x86 cores inside the SCC "Pentium-class" cores since they are superscalar in-order execution engines, but it stresses that these are not the cores used inside the original Pentium (P5) processors: they have been enhanced in order to achieve certain goals and to make the design suitable for implementation in the experimental chip. Considering that the SCC lacks any floating-point vector units, the raw horsepower of the chip is relatively modest.

The Intel SCC is not supposed to become an actual product by definition. The design, peculiarities and single-thread performance of the prototype would hardly satisfy actual users. The chip is purely a research vehicle that will help Intel and software developers determine directions for the future development of microprocessors and software.

"The SCC is a research vehicle, we wanted it be as experimental platform as possible. Having this architecture, we have software data flow, management of execution; it is much better – for a development platform – to have this kind of capability rather than to have a fixed-function unit. Maybe, a fixed-function [data scheduler] is more efficient, but having this program [allows us to] give more flexibility to software organizations," said Sebastian Steibl, the director of Intel Labs Braunschweig, which is a part of global Intel Labs organization.

Apparently, the SCC was needed not only to test the abilities of software makers, but also to explore how many simplistic x86 cores could fit on a single chip. A co-designer of the SCC now says that it could be possible to build a thousand-core processor using the same architecture within the next eight or ten years.

Apple Releases iPad Tablet, iPad 2 Begins to Take Shape Immediately

The Apple iPad was a much-awaited device. Many wanted Apple to release an e-book reader and revolutionize the market for electronic books, and some expected the slate to be a huge PDA, but in reality it turned out to be an over-sized smartphone without voice features and without many other capabilities that could have been expected from a smartphone as well as from a personal computer in a slate form-factor. In any case, the iPad is among the breakthroughs of 2010.

The Apple iPad is based on the company's own A4 system-on-chip with a 1GHz ARM Cortex-A8 processing core, a PowerVR SGX 535 graphics engine (OpenGL ES 2.0, DirectX 10.1, worse than Intel GMA 500) and so on. The iPad is equipped with a 9.7" multi-touch IPS LCD screen with 1024x768 resolution; 16GB, 32GB or 64GB of flash storage; Wi-Fi 802.11n, Bluetooth 2.1+EDR and 3G (special versions only) connectivity, as well as USB and SD card connectivity via an optional adapter kit. The tablet runs iOS with all of its pros and cons and can run applications developed for the iPhone as well as specially designed software.

Even though the iPad is a fairly feature-rich product, it lacked a number of critical features, which instantly catalyzed rumours about a forthcoming Apple iPad 2 and inspired widespread criticism of the model that is available.

One of the most important shortcomings of the Apple iPad is the absence of Adobe Flash support. Just like on the iPhone, one may still enjoy YouTube videos on the iPad, but it is impossible to play Flash games or navigate web-sites that use Flash in their menus. Considering that even some corporate sites use Flash extensively nowadays, this does seem to be a huge drawback for a device designed for the Internet. We hoped that Adobe and ARM would eventually enable Flash on the ARM processor of the iPad, or that Apple would somehow implement it on the next-generation iPad. But later on the head of Apple, Steve Jobs, said that there would be no Flash on the iPhone and iPad, accusing Adobe of being lazy.

Another crucial disadvantage of the Apple iPad is the lack of web-cams: the device has neither a front-facing nor a back-facing camera. Even the cheapest netbooks are usually equipped with one, and the vast majority of modern phones can make video calls. It is rather strange that a device developed for those who browse the Internet often cannot support video conferences or calls. While video calls will likely become a feature of the iPad, the back-facing camera remains in question. Get the Samsung Galaxy Tab if you want augmented reality and the ability to take pictures quickly.

The absence of multi-tasking was yet another strong disadvantage of the iPad. The issue was partly fixed with a new version of the operating system, but the multi-tasking capabilities of the iPad are still far worse than those of netbooks.

Apple has been selling high-definition 720p (1280x720, progressive scan) videos for quite a while now. As a result, the absence of a high-definition screen (1024x768 resolution) and high-definition output (up to 576p) on the iPad seems odd. It was logical for Apple to stick to a 4:3 aspect ratio since the iPad is supposed to act as an electronic book reader too. At present there are probably no reasonably priced small screens that support 1280x960 resolution, but in the future such displays may emerge and find their way onto the second-generation iPad.

A very strange thing about the iPad was the lack of support for GSM voice calls. Of course, Apple needs to sell iPhones, hardly anyone would use a 680-gram device as a mobile phone, and no notebooks or netbooks with WWAN support allow voice calls either. Nevertheless, the feature might be appreciated by some users, and Apple might enable it on the iPad 2; besides, with a web-cam installed, Apple will likely enable its FaceTime service on the next-generation iPad.

Almost all devices from Apple lack critical features in their first generation. Nonetheless, the Apple iPad remains one of the breakthroughs of 2010, with over 7.5 million units sold.

AMD Many Core Strategy: Six-Core Chip for Desktops, Twelve-Core CPU for Servers

The year 2010 marked another increase in the number of cores within central processing units (CPUs). Both Intel and Advanced Micro Devices introduced six-core desktop offerings this year, and AMD even managed to roll out twelve-core CPUs for servers.

Thuban Adds Turbo

Thuban, a star in the constellation Draco and the code-name for AMD's six-core desktop processor, is a chip that offers both the potential of six x86 cores and relative affordability compared to offerings from the arch-rival.

One of the most important innovations of the AMD Phenom II X6 1000T-series "Thuban" processors is Turbo Core technology. Depending on the actual model and its specifications, the six-core Phenom II X6 1000T-series chips are able to boost their clock speeds by 400MHz or 500MHz when only half (or fewer) of the available cores are active, i.e., when the chip effectively works in dual-core or triple-core mode. The top-of-the-range model, the Phenom II X6 1100T with a default clock speed of 3.30GHz, can clock itself at 3.70GHz with Turbo Core enabled. The chips also carry up to 9MB of cache, are compatible with the AM3 socket and have up to a 125W TDP.

It cannot be said that the Phenom II X6 truly redefined AMD's presence or perception on the desktop market this year; if anything, the opposite is true. What it did manage to do was ensure that the company's chips found their way into more or less expensive systems aimed at performance enthusiasts.

New Course with Magny Cours: No 4P Tax

For years Advanced Micro Devices had claimed that putting as many cores as possible onto a single slice of silicon was the best way to achieve both high performance and high scalability. But with its twelve-core code-named Magny-Cours chips it decided to adopt a multi-chip-module (MCM) design, something that has been used for years by numerous makers of server chips, including IBM and Intel.

On the 29th of March, 2010, the world's second-largest supplier of central processing units said that its eight-core and twelve-core AMD Opteron 6000-series "Magny-Cours" processors, the first chips designed for the AMD G34 "Maranello" platform for 2-way and 4-way servers, were available for sale. The chips featured a quad-channel DDR3 memory interface, supported up to 12 memory modules per socket and offered loads of server- and enterprise-specific functionality.

Maranello was AMD's first server platform in almost a decade to be based on the company's own chipset. The AMD 5600-series chipset features I/O virtualization capability, HyperTransport 3.0 technology and PCI Express 2.0. The new AMD Opteron platform is chipset- and socket-compatible between 2P and 4P and will be compatible with the planned processors based on the next-generation AMD server processor core, code-named "Bulldozer".

AMD also stresses that the new Maranello platform removes the so-called "4P tax", since the same processors can be used in both 2-way and 4-way designs, and 4P-capable processors are now the same price as 2P-capable processors. In fact, the move to remove the "4P tax" is not only logical, but also a well-thought-out one by AMD.

"The 4P market is in decline. It used to be 10% of the total market, and now it represents only 4% of the total market as 2P servers have accelerated their performance and reliability. If the 4P market is going to change, something needs to happen. Left alone, it may whiter away to only a point or two. We have had a lot of customers approach us with a desire to purchase more 4P capable systems, but because of the price premium (for generally identical silicon), they have been hesitant to buy 4P. Instead, they compromise and choose 2P because of the more favorable price. We believe this change makes sense today, just as, in the late 90’s, the reduction in 2P price points drove a large increase in units," said John Fruehe, the director of product marketing for server/workstation products at AMD.

Nvidia Fermi Powers World's Highest-Performing Supercomputer

General-purpose computing on graphics processing units (GPGPU) is an idea that is almost ten years old. Thanks to their massively parallel architecture, GPUs are great for parallel computing. This year Nvidia supplied the compute cards that power the world's most powerful supercomputer, the Tianhe-1A.

The road to Nvidia Fermi in its final incarnation was a bumpy one. The company first demonstrated its code-named GF100 chip in September, 2009, saying that it would have 512 compute elements with a kind of multi-threading technology and 768KB of L2 cache, and that the graphics processor would deliver massive performance in double-precision floating point operations. From October, 2009, to April, 2010, the company delayed the chip a number of times, blamed poor yields at TSMC and made rather bold promises.

In the end, the flagship Fermi-class GeForce graphics processor for games was cut down to 480 compute elements, whereas the most powerful Fermi-class Tesla chip for computations featured only 448 stream processors in order to improve yields. Moreover, the GeForce GTX 480 was slower than the ATI Radeon HD 5970 and could not offer substantial advantages over the ATI Radeon HD 5870, released over half a year earlier. The position of the company on the graphics market was tough. But then all the effort that Nvidia had put into GPGPU suddenly started to pay off.

Starting from around the middle of the decade, Nvidia spent a lot of resources on the creation of its CUDA platform for GPU computing and eventually integrated a good deal of compute-specific logic into its Fermi-series graphics processors. For example, the Fermi architecture is tailored to deliver maximum double-precision floating point compute performance, and Fermi's SIMD processors can both read from and write to the unified L2 cache, something that is needed for compute, is less needed for graphics, and is something that AMD decided to skip.

In November 2010 Nvidia began to reap the fruits of its efforts. Three of the top five supercomputers in the world, including the #1 Tianhe-1A with 2.56PFLOPS of performance, were powered by Nvidia Tesla 2000-series compute cards. Moreover, the same cards were used in many other HPC systems.

According to the updated Top 500 list of supercomputers, the most powerful system today is Tianhe-1A, located in the National Supercomputing Center (NSC) in Tianjin, China. The system scores 2.566 petaFLOPS (PFLOPS) in the LINPACK benchmark and can theoretically perform 4.7 quadrillion floating point operations per second. The most powerful supercomputer on the planet is powered by 14,336 six-core Intel Xeon X5670 (2.93GHz) central processing units (CPUs) as well as 7168 Nvidia Tesla 2050 compute boards.

Other supercomputers powered by Nvidia Tesla in the top five are Nebulae (1.271PFLOPS, 4640 Tesla compute boards, 2.55MW), which belongs to the NSC in Shenzhen, China, and Tsubame 2.0 (1.192PFLOPS, 4200 Tesla compute boards, 1.340MW), located in the GSIC Center of the Tokyo Institute of Technology in Japan.

It is interesting to note that Tianhe-1A's Rmax performance in the LINPACK benchmark is only about 55% of its theoretical Rpeak rating, a gap of roughly 45%. For CPU-only clusters the gap is usually much smaller, typically in the 20% - 30% range, whereas GPU-based supercomputers listed in the Top 500 tend to show a much wider gap between actual and theoretical performance.
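Using the figures quoted above, the efficiency works out as follows (our own arithmetic):

```latex
\[
\frac{R_{\max}}{R_{\mathrm{peak}}}
  = \frac{2.566\ \mathrm{PFLOPS}}{4.7\ \mathrm{PFLOPS}}
  \approx 0.55,
\qquad
\text{gap} \approx 1 - 0.55 = 45\%.
\]
```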

Despite the fact that supercomputers powered by compute accelerators like Nvidia Tesla are only beginning to take off, their performance-per-watt efficiency is pretty spectacular. For example, Tianhe-1A consumes 4.04MW of power, whereas a CPU-only cluster built from today's microprocessors and delivering 2.566PFLOPS would have required about 50 thousand CPUs and consumed 12.7MW of power, according to Nvidia. One of the most notable new entries in the Top 500 is Tsubame 2.0, the new supercomputer from the Tokyo Institute of Technology. The system delivers petaflop-class performance while remaining extremely efficient, consuming just 1.340MW, dramatically less power than any other system in the top five.
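For reference, the rough performance-per-watt figures implied by the numbers above are (our own arithmetic, based on LINPACK Rmax and reported power):

```latex
\[
\frac{2.566\ \mathrm{PFLOPS}}{4.04\ \mathrm{MW}} \approx 635\ \mathrm{MFLOPS/W}\ \text{(Tianhe-1A)},
\qquad
\frac{2.566\ \mathrm{PFLOPS}}{12.7\ \mathrm{MW}} \approx 202\ \mathrm{MFLOPS/W}\ \text{(hypothetical CPU-only cluster)},
\qquad
\frac{1.192\ \mathrm{PFLOPS}}{1.340\ \mathrm{MW}} \approx 890\ \mathrm{MFLOPS/W}\ \text{(Tsubame 2.0)}.
\]
```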

"Tsubame 2.0 is an impressive achievement, balancing performance and power to deliver the most energy efficient petaflop-class supercomputer ever built. The path to exascale computing will be forged by groundbreaking systems like Tsubame 2.0,” said Bill Dally, chief scientist at Nvidia.

Microsoft Sells 2.5 Million Kinect Motion Sensors in 25 Days, Sony Move Sees Success

Microsoft Kinect, a motion-sensing controller for the Xbox 360 game console, became more than just the smash hit of fall 2010. Even though the market for game consoles is huge, selling a hundred thousand console accessory units a day is a whopping number. That huge number of units sold to actual customers proves that people do want controller-free gaming, and the subsequent successful attempts to hack the device show that there is interest in using it beyond the Xbox 360.

“We are thrilled about the consumer response to Kinect, and are working hard with our retail and manufacturing partners to expedite production and shipments of Kinect to restock shelves as fast as possible to keep up with demand. With sales already exceeding two and a half million units in just 25 days, we are on pace to reach our forecast of 5 million units sold to consumers this holiday," said Don Mattrick, president of the interactive entertainment business at Microsoft.

Sony Computer Entertainment also announced in late November that sales of the PlayStation Move motion controller for the PlayStation 3 video game console had reached over 4.1 million units worldwide. But Sony always reports sold-in numbers, which means the number of units sold to retailers, not to actual customers. The 4.1-million sold-in milestone was reached about 70 days after the release of Move in September in North America, Europe/PAL territories and Asia, and a little more than one month after the launch of the PlayStation Move in October in Japan. Based on that, it can be estimated that Sony supplied around 58.5 thousand Move accessories to its retail partners every day, which does not mean that 58.5 thousand devices were bought by actual gamers each day. Nonetheless, it is an indisputable fact that the Move appears to be a big success for Sony.

The Kinect sensor plugs via USB directly into any Xbox 360 and features an RGB camera, a depth sensor, audio sensors, and motion-sensing technology that tracks 48 points of movement on the human body. Kinect has the ability to recognize faces and voices and can perform full-motion tracking of the human body at 30 frames per second. While the depth sensor supports 640x480 resolution, it is currently limited to 320x240.

The PlayStation Move platform includes the motion controller with its glowing sphere, the navigation controller, and the PlayStation Eye camera. The combination of the PS3 system and the PlayStation Eye camera detects the precise movement, angle and absolute position in 3D space of the PlayStation Move motion controller, allowing users to play intuitively, as if they themselves were within the game. The PlayStation Move motion controller sports a three-axis gyroscope, a three-axis accelerometer, a terrestrial magnetic field sensor and a color-changing sphere that is tracked by the PlayStation Eye camera to deliver very precise tracking. The lit sphere, however, requires the player to stay within view of the PS Eye camera.

ATI Brand Discontinued, New Product Line Does Not Impress

In 2010 AMD made a move that is arguably the worst decision in the recent history of the company. The Sunnyvale, California-based firm started to phase out the ATI graphics card brand it obtained with the acquisition of ATI Technologies back in 2006.

The removal of the ATI letters from the logotype of Radeon graphics processors marked AMD's intention to unify the brands of the products developed within the company and to emphasize the company itself. The move was also made to underline the importance and united nature of Fusion processors, which contain central processing units developed by AMD and graphics processing units designed by ex-ATI teams. Unfortunately for AMD, the 25-year-old ATI trademark was just too recognizable, and the new family of AMD Radeon products is just too pale to actually overshadow it.

Even though many say that it is not necessary to pay for well-known brands, consumers love recognizable labels and trademarks and are willing to pay for them. The reasons are pretty simple: well-known brands are associated with high quality, performance and experience, and are generally recognizable. As a result, Volkswagen Group retains brands like Audi, Bugatti, Lamborghini, Seat, etc., whereas the LVMH holding preserves tens of brands that ultimately compete against each other. The reasons are pretty obvious: loads of money are invested into the creation of a brand and its support. Many consumers know, either from experience or simply from advertisements, that ATI produces graphics cards, whereas Moët et Chandon makes champagne; hence, LVMH will hardly start selling something like "Louis Vuitton Moët Hennessy Brut Impérial". It is noteworthy that when Rackable acquired SGI in 2009, it actually decided to use the SGI name instead of its own due to the value of the brand.

It should be noted that with the dawn of the accelerated processing unit (APU) era it does make sense for AMD to sell those chips under one single trademark, since "AMD E-350 with ATI Radeon HD 6310 graphics core" does not sound easy for average consumers. But does "AMD Vision based on AMD E-350 with Radeon HD 6310 graphics" sound any easier?

To make matters worse, the first AMD Radeon HD 6800- and 6900-series graphics adapters failed to truly impress the enthusiast and consumer community. Firstly, the AMD Radeon HD 6870 could barely beat the previous-generation ATI Radeon HD 5850, and even though the 6870 is less expensive, without a "wow" factor customers may well start to look at Nvidia-branded solutions, which are known for high performance and carry additional functionality. Secondly, the AMD Radeon HD 6970 is somewhat faster than the ATI Radeon HD 5870, but it is clearly slower than the ATI Radeon HD 5970 (customers do not care about the number of chips onboard, do they?), which again eliminates any "halo" effect and, in combination with a less-known brand, may hurt the business performance of AMD's graphics business unit.

It should be noted that the rather modest performance of the AMD Radeon HD 6800 and 6900 series was not a result of poor execution or anything within the company, but a consequence of the cancellation of the 32nm fabrication process by Taiwan Semiconductor Manufacturing Company. As a result, both AMD and Nvidia had to design new breeds of graphics chips for the 40nm process technology, something that dramatically reduced their ability to drive performance upwards and to innovate in terms of features.

AMD's Q4 of FY2010 ended on the 25th of December, 2010, and the company will report its financial results in the second half of January, 2011. From those results it will be possible to make educated guesses as to whether dropping the ATI brand was a good idea or not, amid the known performance issues of the new-generation products.

AMD Delays Fusion "Llano", Speeds Up Introduction of "Ontario" Chips

AMD's Fusion is the project that will either prove the $5.4 billion acquisition of ATI Technologies right or prove it wrong. After years of changes in the roadmap, Advanced Micro Devices managed to shape two completely different types of products that combine x86 processing cores and graphics processing engines on the same piece of silicon. But on the home stretch it had to dramatically change its plans, a reshuffle that was executed almost flawlessly.

For over a year it was thought that AMD would release its higher-performance code-named Llano accelerated processing unit (APU) first, in Q1 2011, and only then proceed with the low-cost code-named Ontario project. However, problems with the 32nm silicon-on-insulator process technology at Globalfoundries and with the design of Llano completely changed the company's plans, and the firm decided to accelerate the introduction of its Ontario and Zacate APUs for low-cost notebook, netbook and nettop systems.

"Llano - our Fusion APU offering aimed at the higher end of the client market - is generating positive customer response. However, in reaction to Ontario’s market opportunities and a slower than anticipated progress of 32nm yield curve, we are switching the timing of the Ontario and Llano production ramps. Llano production shipments are still expected to occur in the first half of next year. We have seen the rate of yield leaning below our plans on 32nm. [...] We take a bit more time to work on the 32nm yields up the curve. So, the effective change [...] to our internal plans on Llano amounts to a couple of months" said Dirk Meyer, chief executive officer of AMD, in mid-July, 2010.

The delays of Llano turned out to be much worse than projected by the executive. Instead of initiating mass production in late 2010, the company is rumoured to be starting production of desktop Llano chips only in July, 2011, according to sources familiar with AMD's plans. But even at a whopping seven months behind schedule, Llano will still be an interesting product.

Even though the "younger-brothers" code-named Ontario and Zacate are much less powerful than Llano, both are likely to boost or at least sustain AMD's share on the market of mobile computers as both offer higher CPU performance compared to Intel Atom, can compete against mobile Celeron processors and also feature DirectX 11-class graphics processing units with support for GPGPU and all the other advanced features.

Considering that in the past any change in AMD's roadmap was a catastrophe for the company, this rescheduling of two completely different projects seems to be either great luck or the result of a very thoughtful reorganization of the company.

Rambus Attacks Open Interfaces: DisplayPort, PCI Express, Serial ATA

Rambus, a designer of memory and interface technologies and also one of the most hated companies in the technology industry, did something completely unimaginable on December 1, 2010. It accused a number of companies - including leading designers of chips that power a broad range of electronics applications - of infringing patents that concern open industry standards, including DisplayPort, PCI Express, Serial ATA and Serial Attached SCSI (SAS).

Rambus' formal complaint was filed with the United States International Trade Commission (ITC), requesting the commencement of an investigation pertaining to products from Broadcom Corp., Freescale Semiconductor, LSI Corp., MediaTek, Nvidia Corp. and STMicroelectronics. The complaint seeks an exclusion order barring the importation and sale of products from the aforementioned companies that infringe certain patents from the Dally and Barth families of patents. Accused semiconductor products in the complaint include graphics processors, media processors, communications processors, chipsets and other logic integrated circuits (ICs).

Rambus also demanded that the ITC bar the importation and sale of products based on chips that it believes infringe its patents. Apparently, the company wants to stop sales of almost all electronics available today, including personal computers, workstations, servers, routers, mobile phones and other handheld devices, set-top boxes, Blu-ray players, motherboards, plug-in cards, hard drives and modems.

For the Dally patents, the accused semiconductor products from these companies include ones that incorporate PCI Express, certain Serial ATA, certain Serial Attached SCSI (SAS), and DisplayPort interfaces. Ironically, Rambus became the exclusive licensee of the Dally family of patents as part of its 2003 acquisition of technology and IP from Velio Communications, a company founded by William Dally, the chief scientist of Nvidia.

In the case of the Barth patents, the accused semiconductor products include ones that incorporate DDR, DDR2, DDR3, mobile DDR, LPDDR, LPDDR2, and GDDR3 memory controllers.

Fortunately for the world, in late December the ITC began a formal investigation only into alleged infringements of the Barth and Farmwald-Horowitz patents. It is unclear whether the ITC will also investigate the alleged infringements concerning the Dally family of patents, some of which cover technologies used throughout the industry.

With the new attack Rambus did not change its rhetoric or its usual manner of making statements, but it altered the actual wording. Today the company, which has been a threat to the memory industry for many years, does not claim that it developed something and spent tens of millions of dollars on it. Instead, it effectively admits that it quietly acquired certain patents and waited for the industry to adopt the standards widely before asserting its rights.

“Rambus has invested hundreds of millions of dollars developing a portfolio of technologies that are foundational for many digital electronics. There is widespread knowledge within the industry about our patents including their use in standards-compatible products accused in these actions. In fairness to our shareholders and to our paying licensees, we take these steps to protect our patented innovations and pursue fair compensation for their use," said Harold Hughes, president and chief executive officer at Rambus.

Secretly patenting a technology while being part of an industry standard-setting organization is one degree of unethical behaviour. Secretly acquiring patents in a bid to demand licensing fees from absolutely everyone in the industry is another. Well, Rambus reached a whole new level in 2010.

Microsoft Signs Pact with ARM, Threatens "Wintel"

One of the most significant events of the year was Microsoft's new pact with ARM Holdings in late July. While formally the agreement merely extends the collaborative relationship between the two companies, it may play a role for the whole industry that is hard to overestimate. In particular, it may pose a threat to the domination of x86, an event that would itself be hard to overestimate.

Since 1997 Microsoft and ARM have worked together on software and devices across the embedded, consumer and mobile spaces, enabling many companies to deliver user experiences on a broad portfolio of ARM-based products. In particular, Microsoft developed operating systems (OSs), including Windows Embedded, Windows CE, Windows Mobile and Windows Phone, that could run on ARM-based microprocessors and system-on-chips. This time, however, Microsoft licensed the ARM architecture and got closer access to ARM's intellectual property (IP), which could enable the software giant to develop its own chips based on ARM's IP or to learn how to build an efficient OS for such chips. Either scenario reshapes a lot of markets.

Windows Next on ARM?

The most intriguing thing about the official Microsoft-ARM announcement is whether Microsoft will actually enable support for ARM processors in its next-generation Windows system. It may be Windows 8, which is expected to be very flexible in terms of configuration across different PC form-factors. Or it may be a version of Windows 7 designed specifically for PCs in the tablet form-factor.

The software giant is reportedly preparing to unveil a version of the Windows operating system (OS) for microprocessors featuring the ARM architecture for the first time at the Consumer Electronics Show. The report states that the forthcoming operating system will also be able to work with x86 microprocessors from Advanced Micro Devices and Intel Corp., which may mean that Microsoft's OSs will eventually become platform- and microprocessor-agnostic, something that has been rumoured for some time. Keeping in mind that ARM's partners, AMD and Intel either offer or plan to offer system-on-chips for tablets (and not for desktops or notebooks), Microsoft's first "universal" OS may indeed be aimed specifically at slates. But there are different opinions here.

"The rumor has been circulating for a year that other ARM licensees have ported Windows to ARM," said Jon Peddie, the head of Jon Peddie Research market tracking agency.

"I would be very surprised if Microsoft moved Windows 7 or 8 onto ARM. Even if they did, there would not be any applications to run on it, since the apps all run only on x86 CPUs. I believe Microsoft will focus on ARM for Windows Phone and maybe tablets, since the full x86/Win 7 environment is a little too bulky for these low-powered devices," said Nathan Brookwood, the principal analyst at Insight 64.

The clash between ARM and x86 is heating up, even though it is not widely visible at the moment. In fact, it is more than a battle between central processing unit (CPU) architectures: it is a fight for consumers' minds, and Microsoft badly needs to win it now that Apple and Google are gaining ground with smartphones that offer long battery life thanks to the power efficiency of ARM-based chips.

Potentially, it may be logical for Microsoft to enable ARM compatibility in Windows and create some kind of emulator (which is another reason why it needs to license the ARM architecture) to run previous-generation software.

Impact on Traditional x86 Suppliers

If Microsoft starts to work more heavily on operating systems for ARM-based devices, which are by definition low-power, there will be pretty obvious consequences for low-power x86 chip suppliers, namely Advanced Micro Devices, Intel Corp. and Via Technologies. To make matters worse, ARM is attempting to enter the server market with specifically designed chips. Needless to say, with server-class chips at hand it is easy to address the market of client computers too.

"[The agreement] will impact the tablet and netbook segments and that is one reason you see Microsoft countering with MeeGo and HP with WebOS," said Jon Peddie.

Indeed, Microsoft does not care about Atom or Athlon; what it cares about are deployments of its own operating systems. ARM support and a proper Windows implementation would allow Microsoft to compete in emerging device markets while maintaining its leadership in the desktop/notebook and server markets.

"[The decision] puts Microsoft into the game since they seemingly cannot get the right combination of features, memory footprint and power requirements with its mobile OS," implied Mr. Peddie.

Perhaps Microsoft will even attempt to create an OS for HDTVs, Blu-ray players or set-top boxes (which feature ARM-based SoCs) in order to rival a number of competing platforms, such as Google TV. Or maybe Microsoft wants to develop its own chips to power various devices featuring its software.

Chips from Microsoft?

Microsoft quietly announced the formation of its chip design group back in 2006. So far the group seems to have worked only on various chips for the Microsoft Xbox 360 console: in particular, it shrank them in size and created the single-chip CPU-GPU code-named Valhalla system-on-chip (which combined the IBM Xenon processor with ATI Xenos graphics and a memory controller). Thanks to the new license, the company will be able to create its own SoCs for a wide range of own-brand devices.

"[Microsoft] must see opportunities to optimize an ARM core in ways that will make their own software for these emerging consumer devices (phones, tablets, game platforms, appliances, etc) run better than they could on a standard ARM processor. Enhancing an ARM core in this manner will take several years, so don’t expect any instant gratification here. I certainly understand why companies like Apple or Google want to differentiate their products with custom SOCs that integrate a unique combination of CPU cores, GPU cores, and peripherals, but I’m not sure Microsoft can enhance a standard ARM core enough to make a meaningful difference in the end product," said Nathan Brookwood.

But maybe the software giant also plans chips for others?

"I doubt that they would design chips for other companies to those other companies’ specifications, but Microsoft could sell a chip they optimized for their software to third parties. Microsoft understands the virtues of a leveraged business model. If they saw a way to lever a proprietary chip design that gives their software a market advantage, I am sure they would find a way to license that technology to others who want to sell their software," said Nathan Brookwood.

Meanwhile, analyst Jon Peddie believes that Microsoft will only tailor software and will leave design and manufacturing to its "fab/ODM" partners.

It remains to be seen whether Microsoft will start to design its own chips and eventually offer a variety of products directly competing with its customers' offerings as well as with Apple-branded gadgets. Based on what we have seen from Microsoft so far, there will be a long road between the announcement and actual products. Perhaps we will not see any outcome before Windows 8. More likely, the ramifications will only become apparent by the time "Windows 9" hits the market.

AMD Discloses Peculiarities of Bulldozer, Starts to Talk About Bulldozer 2

For Advanced Micro Devices, Bulldozer is not just another micro-architecture or series of chips. For the company, which has been developing it for the past seven years, it is a matter of the future, a matter of prosperity and survival. This year Bulldozer got its face, and while the full details are being kept for the launch of the central processing unit (CPU), the chip has definitely taken shape.

"In the second quarter of this year we also taped out the first 32nm product based on our new high-performance Bulldozer CPU core. We plan to begin sampling our Bulldozer based server and desktop processors in the second half of this year and remain on track for 2011 launches. These new processors will deliver significant performance improvements to the AMD platform," said Dirk Meyer, chief executive officer of AMD, in mid July, 2010.

The AMD Orochi design is the company's next-generation processor for the high-end desktop (Zambezi) and server (Valencia) markets. The chip will feature eight processing engines, but since it is based on the Bulldozer micro-architecture, those cores will be packed into four modules. Every module will have two independent integer cores (which share fetch, decode and L2 functionality) with dedicated schedulers, as well as one "Flex FP" floating point unit with two 128-bit FMAC pipes and a single FP scheduler. The chip will have a shared L3 cache and a new dual-channel DDR3 memory controller, and it will use the HyperTransport 3.1 bus. The chip was demonstrated in operation in November, 2010.

At an investor conference in November Chekib Akrout, senior vice president of technology group at AMD, confirmed the company's intention to start revenue shipments of Bulldozer-based processors for desktops in Q2 2011 and for servers in Q3 2011.

Around the time AMD was about to announce commercial plans for Bulldozer-generation processors at its analyst day, details emerged about the "Bulldozer NG" (next-generation) and "BDver2" chips that are expected to succeed the original Bulldozer.

Apparently, Bulldozer version 2.0 will support at least three new extensions, including BMI (Bit Manipulation Instructions), TBM (Trailing Bit Manipulation) and FMA3 (three-operand fused multiply-add).
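As an illustration of what a three-operand fused multiply-add means for software, here is a minimal C++ sketch of our own (not AMD documentation) using the standard FMA3 intrinsic that compilers expose for FMA3-capable processors; it would be built, for example, with g++ -O2 -mfma.

```cpp
// Minimal illustration of a three-operand fused multiply-add (FMA3):
// d = a * b + c is computed with a single rounding step, and the destination
// register is one of the three sources (unlike four-operand FMA4, which the
// original Bulldozer supports and which names a separate destination).
#include <immintrin.h>
#include <cstdio>

int main() {
    __m128d a = _mm_set1_pd(1.5);
    __m128d b = _mm_set1_pd(2.0);
    __m128d c = _mm_set1_pd(0.25);

    __m128d d = _mm_fmadd_pd(a, b, c);       // d = a * b + c in one instruction

    double out[2];
    _mm_storeu_pd(out, d);
    std::printf("1.5 * 2.0 + 0.25 = %.2f\n", out[0]);   // prints 3.25
    return 0;
}
```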

Meanwhile, a very rough plan from AMD suggests (note: this is not a roadmap, merely an indication of what could be done) that sometime around 2012 the Sunnyvale, California-based chip designer may launch a completely new server platform currently known as G42/G44. The next-generation platform will likely continue to utilize the Bulldozer micro-architecture initially, but the microprocessors will absorb the I/O functionality of chipsets, thus further simplifying server platforms. Obviously, there will still be external I/O controllers, but there will no longer be a central hub in servers, which is a very interesting concept. Around the same timeframe AMD also plans to release GPUs seriously optimized for server needs. Later on, perhaps in the 2013 - 2014 timeframe, the company may introduce the so-called Bulldozer NG micro-architecture (Bulldozer 2) and new processors based on it. The first Bulldozer NG-powered chips are likely to be compatible with the G42/G44 infrastructure.

The most revolutionary change to AMD's server platform will take place in the longer term, perhaps in 2014 - 2016 or even later, when the company introduces microprocessors with integrated Bulldozer NG x86 cores, high-speed stream processing units and input/output functionality. Later on the company will also switch from Bulldozer NG cores to a future micro-architecture, which might be called post-Bulldozer NG.

At present it is nearly impossible to predict any details of the mid-term or long-term microprocessors. Moreover, even AMD's conceptual roadmap does not provide any actual promises, commitments or details, but merely indicates one important direction AMD is heading in: the fusion of multi-core x86 chips with many-core graphics chips. What we know for sure is that AMD has pretty serious plans!