2/2 of 2009: Top News for the Second Half of the Year

The second half of 2009 is over. While the first half was remarkable, the second turned out to be truly stunning: many important events happened and many more were predicted to happen. Today we are taking a look at the top ten news-stories that you, our visitors, considered the most exciting and valuable among those we published in the second half of 2009.

by Anton Shilov
12/30/2009 | 05:46 PM

The second half of 2009 can be described with one simple word: a storm. Advanced Micro Devices and Intel Corp. finally settled their long-lasting legal dispute: AMD got $1.25 billion and managed to liberalize its cross-licensing agreement with Intel, while the latter still faced a rather formidable lawsuit from the Federal Trade Commission. Globalfoundries became the world's No. 3 contract maker of semiconductors virtually overnight with the acquisition of Chartered Semiconductor, and electronic book readers finally proved their market opportunity, with the Amazon Kindle becoming one of the most popular Christmas gifts.


In this editorial we are taking a look back at the top ten news-stories of the second half of 2009. We picked the ten stories that were read the most by our readers and which we consider important for the future of the computing industry. Of course, much more has happened over the last twelve months, so if you want to look back at the whole year you should also check out the Top News for the First Half of the Year editorial.

In addition to the most important news themselves, we decided to include some commentary and tried to outline the trends and end results of the covered events. So, let's proceed to the top ten news-stories of the second half of 2009, beginning with No. 10. Remember, if you want the full original story, just click on the links.

10. Asustek Set to Withdraw from Mainboard Manufacturing

Asustek Computer, the world’s largest maker of mainboards, announced in mid-December plans to completely spin off Pegatron Technologies, the company that makes motherboards, graphics cards and many other components for Asus. The move will allow Asus to become more competitive in terms of branding, but will further remove the firm from actual manufacturing.

Asustek is well known for premium-quality components for personal computers (PCs) and is very popular among end-users thanks to rich feature-sets and premium quality. It is not surprising that many of our readers considered the news-story very important: with Asustek outsourcing production to third parties, it remains to be seen whether quality will stay at the same level. There is little doubt, though, that Asus will remain one of the top suppliers of motherboards in the world.

Asus does have very strong brand recognition in many countries around the world, especially among computer enthusiasts, and has been withdrawing from actual manufacturing for quite some time now. In fact, it is the very same route that Acer Group took many years ago by spinning off its manufacturing division – now known as Wistron – and concentrating on the marketing of branded devices. Acer Group is now, after the acquisitions of Gateway, Packard Bell and some other suppliers, the world’s second largest maker of personal computers.

It remains to be seen what exactly happens with the Asus-branded mainboard, graphics card and other device businesses. Some expect Asus to start targeting broader market segments; others believe that it will concentrate on notebooks, netbooks and various other types of mobile electronics. In the meantime, rumours emerged that Pegatron itself wants to reposition Asrock, currently a low-cost supplier of motherboards that belongs to Asus, and enter premium market segments.

All in all, competition in the motherboard market will hardly get easier in the coming years, which theoretically means a lot of good for the consumer.

9. First Details Regarding AMD Bulldozer Emerge: 8 Cores with Multithreading, 128-Bit FPU

Advanced Micro Devices is a company known for springing surprises. Back in 1999 it stunned everyone with the AMD Athlon processor, which immediately stole the performance crown from the Intel Pentium III; in 2003 its AMD Opteron and AMD Athlon 64 chips astonished the desktop and server markets with their performance and features; but in 2007 the firm shocked the industry with underperforming quad-core AMD Opteron and Phenom processors that also contained a translation lookaside buffer (TLB) bug. No surprise, then, that all eyes are on the next-generation Bulldozer micro-architecture.

In November ’09 the firm unveiled virtually the first official details of the forthcoming Bulldozer micro-architecture and its implementation, and they seemed rather impressive!

Based on the information provided by AMD during its annual Analyst Day in November, the first Bulldozer chip, code-named Zambezi (which belongs to the Orochi family, according to the firm), will feature eight x86 processing engines with a multithreading technology, two 128-bit FMAC floating point units, shared L2 cache, shared L3 cache as well as an integrated memory controller. AMD also states that the new CPU will feature “extensive new power management innovations”. The details are still sketchy, but what more should we expect about a product due in 2011?

The implementation of a 128-bit FMAC is quite logical: AMD’s SSE5 set of extensions does feature 128-bit multimedia instructions as well as 128-bit three-operand instructions. In fact, as we have observed over the last decade, there is a clear trend towards wider and more capable floating point instructions.
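To illustrate what a three-operand fused multiply-accumulate instruction computes (this sketch is ours, not AMD's code): it produces a×b+c without overwriting any input operand, and a 128-bit unit would process four 32-bit lanes at once.

```python
def fmadd(a, b, c):
    """Elementwise a*b + c over equal-length vectors: the operation a
    three-operand FMAC performs (hardware does it with a single rounding
    step, which plain Python arithmetic does not model)."""
    return [x * y + z for x, y, z in zip(a, b, c)]

# Four 32-bit floats fit in one 128-bit register, hence four lanes.
print(fmadd([1.0, 2.0, 3.0, 4.0],
            [5.0, 6.0, 7.0, 8.0],
            [0.5, 0.5, 0.5, 0.5]))  # [5.5, 12.5, 21.5, 32.5]
```

A two-operand form would have to destroy one of its inputs, so the three-operand encoding spares an extra register copy in tight loops.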

What is important to note is that Intel Corp.’s forthcoming Sandy Bridge processor features Advanced Vector Extensions (AVX), which support 256-bit FP operations, something very progressive. Both AMD and Intel have already released AVX and SSE5 documentation for developers, but Intel managed to release a compiler supporting AVX in June ’09, whereas AMD has not yet managed to roll out its SSE5-supporting tool. As a result, the vast majority of developers are already capable of creating AVX-capable software, while almost none can make SSE5-capable programs at the moment.

Obviously, AMD will also support AVX, but its implementation is likely to be less efficient than Intel’s due to its narrower FP units. Still, until we know more about Sandy Bridge as well as the clock-speeds of both processors, we can only guess which chip will turn out to be faster.

8. Intel to Demonstrate 48-Core Microprocessor

As a part of its Tera-Scale Computing Research, Intel Corp. has been showcasing rather bizarre chips and concepts for a couple of years now. In early December the firm demonstrated a chip with 48 fully-fledged x86 processing engines. A couple of days later the company announced its decision to cancel the first-generation many-core x86-based Larrabee graphics processor. Even though there is no direct correlation between the two events, some believe that the concept chip will influence future implementations of Larrabee.

The prototype chip – which Intel calls the single-chip cloud computer (SCC) – contains 24 tiles with two IA cores each, which results in 48 cores – the largest number ever placed on a single piece of silicon. Each core can run a separate OS and software stack and act like an individual compute node that communicates with other compute nodes over a packet-based network. Every core sports its own L2 cache, and each tile sports special router logic that lets tiles communicate with each other over a 24-router mesh network with 256GB/s bisection bandwidth. The processor sports four integrated DDR3 memory controllers, or one controller per twelve cores.
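The topology arithmetic is easy to verify; a quick sketch using the figures Intel disclosed (the script is just an illustration):

```python
# Sanity-check the SCC topology figures disclosed by Intel.
TILES = 24                 # one router per tile forms the mesh
CORES_PER_TILE = 2         # two IA cores share each tile
MEMORY_CONTROLLERS = 4     # integrated DDR3 controllers

total_cores = TILES * CORES_PER_TILE
cores_per_controller = total_cores // MEMORY_CONTROLLERS

print(total_cores)           # 48 cores in total
print(cores_per_controller)  # 12 cores share each DDR3 controller
```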

The SCC can run all 48 cores at one time over a range of 25W to 125W and selectively vary the voltage and frequency of the mesh network as well as sets of cores. Each tile (2 cores) can have its own frequency, and groupings of four tiles (8 cores) can each run at their own voltage.

One of the distinct features of the new 48-core experimental chip will be its extreme programmability. Software applications will be able to automatically control the number of cores to use at any given time, and operating systems will be able to assign certain cores to auxiliary tasks. Moreover, software will be able to manage power consumption and the clock-speeds of individual cores, or even shut them down when not needed.

The experimental 48-core central processing unit (CPU) will help Intel and software developers study the management and scheduling mechanisms of explicitly multi-core microprocessors in order to prepare to bring them to the mass market. Next year, Intel plans to provide software developers with more than a hundred experimental chips for the development of new software applications.

Though each core has two levels of cache, there is no hardware cache coherence support among cores, a choice made to simplify the design, reduce power consumption and encourage the exploration of datacenter-style distributed-memory software models on-chip. Intel researchers have successfully demonstrated message-passing as well as software-based coherent shared memory on the SCC.
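Intel's actual SCC communication library is not shown here; purely as a rough illustration of the message-passing style, the sketch below uses ordinary Python processes as stand-ins for cores that share no coherent memory and exchange only explicit messages:

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    """A stand-in for one SCC core: no shared state, only messages."""
    value = inbox.get()     # receive a message from another "core"
    outbox.put(value * 2)   # reply with a computed result

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    core = Process(target=worker, args=(inbox, outbox))
    core.start()
    inbox.put(21)           # send work to the remote core
    print(outbox.get())     # 42: the result comes back as a message
    core.join()
```

Software-based coherence, as demonstrated on the SCC, essentially layers a shared-memory abstraction on top of exactly this kind of explicit exchange.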

The chip does seem extremely interesting. According to Intel, the cores are fully-fledged x86 cores, and while it is unknown whether they support any extensions, such as SSE, it is clear that, unlike the stream processors inside graphics processing engines, they can perform complex operations and even run operating systems (hence we can conclude that chip-level virtualization is in place).

Theoretically, the 48-core chip could even process graphics using the ray-tracing method, or even a traditional fixed-function pipeline if it were equipped with the appropriate units. However, even with GPU-specific blocks, it would still perform slower than conventional graphics processing units with up to 1600 stream processors.

One thing regarding the SCC is clear: future processors will feature tremendous numbers of cores, and the general architecture of chips will differ from that of existing microprocessors.

7. AMD Displays Llano Die: 4 x86 Cores, 480 Stream Processors

AMD has been talking about the integration of graphics processors into central processing units ever since it acquired ATI Technologies back in 2006. However, since the company had delayed the code-named Fusion project many times in the past, it was definitely good news that the firm finally demonstrated the design of its code-named Llano processor for notebooks in November, 2009, which meant that the firm had finalized the actual design.

Based on the die shot displayed by Rick Bergman, senior vice president and general manager of AMD’s products group, the first Fusion processor from AMD will feature four x86 cores that resemble those of the Propus processor (AMD Athlon II X4), six SIMD engines (with 80 stream processors per engine) that resemble those of the Evergreen graphics chip (ATI Radeon HD 5800), and a PC3-12800 (DDR3 1600MHz) memory controller, possibly with some tweaks to better serve the x86 and graphics engines. The processor lacks a unified L3 cache in order to reduce manufacturing cost, but will have 2MB of L2 cache (512KB per core).

According to another report, AMD Llano features many innovative circuit techniques the company uses to lower the power consumption and leakage of the core. For instance, the core's L1 cache uses 8T memory cells to support low supply voltages, and the processor also uses a power gating ring that takes advantage of the isolated substrates in the company's silicon-on-insulator technology to provide a near-zero-power off state.

AMD’s Llano will feature around 1 billion transistors, which is logical: AMD’s Propus processor has around 300 million transistors, whereas the 480 stream processors and additional special-purpose logic account for around 600 million more. The chip will be made using a 32nm silicon-on-insulator fabrication process.
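The arithmetic behind those figures is easy to cross-check (the constants come from the paragraphs above; the script itself is just an illustration):

```python
# Cross-check the Llano figures quoted above.
SIMD_ENGINES = 6
SP_PER_ENGINE = 80
stream_processors = SIMD_ENGINES * SP_PER_ENGINE   # 6 x 80 = 480

x86_transistors = 300_000_000    # roughly a Propus-class quad core
gpu_and_logic = 600_000_000      # 480 SPs plus special-purpose logic
total = x86_transistors + gpu_and_logic

l2_total_kb = 4 * 512            # four cores, 512KB of L2 each

print(stream_processors)    # 480
print(total)                # 900000000, i.e. "around 1 billion"
print(l2_total_kb // 1024)  # 2 (MB of L2 in total)
```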

The AMD Llano accelerated processing unit (APU) is part of AMD’s Sabine platform, which features AMD 900-series core-logic, USB 3.0, Serial ATA-600 and so on.

With the Llano design finalized, we hope that the chip will be released on time in 2011. For AMD it will be much better if the chip is released in the first half of the year, since the company badly needs it to boost its share of the notebook market. Even though Llano does not seem too advanced from an x86 point of view – it is essentially a highly-tweaked K10.5, which will have rather low performance in single-threaded applications due to its 512KB of L2 – the chip does support DirectX 11 and GPGPU. Since many applications can now take advantage of graphics processors’ raw computing power, 480 ATI Radeon HD 5000-class stream processors will be able to offset the moderate horsepower of the four x86 cores.

6. Intel Clarkdale: 3.46GHz Clock-Speed, 32nm Process Tech, Launch in Q1 2010

Intel Corp. first revealed details about its code-named Clarkdale and Arrandale microprocessors based on the next-generation Westmere micro-architecture in March ’09 and said that the chips would be the first central processing units to integrate a graphics core. Needless to say, potential customers got quite excited about the new chips, since they promised high-performance dual-core processors with integrated graphics, limited thermal design power and smaller PC form-factors. Still, actual product specifications were not revealed. In late July we corrected this little issue and published preliminary specs of the code-named Clarkdale processors.

Intel Clarkdale chips are dual-core microprocessors based on the next-generation Westmere micro-architecture with 4MB of cache, Hyper-Threading, dual-channel DDR3 memory controllers and integrated graphics cores. Arrandale chips are similar parts aimed at mobile computers. Both Arrandale and Clarkdale are essentially multi-chip modules featuring one dual-core processor die made using a 32nm fabrication process and a graphics and system logic die produced at the 45nm node. It is highly likely that the new processors will be launched on the same day – January 7, 2010 – at the Consumer Electronics Show.

The desktop-oriented Clarkdale chips, which will be sold under the Core i5, Core i3 and Pentium brands, will work at up to 3.46GHz, will sport Hyper-Threading technology and will fit into a thermal envelope of about 73W, with the exception of the Core i5 661, which will feature a 900MHz graphics core and an 87W thermal envelope. The chips will be compatible with the LGA1156 infrastructure, provided that the BIOS supports the new chips.

Preliminary Specifications of Intel "Clarkdale" Processors

| Model         | Frequency | Cores/Threads | Cache | Memory  | Integrated Graphics Frequency | Intel Clear Video HD | TDP | AES-NI | Intel vPro | Intel VT-x | Intel VT-d | Intel TXT |
|---------------|-----------|---------------|-------|---------|-------------------------------|----------------------|-----|--------|------------|------------|------------|-----------|
| Core i5-670   | 3.46GHz   | 2/4           | 4MB   | 1333MHz | 733MHz                        | +                    | 73W | +      | +          | +          | +          | +         |
| Core i5-661   | 3.33GHz   | 2/4           | 4MB   | 1333MHz | 900MHz                        | +                    | 87W | +      | -          | +          | -          | -         |
| Core i5-660   | 3.33GHz   | 2/4           | 4MB   | 1333MHz | 733MHz                        | +                    | 73W | +      | +          | +          | +          | +         |
| Core i5-650   | 3.20GHz   | 2/4           | 4MB   | 1333MHz | 733MHz                        | +                    | 73W | +      | +          | +          | +          | +         |
| Core i3-540   | 3.06GHz   | 2/4           | 4MB   | 1333MHz | 733MHz                        | +                    | 73W | -      | -          | +          | -          | -         |
| Core i3-530   | 2.93GHz   | 2/4           | 4MB   | 1333MHz | 733MHz                        | +                    | 73W | -      | -          | +          | -          | -         |
| Pentium G6950 | 2.80GHz   | 2/2           | 3MB   | 1066MHz | 533MHz                        | -                    | 73W | -      | -          | +          | -          | -         |

Since the Arrandale/Clarkdale central processing units (CPUs) have an integrated memory controller, graphics core and PCI Express interconnect inside, there will be no need for a GMCH (or north bridge) on the mainboard. Instead, the new processors will connect directly to the Intel 5-series core-logic (code-named Ibex Peak) platform controller hub (PCH), which carries the hard drive controller, wired and wireless network controllers, monitor physical interfaces, PCI controller and other input/output as well as platform-related capabilities.

Intel began sales of its Core i3, Core i5 and Pentium processors based on the Clarkdale design on the 10th of December, 2009, to distributors, resellers and retailers, who may then sell the central processing units, mainboards based on Intel Q57, Intel H55 and Intel H57, as well as other components to OEMs and other customers. Intel’s goal is to have systems based on the new chips available for sale by January 7, 2010, a source with knowledge of the matter said.

5. ATI Demonstrates “The Future” DirectX 11 Graphics Cards at Quakecon

Various demonstrations of forthcoming DirectX 11-capable hardware were very important events for ATI, the graphics business unit of AMD, this summer. The two most important ones were even public: the showcase at Computex Taipei and the demo in a “secret” room at Quakecon, two large events with loads of journalists, computer enthusiasts, gamers and business partners.

“Area64 will be exclusive access only, meaning, you can try to find it, but its hidden and being kept secret. AMD will be showcasing what we lovingly refer to as ‘The Future’,” said Ian McNaughton, a senior manager of advanced marketing at AMD, in a corporate public blog just a little more than a month before ATI/AMD started selling ATI Radeon HD 5800-series graphics cards to end-users.


ATI Radeon HD 5870, front side

ATI’s sneak peeks at DirectX 11-compliant hardware were very strategically placed: in Asia, where the vast majority of graphics card manufacturers are located, and in the U.S., where there are loads of socially active gamers and press, who would quickly spread the news about the never-before-seen and ultimately-powerful-and-feature-rich hardware to the masses.


ATI Radeon HD 5870, back side

The launch of the ATI Radeon HD 5800-series also demonstrated another thing: just like in the case of the Radeon HD 4800 launch, nobody knew exactly what the Radeon HD 5800 was. This allowed the company to balance its designs, prices and performance exactly in accordance with market realities.

ATI still has a lot of work to do with its DirectX 11 lineup, and it faces some pretty tough challenges. There is a tremendous performance and price gap between the ATI Radeon HD 5770 and HD 5850 models. As a consequence, the company has to sell the previous-generation ATI Radeon HD 4870/4890 graphics boards to close the gap and has to compete against Nvidia, which, unlike ATI, is promoting its GeForce GTX 260/275 as the ultimate solutions for gamers. In addition, ATI has to constantly ensure that the yields of its 40nm DX11 chips at TSMC are high enough to meet demand for the next-generation hardware.

Long story short, thanks to sneak demos of DirectX 11 graphics cards and delivery of actual hardware in line with general expectations, AMD managed to further increase interest in ATI Radeon in general and the ATI Radeon HD 5000-series in particular. Well done!

4. Nvidia: DirectX 11 Will Not Catalyze Sales of Graphics Cards

Sometimes it is better to be tight-lipped rather than talkative, even if you are a public relations executive. Mike Hara, vice president of investor relations at Nvidia, said at a technology conference that DirectX 11 would not catalyze sales of graphics cards. The remark immediately became the quote of the year in the computer graphics industry.


Michael W. Hara

“DirectX 11 by itself is not going to be the defining reason to buy a new GPU. It will be one of the reasons. This is why Microsoft is working with the industry to allow more freedom and more creativity in how you build content, which is always good, and the new features in DirectX 11 are going to allow people to do that. But that no longer is the only reason, we believe, consumers would want to invest in a GPU,” said Mike Hara.

The remarks came just days before Nvidia’s rival ATI (the graphics business unit of Advanced Micro Devices) was set to start selling its ATI Radeon HD 5800-series graphics cards, which were not only DirectX 11-compliant, but were also the fastest single-chip graphics solutions in the world. But performance was not important anymore, said the vice president. What mattered, according to him, were Nvidia’s stereo 3D Vision technology (which is compatible with four computer displays [only two of which are on sale] and a number of high-end DLP HDTVs) as well as Nvidia PhysX-based visual effects in several games.

“The graphics industry, I think, is at the point that the microprocessor industry was several years ago, when AMD made the public confession that frequency does not matter anymore and it is more about performance per watt. I think we are at the same crossroad with the graphics world: framerate and resolution are nice, but today they are very high and going from 120fps to 125fps is not going to fundamentally change the end-user experience. But I think the things that we are doing with Stereo 3D Vision, PhysX, about making the games more immersive, more playable, are beyond framerates and resolutions. Nvidia will show with the next-generation GPUs that the compute side is now becoming more important than the graphics side,” concluded Mr. Hara.

In the end it transpired that DirectX 11 was still a catalyst for graphics card sales: ATI had sold 800 thousand DX11 graphics processing units as of mid-December, 2009.

3. Nvidia Admits Showing Dummy Fermi Card at GTC, Claims First Graphics Cards on Track for Q4 2009

While Nvidia pretended that DirectX 11 was not important, it knew perfectly well that it was. Tech-savvy consumers want to buy next-generation hardware that will perform perfectly from day one. As a result, less than a week after ATI/AMD began selling the Radeon HD 5800-series graphics cards, Nvidia unveiled some details about its DX11-capable GPU family code-named Fermi and even demonstrated a graphics card allegedly powered by the new processor. Days after the demonstration it had to admit: the card was just a dummy, not even an engineering sample.


Nvidia dummy card not powered by Fermi-GF100 graphics processor. Image by PC Watch

It was a bad thing to show a fake graphics card to the public and then admit it a couple of days later. However, as it transpired later, it was even worse to promise to release the first Fermi-based graphics cards by the end of 2009. The promise was made in late September, and a little more than a month later the company’s chief executive officer said that the company would only begin production of Fermi-based chips in the February – April, 2010, timeframe.


“Next year it is going to be an interesting first quarter because, in fact, we will need more wafers than ever in Q1. The reason for that is because – and I mean more 40nm wafers than ever in Q1 – we are […] fully ramping Fermi for three different product lines: GeForce, Quadro and Tesla,” said Jen-Hsun Huang, chief executive officer of Nvidia, in a conference call with financial analysts.


Nvidia GeForce graphics card powered by Fermi-GF100 graphics processor. Image by Nvidia

Nvidia’s first quarter of fiscal year 2011 begins on the 26th of January and ends on the 26th of April, 2010.


Nvidia GF100 also known as NV60, G300, GT300 die shot

The current state of the next-generation GeForce GF100 chip (also known as NV60, G300, GT300, etc.) is not completely clear. There are reports that Nvidia badly wanted to demonstrate its next-generation GeForce at the Consumer Electronics Show in early January; there are also reports that Nvidia will only be able to ship the GF100 in March of 2010; in the meantime, Nvidia has cut down the specifications of its next-generation Tesla computing cards for HPC markets that cost $2500 and upwards. Still, officially, Nvidia demonstrates GeForce “Fermi” rendering a DirectX 11 benchmark and operating in SLI multi-GPU mode, and claims that it is happy with the performance of its next-generation product.

2. Sony Officially Announces PlayStation 3 Slim at $299

The Sony PlayStation 3 turned three years old this year. However, until recently the cost of the PS3 was too high for the average consumer, which is why we were not exactly surprised that tens of thousands of our readers clicked on the headline announcing the PlayStation 3 Slim with its $299 price-tag. Even though the console had a hard start and is still the worst-selling new-generation game system, it is a perfect Blu-ray player and supports a lot of decent high-definition games.

In order to make the PlayStation 3 Slim possible, Sony had to manufacture the Cell processor using a 45nm process technology and make the Nvidia RSX graphics chip at the 65nm node. To further reduce costs, the design of the latest-generation PlayStation 3 was significantly revised from previous versions. The major changes involve the use of less expensive semiconductors, a general redesign of the product and a reduction in the number of components in the console. Excluding the controller and the box contents, the latest version of the PlayStation 3 includes approximately 2568 components, down from 4048 in the original version.

The new chips also cut the power usage of the PlayStation 3, allowing design changes that reduce hardware costs. The new system cuts the energy budget nearly in half compared to the first-generation hardware: the new PS3 employs a 220W master power supply, versus a 400W supply in the first version. The lower wattage reduces the cost of the power supply as well as of other power and cooling components.
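The figures quoted above back up the "nearly in half" claim; a quick check (constants are taken from the two preceding paragraphs):

```python
# How the power and component reductions work out.
old_psu_watts, new_psu_watts = 400, 220
old_parts, new_parts = 4048, 2568

power_ratio = new_psu_watts / old_psu_watts      # 0.55
parts_removed = old_parts - new_parts            # 1480 fewer parts
parts_cut_pct = 100 * parts_removed / old_parts  # ~36.6% reduction

print(f"{power_ratio:.0%}")     # 55% of the original supply rating
print(parts_removed)            # 1480
print(f"{parts_cut_pct:.1f}%")  # 36.6%
```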

Even though Sony Computer Entertainment has managed to greatly reduce the manufacturing cost of its PlayStation 3 video game system, the company still sells its console at a loss, according to a new teardown analysis conducted by the iSuppli market research firm. In fact, the latest PS3 Slim 120GB costs $336.27 to make.
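A one-line check of the loss implied by the iSuppli estimate against the $299 retail price:

```python
# iSuppli's teardown estimate versus the retail price.
bom_cost = 336.27  # estimated cost to build the PS3 Slim 120GB
price = 299.00     # retail price

loss_per_unit = bom_cost - price
print(f"${loss_per_unit:.2f}")  # $37.27 lost on every console sold
```

This ignores assembly, shipping and retail margins, so the real per-unit loss was likely larger than the bill-of-materials gap alone.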

Thanks to the price-cut, the PS3’s sell-through numbers have increased rather substantially since August. Nevertheless, the Microsoft Xbox 360 and Nintendo Wii remain more successful in the U.S. According to NPD Group, in November, 2009, Nintendo sold 1.26 million Wii systems, Microsoft supplied 819.5 thousand Xbox 360 consoles, and Sony shipped 710.4 thousand PlayStation 3s and 203.1 thousand PS2s.

1. AMD Readies “Thuban” Six-Core Desktop Processor

Nothing inspires our readers more than future developments from Advanced Micro Devices. In the second half of 2009 the most popular news-story at X-bit labs was about yet another forthcoming microprocessor from AMD. This time it was not the twelve-core Opteron “Magny-Cours” chip, but rather the “modest” six-core AMD Phenom II X6 “Thuban” central processing unit (CPU).

The world’s second largest developer of x86 chips plans to start shipping its AMD Phenom II X6 “Thuban” processors in the second quarter of next year, according to sources familiar with AMD’s roadmap. The AMD Phenom II X6 will be compatible with socket AM3/AM2+ (with split power plane) infrastructure and will have an integrated dual-channel PC3-10600 (DDR3 1333MHz) memory controller. It is very likely that Thuban processors will retain the design of the code-named Istanbul chips for servers and thus will feature 3MB of L2 cache (512KB per core) and 6MB of L3 cache. The chips will be made using a 45nm SOI fabrication process. Power consumption of the new CPUs has yet to be determined.
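The cache figures are internally consistent, as a quick check shows (constants are taken from the paragraph above):

```python
# Check that the per-core L2 figure matches the quoted 3MB total.
CORES = 6
L2_PER_CORE_KB = 512

l2_total_mb = CORES * L2_PER_CORE_KB / 1024
print(l2_total_mb)  # 3.0 MB of L2 in total, alongside 6MB of shared L3
```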


Die shot of six-core AMD Opteron "Istanbul" processor

The new six-core microprocessors will be the main part of AMD’s Leo platform, which will be based on the AMD 890FX and 890GX core-logic sets. The new chipsets will offer better performance and functionality – e.g., they will support Serial ATA-600, 14 USB 2.0 ports and so on – but both will only hit mass production in April, 2010, and will be formally released in May next year, according to market sources.

AMD has officially confirmed plans to launch six-core processors for desktops next year; however, the company remained tight-lipped about their performance, specifications or features. The reason is simple: even though Thuban is on AMD’s roadmap, there is still a lot of work to do before the novelty can be released. In order to be competitive with currently available products, AMD will have to clock its six-core desktop chips at 3.0GHz – 3.20GHz; however, since the performance of microprocessors tends to increase rather rapidly, the company should also consider the 3.40GHz – 3.60GHz range as well as dynamic overclocking technology in order to return to the high-end CPU market. In fact, AMD is working hard to improve the overclockability of its platforms, so we may well see a new version of its Overdrive tool that automatically overclocks six-core chips when not all cores are needed.