Advanced Micro Devices was nearly a year late to market with its quad-core microprocessors compared to Intel Corp., but the firm seems optimistic about its roadmap execution going forward. The company says that its octa-core microprocessors are due in 2009 and that its architecture can scale to chips with more than eight cores.

Randy Allen, corporate vice president of the server and workstation division at AMD, said in an interview that AMD’s new quad-core server processor code-named Shanghai, made using 45nm process technology, is due in the second half of 2008, while the octa-core microprocessor code-named Montreal, along with the new socket G3 platform, is set for release in 2009.

The vice president of the world’s second largest maker of x86 central processing units (CPUs) also said that Shanghai microprocessors will offer higher instructions per clock (IPC) throughput than Barcelona, which should translate into higher overall performance per clock. Thanks to the higher IPC and a larger level-three cache (6MB instead of 2MB), the new processors are likely to offer considerably higher speed than AMD’s existing quad-core chips.
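The relationship invoked here, per-core performance as roughly IPC times clock frequency, can be sketched as follows. Note that the IPC numbers below are hypothetical placeholders for illustration, not published AMD figures:

```python
# Hypothetical illustration of per-core performance ~ IPC x clock.
# The IPC values are assumptions, not AMD's actual data.

def relative_performance(ipc, clock_ghz):
    """Per-core throughput scales with instructions-per-clock times frequency."""
    return ipc * clock_ghz

barcelona = relative_performance(ipc=1.00, clock_ghz=2.5)  # baseline (normalized IPC)
shanghai = relative_performance(ipc=1.10, clock_ghz=2.5)   # assumed ~10% IPC gain

print(f"Speedup at equal clock: {shanghai / barcelona:.2f}x")  # prints 1.10x
```

At equal clocks, any IPC gain shows up directly as a performance gain; the larger L3 cache would raise effective IPC further on cache-sensitive workloads.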

Following this, Mr. Allen is reported to have said that he expected demand for chips with more than eight cores, especially in datacentres, and that AMD’s architecture would be able to expand to accommodate this, reports the IT Week web site. However, AMD has no plans to enable simultaneous multi-threading technology akin to Intel’s Hyper-Threading, at least in its server chips.

“It is very clear that most server workloads are multi-tasking, not really multi-threaded,” Mr. Allen is reported to have said.

But while the high-ranking AMD executive seems optimistic about both the quad-core Shanghai and the octa-core Montreal, actually executing these plans may prove difficult given AMD’s current situation.

Back in September 2007 Mr. Allen promised that the company would increase the clock-speeds of its quad-core server processors from 2.0GHz to 2.50GHz by December 2007. AMD has not announced any higher-performance quad-core server chips since then. Moreover, the company only managed to start shipments of 2.50GHz quad-core Phenom chips for desktops a couple of weeks ago. The firm explained that it could not initiate production of higher-speed quad-core chips in mid-Q1 because it had to reassign resources to the development of more energy-efficient processors with four or three processing engines.

Recently AMD also said that it would reduce its workforce by 10% by the end of the third quarter of 2008. It is not completely clear how AMD plans to continue delivering new products in accordance with its earlier-announced roadmap while having fewer human resources.




AMD held a serious CPU design lead over Intel for years, and what happened? During this time, AMD fans fruitlessly BEGGED the company to continue improving its architecture, but instead, the management of AMD used this time to massively inflate their own salaries, whilst ensuring that no money was 'wasted' on improved designs.

So bad was the situation that, almost two years after every tech-savvy person knew that Intel had given up on the dreadful NetBurst Pentium 4 and was returning to the Pentium 3 design (as seen in the winning Core 2 Duo etc. chips), AMD planned to release old-design dual-core Athlons with massive level 2 caches (aping the last days of NetBurst). Though these chips never made it to market, all AMD's new dual cores suffered, for in order to support the massive cache in the chip design, the speed of the secondary cache had to be much reduced. That's right: AMD's 'new' 65nm dual cores were actually SLOWER than dual cores made on the previous process, clock for clock, when running code that depended on the speed of the L2 cache.

So, the sum total of AMD's innovation for the last 5 years or so was slowing down the L2 cache to sell (if time hadn't run out, and NetBurst hadn't disappeared from the market) worthless chips with massive caches that made no difference in most apps, and (showing how the 'cache' guy was the all-powerful engineer at AMD) adding an utterly worthless L3 cache to the new quad-cores.

If AMD had spent even a minute talking to its user base, it could have added (decent) vector instructions to accelerate video/physics tasks with almost zero effort. AMD was NEVER going to match Intel's process advantage. AMD had to think smart, but the company had long become a cash cow for a bunch of very dodgy top execs.

Intel's intellect moves at a snail's pace, which gave AMD years and years in which to keep the intellectual lead. The problem is that once a small hungry company has early success, the key engineers are promoted into pointless and non-creative desk jobs with excellent share options. The last thing these now has-been execs want is any young turks rocking the boat.

In case you missed my point, I'll repeat it. AMD RELEASED A DESIGN BASED ON INTEL'S TOTAL FAILURE, PENTIUM 4 NETBURST, TWO YEARS AFTER IT WAS KNOWN THAT INTEL HAD KILLED NETBURST. These lazy and stupid execs expected to keep AMD's ASP high by conning people into buying AMD dual cores with slow L2 caches of 4MB and 8MB sizes (up from 512KB). Five years, and all AMD came up with was "make the L2 cache much bigger, cos the fools will never notice how much slower it has got" and "didn't we con people with L3 cache in one of our old designs?".

Of course, you should all remember that just like ATI, AMD actually only got a decent CPU design because it gained the expertise by buying the design skills of another company. And just as in ATI's case, after the early excellent work done by these new designers (ATI got the 9700), both companies produced ever lazier follow-ups that were, in reality, very minimal evolutions. In the meantime their competitors (Intel, Nvidia) eventually caught up, and then massively outpaced them, by the simple strategy of being prepared to create new designs, rather than attempting to milk existing ones for ever.

AMD obviously thought, when buying ATI, that it was repeating the strategy that saved AMD in the past. However, they obviously forgot that ATI was suffering many of the same engineering management mistakes that were taking AMD down the drain.

ATI has already given up competing with high-end Nvidia products. ATI still has no answer to an Nvidia design that is more than 1.5 years old. At the year's end, ATI will be even further behind in single-GPU performance, when Nvidia is selling its next-gen super GPU. However, ATI has no process disadvantage compared to Nvidia, since both use the same Asian fabs. AMD is set to also produce a chunk of its CPUs at the same fabs by year's end, in the hope of soon using third-party fabs for most of its CPUs (hence the reason AMD is endlessly talking about ridding itself of its own fab business).

AMD's great future profit model is to earn license fee payments for owning the IP of its designs, as others make them, and eventually sell them too. This is the model of ARM, for instance. Design and license IP to others. It is an inevitable model as the parts you would otherwise make yourself see a collapse in ASP (average selling price). Of course, while game consoles continue to see massive sales, AMD can dream of getting design contracts from MS, Nintendo, Sony etc (which moved to the licensed IP model some time back).

Oh, and multi-core, past 3 or so cores, is a total joke (and has been known to be so by those in computer science for many, many years). There are no general programming methodologies that can take worthwhile advantage of more than a handful of cores. Massively symmetric algorithms for consumers tend to be implemented by much cheaper dedicated silicon (think hi-def video decoding, for instance). A triple-core Core 2 Intel design (yeah, I know it's actually quad) at 3GHz has no task that needs its power that 99.99% of existing users need to compute. Even programmers with massive compiles are bogged down by everything (memory/HD speed) EXCEPT CPU speed, if they own a new Intel part.
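The diminishing-returns intuition behind this claim is essentially Amdahl's law: overall speedup is capped by the serial fraction of the work, no matter how many cores you add. A minimal sketch (the 80% parallel fraction is an arbitrary assumption for illustration):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / (serial + parallel / cores)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With 80% of the work parallelizable, extra cores fade fast:
# the ceiling is 1 / 0.2 = 5x regardless of core count.
for cores in (2, 4, 8, 64):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.8, cores):.2f}x")
```

Going from 2 to 4 cores helps noticeably; going from 8 to 64 barely moves the needle, which is the "handful of cores" ceiling the comment is pointing at.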

The future holds no known consumer use for more CPU power. Intel, in a full-blown panic over this (and ignoring past academic searches), has bought a games physics company, and is paying grants to universities to study (for the millionth time) this issue. However, even if they found some unlikely consumer use of voice recognition, or video recognition, the lousy memory bus of consumer-grade CPUs would tend to kill the advantage of large numbers of cores.

GPUs are, of course, already massively 'multi-core', in a form that suits their single-minded purpose. The CPU is already dead as an exciting, advancing part in a home PC. The GPU has taken that role with a vengeance. A 200-dollar GPU on a card already does more (a lot more) than the work of TEN 200-dollar+ Intel quad-core CPUs. Intel and AMD are now into the final round of super 'buggy whip' designs, now that 'horses' are clearly being replaced by 'cars'. And as this happens, internet sites that have specialised in talking about the merits of 'buggy whip' design will continue to act, for as long as possible, as if nothing has changed in the marketplace. Shame on them!
[Posted by:  | Date: 04/08/08 05:33:09 PM]

I can't believe I actually wasted a few minutes of my life (which I can never get back) reading that crap, ranting on and on. B.S.
[Posted by:  | Date: 04/09/08 01:11:05 PM]

