So bad was the situation that, almost two years after every tech-savvy person knew Intel had given up on the dreadful NetBurst Pentium 4 and returned to the Pentium III lineage (as seen in the winning Core 2 Duo and friends), AMD planned to release old-design dual-core Athlons with massive level 2 caches (aping the last days of NetBurst). Though those chips never made it to market, all of AMD's new dual cores suffered, because in order to support the massive cache in the chip design, the speed of the L2 cache had to be cut substantially. That's right: AMD's 'new' 65nm dual cores were actually SLOWER than dual cores made on the previous process, clock for clock, when running code that depended on the speed of the L2 cache.
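For anyone who doubts that L2 speed matters, here is a rough sketch of the kind of pointer-chasing microbenchmark that exposes it (the working-set size and hop count are arbitrary illustrations, not figures for any particular AMD or Intel part): every load depends on the previous one, so the reported time per access is essentially raw cache latency.

/* Sketch: pointer-chasing loop whose time per hop is dominated by L2 latency.
   The ~512KB working set and hop count are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (512 * 1024 / sizeof(size_t))   /* ~512KB: fits a big L2 */
#define HOPS    (50 * 1000 * 1000)

int main(void)
{
    size_t *chain = malloc(ENTRIES * sizeof *chain);
    size_t *order = malloc(ENTRIES * sizeof *order);
    size_t i, pos;

    /* Shuffle the indices and link them into one long cycle so the
       hardware prefetcher cannot guess the next address. */
    for (i = 0; i < ENTRIES; i++) order[i] = i;
    for (i = ENTRIES - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (i = 0; i + 1 < ENTRIES; i++) chain[order[i]] = order[i + 1];
    chain[order[ENTRIES - 1]] = order[0];

    clock_t start = clock();
    for (pos = 0, i = 0; i < HOPS; i++)
        pos = chain[pos];                        /* dependent load each hop */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("%.2f ns per access (ignore: %zu)\n", secs * 1e9 / HOPS, pos);
    return 0;
}

Run the same binary on a chip with a fast L2 and one with a slowed-down L2, and the clock-for-clock difference shows up directly in the nanoseconds-per-access figure.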
So the sum total of AMD's innovation over the last five years or so was slowing down the L2 cache in order to sell (had time not run out, and had NetBurst not vanished from the market) worthless chips with massive caches that made no difference in most apps, and, showing how the 'cache' guy was the all-powerful engineer at AMD, adding an utterly pointless L3 cache to the new quad-cores.
If AMD had spent even a minute talking to its user base, it could have added (decent) vector instructions to accelerate video/physics tasks with almost zero effort. AMD was NEVER going to match Intel's process advantage. AMD had to think smart, but the company had long become a cash cow for a bunch of very dodgy top execs.
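To be concrete about what 'vector instructions' buys you, here is a minimal sketch of the sort of SSE inner loop that video filters and simple physics integrators lean on; the function name and the multiply-add operation are purely illustrative, not anything AMD shipped:

/* Sketch: a saxpy-style multiply-add done four floats at a time with SSE.
   Function name and arrays are illustrative. */
#include <xmmintrin.h>

void saxpy_sse(float *dst, const float *src, float scale, int n)
{
    __m128 k = _mm_set1_ps(scale);
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 s = _mm_loadu_ps(src + i);        /* four floats per load */
        __m128 d = _mm_loadu_ps(dst + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(d, _mm_mul_ps(s, k)));
    }
    for (; i < n; i++)                           /* scalar tail */
        dst[i] += src[i] * scale;
}

One instruction doing the work of four is exactly the kind of cheap win a decent user-facing instruction set extension hands you.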
Intel moves at a snail's pace intellectually, which gave AMD years and years in which to keep the intellectual lead. The problem is that once a small, hungry company has early success, the key engineers are promoted into pointless, non-creative desk jobs with excellent share options. The last thing these now has-been execs want is any young Turks rocking the boat.
In case you missed my point, I'll repeat it. AMD COMMITTED TO A DESIGN BASED ON INTEL'S TOTAL FAILURE, PENTIUM 4 NETBURST, TWO YEARS AFTER IT WAS KNOWN THAT INTEL HAD KILLED NETBURST. These lazy, stupid execs expected to keep AMD's ASP high by conning people into buying AMD dual cores with slow L2 caches of 4MB and 8MB (up from 512KB). Five years, and all AMD came up with was "make the L2 cache much bigger, cos the fools will never notice how much slower it has got" and "didn't we con people with L3 cache in one of our old designs?".
Of course, you should all remember that, just like ATI, AMD only got a decent CPU design because it bought the design skills of another company. And just as in ATI's case, after the early excellent work done by those new designers (ATI got the 9700 out of it), both companies produced ever lazier follow-ups that were, in reality, very minimal evolutions. In the meantime their competitors (Intel, Nvidia) eventually caught up, and then massively outpaced them, by the simple strategy of being prepared to create new designs rather than attempting to milk existing ones forever.
AMD obviously thought, when buying ATI, that it was repeating the strategy that had saved it in the past. However, it forgot that ATI was suffering from many of the same engineering management mistakes that were taking AMD down the drain.
ATI has already given up competing with Nvidia's high-end products. ATI still has no answer to an Nvidia design that is more than a year and a half old. At year's end, ATI will be even further behind in single-GPU performance, when Nvidia is selling its next-gen super GPU. However, ATI has no process disadvantage compared to Nvidia, cos both use the same Asian fabs. AMD is also set to produce a chunk of its CPUs at those same fabs by year's end, in the hope of soon using third-party fabs for most of its CPUs (hence AMD's endless talk about ridding itself of its own fab business).
AMD's great future profit model is to earn license fees on the IP of its designs while others manufacture them, and eventually sell them, too. This is the ARM model, for instance: design the part, then license the IP to others. It is an inevitable model once the parts you would otherwise make yourself see a collapse in ASP (average selling price). Of course, while game consoles continue to see massive sales, AMD can dream of winning design contracts from MS, Nintendo, Sony etc., all of which moved to the licensed-IP model some time back.
Oh, and multi-core, past three cores or so, is a total joke (and has been known to be one by computer scientists for many, many years). There are no general programming methodologies that can take worthwhile advantage of more than a handful of cores. Massively parallel algorithms for consumers tend to end up in much cheaper dedicated silicon (think hi-def video decoding, for instance). A triple-core Core 2 Intel design (yeah, I know it's actually quad) at 3GHz has no task that 99.99% of existing users need to compute that actually requires its power. Even programmers with massive compiles are bogged down by everything (memory/HD speed) EXCEPT CPU speed, if they own a new Intel part.
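If you want the arithmetic behind that claim, it's Amdahl's law: speedup = 1 / ((1 - p) + p/n) for a parallel fraction p running on n cores. A tiny sketch, assuming a generous 90% parallel fraction (my illustrative number, not a measurement of any real workload):

/* Amdahl's law sketch: diminishing returns past a handful of cores.
   The 90% parallel fraction is an assumed, illustrative figure. */
#include <stdio.h>

int main(void)
{
    double p = 0.90;                  /* assumed parallelisable fraction */
    int cores[] = { 1, 2, 4, 8, 16, 64 };
    for (int i = 0; i < 6; i++) {
        int n = cores[i];
        printf("%3d cores -> %.2fx speedup\n", n, 1.0 / ((1.0 - p) + p / n));
    }
    return 0;
}

That prints roughly 1.82x at 2 cores, 3.08x at 4, 4.71x at 8 and only 8.77x at 64: once the serial 10% dominates, piling on cores buys next to nothing.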
The future holds no known consumer use for more CPU power. Intel, in a full-blown panic over this (and ignoring past academic searches), has bought a games physics company and is paying grants to universities to study the issue (for the millionth time). However, even if they found some unlikely consumer use in voice recognition or video recognition, the lousy memory bus of consumer-grade CPUs would tend to kill the advantage of large numbers of cores.
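A rough back-of-envelope on that last point (the ~10GB/s figure for a shared front-side bus and the per-core appetite are round numbers I'm assuming purely for illustration):

/* Sketch: a shared front-side bus split across n cores. Bandwidth figures
   are assumed round numbers, not measurements of any specific system. */
#include <stdio.h>

int main(void)
{
    double bus_gbs  = 10.0;           /* assumed total memory bandwidth */
    double core_gbs = 4.0;            /* assumed appetite of one streaming core */
    for (int n = 1; n <= 8; n *= 2)
        printf("%d cores: %.2f GB/s each (a streaming core wants ~%.1f)\n",
               n, bus_gbs / n, core_gbs);
    return 0;
}

At four cores each one gets 2.5GB/s, at eight just 1.25GB/s: memory-bound consumer code stops scaling long before the core count does.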
GPUs are, of course, already massively 'multi-core', in a form that suits their single-minded purpose. The CPU is already dead as an exciting, advancing part in a home PC; the GPU has taken that role with a vengeance. A 200-dollar GPU on a card already does more (a lot more) work than TEN 200-dollar-plus Intel quad-core CPUs. Intel and AMD are now into the final round of super 'buggy whip' designs, as 'horses' are clearly being replaced by 'cars'. And as this happens, internet sites that have specialised in talking about the merits of 'buggy whip' design will continue to act, for as long as possible, as if nothing has changed in the marketplace. Shame on them!