Dear forum members,
We are delighted to inform you that our forums are back online. Every single topic and post is now in its place, and everything works just as before, only better. Welcome back!


Discussion on Article:
AMD Phenom Changes Stepping to B3: TLB Bug – in the Past

Started by: AMD Suckers United | Date 03/26/08 09:15:27 PM
Comments: 57 | Last Comment:  12/19/15 04:04:30 AM



Good Luck! It's still pathetic!
0 0 [Posted by:  | Date: 03/26/08 09:16:43 PM]

yeah! i love it when they're bleeding like hell after being pounded in the @$$ like crazy! so long AMD!
0 0 [Posted by:  | Date: 03/26/08 09:18:27 PM]
yeah, and i like the way you intel suck-asses like being pounded in the @$$ like crazy
0 0 [Posted by:  | Date: 03/27/08 01:40:37 AM]

ZOMG... Intel's cheapest quad core only costs 35% more than AMD's... and on average performs 10-20% better in real-world apps.

Then of course there's the fact that the new "ground-breaking" chip they put out every 3 months or so goes for $1500 or so, and performs marginally better than the almost identical chip that costs around $500.

But hey, at least with every new chip you've gotta go and spend another $200-300 on a new motherboard that supports it... but it will still feel familiar, since it's most likely the same socket design you're upgrading from. Maybe you'll even get to buy some new RAM to support it, since RAM compatibility changes with that pesky FSB/NB thing.

2 years, 8 sockets, how many pin layouts?

I'm very happy for you Intel fanboys who brag about dropping several thousand bucks every 6 months or so to get marginal performance gains: no real design improvement outside of a smaller die size and a bigger, slower cache, while putting 64-bit development at a standstill because Intel can't quite figure that out yet.

But hey, AMD just managed to put out a board that will run Vista, play Blu-ray, and even allow for passable gaming with no add-in video card. You can actually build a Vista system (board, CPU, 2 gigs of RAM) for around $150. Not like Intel had a big market share in that area or anything... oh wait... yeah, they kind of did. But maybe they'll still compete with their onboard graphics boards that can't run XP without an extra card.

Then again, what about the server chips that go for around $5k a piece and put out 220-260W per chip... ah, just like a cool spring breeze... of course, the cooling includes 8 high-speed 120mm fans (35mm deep, 6000 RPM). That nice heat output is coupled with the sound of a jet engine... and only needs a low-power 1100W PSU.

Yes, they're faster. They're also more expensive, and they're seemingly plagued with driver, performance, and compatibility problems, at least for a while. And that's if you can even get the new ones within the first 6 months of the "official launch".

How about you go do what was done in the old days, you know, when overclockers took all parameters into account:

Look at the speed of the processors.
Look at the Cache sizes.
Remember that the power consumption of the Intel northbridge should be factored into the CPU's power consumption, to even things up with AMD's on-die memory controller, ya know, if you want to nitpick... or be accurate.
Look at memory speeds (notice that AMD still holds something like the top 30 slots in memory performance before Intel makes it onto the list, despite often considerably slower clock speeds).

Now, go to the Tom's Hardware CPU charts. Look at the real benchmarks, not the theoretical synthetic crap. Figure out the performance difference in each benchmark, then compare it to the corresponding piece of hardware that's benched.
Then factor in the price.

For example, back when the QX6850 was coming out with its $1500 price tag, it was benched against the Athlon 64 X2 6000+:
same clock speed,
the Intel chip has 4x the L2 cache,
twice the cores,
a roughly 30% smaller process (65nm vs 90nm),
~7% faster memory.

AMD still had something like a 25% lead in memory performance despite its slower memory, and in real-world app benchmarks the Intel chip averaged 15-30% faster. Intel's highest gain was 45% in one bench. That's the Intel quad, 2x4MB L2, 65nm die, 800MHz RAM... against the AMD dual core, 2x1MB L2, 90nm, 750MHz RAM.

Average high-end performance gain of 30% in real-world apps.
Intel: $1500
AMD: $180
Price difference: 88%
The sad part: the QX6850 had something like a 20% high-end performance gain over the $300 2.4GHz Q6600, with otherwise identical specs.
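The price/performance arithmetic above can be sketched in a few lines. All prices and percentages below are the figures quoted in this post, not independently verified numbers:

```python
# Price/performance sketch using the poster's quoted figures
# (QX6850 vs Athlon 64 X2 6000+; numbers are as claimed, not verified).

def price_premium(expensive: float, cheap: float) -> float:
    """Fraction of the expensive part's price saved by buying the cheap one."""
    return (expensive - cheap) / expensive

def perf_per_dollar(relative_perf: float, price: float) -> float:
    """Relative performance units bought per dollar."""
    return relative_perf / price

INTEL_PRICE = 1500.0   # QX6850 launch price, as quoted
AMD_PRICE = 180.0      # Athlon 64 X2 6000+ price, as quoted
INTEL_PERF = 1.30      # ~30% average real-world lead, as quoted
AMD_PERF = 1.00        # baseline

premium = price_premium(INTEL_PRICE, AMD_PRICE)   # 0.88 -> the "88%" above
ratio = perf_per_dollar(AMD_PERF, AMD_PRICE) / perf_per_dollar(INTEL_PERF, INTEL_PRICE)

print(f"Price difference: {premium:.0%}")                      # 88%
print(f"AMD delivers {ratio:.1f}x the performance per dollar")  # ~6.4x
```

On these quoted numbers, the cheaper chip comes out around 6.4x ahead in performance per dollar, which is the poster's point in compressed form.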

And now, with the QX6850 still costing $1000, compared to the $200 Phenom it has an avg performance gain of 10-20%, hitting a high of 35% in one bench.

If someone gave me an Intel rig, sure, I'd use it. But I'd much prefer to build 2 or 3 AMD rigs and overclock them to the breaking point, and still be stable, for the cost of one Intel rig.

lol, it's just absurd that anyone can complain about AMD overcharging for a chip after doing the math and getting the pure, clean, hard facts about Intel's performance, even compared with itself, and the prices charged.

Intel has the lead, but it isn't as big as you AMD-flaming Intel fanboys would like to believe, the performance in no way merits the price, and they don't lead in everything.

In fact, when it comes to innovation... they're way behind. Look at the fact that it will have taken them 6 years to release a chip with an IMC, and a couple more to figure out how to make a native quad core...
0 0 [Posted by:  | Date: 03/27/08 09:00:08 AM]

ZOMG, bang on.
A $2500 rig from AMD with 10-20% less performance compared to a $5500 rig from Intel? Hmmm, I am sticking with the cheaper rig.
0 0 [Posted by:  | Date: 03/27/08 02:53:01 PM]
$2500 rig for your grandma?!?

and then you keep upgrading every 6 months just to keep up with new games or resource-hungry apps?!? roughly $7500... go ahead and stick with AMD...

Intel's Q6600 has been out for more than a year and can still whoop your Phenomenal @$$ anytime!
0 0 [Posted by:  | Date: 03/27/08 06:15:56 PM]
Say what? 2 years, 8 sockets?
You must have been living on Mars for the last couple of years. Or in the AMD headquarters... nothing happened there either!

Anyway, technically, Intel doesn't use a socket, it's a Land Grid Array. That was just a footnote.
So, LGA775 (Intel) and Socket 939 (AMD) came out in the same period, June 2004 to be precise. Now, in 2008, Intel still has... LGA775. AMD, on the other hand, has had Socket 939, Socket AM2, Socket AM2+, and now they will also have Socket AM3... Moreover, all new Intel CPUs are compatible with older mainboards, and even older CPUs like the Pentium D fit into new mainboards. Now you try to fit a Socket 939 CPU into an AM2 mainboard. Success!! No, of course it won't work, it's totally different. AMD announced that AM3 CPUs will be backwards compatible with AM2 sockets, but not vice versa. So when you say "upgrading every six months", you must be talking about AMD then...

Next point. You say "no real design improvement outside of smaller die size, bigger slower cache and putting 64bit development at a standstill because intel can't figure that out quite yet." If AMD has figured it all out, like smaller but faster cache for instance, then why is Intel still superior to AMD? Well, one innovation doesn't make you superior. So maybe it's Intel that makes all the improvements.

Next point: "they're seemingly plagued with driver, performance and compatibility issues". Oh boy... Ever heard of the TLB bug? Performance issues? Compatibility issues? Hmmmm, you must be talking about AMD again.

Next point. Aaaah, forget it. Just go back to Mars or the AMD headquarters.

Oh wait, I forgot to mention something. I currently have a C2D, which will be replaced by a C2Q within a few months. But before that, I had three AMDs in a row. So I'm not an Intel fanboy, nor an AMD fanboy. All I need is performance...
0 0 [Posted by:  | Date: 03/31/08 12:36:55 AM]

I would like to see the price/performance charts too. Then everybody could see what they paid for and know what to expect.
0 0 [Posted by:  | Date: 03/28/08 12:01:07 AM]

yo, you guys (the ones at X-bit Labs) should use a better mobo, like the one from Gigabyte, which overclocks better than the one from DFI

i agree with ISeeFractals...

Intel's Q6600 gets beaten to death by the E8400... I'm thinking of getting one for myself... or waiting till AMD transitions to the 45nm process, then let's see what the Intel fab in Israel can cook up...
0 0 [Posted by:  | Date: 03/28/08 01:25:23 PM]

Something I really cannot understand is why people (almost all sites testing the Phenom so far) are so unfair to AMD's new chip:
- The AMD Phenom is supposed to have improved floating-point performance, not a brilliant integer unit. And so far every site, X-bit Labs included, avoids floating-point benchmarks and shows integer-only ones. I feel this is somewhat unfair, and at least half of the information available to the tester is not displayed. Quite a pity!
- When new Intel chips come out, people always argue that they don't perform that well because the software is not optimized for the new chip. As far as I know, this holds for the AMD Phenom too. Generally this argument doesn't matter much, but in this case it really does: most code is optimized to take advantage of the very large cache of Intel Core 2 processors. That is, in the real "Microsoft world" case, the new AMD architecture really suffers an unfair burden just from the tuning used in the code.
For those two reasons, I feel a fair real-world comparison between Intel and AMD processors would involve:
- optimizing the binaries for each processor in the same way, that is, making the fastest possible binaries for both;
- including floating-point benchmarks as well as integer ones...

Call me an AMD fanboy if you want, but I'm waiting to get my hands on their Barcelona before drawing conclusions about performance. At least from this test I don't think it's possible to get a sensible idea of the new AMD processor's performance.
0 0 [Posted by:  | Date: 03/30/08 06:46:04 AM]

because floating-point applications are rare in real-world tasks! Intensive floating-point operations are usually used in scientific research (fluid dynamics, meteorological modelling). Besides, most floating-point benchmarks like SPECfp show a very marginal boost from the K10 design. Check out Anand's early benchmarks of the Barcelona...
0 0 [Posted by:  | Date: 03/30/08 06:42:24 PM]
I'm not going to call you a fanboy, but I am going to call you mistaken.

Regarding your first objection: although I can't speak for certain about all the benchmarks, if you look at only the Mathematica benchmark, I can tell you for sure that it consists of 15 tests: only 3 of these are integer-based, and the remaining 12 are floating-point tests. That means that in fact the majority of this benchmark--which sees the Phenom lagging the furthest behind the Intel chips--did indeed exercise the improved FP units in the Phenom. To be quite honest this most likely applies just as well to the video encoding tests which require heavily FP-based DCT and DWT operations.

Regarding the second, it is not true that all code is optimised for large L2 caches. This simply depends on the working set size of the problem. As it happens you can largely determine what the working set size is by how badly the patched B2 Phenom does relative to the other chips: if it's performing at approximately clock speed parity, then the Phenom's L3 cache is not being used significantly anyway. This case applies as far as I can see only in the WinRAR benchmark which has an unusually large working set size.
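The working-set argument above can also be probed directly rather than inferred from benchmark deltas. The sketch below is illustrative only: the buffer sizes and stride are my own choices, and Python interpreter overhead swamps real L2/L3 effects, so treat it as the shape of the experiment (time per access rises once the working set outgrows a cache level) rather than a usable cache measurement, which would need C or a dedicated tool:

```python
# Illustrative working-set probe: walk buffers of increasing size with a
# fixed stride and report average time per access. On real hardware (and
# in a low-overhead language), the per-access time steps up each time the
# working set exceeds a cache level.
import array
import time

def time_per_access(n_bytes: int, passes: int = 5) -> float:
    stride = 16  # skip elements so consecutive touches land in different lines
    buf = array.array("q", range(n_bytes // 8))  # 8-byte signed integers
    start = time.perf_counter()
    total = 0
    for _ in range(passes):
        for i in range(0, len(buf), stride):
            total += buf[i]
    elapsed = time.perf_counter() - start
    accesses = passes * len(range(0, len(buf), stride))
    return elapsed / accesses

# Sizes chosen to straddle typical 2008-era L2 (512KB-4MB) and L3 (2MB) caches.
for size_kb in (32, 256, 2048, 16384):
    t = time_per_access(size_kb * 1024)
    print(f"{size_kb:6d} KB working set: {t * 1e9:8.1f} ns/access")
```

The diagnostic in the comment above works the other way around: if a chip with a bigger cache shows no advantage on a given benchmark, the benchmark's working set was probably small enough to fit in both caches anyway.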

Since both of the reasons are wrong in the first place, I won't comment on your proposed solutions. :)
0 0 [Posted by:  | Date: 03/30/08 07:13:19 PM]
I should clarify my previous comment: it is the case of a large working set size being a possible disadvantage to the Phenom which applies mainly for the WinRAR benchmark. The other benchmarks where B2 and B3 Phenom perform similarly don't suffer unduly from it.
0 0 [Posted by:  | Date: 03/30/08 07:21:28 PM]
I suppose you know what you are talking about. I'm really surprised that symbolic computation software written in C++ (Mathematica) could be FP-intensive. Also, not saying you are mistaken, but I'm really not convinced by your argument about cache (likely my own misunderstanding). Anyway, I shall soon run my own tests and discover whether this architecture is really worth anything under a real-world FP (actually DFT calculation) load.
0 0 [Posted by:  | Date: 03/31/08 07:39:48 AM]
Mathematica is actually written in C, not C++, although to be fair this is largely irrelevant. All except one of the benchmarks are purely numerical and don't exercise the symbolic capabilities of the software at all; one can argue that this is rather unrepresentative of typical usage, but it shows the performance of the CPU quite well.

I'm not sure whether by DFT you meant density functional theory or discrete Fourier transform, but whichever it is, I'm confident that you'll see about the same relative performance in your application as was demonstrated in these benchmarks.
0 0 [Posted by:  | Date: 03/31/08 10:22:07 AM]

I find it hard to believe that they could only get 2.7GHz @ 1.4V out of this CPU. I'm running 2.83GHz @ 1.325V stable on an Asus M3A32MVP with the stock cooler, and I'm just waiting for my THC cooler so I can take it over 3GHz. I don't know how well that DFI board works, but only 2.7 @ 1.4 is crap...
0 0 [Posted by: ojdidit84  | Date: 08/22/08 11:09:11 AM]


