

Advanced Micro Devices disclosed at the International Solid-State Circuits Conference (ISSCC) that, in order to increase clock-speeds of its forthcoming microprocessors based on the Piledriver micro-architecture, it will use a new resonant clock mesh technology developed by Cyclos Semiconductor. The new technology makes it possible to cut power consumption by 10%, or to boost clock-speed by 10% without an increase in TDP.

AMD’s x86-64 core code-named “Piledriver”, which runs at 4GHz and higher clock-speeds, employs resonant clocking to reduce clock distribution power by up to 24% while maintaining the low clock-skew target required by high-performance processors. Fabricated in a 32nm SOI process, Piledriver represents the first volume production-enabled implementation of resonant clock mesh technology.
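The 24% and 10% figures are mutually consistent if the clock network accounts for a sizeable share of total chip power; a quick back-of-the-envelope check (the share below is inferred from the two numbers, not an AMD figure):

```python
# If cutting clock-distribution power by 24% cuts total chip power by
# ~10%, the clock network must account for roughly 10/24 of total power.
clock_share = 0.10 / 0.24
print(f"Implied clock-distribution share of total power: {clock_share:.0%}")
```

That works out to roughly 40%, which is in line with clock distribution being one of the largest single power consumers in a high-frequency CPU.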

“We were able to seamlessly integrate the Cyclos IP into our existing clock mesh design process so there was no risk to our development schedule. Silicon results met our power reduction expectations, we incurred no increase in silicon area, and we were able to use our standard manufacturing process, so the investment and risk in adopting resonant clock mesh technology was well worth it as all of our customers are clamoring for more energy efficient processor designs,” said Samuel Naffziger, corporate fellow at AMD.

Cyclos resonant clock mesh technology employs on-chip inductors to create an electric pendulum, or “tank circuit”, formed by the large capacitance of the clock mesh in parallel with the Cyclos inductors. The Cyclos inductors and clock control circuits “recycle” the clock power instead of dissipating it on every clock cycle, as a conventional clock tree implementation does, which results in a reduction in total IC power consumption of up to 10%.
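As a rough illustration of the tank-circuit math (the values below are hypothetical, not AMD's actual mesh parameters): an LC tank resonates at f = 1/(2π√(LC)), so the on-chip inductors must be sized against the mesh capacitance to resonate near the clock frequency.

```python
import math

def tank_inductance(f_hz, c_farads):
    """Inductance that resonates with capacitance C at frequency f:
    f = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f)**2 * C)."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * c_farads)

# Hypothetical numbers: a 4GHz clock and 1nF of total mesh capacitance.
L = tank_inductance(4e9, 1e-9)
print(f"Required tank inductance: {L * 1e12:.2f} pH")
```

In a real design the inductance is distributed across many small on-chip inductors rather than one lumped element; the sketch only shows the resonance relationship.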

Clock mesh power reduction is one area where EDA vendors have not yet delivered design solutions, so the validation of resonant clock mesh technology via the AMD Piledriver design is welcome news to the IC design community.

Implementing inductors on-chip to resonate a clock mesh is a simple idea with complex implementation requirements. Cyclos has commercialized over 10 years of research to produce the first resonant clock mesh design solution that meets all the testability, reliability, dynamic frequency scaling, and quality assurance requirements of today’s ICs.

“Now that the Cyclos technology is validated, we’re looking forward to expanding into SoC designs via the design automation tools that are in development at Cyclos. We believe resonant clock mesh design will be a key enabler for GHz+ embedded processor IP blocks in next generation SoCs that also require ultra-low power consumption,” said Marios Papaefthymiou, founder and president of Cyclos Semiconductor.

Tags: AMD, Cyclos, Piledriver, Trinity, Vishera, Viperfish, 32nm, Globalfoundries


Comments currently: 109
Discussion started: 02/22/12 09:11:10 PM
Latest comment: 04/13/16 11:35:19 PM

7 14 [Posted by: vid_ghost  | Date: 02/22/12 09:11:10 PM]

vid_ghost: Go on and design a better CPU yourself.

Bulldozer's pipeline is much shorter than Pentium 4's pipeline.

Actually, Bulldozer's pipeline is about as long as the pipeline of the world's fastest CPU, IBM's POWER7. So it seems to be at the sweet spot of pipeline length.

Bulldozer's problems are elsewhere. With a shorter pipeline it would be an even worse CPU, as it would clock much lower.

And a K10-based "Athlon 6x" cannot clock to 4 GHz (reliably, with reasonable power consumption) because of its shorter pipeline, so you can stop dreaming about it.

Bulldozer is performing badly mostly because of:
1) The combination of small L1 caches and slow L2 caches. This problem stays with Piledriver.
2) L1 instruction cache aliasing problems and write-through L1 caches causing excessive L2 traffic. This problem stays with Piledriver.
3) They made a couple of small mistakes somewhere, and it cannot reach the clock speeds it was supposed to reach / that most of its pipeline would allow. Piledriver will fix this.
4) To get full floating point performance, you have to use AMD's own FMA4 instructions. No legacy software uses those, and not all new software is going to use them because Intel is not going to implement those same instructions. Piledriver is going to support Intel Haswell-compatible FMA3, so new code optimized for Intel will give full FPU performance on Piledriver, with no need for AMD-specific optimizations.
14 7 [Posted by: hkultala  | Date: 02/22/12 09:30:26 PM]

6 13 [Posted by: vid_ghost  | Date: 02/23/12 12:23:28 AM]
Thuban 1100T was not 3.7 GHz with 6 cores active. It was 3.3 GHz with all cores active, and the turbo mode is far from perfect; even when only one core is active, it often runs at a clock speed lower than 3.7 GHz.

And Intel with a shorter pipeline can reach high clock speeds because Intel has much better factories/a much better manufacturing process. AMD/GlobalFoundries don't have such good factories. They'll have to live with what they have.

K10 had reached the end of its life. Nehalem already beat it badly, and there was no room left for improvement in K10; there was too much legacy burden from K7, like the lack of memory disambiguation, too tightly coupled ALU and AGU units, Tomasulo-style OOE instead of PRF-based OOE, etc.
And you cannot change these things in an existing architecture; they had already changed everything that could be changed/improved between K7 and K10.

So quite a few years ago AMD knew it needed a totally new architecture after these K7 derivatives, and it developed Bulldozer. It ended up being worse than expected, but most of the problems are with the implementation, not deep in the architecture.

Now there is a lot of room for improvement by fixing the things that turned out to be bottlenecks in the design.
9 1 [Posted by: hkultala  | Date: 02/23/12 05:42:39 PM]
Great post, it seems Intel fanboys suffer from intense paranoid amdphrenia and dual personality disorders, they actually believe they can design, all by themselves, better cpus than AMD!
13 5 [Posted by: bereft  | Date: 02/23/12 02:10:16 AM]
I'm a pro-AMD fanboy, I just hate the fact they stuffed up so much...

At least they FIRED a CEO over it
3 3 [Posted by: vid_ghost  | Date: 02/23/12 02:49:00 AM]
Actually I think the three problems with BD are: 1) the narrow decoder/fetcher, which (for one module) can only decode as much as an i7-2600 (for one core), 2) low IPC, which should improve with Piledriver, 3) maybe the cache problems you mentioned.

And for all the people who whine: go design a better CPU. If we compare performance among all desktop CPUs in the world, Intel is #1 and AMD is #2. It's not extremely bad to be #2, and AMD is much, much better in terms of performance than tens of companies.

Plus, at least AMD could get near SB in terms of performance, although AMD has far fewer workers and most of them work on GPUs.
5 3 [Posted by: madooo12  | Date: 02/23/12 03:27:36 PM]
I really don't like you.

1. You have NO clue what you're talking about. Studying?? Must be some low-budget post-secondary institution, let me tell ya.

2. You're hard headed. Others have made logical arguments which FOLLOW. Your arguments are entirely based on your emotional attachment to a name brand (AMD).

3. You sound like BaronMatrix and anyone who sounds like him... doesn't deserve to be respected.

0 2 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:07:29 AM]
Sounds like you almost know what you're talking about. I agree with you, but screw the rest. I use AMD because I'm a fan and don't want Intel to have a monopoly. There are a lot of really cool things about AMD's awesome overclockers and unlockers, buy 2 cores and get 4... Just imagine what they could achieve with the same market share as Intel... Just give the FX a chance, I'm sure it will grow up just as fine as the K7 did, even better perhaps.
1 0 [Posted by: aMdArK  | Date: 08/20/12 08:01:51 AM]

In 2 months we can compare Piledriver (Trinity) to K10 (Llano), both manufactured with the same 32nm technology.

Then we will start seeing what Piledriver gives compared to K10.

10 0 [Posted by: hkultala  | Date: 02/22/12 09:32:48 PM]

For now, we can compare Athlon II (45nm K10.5 without L3 cache) to Llano (32nm K10.5 without L3 cache); apples to apples. AnandTech and other reviews seem to conclude that the die-shrink does little to improve the performance of the K10.5 architecture. It has already maxed out.
3 1 [Posted by: gamoniac  | Date: 02/23/12 02:49:59 AM]
Athlon II (45nm K10.5 without L3 cache) to Llano (32nm K10.5 without L3 cache); apples to apples.

No... Llano chips are APUs and Athlons are not, so you cannot compare them apples to apples.
2 1 [Posted by: veli05  | Date: 02/23/12 12:32:13 PM]
What he means is that they use the same core architecture as the Athlons, just like how the APUs use Radeon graphics cores.
5 0 [Posted by: TekTekDude  | Date: 02/23/12 02:04:12 PM]

How much will AMD pay for that RCM licensing?
Will AMD manage to increase CPU ASPs or sales enough to compensate for the licensing?
3 1 [Posted by: Azazel  | Date: 02/22/12 10:58:11 PM]

Sandy Bridge would be in a whole lot of trouble against a 32nm Phenom with clock mesh tech.
5 4 [Posted by: saneblane  | Date: 02/22/12 11:45:56 PM]

5 8 [Posted by: vid_ghost  | Date: 02/23/12 12:29:06 AM]
It is bad, but it is far from rubbish. The idea itself is good, but like many things by AMD, it was planned poorly. They are going to be patching up a lot of problems in the second generation and many more improvements in the future.
1 1 [Posted by: TekTekDude  | Date: 02/23/12 02:05:45 PM]
yes, just see how PHII compares to PHI

Everything AMD makes as a 2nd generation is much, much better than the 1st generation, but of course it then just ditches it

wish AMD could improve BD but NOT ditch it
4 4 [Posted by: madooo12  | Date: 02/23/12 03:20:14 PM]
When K10 had its not-so-successful launch, I was told Intel would be in a whole lot of trouble when Bulldozer came in 2009 with its SSE5.
1 0 [Posted by: klikodesh  | Date: 02/23/12 02:37:11 AM]

So much fanboism in this thread, it's not even funny.

Xbitlabs themselves have shown in their Bulldozer review that 2 Sandy Bridge cores are about as fast as 2 Bulldozer modules at the same clocks (i.e., 2 SB cores = 4 Bulldozer cores).

Furthermore, if we look at a benchmark such as Cinebench, we can see both single-threaded and multi-threaded performance. Cinebench scales extremely well with IPC and cores. It correctly shows that Phenom II is better per clock than Bulldozer, which has been shown by Xbitlabs and many other sites.


Single-threaded Performance
FX-8150 3.6ghz = 1.02 (100%)
Phenom II X6 1100T 3.3ghz = 1.1 (108%)
Core i5 2500K 3.3ghz = 1.38 (135%)
Core i7 2600K 3.4ghz = 1.41 (138%)
Core i7 3770K Ivy 3.5ghz = 1.65 (162%)

On average, we can expect Ivy Bridge to hold about a 50-60% advantage in single-threaded performance over current iterations of Bulldozer (i.e., games such as Starcraft 2 and SKYRIM show this "Cinebench" % lead translates extremely well when comparing Sandy Bridge to Bulldozer). We can also see that Phenom II is better on both a per core and a per clock basis than Bulldozer architecture. This again has been shown by all the major professional reviewers.

Multi-threaded performance:
Phenom II X6 1100T 3.3ghz = 5.86 (97%)
FX-8150 3.6ghz = 6.03 (100%)
Core i7 2600K 3.4ghz = 6.88 (114%)
Core i7 3770K Ivy 3.5ghz = 7.52 (125%)

Again, Bulldozer can only outperform Phenom II as a result of having 8 cores and more aggressive Turbo Boost when all the cores are used. Phenom II is still the more efficient processor even under multi-threaded tasks, considering it has lower clocks and only 6 cores. Compared to Intel's offerings, Bulldozer's 8 cores still cannot even match 4 Sandy Bridge cores with HT, and they lose badly to 4 Ivy Bridge cores with HT.
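A quick way to sanity-check that per-core/per-clock claim from the multi-threaded scores quoted above (a rough sketch: core counts are nominal and HT threads are not counted separately):

```python
# Cinebench multi-threaded score divided by (cores * GHz), using the
# numbers quoted above; higher means more work per core per clock.
chips = {
    "Phenom II X6 1100T": (5.86, 6, 3.3),
    "FX-8150":            (6.03, 8, 3.6),
    "Core i7 2600K":      (6.88, 4, 3.4),
    "Core i7 3770K":      (7.52, 4, 3.5),
}
for name, (score, cores, ghz) in chips.items():
    print(f"{name}: {score / (cores * ghz):.3f} points per core-GHz")
```

On these numbers the Phenom II indeed does more work per core-GHz than the FX-8150, while the Intel quads roughly double both.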

Bulldozer would need a 25% increase in frequency just to match a stock Ivy Bridge Processor in multi-threaded tasks and an unreal increase in frequency to have any shot to beat Ivy Bridge in applications that use 1-4 threads only (something that won't happen on any refined 32nm process).

Now, even if AMD could release a 4.5ghz Piledriver, Ivy Bridge can still easily overclock from 3.5ghz to 5.0ghz on air cooling. Since we are enthusiasts and not average consumers, it's only logical to compare top speeds on air cooling. Not many people here spend $200-300 on a CPU and not overclock it. Under those conditions, Sandy Bridge and Ivy Bridge would have higher overclocking and at the same time lower power consumption, only extending their lead.

Essentially, Piledriver will not change a single thing. Next year Intel will add another 15% IPC increase with Haswell and most likely it will overclock even higher than 5.0ghz because 22nm process will have matured even more in 12 months.

It's perfectly OK to buy AMD chips because 1 person might prefer AMD over Intel. However, let's be subjective here and look at the reality based on facts. Right now, AMD's CPU performance is so far behind that even a 1st generation overclocked Nehalem/Lynnfield processor such as an i7 920 @ 4.0ghz or i7 860 @ 3.9ghz would beat Bulldozer in 90% of applications. Bulldozer would net some wins in rendering, video encoding, encryption. If you perform those tasks often, there is a case for Bulldozer. Problem is, the i7 920 is a chip from 2008. So Bulldozer competes against SB, and Piledriver will compete against IVB and most likely, to some extent, Haswell.

Basically, if someone wants to support AMD, there is a legitimate way to do so by buying their superior HD7900 series. But calling Intel CPU users fanboys is actually the most ridiculous thing anyone can say since it's actually difficult if not impossible to make a case for AMD's CPUs > $100 at the moment based on benchmarks, overclocking and/or power consumption.

But if people still want to argue facts, there are plenty of professional reviews that contradict their assessment that AMD has "competitive CPUs".

Gaming performance with HD7970:

For example, the Pentium G630 and Pentium G860 performed just slightly below and above AMD's venerable Phenom II X4 955, respectively. Even more eye-opening was that the new FX-4100, -6100, and -8120 actually underperformed both budget-oriented Intel chips, despite their higher price points. The Athlon II X3 and X4 CPUs were left in the dust, along with the Llano-based A8 and A6 APUs, and the A4-3400 performed dismally.

Pushed as far as they can go on air, no AMD CPU can touch a stock Core i5-2400 in the same benchmark suite.


It would be laughable for someone with HD7970 or a pair of those to use an AMD based CPU. Who would cripple $600-1200 worth of graphics performance? Even AMD tells reviewers to test their HD7970 graphics card with Intel CPUs to not show CPU bottlenecks!
10 4 [Posted by: BestJinjo  | Date: 02/23/12 07:24:48 AM]

But calling Intel CPU users fanboys is actually the most ridiculous thing anyone can say since it's actually difficult if not impossible to make a case for AMD's CPUs > $100 at the moment based on benchmarks, overclocking and/or power consumption.

It is easy to make a case for AMD cpu's, people refuse to support a company that jacks prices because they can, and engage in shady business practices. BOOM!!! nuff said
5 4 [Posted by: veli05  | Date: 02/23/12 12:42:13 PM]
Do you remember the Athlon 64 prices, especially the FXes ?
2 3 [Posted by: klikodesh  | Date: 02/24/12 03:14:49 AM]
Seriously? $225 for the 2500K, one of the best overclocking chips, with amazing power consumption even when overclocked, is a bargain.

Back then when AMD was in the lead, the prices for their Athlon X2 64 line were ridiculous:


Also, now a good CPU lasts much longer, since for the most part tasks are I/O limited (so you are better off upgrading to an SSD first) or GPU limited. And if you want CPU speed, IPC and frequency are key. Unless you know you need 8 full working threads, buying an 8-core Bulldozer is just for "bragging rights".

At 4.8ghz, an FX8100 chip will draw 200-250W more than Sandy. That's only going to get worse once IVB launches. A 2500k @ 4.5ghz will last 4 years, easily. $225 for that, in the context of overall system cost and $550 videocards that become too slow in 3 years, is a bargain.

In that sense, buying a $225 CPU over say a $130 Bulldozer still makes sense.
5 4 [Posted by: BestJinjo  | Date: 02/24/12 05:27:11 PM]
Funny how people vote down posts where the only thing you point out is what AMD did in the past. Seems like some people do not want to remember.
1 1 [Posted by: klikodesh  | Date: 02/26/12 05:33:00 AM]
Cinebench does not use Bulldozer's FMA4 instructions.

This means it can only use 50% of the theoretical fpu performance of bulldozer. (the real world performance difference is smaller than theoretical).

So using such a benchmark to compare which architecture is better is not very informative.

See the x264 test on

on pass1 bulldozer fps went from 75 to 121 fps, and intel core i7 from 100 to 151 when enabling FMA4 for AMD and AVX for Intel. Phenom II does not support these instruction sets, and could not easily be changed to support them.

on pass 2 the fps only went from 35.8 to 38.6 fps with FMA4 support, i.e. an ~8% increase.
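The percentage gains quoted can be checked directly from the fps numbers in the post (simple arithmetic, nothing more):

```python
def pct_gain(before_fps, after_fps):
    """Percentage speedup from enabling the wider SIMD/FMA code path."""
    return (after_fps - before_fps) / before_fps * 100

print(f"Bulldozer pass 1 (FMA4): +{pct_gain(75, 121):.0f}%")
print(f"Core i7 pass 1 (AVX):    +{pct_gain(100, 151):.0f}%")
print(f"Bulldozer pass 2 (FMA4): +{pct_gain(35.8, 38.6):.0f}%")
```

The pass-1 gain for Bulldozer is around 60%, which is why the choice of FMA4-aware versus FMA4-unaware benchmarks matters so much in this argument.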
3 4 [Posted by: hkultala  | Date: 02/23/12 06:03:40 PM]
It is informative since Cinebench accurately represents the average performance difference on a per core/per clock basis.

Using something like X264 test is actually extremely biased since it's one of the most favourable programs for an 8 core CPU.

If you need to render, encode video, or do encryption, of course the Bulldozer comes highly recommended. But for 90% of everyone else, 2500K and soon 3570K will be miles better. Even the lower-end quads from Intel are superior.

Let's not even get started on the mobile segment where APU has nothing to compete > $600 level in laptops.

Anyone who wants a good desktop AMD processor is far better off buying the $125 Phenom II X4 960T and unlocking it into an X6.

There isn't a single Bulldozer processor worth buying unless they drop the price of FX8150 to $140 or below.
5 5 [Posted by: BestJinjo  | Date: 02/24/12 05:29:25 PM]
"However, let's be subjective here and look at the reality based on facts."

Interesting way to put it, lol. Adam from MythBusters would reject your reality and substitute his own.
0 0 [Posted by: mikato  | Date: 03/13/12 12:14:57 PM]

They told us the same with Bulldozer: it's great, it's good, it's fast, blah blah, until we saw the benchmarks lol. It's better to wait and see for yourself.
6 1 [Posted by: 3Dkiller  | Date: 02/23/12 10:21:33 AM]

Intel tri-gate provides more benefit than the resonant mesh in Piledriver, so the distance between AMD and Intel is still increasing.
AMD is no longer competition for SB or IB. Intel's previous CPUs, like the Conroe-based and Nehalem-based ones, are much harder competition for SB or IB than any CPU from AMD.
So Intel is hard pressed by its previous success, and must make serious improvements with any new CPU. If not, sales will fall, profit will drop, and it will lose more money than was required to develop the new CPU.
3 3 [Posted by: Tristan  | Date: 02/23/12 12:43:00 PM]

TSMC skips 22 nm, rolls 20-nm process

TSMC has shown that 22nm vs. 22nm with tri-gate improves REAL WORLD power by as little as 8%.

The real improvement comes from the shrink Intel makes from 32nm to 22nm. 3D tri-gate isn't as important at 22nm as it will be at 14nm or smaller.
1 1 [Posted by: vid_ghost  | Date: 02/23/12 02:43:52 PM]
As awful as BD's power consumption is, 8% would mean a whole lot to AMD.
2 1 [Posted by: klikodesh  | Date: 02/24/12 03:05:27 AM]
I like the way you put "as little as 8%". This implies that that is the minimum decrease in power, and for all we know it could decrease AVERAGE power consumption by as much as 12% or more.

AMD said that this "resonant mesh" thing reduced power by 10%. Based on what AMD has done in the past (Bulldozer, anyone?) we can expect this claim to be exaggerated. "Resonant mesh" will probably bring negligible reductions in power consumption, 8% at the most.

The real benefit piledriver will see over bulldozer is the fact that it is a die shrink from 45nm (bulldozer), to 32 nm.
0 0 [Posted by: Darth415  | Date: 03/29/12 04:48:41 PM]
1 6 [Posted by: madooo12  | Date: 02/23/12 03:11:58 PM]
that isn't relevant to the conversation.... GAH!
0 0 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:12:14 AM]

As an Intel fanboy, I truly hope this works out for AMD.
7 2 [Posted by: lol123  | Date: 02/23/12 02:27:19 PM]

The CPU is of minor importance at present, as there is no commercial OS which can use the full capacity of either make of CPU, or use all the multi-cores available. Most benchmark tests use single-core operation. There are very few programs written for multi-core processors. A very few graphics programs use all the cores. Many programs open at the same time are just used by lazy operators or folks who were badly trained, or not trained at all, in the operation of a computer.
2 2 [Posted by: tedstoy  | Date: 02/23/12 04:30:00 PM]

Windows 7 is perfectly capable of using multiple cores. There are some (but not many) programs on Windows 7 that can stress all 12 threads the i7 3930k has to offer.

People not trained in the operation of a computer won't have "many" programs on their computers running. It is the power users that will multitask, and the power users that will buy expensive processors.

Transcoding video, video editing, compressing/encrypting files, gaming, and even the overall responsiveness of your desktop will benefit from a faster processor!
0 0 [Posted by: Darth415  | Date: 03/29/12 05:28:01 PM]

From what I hear, Piledriver will have 10-15% better IPC than Bulldozer.

Resonant clock mesh reduces power by 10%, allowing 10% higher clocks. It also eliminates clock skew for another ~10% of overclocking headroom.

Bulldozer can get to 5ghz on air. Piledriver might get to 6ghz. Add a 10-15% IPC increase and a 10-15% boost from Windows 8 CMT scheduling, and we might have Intel on the ropes as far as socket 1155 goes.
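Taking the post's estimates at face value, the gains multiply rather than add; a sketch of that arithmetic (these are the poster's optimistic figures, not measured results):

```python
# Compounding the claimed gains: 15% IPC, 20% clock headroom (5ghz to
# 6ghz), and 10% from Windows 8 CMT-aware scheduling.
ipc, clock, scheduler = 1.15, 1.20, 1.10
print(f"Compounded speedup vs. Bulldozer: {ipc * clock * scheduler:.2f}x")
```

Roughly a 1.5x compounded gain, if every one of those estimates actually materialized.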

Intel might not be prepared to have a price war using socket 1366/1211.

It could be a very interesting year.
4 2 [Posted by: grndzro  | Date: 02/23/12 11:23:35 PM]

Intel on the ropes?

Take a look at this review: 2600k @ 5.0ghz vs. FX-8150 @ 4.8ghz

2600k @ 5.0ghz vs. FX8150 @ 4.82ghz
Cinebench = +23%
WPrime = +18%
Image Editing = +107%
Video Encoding = +37%
Multi-Tasking = +35%
Arma II = +49%

Consumes ~270W less.

It's going to take a 6.0ghz Bulldozer just to match a 5.0ghz 2600K. But IVB will overclock better and have higher IPC, plus for sure lower power consumption.

Also, Bulldozer was already tested in Windows 8. The performance increase is like 2-3%.

Even if an overclocked Piledriver matches an overclocked IVB, it'll never touch Intel's 22nm power consumption in overclocked states.
7 3 [Posted by: BestJinjo  | Date: 02/24/12 05:50:33 PM]
1 5 [Posted by: madooo12  | Date: 02/25/12 07:43:23 AM]
You make no sense at all.

65nm Core 2 Duo to 45nm Wolfdale (5% increase in IPC)
45nm Wolfdale to Nehalem/Lynnfield (15-20% increase in IPC)
Nehalem/Lynnfield to Sandy Bridge (15-20% increase in IPC)

IVB is supposed to have at least 5% faster IPC.

Take a look at 2500k vs. Q6600:

Take away the Turbo-Boost function of 2500k and it's clear it's about 50% faster in IPC than the old Core 2 Quad. The additional IPC increase + higher clocks allow 2500k to be almost 2x faster than Q6600.

If Intel didn't increase IPC, how in the world is a 3.3ghz 2500k 2x faster than a 2.4ghz Q6600 in 4 threaded apps such as video encoding?
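The 2x figure is consistent with compounding the ~50% per-clock gain with the clock-speed ratio (simple arithmetic on the stock clocks mentioned above):

```python
ipc_gain = 1.50          # ~50% more work per clock, as argued above
clock_ratio = 3.3 / 2.4  # 2500k vs. Q6600 stock clocks
print(f"Expected overall speedup: {ipc_gain * clock_ratio:.2f}x")
```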

If Intel didn't increase IPC why is Nehalem miles better than Q6600 in Starcraft 2?

Run any single-threaded application such as SuperPi, you'll see a direct correlation between all Intel processors and how they increased IPC at the same clocks from 1 generation to another. It's one of the best measurements for Intel CPUs on a per core basis.

5 seconds of google - Lynnfield i5 760 vs. Sandy Bridge i5 2500k @ 2.8ghz = IPC increase was measured at 14% in their test:

5 2 [Posted by: BestJinjo  | Date: 02/25/12 08:04:43 AM]
1 4 [Posted by: madooo12  | Date: 02/25/12 08:41:35 AM]
Believe whatever you want to believe based on 1 website against 100 others that show continuous increases in IPC. Like I said, go look at AnandTech's review. It's impossible to explain how 2500k is ~2x faster than Q6600 since it only had 3.3ghz clocks vs. 2.4ghz.

I have owned Intel CPUs since Core 2 Duo and the first thing I ever do is test them at the same clocks to see what improvements I get. I see increases in IPC in games and programs I personally run.

Using synthetics such as 3DMark and Sisoftware is meaningless. Other programs in the links you presented are heavily memory bandwidth limited (7-Zip) and WinRAR, while other programs such as audio and video encoding clearly show increases in IPC.

Honestly, claiming that Intel hasn't increased IPC from C2D days is the most retarded thing I've ever heard from any hardware enthusiast. No offense. Just by going from i7 to 2nd generation i7, Intel increased IPC by at least 15%.

Here, I even went out of my way to find yet another review with Nehalem vs. Sandy Bridge clock for clock. i5 760 @ 2.8ghz vs. i5 2500k @ 2.8ghz, all turbos disabled:


Final Performance Rating: Sandy leads by 15%

So 15% alone in that 1 generation.

Also, the very reason that Intel 4-core HT CPU is as fast as an 8-core Bulldozer CPU is because each modern Intel Sandy Bridge core is 2x as powerful.

If Intel hadn't increased IPC all this time, then why is i7 875 @ 4.0ghz losing to 2500k @ 3.3ghz in so many benchmarks in this review?

Your theory makes no sense. If you are best friends with beenthere, then I understand.
5 2 [Posted by: BestJinjo  | Date: 02/25/12 04:24:29 PM]
2 5 [Posted by: madooo12  | Date: 02/26/12 12:15:59 AM]
I could do this all day.

Core 2 Duo E6850 = 3.0ghz no Turbo = 84.5%
Core i3 2100 = 3.1ghz no Turbo = 128%

i3 2100 is on average 51% faster per core.

That almost exactly aligns with performance increases from Core 2 Duo generation to today if you break it down by generation (1.05 IPC from Wolfdale x 1.20 IPC from Nehalem x 1.15-1.20 IPC from Sandy = 1.45x-1.51x) <-- exactly what benches are showing us.
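The bracketed multiplication works out as claimed; compounding the per-generation estimates:

```python
from math import prod

conservative = [1.05, 1.20, 1.15]  # Wolfdale, Nehalem, Sandy Bridge
optimistic   = [1.05, 1.20, 1.20]
print(f"Cumulative IPC gain since Core 2: "
      f"{prod(conservative):.2f}x to {prod(optimistic):.2f}x")
```

That 1.45x-1.51x range is exactly the ~51% per-core gap measured between the E6850 and the i3 2100 above.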
7 2 [Posted by: BestJinjo  | Date: 02/25/12 04:43:58 PM]

There are core architectural changes galore between C2D and Nehalem (major ones). This results in more work being done per clock (IPC).

With each successive generation Intel improves the amount of work their processors can do per clock. The reason being that parallelism (in programming) has yet to catch up to the hardware and Intel's tick-tock strategy requires a steady release of new processors which evidently must show improvements over previous generations in order to sell.

Have you ever thought about "thinking"?
0 1 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:15:19 AM]

1 4 [Posted by: xybit  | Date: 02/25/12 04:56:13 PM]

Yes, but he comes with arguments and he is right.
3 2 [Posted by: cosminmcm  | Date: 02/26/12 02:23:51 AM]
1 4 [Posted by: madooo12  | Date: 02/26/12 02:50:37 AM]
Do you want me to repeat everything he said for you to see that he is right? Even in your link from Tom's it is clear that Intel improved the performance per clock and that the better performance doesn't come only from the higher frequency. And AMD's best doesn't beat the old Conroe per clock. What more do you want?
2 2 [Posted by: cosminmcm  | Date: 02/26/12 10:56:32 AM]
1 4 [Posted by: madooo12  | Date: 02/26/12 01:38:10 PM]
Yes, sometimes 6 megs of L2 cache can be hard to beat even with an integrated memory controller (with HT disabled).
Ivy is a die shrink with some minor tweaks, but even so it will provide some growth in IPC (just like Wolfdale did against Conroe)
You got him, he has lots of accounts because one is not enough to put up with you.
3 1 [Posted by: cosminmcm  | Date: 02/26/12 08:47:15 PM]
1 4 [Posted by: madooo12  | Date: 02/26/12 11:32:21 PM]
What do you know?
0 1 [Posted by: cosminmcm  | Date: 02/27/12 12:48:14 AM]
1 4 [Posted by: madooo12  | Date: 02/27/12 11:45:57 AM]
Do I have to be Pat Gelsinger to write on this forum?
I've studied CPU architectures in college.
Finished the Faculty of Electronics and Telecommunications from Bucharest, although my main domain is telecommunications/networking.
Passionate of hardware since 1999.
If you want that much knowledge go to Realworldtech forums and comment there with the big boys.
Your comments don't inspire that much knowledge. In your first post you sent the guys who think the Bulldozer architecture is weak to go design a better one themselves. Say what?
Then you said that Phenom 2 is so much better than Phenom 1. Actually it is not. It just has a bigger cache, and it could hit higher frequencies because of the fabrication process; nothing magical there. It didn't beat Kentsfield or Yorkfield.
You also said that Intel didn't improve IPC, which is obviously not true, even from your link.
That guy BestJinjo came with competent comments and with links to back up what he said. You just tried to make fun because you had no arguments.
2 1 [Posted by: cosminmcm  | Date: 02/27/12 02:14:51 PM]
you asked how much I knew about CPUs and stuff

I said what I knew

I haven't gotten a university degree in them (yet), but I have been fond of hardware since 2003; I was too young before that to understand how it works (I still knew a lot, though)

so can YOU, who studied CPU architectures, make a better CPU?

then benchmarks show that PHII is much better than PHI even though it is mostly the same; AMD knew what was wrong with it and fixed it to get better performance for the price

I didn't make fun at all, I was just trying to show my point of view

and I don't know anything about the core of x86, it's too complex so I won't provide 100% reliable information, but then again 99.99% of the people in the world don't

I'll be sure to check Realworldtech and see about it
1 2 [Posted by: madooo12  | Date: 02/28/12 12:22:25 AM]
I don't have to make a better CPU, Intel has it already (since the Core 2).
I think that where Core 2 is better than Nehalem, it is because of the cache. Do I have to be a CPU designer to say that? No. If you think I am wrong, say what your opinion is and why you think that is happening.
If you look for a clock-for-clock comparison between Phenom and Phenom 2, you will see that there are places where Phenom 2 has a small performance advantage, and on average it is not faster by more than 5-10%. The reason it looked so good (compared to Phenom) is that it was launched at 3-3.2 GHz (vs 2.6GHz).
Either way, I consider this conversation closed.
1 1 [Posted by: cosminmcm  | Date: 02/28/12 02:52:41 AM]
This is the Internet. Next you are going to claim to be a Karate Master married to Jessica Alba making Millions of $ a year.

Honestly... I'd be surprised if you finished high school.

0 1 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:18:21 AM]
If by being an "Intel" fanboy you mean I buy the best CPU for performance/watt and performance/clock and high-end performance per dollar at a particular point in time, then Yes I am. Just like I was when I was an "AMD" fanboy when I had overclocked my Barton XP2500+ and A64 X2 3800+. By that definition, I was a "hardcore AMD" boy. I am sure some Intel fans must have hated my comments when I was putting down their Netburst Pentium 4s and oven-roasted Pentium Ds. However, a lot of current Intel users never dismissed AMD's at the time revolutionary IMC, superior performance/clock, better power consumption in overclocked states, better performance in games/office tasks. We actually ditched Pentium 4 in a hurry for A64s.

Now I see a repeat of history, but in this case AMD users are now dismissing the advantages Intel has, while previous Athlon 64 X2 users who loved X2 for what it was have moved on to Intel's CPUs for the exact reasons they loved X2s in the first place.

Granted, to some extent it is unrealistic to expect AMD to beat Intel, considering Intel is using 22nm tri-gate transistors. But it is what it is. A manufacturing-node lead is part of a firm's strategy. AMD loyalists can't use the excuse that because AMD is still on 32nm, it's unfair to expect AMD to win the performance/watt crown. They have no problem comparing the 28nm HD 7900 series to 40nm Fermi.

There is another way to look at it. By buying AMD's current processors, a customer is sending AMD the message that their hot and slow processors are good enough. There is another, perfectly good way to support AMD: buy their excellent GPUs or the Phenom II X4 960T. But supporting an inferior Bulldozer architecture doesn't scream efficient markets to me.
5 4 [Posted by: BestJinjo  | Date: 02/26/12 08:35:11 AM]
couldn't you do this all day and reply to me
1 3 [Posted by: madooo12  | Date: 02/26/12 08:56:34 AM]
as far as I know, architectural improvements matter way more for power than transistor size

its only use is smaller dies
1 3 [Posted by: madooo12  | Date: 02/26/12 08:59:28 AM]
Yeah, and each successive generation from Intel since the C2D comes with architectural improvements that boost the amount of work it can do per clock.

They also come with improved process technologies which shrink transistors and improve yields/power consumption etc.

You know... it is possible to improve a product in more ways than one. *face palm*
0 1 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:20:57 AM]

AMD has fallen too far behind, sadly. All the comments I've read here didn't add anything to my knowledge. But then again, with Intel's better tech, better fabs, and smaller process nodes, it's getting harder and harder for AMD to catch up, IMO.
3 1 [Posted by: Pouria  | Date: 02/26/12 04:15:22 AM]

AMD's new architecture is gaining on performance!
Bulldozer's architecture is the only new thing that has happened in the x86 world for a long time!
Intel's approach with the i5/i7 generations deployed a 4-core pump and doubled the amount of logical ROPs.
That was the real reason for such a big performance gain over AMD's and Intel's older platforms (50%). On the other side, AMD is doing pretty well in the server market, where the new core architecture is showing all of its benefits.
For all of us, a faster and more competitive AMD = Intel's care and development = more performance per $, and not the other way around.
That is the real AMD legacy. It has happened twice that AMD was better than Intel, and it will happen again! Last time it happened, AMD did not behave politely! Their fighting is our benefit. ARM's generational progress in performance is reaching 50-100% thanks to having more competitors, while on the PC this rate is a lazy 15-20%.
4 3 [Posted by: Zola  | Date: 02/26/12 01:34:24 PM]


seriously, if you can understand what's written, you'll find it really smart
1 3 [Posted by: madooo12  | Date: 02/26/12 11:30:04 PM]
That was one of the dumbest posts I've read all day... and I understood it.

1. His first line states the obvious.
2. His second line is pure bullshit.
3. His third line claims that Intel increased the amount of logical raster ops, and that this is somehow a "bad" thing to be shunned. Well, ROPs are part of GRAPHICS CARD design... not CPUs. Does he mean FPUs? Who knows...
4. He goes on to talk about the server market. Sure enough, AMD has lost market share in the server segment, but they're still doing OK.
5. Then he just goes into an "I'm an Alex Jones supporter"-style rant about too much fluoride in the water or something. :p

How is that smart?
0 1 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:26:00 AM]

Haha, there's a guy here called BestJinjo, or dumbo, who even brags about how SuperPi proves how Intel improves its CPUs.

What a douche
2 4 [Posted by: CarlosTex  | Date: 02/27/12 05:47:39 AM]

at least that is far better than the crap from you and beenthere
3 2 [Posted by: PnoyP  | Date: 02/27/12 10:10:52 AM]
You have clearly misunderstood my post. I do not advocate extrapolating SuperPi results to compare Intel and AMD in real-world benchmarks. If Intel processors are 2x faster in SuperPi, of course that doesn't mean a 2500K would beat an FX-8150 by 2x in video rendering.

Everyone knows that the SuperPi benchmark does not accurately reflect the real-world multi-processing, multi-core capabilities of modern processors. However, it serves its purpose extremely well for comparing IPC (i.e., single-threaded performance per core) within Intel's product line. It's a good gauge for comparing the single-threaded floating-point performance of CPUs within a brand (i.e., SuperPi correctly shows superior IPC at the same frequency going from C2D all the way to SB, and the inferior performance of Bulldozer vs. Phenom II). Its consistency makes it very popular with overclockers.

If you actually run benchmarks of Conroe/Kentsfield vs. Wolfdale/Yorkfield vs. Nehalem/Lynnfield vs. Sandy Bridge, you will see that performance increases are very much commensurate with the IPC increases for those generations (on average).

As such, SuperPi (despite its useless real-world application) has so far proven to be a fairly accurate predictor of IPC, at least for Intel processors.

BTW, as a side note, SuperPi's performance was stellar on the Athlon 64 compared to the Pentium 4, most likely an outcome of the A64's far superior memory latency (courtesy of its IMC) and performance/clock. The Athlon 64 had leading IPC at the time, and SuperPi showed that to be true. Ironically, at the time it was not dismissed as irrelevant by Athlon 64 owners...

I am not going to claim with 100% confidence that SuperPi should be used to gauge real-world performance in all applications. Of course not. It's a much better predictor of IPC, however, than SiSoftware Sandra, 3DMark, Everest's Mandel test, etc.
3 1 [Posted by: BestJinjo  | Date: 02/27/12 06:44:18 PM]
No, it does not even serve the purpose of measuring IPC.

No, it is not a good gauge to correctly show IPC within brands.

The only use SuperPi has is to measure x87 performance on your processor. And even then it isn't perfect. That program was compiled more than 15 years ago. How many applications do you run right now that rely on native x87 execution?

The fact that the Athlon 64 was faster than the Pentium 4 in SuperPi is not directly related to IPC or lower memory latency, but to the fact that legacy x87 decoding was superior on the Hammer architecture vs. NetBurst. Nothing more than that. Conroe inherited and improved on the already strong x87 decoding of the Pentium III, which was superior to the Pentium 4's, and achieved the best SuperPi scores anyone had seen yet. On the other hand, AMD deprecated x87 decoding and practically didn't change its efficiency from the Hammer to the Bulldozer architecture.

Bulldozer has issues, of course, but these aren't proven in any way by SuperPi; there are other benchmarks, however, that do show Bulldozer's weaknesses. Yeah, SuperPi does show that x87 decoding is not very fast on Bulldozer; they even deprecated it further vs. Phenom II. Gee, what a loss.

I don't care which company has the faster processor, but it is important for AMD to exist. Intel has pretty much the bread and butter, and they dictate the rules, because most will follow them.
Don't you wonder why Intel didn't adopt SSE5?
0 1 [Posted by: CarlosTex  | Date: 02/28/12 02:52:48 AM]

Don't reproduce... just don't.
0 1 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:28:03 AM]
Yeah, if you clock two CPUs at the same speed and then run SuperPi, and one comes out ahead of the other... what conclusion would you draw???
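The inference above can be sketched numerically. This is a hypothetical illustration with made-up timings, not real benchmark data:

```python
# Hypothetical sketch: at identical clock speed, the ratio of SuperPi
# completion times is (for this workload) the inverse of the ratio of
# per-clock throughput. All numbers below are made up for illustration.

def relative_ipc(time_a: float, time_b: float) -> float:
    """Return how much more work per clock CPU A does than CPU B,
    given their run times for the same task at the same clock speed."""
    return time_b / time_a  # > 1.0 means CPU A is faster per clock

# Say CPU A finishes SuperPi 1M in 10.0 s and CPU B in 12.5 s:
print(relative_ipc(10.0, 12.5))  # prints 1.25 -> A does ~25% more per clock
```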
0 0 [Posted by: ElMoIsEviL  | Date: 04/10/12 01:27:33 AM]

What I see here is that AMD will keep that promise of only a 10-15% improvement, which means the architecture cannot be fixed.
Basically they are just using the clock mesh to keep the stock speed at 4GHz. That is the same chip with a 10% higher stock overclock. Useless stuff here.
If they had made a die shrink of Phenom II, they would have gotten more performance than this crap.
I see dark days for AMD until they ditch this stupid design.
3 2 [Posted by: Liver  | Date: 02/27/12 02:13:37 PM]

I knew that this post would not be liked by most of you,
because I just spoke the truth about the real situation here.
If you want to keep that fanboy stuff in you, that is OK with me. But I am not doing that, sorry.
2 1 [Posted by: Liver  | Date: 02/28/12 01:02:31 AM]

And before any of you say something, I am an AMD fanboy.
Stop saying stupid things; just look at the facts.
They are just keeping this stupid design to make back some of the money they spent on it.
No thanks, I will not buy that shit until they make something good.
I will not be AMD's next idiot to buy their CPUs.
I just wish, for the sake of competition and prices, for AMD to do well.
But this is not the way to make money, AMD.
Hope you see this.
3 1 [Posted by: Liver  | Date: 02/27/12 03:03:51 PM]


"...reduce clock distribution power up to 24% while maintaining the low clock-skew target required by high-performance processors."

I did some math based on the stock TDPs of the current BD lineup, and it looks as though this will put the flagship CPUs of the Piledriver line at around 94 and 71 watts respectively. Observe:

FX-8150 and FX-8120, 125W TDP range:

125 - (125 x 0.25) = 93.75W TDP for the flagship 2nd-gen revision

FX-6100 and FX-4100, 95W TDP range:

95 - (95 x 0.25) = 71.25W TDP for the 2nd-gen revision

Now, overclocked TDPs could be an entirely different animal altogether. BUT the point is that the gap is closing between Intel's TDP offerings and AMD's. The jury is still out on which architecture will be more efficient: Ivy Bridge or Piledriver.
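The back-of-the-envelope estimate above can be written out as a quick script. Note that the assumptions are the commenter's, not the article's: a flat 25% saving is applied to the whole-chip TDP, whereas the article quotes "up to 24%" and only for clock distribution power:

```python
# Sketch of the comment's TDP estimate. Assumes the full power saving
# applies to the entire chip TDP, which is optimistic: the article's
# figure is "up to 24%" and covers clock *distribution* power only.

def estimated_tdp(current_tdp: float, saving: float = 0.25) -> float:
    """Subtract a fractional power saving from a TDP figure."""
    return current_tdp - current_tdp * saving

print(estimated_tdp(125))  # 93.75 -> ~94W for the FX-8150/8120 class
print(estimated_tdp(95))   # 71.25 -> ~71W for the FX-6100/4100 class
```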
0 0 [Posted by: veli05  | Date: 02/29/12 12:29:23 PM]

at least AMD's GPUs are way better than Intel's
0 0 [Posted by: coolmarcus  | Date: 04/05/12 05:42:28 AM]
