

Intel Corp.'s next-generation microprocessor, code-named Ivy Bridge, will, among other things, support the next-generation PCI Express 3.0 interconnect, according to slides published by a website. Such support may potentially spur designers of graphics processing units (GPUs) to introduce graphics chips with PCIe 3.0 support.

Even though Ivy Bridge will be the successor to Sandy Bridge and will generally inherit its micro-architecture, it will sport a rather significant number of improvements. Firstly, it will include certain enhancements that will boost its performance in general applications by around 20% compared to Core i "Sandy Bridge" chips. Secondly, the forthcoming chip will have a new graphics core with DirectX 11 and OpenCL 1.1 support as well as 30% higher performance than its predecessor. Thirdly, Ivy Bridge will feature a PCI Express 3.0 x16 interconnect as well as a PCIe 2.0 x4 controller, according to a slide published by the SemiAccurate website.


The PCIe 3.0 specification raises the signaling rate to 8GT/s and continues to support 2.5GT/s and 5GT/s signaling. Based on this increase, products designed to the PCIe 3.0 architecture can achieve bandwidth near 1GB/s in one direction on a single-lane (x1) configuration and scale to an aggregate approaching 32GB/s on a sixteen-lane (x16) configuration. The new 128b/130b encoding scheme is nearly 100% efficient, an efficiency increase of roughly 25% over the 8b/10b encoding of previous versions, which is what enables the doubled bandwidth at 8GT/s. This evolutionary specification also integrates a number of enhancements to the protocol and software layers of the architecture, including data reuse hints, atomic operations, dynamic power adjustment mechanisms, latency tolerance reporting, loose transaction ordering, I/O page faults, BAR resizing and other extensions in support of platform energy efficiency, software model flexibility and architectural scalability. PCIe 3.0 will be used in servers, workstations, desktop and mobile personal computers, embedded systems, peripheral devices and more.
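As a back-of-the-envelope check, the headline figures follow directly from the signaling rates and encoding schemes named above (this is a sketch of the arithmetic, not data from the leaked slides):

```python
# Usable PCIe bandwidth per lane = signaling rate x encoding efficiency.
GT_PER_S = {"2.0": 5.0, "3.0": 8.0}             # giga-transfers per second
EFFICIENCY = {"2.0": 8 / 10, "3.0": 128 / 130}  # 8b/10b vs 128b/130b encoding

def lane_bandwidth_gbps(gen):
    """Usable bandwidth of a single lane in GB/s, one direction."""
    payload_gbits = GT_PER_S[gen] * EFFICIENCY[gen]  # payload gigabits per second
    return payload_gbits / 8                         # convert bits to bytes

for gen in ("2.0", "3.0"):
    x1 = lane_bandwidth_gbps(gen)
    print(f"PCIe {gen}: x1 = {x1:.2f} GB/s, x16 = {16 * x1:.2f} GB/s per direction")
```

This yields roughly 0.98GB/s per PCIe 3.0 lane and about 15.75GB/s in each direction on an x16 link, i.e. close to the 32GB/s aggregate (both directions combined) quoted above, versus 8GB/s per direction for PCIe 2.0 x16.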

PCI Express 3.0 will initially enable new classes of devices: even higher-performance graphics cards for consumers as well as ultra-high-speed solid-state drives for servers and workstations. Going forward, other kinds of electronics will also take advantage of PCIe 3.0.

Intel did not comment on the news story.

Tags: Intel, Ivy Bridge, Panther Point, PCI Express


Comments currently: 11
Discussion started: 03/31/11 03:08:51 AM
Latest comment: 12/19/15 05:19:50 PM


Even though Ivy Bridge will be a successor of the Ivy Bridge and...

I take it you mean Sandy Bridge at the end of that phrase?
0 0 [Posted by: GavinT  | Date: 03/31/11 03:08:51 AM]

Please enlighten me: What are "atomic operations"?
0 0 [Posted by: BernardP  | Date: 03/31/11 04:04:01 AM]

An operation during which a processor can simultaneously read a memory location and write to it in the same bus operation. This prevents any other processor or I/O device from writing or reading that memory until the operation is complete.

Atomic implies indivisibility and irreducibility, so an atomic operation must be performed entirely or not performed at all.
0 0 [Posted by: seronx  | Date: 03/31/11 09:11:27 AM]
Thanks for the explanation!
0 0 [Posted by: BernardP  | Date: 04/01/11 07:58:35 AM]
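As an illustration of the atomicity described in the thread above, here is a minimal C11 sketch (assuming a compiler that supports `<stdatomic.h>`):

```c
/* Minimal sketch of an atomic read-modify-write in C11. atomic_fetch_add
   reads the counter, adds delta and writes the result back as one
   indivisible operation, so no other thread can observe it half-done. */
#include <stdatomic.h>

/* Returns the value the counter held before the addition. */
static int fetch_add(atomic_int *counter, int delta) {
    return atomic_fetch_add(counter, delta);
}
```

Two threads each calling `fetch_add(&c, 1)` will always leave `c` incremented by 2; with a plain `int`, the two read-modify-write sequences could interleave and lose an update.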

Should be interesting to see the necessity of this port, since even the latest dual-GPU cards don't come close to saturating 70-75% of the PCI-E 2.0 bus.
0 0 [Posted by: TAViX  | Date: 03/31/11 10:21:26 AM]

I don't see where your problem is; PCI-E 2.0 also wasn't used in the first few generations... And if I remember correctly, the first boards with PCI-E 2.0 were extremely expensive, so why not release PCI-E 3.0 soon?
0 0 [Posted by: asmileXD  | Date: 03/31/11 02:35:11 PM]

Awesome. Pity that the way the game industry is going, we won't need any of it; the latest trend seems to be downgrading games to match outdated console hardware!
Take note, Intel, AMD, Nvidia: you should all start getting worried. There is no point releasing your new-generation tech while consoles are slowing down progress. How about investing some of that money you all make into getting the game developers to make PC the base platform, then downgrading games from the PC version to suit console hardware, NOT the other way around!
I am starting to question why I spend a fortune on the latest tech only to find that I have to play some crappy downgraded console version; I may as well have just bought the game on the PS3 I already own. I think consoles are slowing down progress. I must say Crysis 2 was VERY disappointing on PC: "click enter to start", WTF; DirectX 9 in 2011, WTF. Don't quote me, but wasn't Crysis 1 DirectX 10? Now I just read that Skyrim is limited on the DirectX 11 front because they are trying to make it multiplatform as well. Sigh...
0 0 [Posted by: ozegamer  | Date: 03/31/11 11:00:02 PM]

This is an AMD spokesperson speaking on how Fusion will integrate into next-gen gaming consoles. You might get what you wish for, with better scalability between PC and console games.
0 1 [Posted by: veli05  | Date: 04/01/11 09:46:09 AM]
Well, before you claim "consoles are outdated", please do some homework and check the bus speed of the Rambus FlexIO and the performance of the Cell-BE used in the 2006 PS3.

Rambus Flex IO (65GB/s)
PCI-E 3.0 x16 (32GB/s)

Given that Ivy Bridge will be available in 2012, that is still half the bandwidth compared to the 2006 PS3. Notice that GPU-CPU bandwidth is very critical and is the bottleneck for many GP-GPU applications!

Also, please note the industry-wide LINPACK cross-platform results:
Intel Core i7 980X 6 cores 107.55 GFLOPS

In some sense, I think PC is slowing down the bus and stream processing progress!
0 0 [Posted by: RoyalHorse  | Date: 04/02/11 09:10:41 AM]
I hope you do know that the PS3 only has a modified GeForce 7900GT as its graphics chip. Do you still think consoles aren't limiting PCs? Take a look at Crysis 2, for example: it's clear that this game was made for consoles, and it looks worse than its predecessor!! Modern GPUs are at least 3-5 times faster than this old pile of tripe you are claiming to be so great.

Also, GPGPU sucks. I have yet to see anything meaningful done with it outside scientific applications; everything else is just ported crap and can be done better on the CPU, or we face parallelism problems, like the people that wanted to port Super Pi to CUDA... they came to the conclusion that the maths wasn't ready yet for that degree of parallelism. And don't forget the failed attempts to port video encoders to GPUs, like Badaboom: it couldn't even compare to the speed of x264, which already had hand-written assembly, and the quality sucked too.
0 0 [Posted by: asmileXD  | Date: 04/03/11 02:03:15 PM]

