Modern microprocessors contain up to sixteen high-performance processing cores, all of which need memory bandwidth to fetch enough data to run efficiently. Although all leading-edge chips already include techniques to hide long memory access latencies and maximize bandwidth, researchers from North Carolina State University claim that their technologies can improve bandwidth utilization and boost chip performance further.

It is no secret that each core within a multi-core central processing unit (CPU) needs to retrieve data that is not stored on the chip from off-chip memory. There is a limited pathway – or bandwidth – these cores can use to retrieve that off-chip data. As chips have incorporated more and more cores, this bandwidth has become increasingly congested, slowing down system performance.

One of the ways to expedite core performance is called prefetching. Each chip has its own small on-chip memory, called a cache. In prefetching, a predictor guesses what data a core will need in the future and retrieves that data from off-chip memory into the cache before the core needs it. Ideally, this improves the core’s performance. But if the prediction is inaccurate, the prefetcher unnecessarily clogs the bandwidth while retrieving the wrong data, which actually slows the chip’s overall performance.
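
The idea can be illustrated with a toy model. The sketch below is an illustration only, not the design from the paper: a simple stride prefetcher that, once two consecutive accesses are separated by the same stride, fetches the predicted next address into the cache ahead of time and counts how many of those prefetches turn out to be useful.

```python
class StridePrefetcher:
    """Toy stride prefetcher: predicts the next address from the last stride."""

    def __init__(self):
        self.last_addr = None
        self.stride = None
        self.cache = set()        # addresses currently held in the cache
        self.prefetched = set()   # prefetched addresses not yet accessed
        self.issued = 0           # total prefetches issued
        self.useful = 0           # prefetches that were later accessed

    def access(self, addr):
        """Simulate one memory access; returns True on a cache hit."""
        hit = addr in self.cache
        if addr in self.prefetched:
            # A previously prefetched address was actually used.
            self.useful += 1
            self.prefetched.discard(addr)
        if self.last_addr is not None:
            new_stride = addr - self.last_addr
            if new_stride == self.stride:
                # Stride confirmed twice in a row: prefetch the next address.
                predicted = addr + self.stride
                if predicted not in self.cache:
                    self.cache.add(predicted)
                    self.prefetched.add(predicted)
                    self.issued += 1
            self.stride = new_stride
        self.last_addr = addr
        self.cache.add(addr)
        return hit
```

On a regular access stream (say, every 8 bytes) the prefetcher quickly locks on and turns subsequent misses into hits; on an irregular stream it issues wrong prefetches, which is exactly the wasted bandwidth the article describes.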

The researchers from NC State University propose two techniques: one improves the efficiency of prefetching, while the other allocates the available bandwidth to the cores that need it most. The research is, however, largely theoretical and may be of limited practical value.

“The first technique relies on criteria we developed to determine how much bandwidth should be allotted to each core on a chip. Some cores require more off-chip data than others. By better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance,” said Dr. Yan Solihin, associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research.

The researchers use easily collected data from the hardware counters on each chip to determine which cores need more bandwidth.
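
The article does not spell out the paper’s exact allocation criteria, so the following is only a sketch of the general idea, with all names and numbers being assumptions: read each core’s off-chip demand from its hardware counters (for example, misses per kilo-instruction) and split a fixed bandwidth budget in proportion to that demand.

```python
def allocate_bandwidth(total_gbps, demand_per_core):
    """Split total_gbps across cores in proportion to each core's measured
    off-chip demand (e.g., cache misses per kilo-instruction read from
    per-core hardware counters). Hypothetical policy for illustration."""
    total_demand = sum(demand_per_core)
    if total_demand == 0:
        # No off-chip traffic observed: fall back to an equal split.
        equal = total_gbps / len(demand_per_core)
        return [equal] * len(demand_per_core)
    return [total_gbps * d / total_demand for d in demand_per_core]
```

For example, with a 100 GB/s budget and per-core demands of 30, 10, 10 and 0 misses per kilo-instruction, the memory-hungry core would receive 60 GB/s while the idle core receives none, rather than each core getting a fixed quarter of the bus.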

“The second technique relies on a set of criteria we developed for determining when prefetching will boost performance and should be utilized, as well as when prefetching would slow things down and should be avoided,” said Dr. Solihin.

These criteria also use data from each chip’s hardware counters. The prefetching criteria would allow manufacturers to make multi-core chips that operate more efficiently, because each of the individual cores would automatically turn prefetching on or off as needed.

Utilizing both sets of criteria, the researchers were able to boost multi-core chip performance by 40% compared to multi-core chips that do not prefetch data, and by 10% compared to multi-core chips that always prefetch data. Given that virtually all modern chips use prefetching (except, perhaps, many-core graphics chips), the new techniques may in practice boost performance by only around 10%.


