Researchers Propose New Bandwidth Management Techniques to Improve Performance of Multi-Core Chips

New Bandwidth Management Techniques Could Boost Performance of Multi-Core Chips

by Anton Shilov
05/30/2011 | 09:30 PM

Modern microprocessors contain up to sixteen high-performance processing cores, all of which need memory bandwidth to fetch enough data to stay busy. Although all leading-edge chips already include techniques to hide long memory access latencies and maximize bandwidth utilization, researchers from North Carolina State University claim that their technologies can make better use of the available bandwidth and boost chip performance.


It is not a secret that each core within a multi-core central processing unit (CPU) needs to retrieve data from memory that is not stored on its chip. There is a limited pathway – or bandwidth – these cores can use to retrieve that off-chip data. As chips have incorporated more and more cores, the bandwidth has become increasingly congested – slowing down system performance.

One of the ways to expedite core performance is called prefetching. Each chip has its own small, fast memory component, called a cache. In prefetching, the hardware predicts what data a core will need in the future and retrieves it from off-chip memory into the cache before the core asks for it. Ideally, this improves the core’s performance. But if the prediction is inaccurate, the prefetcher needlessly clogs the bandwidth retrieving the wrong data, which actually slows the chip’s overall performance.
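The hardware prefetchers the article refers to do this automatically inside the cache logic, but the same idea can be sketched in software. The following minimal C example uses GCC/Clang’s __builtin_prefetch to request data a fixed distance ahead of the current access; the PREFETCH_DISTANCE value is purely illustrative and not taken from the research.

```c
#include <stddef.h>

/* Illustrative tuning knob: how far ahead of the current index to fetch. */
#define PREFETCH_DISTANCE 16

/* Sum an array while hinting the cache to pull in data before it is needed. */
double sum_with_prefetch(const double *data, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n) {
            /* rw = 0 (read), locality = 1 (low expected reuse) */
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 1);
        }
        total += data[i];
    }
    return total;
}
```

As the article notes, the hint only helps if the predicted addresses are actually used; wrong guesses consume off-chip bandwidth for nothing.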

The researchers from NC State University propose two techniques: one improves the efficiency of prefetching, while the other allocates the available bandwidth to the cores that need it most. Unfortunately, the research remains largely theoretical and may have limited practical value.

“The first technique relies on criteria we developed to determine how much bandwidth should be allotted to each core on a chip. Some cores require more off-chip data than others. By better distributing the bandwidth to the appropriate cores, the criteria are able to maximize system performance,” said Dr. Yan Solihin, associate professor of electrical and computer engineering at NC State and co-author of a paper describing the research.

The researchers use easily collected data from the hardware counters on each chip to determine which cores need more bandwidth.
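The article does not spell out the paper’s actual allocation criteria, so the sketch below is only a hypothetical illustration of the general idea: divide a fixed off-chip bandwidth budget among cores in proportion to the off-chip traffic (for example, last-level cache misses) reported by each core’s counters. All names and values here are assumptions for illustration.

```c
#define NUM_CORES 16

/* Hypothetical proportional-share allocation driven by per-core counters.
 * miss_counts[c]  - off-chip requests observed for core c (from hardware counters)
 * total_bw_gbs    - total off-chip bandwidth budget in GB/s
 * share_gbs[c]    - resulting bandwidth share assigned to core c            */
void allocate_bandwidth(const unsigned long miss_counts[NUM_CORES],
                        double total_bw_gbs,
                        double share_gbs[NUM_CORES])
{
    unsigned long total_misses = 0;
    for (int c = 0; c < NUM_CORES; c++)
        total_misses += miss_counts[c];

    for (int c = 0; c < NUM_CORES; c++) {
        /* Cores generating more off-chip traffic receive a larger share;
         * fall back to an equal split if no traffic has been observed yet. */
        share_gbs[c] = total_misses
            ? total_bw_gbs * (double)miss_counts[c] / (double)total_misses
            : total_bw_gbs / NUM_CORES;
    }
}
```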

“The second technique relies on a set of criteria we developed for determining when prefetching will boost performance and should be utilized, as well as when prefetching would slow things down and should be avoided,” said Dr. Solihin.

These criteria also use data from each chip’s hardware counters. The prefetching criteria would allow manufacturers to make multi-core chips that operate more efficiently, because each of the individual cores would automatically turn prefetching on or off as needed.
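Again, the specific criteria are not described in the article, so the following is only a hedged sketch of how such a per-core on/off decision might look: keep prefetching enabled while the prefetcher remains accurate and the shared off-chip bus is not saturated. The counter names and thresholds are invented for illustration.

```c
#include <stdbool.h>

/* Hypothetical per-core toggle based on hardware-counter data.
 * useful_prefetches / issued_prefetches - counters for prefetch accuracy
 * bus_utilization                       - fraction of off-chip bandwidth in use (0.0..1.0) */
bool should_prefetch(unsigned long useful_prefetches,
                     unsigned long issued_prefetches,
                     double bus_utilization)
{
    const double MIN_ACCURACY    = 0.40;  /* illustrative: fraction of prefetches actually used */
    const double MAX_UTILIZATION = 0.90;  /* illustrative: back off when the bus is nearly full */

    double accuracy = issued_prefetches
        ? (double)useful_prefetches / (double)issued_prefetches
        : 0.0;

    return accuracy >= MIN_ACCURACY && bus_utilization < MAX_UTILIZATION;
}
```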

Utilizing both sets of criteria, the researchers were able to boost multi-core chip performance by 40% compared to multi-core chips that do not prefetch data, and by 10% over multi-core chips that always prefetch data. Given that virtually all modern CPUs already use prefetching (many-core graphics chips being a possible exception), the practical gain from the new techniques is closer to the 10% figure.