Intel Corp. and Micron Technology demonstrated a jointly developed DRAM concept called the Hybrid Memory Cube (HMC). The memory cubes promise to dramatically reduce energy consumption and boost performance: the HMC delivers 10 times the bandwidth and 7 times the energy efficiency of even the most advanced DDR3 memory module available.
The Hybrid Memory Cube demonstrates a new approach to memory design, delivering a 7-fold improvement in energy efficiency over today's DDR3. The Hybrid Memory Cube uses a stacked memory chip configuration, forming a compact "cube", and employs a new, highly efficient memory interface that sets the bar for energy consumed per bit transferred while supporting data rates of 1Tb/s (one trillion bits per second). This research technology could lead to dramatic improvements in servers optimized for cloud computing as well as ultrabooks, televisions, tablets and smartphones.
Current DRAM technologies – which include DRAM manufacturing processes, modules and other devices – enable massive memory capacity at a low manufacturing cost. However, as the number of individual processing units (cores) on a microprocessor increases, the need to feed the cores with memory data expands proportionally, and the input/output interface becomes a constraint on performance. Adding more DRAM modules raises power consumption and thus lowers power efficiency. There are severe limitations to achieving high speed and low power using a commodity DRAM process, according to Intel.
“We knew that future high-speed memory will need to conquer a challenging set of tradeoffs and achieve low cost and power as well as high density and speed. We came to the conclusion that mating DRAM and a logic process based I/O buffer using 3D stacking could be the way to solve the dilemma. We found out that once we placed a multi-layer DRAM stack on top of a logic layer, we could solve another memory problem which limits the ability to efficiently transfer data from the DRAM memory cells to the corresponding I/O circuits,” said Bryan Casper, an Intel official.
Getting the data out of the memory cells to the I/O is analogous to navigating the streets of a crowded city. Placing the logic layer underneath the DRAM stack has an effect similar to building a high-speed subway system underneath those streets, bypassing encumbrances such as the DRAM process and the routing-restricted memory arrays. Additionally, the adjacent logic layer enables integration of intelligent control logic to hide the complexities of DRAM array access, allowing the microprocessor's memory controller to employ much more straightforward access protocols than was achievable in the past, according to Intel.
This joint research project between Micron Technology and Intel has produced several key achievements. Last year, Intel designed and demonstrated a prototype I/O circuit, optimized for this hybrid-stacked DRAM application, that achieved a record-breaking energy efficiency of 1.4mW/Gb/s. The two companies then jointly developed and specified a high-bandwidth memory architecture and protocol for a prototype that was designed and manufactured this year by Micron. This hybrid-stacked DRAM prototype, known as the Hybrid Memory Cube (HMC), is the world's highest-bandwidth DRAM device, with sustained transfer rates of 1Tb/s. On top of that, it is also the most energy-efficient DRAM ever built when measured in bits transferred per unit of energy consumed. The prototype has 10 times the bandwidth and 7 times the energy efficiency of even the most advanced DDR3 memory module available.
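As a quick sanity check on the units behind these figures (a sketch, not from the article): an I/O efficiency quoted in mW per Gb/s is numerically identical to picojoules per bit, and a 1Tb/s link corresponds to 125 GB/s of sustained bandwidth in decimal units. The function names below are illustrative, not part of any published spec.

```python
def mw_per_gbps_to_pj_per_bit(mw_per_gbps: float) -> float:
    """Convert an I/O energy-efficiency figure from mW/(Gb/s) to pJ/bit.

    1e-3 J/s divided by 1e9 b/s equals 1e-12 J/b, so the numeric value
    is unchanged: x mW/(Gb/s) == x pJ/bit.
    """
    return mw_per_gbps * 1e-3 / 1e9 / 1e-12


def tbps_to_gbytes_per_s(tbps: float) -> float:
    """Convert a bandwidth figure from Tb/s to GB/s (decimal units, 8 bits per byte)."""
    return tbps * 1e12 / 8 / 1e9


print(mw_per_gbps_to_pj_per_bit(1.4))  # 1.4 -> the quoted 1.4mW/Gb/s is 1.4 pJ/bit
print(tbps_to_gbytes_per_s(1.0))       # 125.0 -> 1Tb/s sustained is 125 GB/s
```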
Intel believes these developments will likely have a fundamental impact on data centers and supercomputers that thirst for low-power, high-bandwidth memory access. With this technology, next-generation systems formerly limited by memory performance will be able to scale dramatically while staying within strict power and form-factor budgets. These developments may also play a key role in optimizing the system architectures and memory hierarchies of future mainstream systems in the client and server markets.