More than 100 developer and adopter members of the Hybrid Memory Cube Consortium (HMCC) this week announced they have reached consensus on the global standard that will deliver a much-anticipated, disruptive memory computing solution. Developed in only 17 months, the final specification marks the point at which designers in a wide range of segments can begin designing Hybrid Memory Cube (HMC) technology into future products.

"This milestone marks the tearing down of the memory wall. The industry agreement is going to help drive the fastest possible adoption of HMC technology, resulting in what we believe will be radical improvements to computing systems and, ultimately, consumer applications," said Robert Feurle, Micron's vice president for DRAM marketing.

Hybrid Memory Cube stacks memory chips into a compact "cube" and pairs them with a new, highly efficient memory interface that sets a new bar for energy consumed per bit transferred while supporting extreme data rates.

The finished specification defines short-reach (SR) and ultra-short-reach (USR) physical-layer (PHY) interconnections for applications requiring tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and test and measurement. The consortium's next goal is to advance the standard further, raising data rates from 10, 12.5 and 15Gb/s up to 28Gb/s for SR, and from 10Gb/s up to 15Gb/s for USR. The next-generation specification is projected to gain consortium agreement by the first quarter of 2014.
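
For a rough sense of what those lane rates imply per device, here is a minimal sketch in Python; the 16-lanes-per-link, 4-links-per-cube geometry is the commonly cited first-generation HMC configuration and is our assumption, not something stated in the announcement:

    # Per-cube bandwidth implied by the quoted SR lane rates.
    # ASSUMPTION: 16 full-duplex lanes per link, 4 links per cube
    # (the commonly cited HMC 1.0 geometry; not in the article).
    LANES_PER_LINK = 16
    LINKS_PER_CUBE = 4

    for lane_gbps in (10, 12.5, 15, 28):
        per_direction = lane_gbps * LANES_PER_LINK * LINKS_PER_CUBE / 8  # GB/s
        print(f"{lane_gbps:>5} Gb/s per lane -> {per_direction:.0f} GB/s each way, "
              f"{2 * per_direction:.0f} GB/s aggregate per cube")

At the initial 10Gb/s lane rate this works out to 160GB/s of aggregate bandwidth per cube, the figure cited in the discussion below.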

The HMC standard focuses on alleviating an extremely challenging bandwidth bottleneck while optimizing performance between processor and memory, driving high-bandwidth memory products that scale across a wide range of applications. The need for more efficient, high-bandwidth memory solutions has become particularly important for servers, high-performance computing, networking, cloud computing and consumer electronics.

A major breakthrough with HMC is the long-awaited utilization of advanced technologies to combine high-performance logic with state-of-the-art DRAM. With this first HMC milestone reached so quickly, consortium members have elected to extend their collaborative effort to achieve agreement on the next generation of HMC interface standards. 

"The consensus we have among major memory companies and many others in the industry will contribute significantly to the launch of this promising technology. As a result of the work of the HMCC, IT system designers and manufacturers will be able to get new green memory solutions that outperform other memory options offered today," said Jim Elliott, vice president of memory planning and product marketing at Samsung Semiconductor.

The HMCC is a focused collaboration of OEMs, enablers and integrators who are cooperating to develop and implement an open interface standard for HMC. More than 100 leading technology companies from Asia, Japan, Europe and the U.S. have joined the effort, including Altera, ARM, Cray, Fujitsu, GlobalFoundries, HP, IBM, Marvell, Micron Technology, National Instruments, Open-Silicon, Samsung, SK Hynix, ST Microelectronics, Teradyne and Xilinx. Continued collaborations within the consortium could ultimately facilitate new uses in HPC, networking, energy, wireless communications, transportation, security and other semiconductor applications.

Tags: HMC, DRAM, Altera, ARM, Cray, Fujitsu, Globalfoundries, HP, IBM, Marvell, Micron Technology, National Instruments, Open-Silicon, Samsung, SK Hynix, ST Microelectronics, Teradyne, Xilinx

Discussion



1. 
I really need to delve into whatever has been published publicly (which I haven't yet), but am I understanding this correctly?

Will it start as 8x stacked chips of 1333, 1600, and 1866, as it appears?

28Gb/s is telling. That would be 8x DDR4 chips (likely rated at double the spec of DDR3-1866), which is currently right above where JEDEC has the official DDR4 spec (3200MHz, double DDR3-1600). Granted, it will climb in short order, likely to 4266MHz (twice the highest DDR3 spec) and conceivably higher, but they must be planning for '3733MHz' to be ratified before Q1 '14... which is interesting. One might even assume they foresee that as a realistic price/yield/need reality for the architectures around ~2015, which sounds correct in my estimation.

It's great to see these advancements in both density footprint and bandwidth. Here I was, simply hoping we'd get something like a '7790' in the (20nm?) successor to Kaveri (Excavator) with enough bandwidth to make that design actually feasible. As these new memory advancements sit, that shouldn't be a problem whether they take a 256-bit route to DDR4, a 128-bit route to GDDR5, or simply plop four HMCs around the thing (granted, the latter is probably a ways off). Any of those would theoretically be sufficient, perhaps each more applicable to certain markets than the others, while also providing sufficient buffer size (assuming HMC uses at least 2Gb chips and GDDR5 8Gb). Hooray for progress, and more importantly for architectures with potential longevity in perhaps the most-needed area as we move forward.
[Posted by: turtle | Date: 04/04/13 02:56:33 AM]
This is actually quite a bit faster than you think it is. 10Gb/s is per lane, there are 16 lanes per link, and 4 links per cube, which works out to 160GB/s (input AND output, full duplex) per cube, roughly 1.6 times the bandwidth of the Radeon HD 7790's memory interface. Remember, this is also full ECC, and suitable for use in compute cards and high-availability servers.
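
For the curious, a minimal sketch checking that math; the HD 7790 figure (96GB/s, from a 128-bit bus at 6Gb/s effective GDDR5) is our assumption, not taken from the comment:

    # Sanity check of the figures in the comment above.
    # ASSUMPTION: Radeon HD 7790 bandwidth = 128-bit bus x 6 Gb/s GDDR5.
    hmc_aggregate = 16 * 10 * 4 * 2 / 8   # lanes x Gb/s x links x both directions, in GB/s
    hd7790 = 128 * 6 / 8                  # bus bits x Gb/s per pin / 8, in GB/s
    print(f"HMC cube: {hmc_aggregate:.0f} GB/s")       # 160 GB/s
    print(f"HD 7790:  {hd7790:.0f} GB/s")              # 96 GB/s
    print(f"ratio:    {hmc_aggregate / hd7790:.2f}x")  # ~1.67x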

What they want to do is push that per cube speed past 240GB/s and use it to replace L3 cache, allowing chips to have much higher core counts while significantly increasing performance per core. A single HMC is 500 times larger than typical L3 cache, but slower and with higher latencies. Using HMC will either allow chips with more cores and more L2 cache, or it could be used as a large L4 cache, which would substantially improve virtualized servers running many threads at once.

Imagine a server board, quad socket, 8 cores per socket....

L1 cache: 64KB (per core)
L2 cache: 256KB (per core)
L3 cache: 8-30MB (shared, per CPU)
L4 cache: 4GB HMC (shared, per socket)
DRAM: 64GB quad-channel DDR4 (per socket)
[Posted by: BillionPa | Date: 04/04/13 05:41:21 PM]
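
To put rough totals on that hypothetical board, here is a minimal sketch using the comment's own illustrative figures:

    # Rough totals for the hypothetical quad-socket board above.
    # All sizes are the comment's illustrative figures, not a real product.
    SOCKETS = 4
    CORES_PER_SOCKET = 8

    l1_kb, l2_kb = 64, 256   # per core
    l3_mb = 30               # shared per CPU (upper bound of the 8-30MB range)
    l4_hmc_gb = 4            # HMC used as "L4", per socket
    ddr4_gb = 64             # quad-channel DDR4, per socket

    sram_mb = (l1_kb + l2_kb) * CORES_PER_SOCKET / 1024 + l3_mb
    print(f"on-die SRAM per socket: {sram_mb:.1f} MB")          # 32.5 MB
    print(f"HMC 'L4' per board:     {l4_hmc_gb * SOCKETS} GB")  # 16 GB
    print(f"DDR4 per board:         {ddr4_gb * SOCKETS} GB")    # 256 GB

Even at the top of that L3 range, the HMC tier is over a hundred times the on-die SRAM per socket, which is what makes the large-L4 idea interesting despite the higher latency.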

2. 
Will DDR4 sheer?
[Posted by: Saad | Date: 04/14/13 04:51:54 AM]
