
Hybrid Memory Cube


September 18, 2012, Memcon, Santa Clara, CA—Scott Graham from Micron presented the hybrid memory cube architecture as a way to address the challenges and memory bottlenecks in high-performance systems. Memories and their interfaces are inhibiting the ability of servers to improve throughput performance.

Application server performance requires reducing network latency while increasing network connectivity. These improvements are being driven by mobile devices and the billion people carrying them. The advent of machine-to-machine communications will only make these problems more difficult.

The increase in Ethernet port speeds drives a requirement for faster memory accesses. One way to achieve these speed increases is to repackage RAM into a hybrid memory cube (HMC) using 2-D or 2.5-D integration. By reducing the physical distance to RAM, bandwidth and power performance improve while requiring a smaller footprint on the board.

The growth of video and other high-speed data sources forces servers into cloud configurations, and increasingly into 24 x 7 enterprise functions. Mobility, big data, and analytics are continuing trends. All of these facets drive memory performance and capacity issues.

HMC provides high bandwidth and reduced energy per bit. It enables greater abstraction for memory management through memory vault and control functions. A stacked-die approach requires through-silicon vias (TSVs) to reach the desired packaging density. The memories are arranged as 16 partitions, and chips are stacked four or eight high within the package. The decoding and control logic is stripped from each of the chips and replaced by a separate logic chip on the bottom of the stack.
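The talk did not give an address map, but a minimal sketch of how a request address might be split across 16 vaults and a four-high stack helps show why stripping the control logic onto a separate die works. The field ordering, block size, and function name below are assumptions chosen only to match the figures quoted above, not the consortium's specification.

    # Illustrative sketch only: the real HMC address map is defined by the
    # consortium specification, not given in the talk. Field layout and block
    # size are assumptions matching the quoted figures (16 partitions per die,
    # dies stacked four high).
    NUM_VAULTS = 16      # one vault per partition position on each die
    STACK_HEIGHT = 4     # DRAM dies stacked four (or eight) high

    def split_address(addr: int, block_bytes: int = 32) -> dict:
        """Split a request address into vault, die layer, and in-partition offset."""
        block = addr // block_bytes                      # drop the in-block byte offset
        vault = block % NUM_VAULTS                       # low bits pick the vault
        layer = (block // NUM_VAULTS) % STACK_HEIGHT     # next bits pick the die layer
        offset = block // (NUM_VAULTS * STACK_HEIGHT)    # rest indexes within the partition
        return {"vault": vault, "layer": layer, "offset": offset}

    print(split_address(0x1F40))   # {'vault': 10, 'layer': 3, 'offset': 3}

Keeping the vault bits in the low-order positions is one common way to spread consecutive requests across vaults so that a logic die at the base of the stack can service them in parallel.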

The memory vault is a vertical stack of partitions, with a simple link of 16 bidirectional lanes to the host processor. Tools are available to adjust bandwidth for near or far configurations. The near configuration has all links to the host and HMC logic tied together. A far link must be configured to connect either to the local host or to other cubes.
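As a rough illustration of the near and far options just described, the sketch below models each link's target. The four-link count, the class names, and the chain() helper are assumptions made for illustration; they are not taken from the specification.

    # Sketch of the near/far link idea, not the consortium's interface.
    from dataclasses import dataclass, field
    from enum import Enum

    class LinkTarget(Enum):
        HOST = "host"   # link wired directly to the local host processor
        CUBE = "cube"   # link chained to another cube (far configuration)

    @dataclass
    class HmcCube:
        name: str
        # Near configuration: every link is tied to the host and HMC logic.
        links: dict = field(default_factory=lambda: {i: LinkTarget.HOST for i in range(4)})

        def chain(self, link_id: int) -> None:
            """Reconfigure one link to reach a neighboring cube (far configuration)."""
            self.links[link_id] = LinkTarget.CUBE

    near = HmcCube("cube0")   # all links to the host
    far = HmcCube("cube1")
    far.chain(2)              # link 2 now connects to another cube
    print(near.links)
    print(far.links)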

Future configurations will address higher speeds and the possibility of changing the I/O from electrical to optical or other technologies. They are working on enhancing reliability through integrated repair, redundancy, ECC, and other standard memory system modifications.

In comparison with standard DDR3 or DDR4, HMC uses fewer lines, less power, and less area. Even though the number of channels and the number of pins are lower, total bandwidth is much higher for the HMC.

The HMC Consortium is developing standards for HMC. The interface specification for first-generation HMC is projected for a Q1 '13 release. They have internal working groups for the different parts of the specification, and are starting developer groups to encourage greater adoption. They are looking to follow the trend of stacked TSV packaging and might eventually look at including the processor within the package.

HMC technologies can address the memory-processor bandwidth gap. Current memory interface technologies can achieve 50 GB per second. HMC is capable of up to 160 GB per second and will first hit 65 GB per second in a 2-D or 2.5-D package in '14. Some of the IP for management and control will move into the CPU to further improve performance.
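Putting the quoted figures side by side gives a feel for the gap being closed; the quick calculation below uses only the numbers cited in the talk.

    # Bandwidth figures quoted in the talk, in GB per second.
    current_interface = 50    # what current memory interface technologies achieve
    hmc_first_gen = 65        # first 2-D/2.5-D HMC packages expected in '14
    hmc_peak = 160            # HMC capability cited by Micron

    print(f"First-generation gain: {hmc_first_gen / current_interface:.1f}x")  # 1.3x
    print(f"Peak gain:             {hmc_peak / current_interface:.1f}x")       # 3.2x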

