Samsung claims its new HBM-PIM is twice as fast while drawing over 70% less power

Samsung, one of the creators of 3D-stacked High Bandwidth Memory (HBM), has announced its next innovation in that department: a new processing-in-memory (PIM) architecture that brings AI processing capability directly into HBM.

As the name suggests, processing-in-memory (PIM) adds programmability within the memory layer itself by embedding a DRAM-optimized AI engine inside each memory bank. The engine, dubbed the Programmable Compute Unit (PCU), performs computation in parallel right where the data resides, minimizing data traffic between the CPU and memory and, Samsung claims, alleviating the inherent bottleneck of the von Neumann architecture. In the traditional von Neumann approach, data moves sequentially back and forth between the CPU and memory, which drives up latency and wastes energy. Samsung says that early testing of HBM-PIM inside its 2018 "Aquabolt" HBM2 showed a doubling of performance alongside a power reduction of more than 70%.
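Taken together, those two figures are notable: if doubling throughput at less than 30% of the power holds up, that works out to more than a sixfold improvement in performance per watt. To make the architectural contrast concrete, below is a minimal, purely illustrative Python sketch of a toy cost model. The constants and function names are hypothetical and not drawn from Samsung's specifications; the point is only that the conventional path pays a movement cost for every word it touches, while a PIM-style path pays for parallel per-bank computation plus a small result-gathering step.

# Illustrative toy cost model only; all numbers are made up.
def von_neumann_cost(n_words, move_cost=10, compute_cost=1):
    """Every word crosses the memory bus to the CPU before it is
    processed, so data movement dominates and work is serialized."""
    return n_words * (move_cost + compute_cost)

def pim_cost(n_words, banks=16, compute_cost=1, reduce_cost=10):
    """Each bank's compute unit processes its own slice concurrently;
    only the small per-bank results cross the bus to the host."""
    per_bank = (n_words // banks) * compute_cost  # banks run in parallel
    return per_bank + banks * reduce_cost         # plus gathering results

if __name__ == "__main__":
    n = 1_000_000
    print("von Neumann cost:", von_neumann_cost(n))  # movement-bound
    print("PIM cost        :", pim_cost(n))          # compute-bound

With these stand-in numbers, the PIM path comes out far cheaper simply because most of the work never leaves the memory banks, which is the intuition behind Samsung's design.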

The technology is currently undergoing validation testing inside AI accelerators from several of the company's partners, and the South Korean giant expects the validation phase to be complete within the first half of this year.

When ready, HBM-PIM can be integrated into new and existing systems for use in data centers, high-performance computing (HPC), AI-enabled mobile applications, and more, with no changes required to the existing hardware and software setup.

Samsung will share more details on HBM-PIM at the ongoing 2021 International Solid-State Circuits Conference (ISSCC), which is being held virtually this year.
