News

KAIST has published a roadmap projecting the evolution of high-bandwidth memory from HBM4 to HBM8 through 2038, detailing major gains in bandwidth, capacity, I/O width, and power, and even changes to system architecture.
Memory innovation for AI is accelerating rapidly, but power demands are skyrocketing, raising serious sustainability and infrastructure concerns.
The next-generation GPU-HBM roadmap covers HBM4, HBM5, HBM6, HBM7, and HBM8, with HBM7 arriving by 2035; projected AI GPUs would carry 6.1 TB of HBM7 and draw 15,000 W ... with per-HBM-package power of up to 80 W. ...
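The figures above can be sanity-checked with some quick arithmetic. A minimal sketch, assuming a 192 GB-per-stack HBM7 capacity (a value not stated in the snippet, used here only to estimate the stack count behind the 6.1 TB total):

```python
# Rough capacity/power breakdown for the projected HBM7 AI GPU.
# From the snippet: ~6.1 TB of HBM7 total, 15,000 W module power,
# up to 80 W per HBM package.
TOTAL_HBM_GB = 6144   # ~6.1 TB expressed in GB
STACK_GB = 192        # assumed per-stack capacity (not in the snippet)
MODULE_W = 15_000     # projected module power from the snippet
PKG_W = 80            # per-HBM-package power ceiling from the snippet

stacks = TOTAL_HBM_GB // STACK_GB        # 32 stacks under this assumption
memory_w = stacks * PKG_W                # 2,560 W worst-case for memory alone
memory_share = memory_w / MODULE_W       # ~17% of module power

print(stacks, memory_w, round(memory_share * 100, 1))
```

Under these assumptions, the HBM subsystem alone would account for roughly a sixth of the 15 kW budget, which is why the per-package power ceiling matters so much.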
A new KAIST roadmap reveals that HBM8-powered GPUs could consume more than 15 kW per module by 2035, pushing current infrastructure ...
Micron Technology's HBM revenues surpass $1 billion in Q2 FY25, fueled by AI and hyperscaler demand. ... TSMC develops HBM packages based on SK Hynix’s HBMs. In 2024, ...
Each of the eight HBM packages features eight memory dies stacked in a true 3D configuration, with a separate controller die under the memory stack. The maximum 192 gigabyte ...
Total HBM capacity is set to increase from 288–384 GB with HBM4 to 5,120–6,144 GB with HBM8. ... The I/O width per HBM package is also set to increase from the 1,024-bit interface of today's ...
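The I/O width matters because per-stack bandwidth is simply the interface width times the per-pin data rate. A minimal sketch of that relationship; the 6.4 Gb/s and 8 Gb/s pin rates and the 2,048-bit width below are illustrative assumptions, not figures from the snippet:

```python
def stack_bandwidth_gbps(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s: width (bits) x pin rate (Gb/s) / 8 bits per byte."""
    return io_width_bits * pin_rate_gbps / 8

# Today's 1,024-bit interface at an illustrative 6.4 Gb/s pin rate:
print(stack_bandwidth_gbps(1024, 6.4))   # 819.2 GB/s
# A hypothetical doubled 2,048-bit interface at 8 Gb/s:
print(stack_bandwidth_gbps(2048, 8.0))   # 2048.0 GB/s, i.e. 2 TB/s
```

Widening the interface raises bandwidth without pushing pin rates (and signal-integrity problems) higher, which is the design lever the roadmap leans on.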