Azernews.Az

Thursday February 12 2026

Samsung leads SK hynix in HBM4 deployment

12 February 2026 19:45 (UTC+04:00)

By Alimat Aliyeva

Samsung Electronics announced Thursday that it has begun mass production of its sixth-generation high-bandwidth memory (HBM4) chips, becoming the first company in the industry to do so. Shipments have already started to major customers, including Nvidia, giving Samsung an edge over rival SK hynix, AzerNEWS reports, citing Korean media.

The Suwon, Gyeonggi-based chipmaker had originally planned to begin mass shipments shortly after Korea’s Lunar New Year holiday, which ended Wednesday, but moved the timeline up by about a week following discussions with key clients.

HBM4 was developed using Samsung’s latest 10-nanometer-class 1c dynamic random-access memory (DRAM) process. The company said the new process achieved stable yields and industry-leading performance from the start, without requiring any redesigns.

“Instead of following the conventional path of using existing proven designs, Samsung took the leap and adopted the most advanced nodes, including 1c DRAM and 4-nanometer logic process for HBM4,” said Hwang Sang-joon, executive vice president and head of memory development at Samsung Electronics. “By leveraging our process competitiveness and design optimization, we can secure substantial performance headroom, meeting our customers’ escalating demands for higher performance precisely when they need it.”

Samsung said HBM4 delivers the industry’s fastest processing speed at 11.7 gigabits per second (Gbps), 46% higher than the current standard of 8 Gbps and 22% faster than the maximum 9.6 Gbps pin speed of its predecessor, HBM3E. Performance can be further boosted to 13 Gbps to help relieve data bottlenecks as AI models grow larger and more complex.
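The quoted speed-ups follow directly from the per-pin figures in the article; a quick arithmetic check (using only the 11.7, 8 and 9.6 Gbps numbers as stated above) confirms them:

```python
# Sanity check of the quoted per-pin speed-ups (all figures from the article).
hbm4_gbps = 11.7      # Samsung's HBM4 pin speed
standard_gbps = 8.0   # "current standard" pin speed
hbm3e_gbps = 9.6      # maximum HBM3E pin speed

gain_vs_standard = (hbm4_gbps / standard_gbps - 1) * 100
gain_vs_hbm3e = (hbm4_gbps / hbm3e_gbps - 1) * 100

print(f"vs 8 Gbps standard: +{gain_vs_standard:.0f}%")  # +46%
print(f"vs 9.6 Gbps HBM3E:  +{gain_vs_hbm3e:.0f}%")     # +22%
```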

The 12-layer HBM4 offers capacities ranging from 24 to 36 gigabytes (GB), with plans to expand the lineup to 16-layer versions of up to 48 GB according to customer production schedules. Total memory bandwidth per stack has nearly tripled compared with HBM3E, reaching a maximum of 3.3 terabytes per second. Power efficiency improved by 40% thanks to low-voltage through-silicon via (TSV) technology and optimized power distribution networks, while heat resistance improved by 10% and heat dissipation by 30%.
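The 3.3 TB/s per-stack figure is consistent with the boosted 13 Gbps pin speed once the interface width is factored in. A sketch of that calculation follows; the interface widths (2048-bit for HBM4, 1024-bit for HBM3E) are our assumption based on the JEDEC HBM specifications, not stated in the article:

```python
# Per-stack bandwidth = pin speed (Gb/s) x interface width (bits) / 8 bits per byte.
# Interface widths are assumed from the JEDEC specs (HBM4: 2048-bit,
# HBM3E: 1024-bit); they do not appear in the article itself.
def stack_bandwidth_tbps(pin_gbps: float, width_bits: int) -> float:
    return pin_gbps * width_bits / 8 / 1000  # GB/s -> TB/s

hbm4_max = stack_bandwidth_tbps(13.0, 2048)  # boosted 13 Gbps mode
hbm3e_max = stack_bandwidth_tbps(9.6, 1024)  # HBM3E at its 9.6 Gbps maximum

print(f"HBM4 max:  {hbm4_max:.1f} TB/s")   # ~3.3 TB/s, matching the article
print(f"HBM3E max: {hbm3e_max:.1f} TB/s")  # ~1.2 TB/s, i.e. nearly tripled
```

Under those assumed widths, 3.3 TB/s against roughly 1.2 TB/s also squares with the "nearly tripled" claim.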

Samsung said it continues to receive strong demand from global hyperscalers designing their own customized AI chips and plans to deepen technological collaboration with these clients. HBM revenue is expected to triple this year compared with 2025, driven by surging AI-related memory demand. Production capacity will also expand, with HBM chips planned for manufacture at the P5 facility in Pyeongtaek, Gyeonggi, starting in 2028.

Samsung is preparing to ship samples of next-generation HBM4E in the second half of this year, with customized HBM products — optimized for specific AI accelerators or GPU architectures — scheduled for release in 2027.

“The quality and supply stability secured through the 1c process in HBM4 mass production are expected to provide key advantages as we transition to higher-value products, including HBM4E and custom HBM variants,” a Samsung Electronics spokesperson said.

Analysts say Samsung’s HBM4 not only strengthens its leadership in AI memory solutions but also positions the company to dominate the next wave of AI accelerators and supercomputing applications, where high-speed, low-latency memory is critical for performance.

