2 Best HBM Stocks for AI Chips (2025-2030): SK Hynix & Micron for AGI Boom?

SK Hynix, Micron (MU), and Samsung are the top players in HBM... HBM demands will likely increase significantly while scaling to AGI

ASAP Drew
Jan 05, 2025


As AI workloads balloon in both size and complexity, memory increasingly stands out as a critical bottleneck.

Large Language Models (LLMs) and advanced inference techniques—particularly those that rely on multi-step “chain-of-thought” reasoning—demand massive parallelism and ultra-fast data access.

Traditional DDR and GDDR solutions can’t keep pace with these extreme bandwidth requirements.

High-Bandwidth Memory (HBM), a tightly stacked 3D DRAM architecture, has emerged as the indispensable solution for accelerator GPUs, custom AI chips, and high-performance computing.

From 2025 to 2030, analysts project unprecedented growth in HBM. While logic transistors continue to scale along Moore's Law (albeit more slowly), DRAM scaling has largely plateaued.

HBM addresses this mismatch by packaging multiple DRAM die in a high-speed stack, delivering far higher bandwidth at lower power per bit transferred than any other memory type.

What Is HBM (High Bandwidth Memory)?

HBM is a specialized 3D-stacked DRAM technology.

Traditional memory places chips side by side on a DIMM or a graphics card, but HBM stacks multiple DRAM dies vertically and connects them via through-silicon vias (TSVs).

These stacks are then placed in close proximity (often the same package) to the GPU or AI accelerator. The result:

  1. Wider Bus Width: Instead of the 16- or 32-bit widths typical of standard DRAM chips, HBM can reach 1,024 bits or more per stack, dramatically boosting bandwidth (see the rough comparison after this list).

  2. Lower Power per Bit: Shorter interconnects reduce signal energy, making HBM notably more power-efficient.

  3. Smaller Footprint: Stacking and advanced packaging mean more memory capacity within a tighter area—critical for GPUs or AI SoCs where board space is precious.
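To make the bus-width point concrete, here is a back-of-the-envelope comparison in Python. The interface widths and per-pin data rates below are ballpark, illustrative figures for one HBM3 stack and one GDDR6 chip, not vendor-quoted specs:

```python
# Rough per-device bandwidth: (bus width in bits * data rate per pin in Gbit/s) / 8 bits per byte
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

# Ballpark, illustrative figures (not vendor specs):
hbm3_stack = bandwidth_gb_s(1024, 6.4)  # one HBM3 stack: 1,024-bit interface, ~6.4 Gbit/s per pin -> ~819 GB/s
gddr6_chip = bandwidth_gb_s(32, 16.0)   # one GDDR6 chip: 32-bit interface, ~16 Gbit/s per pin -> ~64 GB/s

print(f"HBM3 stack      : ~{hbm3_stack:.0f} GB/s")
print(f"GDDR6 chip      : ~{gddr6_chip:.0f} GB/s")
print(f"Per-device ratio: ~{hbm3_stack / gddr6_chip:.0f}x")
```

A modern accelerator carries several HBM stacks, so multi-TB/s aggregate bandwidth falls out of the wide, short interface rather than from exotic per-pin speeds.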

For large-scale AI tasks—like training GPT-like models, running complex inference with chain-of-thought expansions, or searching multiple reasoning branches—bandwidth is king.

The CPU or GPU must shuttle enormous volumes of data from memory each second. HBM’s parallel I/O structure and proximity to compute address this bottleneck head-on.
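One hedged way to see why: during batch-1 decoding, a transformer LLM must stream roughly all of its weights from memory for every token it generates, so bandwidth sets a hard ceiling on token rate. The model size and bandwidth below are assumptions chosen purely for illustration:

```python
# Rough upper bound on tokens/second for memory-bound, batch-1 LLM decoding:
# every generated token requires streaming ~all weights from memory once.
params_billion = 70        # assumed model size (e.g., a 70B-parameter model)
bytes_per_param = 2        # FP16/BF16 weights
hbm_bandwidth_tb_s = 3.0   # assumed aggregate HBM bandwidth of one accelerator

weight_bytes = params_billion * 1e9 * bytes_per_param           # ~140 GB of weights
max_tokens_per_s = (hbm_bandwidth_tb_s * 1e12) / weight_bytes   # bandwidth-bound ceiling

print(f"Weights            : ~{weight_bytes / 1e9:.0f} GB")
print(f"Token-rate ceiling : ~{max_tokens_per_s:.0f} tokens/s (ignoring KV cache, batching, and overlap)")
```

Adding compute FLOPS does nothing to raise that ceiling; only more bandwidth (or smaller weights) does, which is precisely the gap HBM fills.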


Why HBM Demand May Skyrocket (2025–2030)

AI Model Scaling

Neural networks are scaling by orders of magnitude every few years:

  • GPT-3 had 175B parameters; GPT-4 soared into the trillion-parameter regime.

  • Emerging “o-series” or “chain-of-thought” architectures require multiple parallel reasoning streams and deeper memory contexts. (Read: OpenAI’s New o3 & o3 Mini Models)

These expansions—both in training (massive data sets, enormous weight matrices) and inference (maintaining huge context windows)—drive incredible memory bandwidth needs.

Even custom AI chips from cloud providers (like AWS Trainium/Inferentia, Google TPU, etc.) must rely on HBM to handle these heavier memory loads.
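To translate parameter counts into HBM capacity, here is a rough sizing sketch. GPT-3's 175B figure comes from the discussion above; the trillion-parameter size and the per-stack capacity are assumptions for illustration:

```python
# Rough HBM capacity needed just to hold model weights (activations, KV cache, and optimizer state excluded).
def stacks_needed(params_billion: float, bytes_per_param: int = 2, gb_per_stack: int = 24) -> float:
    """Number of HBM stacks (assuming ~24 GB per stack) occupied by the weights alone."""
    weight_gb = params_billion * bytes_per_param  # (1e9 params * bytes) / 1e9 bytes per GB
    return weight_gb / gb_per_stack

for name, size_b in [("GPT-3 (175B)", 175), ("~1T-parameter model (assumed)", 1000)]:
    print(f"{name:30s}: ~{size_b * 2:>5.0f} GB of FP16 weights -> ~{stacks_needed(size_b):.0f} HBM stacks")
```

Training multiplies the footprint further (gradients, optimizer states, activations), which is why even a single frontier training run is spread across thousands of HBM-equipped accelerators.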

Bandwidth Bottlenecks

Even though advanced GPUs keep improving compute throughput, the biggest real-world slowdown is often memory I/O.

As the industry tries to scale to AGI/ASI-level systems or advanced chain-of-thought inference, memory is the gating factor.

HBM is the only near-term fix that meaningfully boosts bandwidth without a radical redesign of DRAM cell technology—thus it garners an outsized portion of R&D and capex spend.
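A roofline-style ratio makes the gating concrete. The accelerator figures below are rough, assumed round numbers, not the specs of any particular chip:

```python
# Compute-to-bandwidth "break-even" arithmetic intensity of a modern AI accelerator (rough, assumed figures).
peak_tflops = 1000   # assumed ~1 PFLOP/s of dense low-precision compute
hbm_tb_s = 3.3       # assumed ~3.3 TB/s of HBM bandwidth

breakeven_flops_per_byte = (peak_tflops * 1e12) / (hbm_tb_s * 1e12)
print(f"Break-even arithmetic intensity: ~{breakeven_flops_per_byte:.0f} FLOPs per byte")

# Batch-1 LLM decoding does roughly 2 FLOPs per FP16 weight (2 bytes), i.e. ~1 FLOP/byte --
# two orders of magnitude below break-even, so the chip sits idle waiting on memory.
```

Workloads below that break-even line are memory-bound, so extra HBM bandwidth raises delivered throughput more directly than extra FLOPs do.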

Underestimation by Markets?

General market valuations often lump HBM into “DRAM” without appreciating:

  1. Pricing Power: HBM can cost 3x–5x more per GB than standard DDR, with higher margins.

  2. Supply Constraints: TSV stacking, advanced packaging, and high yields are difficult. In a strong AI demand scenario, supply can remain tight, sustaining elevated HBM ASPs.

  3. Spillover Effect: Gains in HBM can offset cyclical downturns in standard DRAM.

Hence, if AI labs worldwide adopt large-chain-of-thought or advanced multi-modal models, HBM shipments could far exceed current forecasts—yet some valuations appear to price in only a moderate DRAM upswing.
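As an illustration of the pricing-power point (all inputs here are hypothetical, not reported figures), even a small slice of DRAM bits can carry an outsized slice of DRAM revenue:

```python
# Hypothetical mix: revenue share of HBM if it is a small share of DRAM bits
# but sells at a multiple of the per-GB price of standard DDR.
hbm_bit_share = 0.05      # assume HBM is 5% of DRAM bits shipped (hypothetical)
hbm_price_multiple = 4.0  # assume 4x price per GB vs standard DDR (within the 3x-5x range above)

hbm_revenue_share = (hbm_bit_share * hbm_price_multiple) / (
    hbm_bit_share * hbm_price_multiple + (1 - hbm_bit_share) * 1.0
)
print(f"HBM share of DRAM revenue: ~{hbm_revenue_share:.0%}")  # ~17% from just 5% of bits
```

Layer the richer gross margins on top and the earnings sensitivity to HBM volumes grows larger still, which is exactly what a blanket 'DRAM' multiple can miss.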

Top 3 HBM Players & Market Share (2025)

Currently, 3 companies dominate nearly 100% of the global HBM market:

1. SK Hynix Inc.

  • Approx. 50% share in HBM.

  • Known as the primary HBM supplier for NVIDIA’s flagship AI accelerators (H100/H200).

  • Their overall memory business (DRAM + NAND) typically makes up most of total revenue—DRAM alone is around 70% of corporate revenues, but HBM is still just a fraction of that DRAM segment. Exact percentages remain undisclosed.

  • Gains a huge advantage from HBM's premium margins. Even if HBM is <25% of total revenue, its profit contribution is disproportionately high due to higher ASPs (a rough illustration follows below).
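A quick sketch of that last point (the margins are hypothetical, chosen only to show the mechanics; SK Hynix does not break out HBM profitability):

```python
# Hypothetical profit mix: a product line that is a minority of revenue but carries higher
# margins contributes a disproportionate share of operating profit.
hbm_rev_share = 0.20  # assume HBM is 20% of revenue (consistent with the "<25%" above)
hbm_margin = 0.55     # assumed HBM operating margin (hypothetical)
other_margin = 0.25   # assumed operating margin on the rest of the business (hypothetical)

hbm_profit = hbm_rev_share * hbm_margin
other_profit = (1 - hbm_rev_share) * other_margin
print(f"HBM share of operating profit: ~{hbm_profit / (hbm_profit + other_profit):.0%}")  # ~35%
```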

2. Samsung Electronics

  • The second-largest HBM manufacturer; a significant supplier to AMD and, to a lesser extent, NVIDIA.

  • Facing yield challenges on advanced HBM nodes.

  • Overall scale dwarfs the others, but memory is just one part of Samsung’s massive electronics empire—meaning HBM’s direct revenue impact is smaller in relative percentage. Nonetheless, they remain a formidable competitor.

3. Micron Technology (MU)

  • The smallest HBM presence among the “big three” but aggressively expanding.

  • Some estimates place Micron’s HBM revenue at just a few percent of total revenue in 2024 (~2–3%), with the ambition to reach double-digit billions in HBM by 2025–2026.

  • If Micron hits that stride, they could capture 20–25% of the HBM segment, aligning with their broader DRAM share.

  • This potential ramp from near-zero to a significant share could translate into a steep growth curve if executed successfully (a rough sanity check follows below).
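The sanity check below uses only the ~2–3% and 20–25% figures from above; Micron's total revenue and the overall HBM market size are assumptions for illustration:

```python
# Hypothetical sizing of Micron's HBM ramp using the share figures discussed above.
total_revenue_2024_b = 25.0  # assumed Micron total revenue, $B (illustrative)
hbm_share_2024 = 0.025       # ~2-3% of revenue from HBM today (midpoint from above)
hbm_market_2026_b = 30.0     # assumed total HBM market by 2025-2026, $B (hypothetical)
target_share = 0.22          # 20-25% segment-share target (midpoint from above)

hbm_rev_2024_b = total_revenue_2024_b * hbm_share_2024  # ~$0.6B
hbm_rev_target_b = hbm_market_2026_b * target_share     # ~$6.6B
print(f"Implied HBM revenue ramp: ~${hbm_rev_2024_b:.1f}B -> ~${hbm_rev_target_b:.1f}B "
      f"(~{hbm_rev_target_b / hbm_rev_2024_b:.0f}x)")
```

Under those assumptions the ramp is roughly an order of magnitude in about two years, which is the steep growth curve referenced above.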

Collectively, these three players have built an oligopoly in HBM manufacturing.

Complexity, capex, and advanced 3D packaging deter new entrants. That means if AI demand explodes, these three are the direct beneficiaries.
