AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.
**Bottleneck theme:** Memory Supercycle
**Focus:** $MU — MICRON TECHNOLOGY INC
Micron is the only Western HBM player with material U.S. capacity, and the anchor of the memory supercycle thesis. The company has gone from single-digit HBM market share entering 2024 to a credible #2 globally behind SK hynix, with HBM3E shipping at scale to NVIDIA Blackwell platforms and HBM4 sampling on track for Rubin-generation deployment. The Idaho fab build and the Syracuse, NY mega-fab give Micron a domestic-manufacturing posture no Korean competitor can match — a strategic asset under U.S. CHIPS Act funding and an increasingly important hedge for hyperscalers worried about Taiwan/Korea concentration risk.
The investment case is structural plus cyclical: HBM demand growth is multi-year and supply-constrained, while traditional DRAM and NAND ride a cyclical recovery. Micron's HBM mix is the highest-beta lever — every percentage point of HBM share lifts blended ASP and gross margin meaningfully, because HBM commands a large ASP and margin premium over commodity DRAM. The bear case is execution: SK hynix has a multi-year HBM lead, U.S. fab unit economics may underperform Korean peers, and a memory cycle reversal could compress blended margins even with a strong HBM mix. Pair with $WDC, $STX (storage), $SNDK (NAND), and the Korean memory complex.
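The mix-shift lever above can be sketched with simple weighted-average math. All ASPs and margins below are made-up placeholder assumptions for illustration, not Micron figures; the point is only the shape of the sensitivity, not the levels:

```python
# Hypothetical mix-shift sketch: how a rising HBM bit share lifts blended
# ASP and gross margin. Every number here is an illustrative assumption.

def blended(hbm_share: float,
            hbm_asp: float = 5.0, hbm_margin: float = 0.60,
            base_asp: float = 1.0, base_margin: float = 0.30) -> tuple[float, float]:
    """Return (blended ASP, blended gross margin) for a given HBM bit share.

    ASP blends on bits; margin blends on revenue, so each segment's margin
    is weighted by its revenue contribution, not its bit share.
    """
    asp = hbm_share * hbm_asp + (1 - hbm_share) * base_asp
    rev_hbm = hbm_share * hbm_asp
    rev_base = (1 - hbm_share) * base_asp
    margin = (rev_hbm * hbm_margin + rev_base * base_margin) / (rev_hbm + rev_base)
    return asp, margin

for share in (0.05, 0.10, 0.15, 0.20):
    asp, gm = blended(share)
    print(f"HBM bit share {share:.0%}: blended ASP {asp:.2f}x, gross margin {gm:.1%}")
```

Because the assumed HBM ASP is a multiple of the commodity baseline, HBM's revenue weight grows faster than its bit share, which is why a few points of share move the blended margin disproportionately.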
HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected to a logic base die via through-silicon vias (TSVs), delivering 1+ TB/s of bandwidth per stack in current (HBM3E) generations. It is co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
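The "1+ TB/s per stack" figure is back-of-envelope checkable from the interface width and per-pin data rate. The pin rates below are approximate published per-generation maximums (HBM3 around 6.4 Gb/s/pin on a 1024-bit bus; HBM3E around 9.2 Gb/s/pin; HBM4 widening to a 2048-bit bus):

```python
# Per-stack bandwidth = pins * per-pin data rate / 8 bits per byte.
# Pin rates are approximate generation maximums, not a single vendor's spec.

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s for a given pin rate and interface width."""
    return pin_rate_gbps * bus_width_bits / 8

print(f"HBM3  ~ {stack_bandwidth_gbs(6.4):.1f} GB/s")          # ~819 GB/s
print(f"HBM3E ~ {stack_bandwidth_gbs(9.2):.1f} GB/s")          # ~1178 GB/s, i.e. 1+ TB/s
print(f"HBM4  ~ {stack_bandwidth_gbs(8.0, 2048):.1f} GB/s")    # ~2 TB/s on a 2048-bit bus
```

This is why the HBM4 transition matters to the thesis: doubling the interface width roughly doubles per-stack bandwidth even at similar pin rates.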