Only Western HBM player with material US capacity

AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.

**Bottleneck theme:** Memory Supercycle

**Focus:** $MU — MICRON TECHNOLOGY INC

Micron is the only Western HBM player with material U.S. capacity and the anchor of the memory supercycle thesis. The company has gone from single-digit HBM market share entering 2024 to a credible #2 globally behind SK hynix, with HBM3E shipping at scale to NVIDIA Blackwell platforms and HBM4 sampling on track for Rubin-generation deployment. The Idaho fab build and the Syracuse, NY mega-fab give Micron a domestic-manufacturing posture no Korean competitor can match: a strategic asset under U.S. CHIPS Act funding and an increasingly important hedge for hyperscalers worried about Taiwan/Korea concentration risk.

The investment case is structural plus cyclical: HBM demand growth is multi-year and supply-constrained, while traditional DRAM and NAND ride a cyclical recovery. Micron's HBM mix is the highest-beta lever, since every percentage point of HBM share lifts blended ASP and gross margin meaningfully.

The bear case is execution: SK hynix has a multi-year HBM lead, U.S. fab unit economics may underperform Korean peers, and a memory-cycle reversal could compress blended margins even with strong HBM mix. Pair with $WDC, $STX (storage), $SNDK (NAND), and the Korean memory complex.
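The mix-shift leverage can be sketched as a weighted average. All margin figures below are hypothetical placeholders for illustration, not Micron's reported numbers:

```python
def blended_gross_margin(hbm_mix: float,
                         hbm_margin: float = 0.55,
                         base_margin: float = 0.30) -> float:
    """Blended gross margin as a revenue-weighted average of HBM
    and commodity DRAM/NAND. Margin inputs are illustrative only."""
    return hbm_mix * hbm_margin + (1 - hbm_mix) * base_margin

# Under these assumptions, each +1 pt of HBM mix adds
# (0.55 - 0.30) * 0.01 = 0.25 pt of blended gross margin.
for mix in (0.10, 0.20, 0.30):
    print(f"HBM mix {mix:.0%}: blended GM {blended_gross_margin(mix):.1%}")
```

The point is the slope, not the levels: the wider the premium of HBM margin over commodity memory, the more each point of mix shift moves the blended line.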

Focus companies in this thesis (1)

  • MICRON TECHNOLOGY INC (MU)

Supply-chain categories covered

  • Advanced Packaging — 2.5D/3D packaging, CoWoS, chiplets, fan-out wafer-level packaging
  • HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack. Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
  • DRAM — Dynamic random-access memory chips
  • AI GPUs — Compute accelerators and GPUs powering AI training, inference, and large language models.
  • Hyperscalers — Major cloud operators (AWS, Azure, GCP, Meta, Oracle, Alibaba, Tencent, Baidu, Naver) and tier-2 / neocloud providers (DigitalOcean, OVHcloud, Rackspace, Kingsoft) tracked as a demand signal across multiple theses (photonics, HBM, AI accelerators, power, cooling). Excludes SaaS apps, telcos, REITs, and IT services firms.
  • Memory Supercycle — The bottlenecks.app investment-thesis bucket this page belongs to
  • Foundry / Fab Services — Contract semiconductor manufacturing — wafer fabrication for fabless and partially-fabless customers, spanning leading-edge logic, mature-node analog/mixed-signal, RF, and specialty processes (BCD, BiCMOS, SiC, SOI).

Thesis milestones & bottleneck markers

  • MU HBM revenue > $2B/quarter — MU
  • HBM gross margin expansion — MU
  • US HBM capacity share >20% — Micron US fab output as % of global HBM supply
  • HBM4 qualification — MU — First major customer HBM4 production ramp