AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.
**Bottleneck theme:** Custom Silicon
**Focus:** $AMD — ADVANCED MICRO DEVICES INC
AMD is the only credible merchant GPU alternative to NVIDIA at training and high-end inference scale. The MI300X / MI325X / MI355X / MI400 roadmap, paired with rapidly maturing ROCm software and meaningful HBM3E content per accelerator, has made AMD the procurement-backstop choice for hyperscalers and frontier labs unwilling to be wholly sole-sourced on NVIDIA. The OpenAI agreement to buy 6 GW of MI-series GPUs (announced 2025), combined with Microsoft, Meta, and Oracle MI355X deployments, validates that the bookings are real rather than narrative.
The investment case is share recovery in data-center GPU plus margin mix expansion, with optionality from EPYC server CPUs continuing to take share from Intel and an embedded/Xilinx franchise providing baseline revenue. The bear case is structural: NVIDIA's CUDA software moat compounds over time, MI-series gross margins trail NVIDIA's by hundreds of basis points, and AMD has a capacity allocation problem (sharing TSMC CoWoS-L and HBM3E supply with NVIDIA in a constrained market). Treat AMD as the highest-quality merchant alternative to NVIDIA, with continued execution required to convert design wins into sustained share.
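Because the OpenAI agreement is denominated in power (6 GW) rather than units, a back-of-envelope conversion to accelerator count can help size it. The sketch below is illustrative only: the per-accelerator board power (~1 kW) and facility overhead multiplier (~1.5x for host CPUs, networking, and cooling) are assumptions, not disclosed deal terms.

```python
# Back-of-envelope: implied accelerator count from a power-denominated commitment.
# Per-unit power and overhead figures are assumptions, not disclosed deal terms.

def implied_accelerators(total_gw: float, gpu_board_w: float, overhead: float) -> int:
    """Estimate accelerator count from committed datacenter power.

    total_gw    : committed power in gigawatts (6 GW per the OpenAI agreement)
    gpu_board_w : assumed board power per accelerator, in watts
    overhead    : assumed multiplier for non-GPU power (CPUs, networking, cooling)
    """
    total_w = total_gw * 1e9
    per_gpu_all_in_w = gpu_board_w * overhead
    return int(total_w / per_gpu_all_in_w)

# Assuming ~1 kW per MI-series accelerator and 1.5x all-in overhead:
n = implied_accelerators(6.0, 1_000, 1.5)
print(f"~{n:,} accelerators")  # -> ~4,000,000 accelerators
```

Even under generous overhead assumptions, the implied volume runs to millions of units over the deployment window, which is why the shared CoWoS-L and HBM3E capacity pool matters so much to the bear case.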
HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack. Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
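The "1+ TB/s per stack" figure follows directly from per-pin data rate times interface width. A minimal sketch of that arithmetic, assuming the conventional 1024-bit interface per stack and representative generational pin rates (the specific Gb/s values are illustrative, not a particular vendor's spec):

```python
# Per-stack HBM bandwidth = per-pin data rate (Gb/s) x interface width (bits) / 8.
# Pin rates below are representative generational figures, assumed for illustration.

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Bandwidth per HBM stack in GB/s, given pin rate in Gb/s."""
    return pin_rate_gbps * bus_width_bits / 8

for gen, rate in [("HBM2E", 3.6), ("HBM3", 6.4), ("HBM3E", 9.6)]:
    print(f"{gen}: {stack_bandwidth_gbs(rate):.0f} GB/s per stack")
# HBM3E at ~9.6 Gb/s/pin works out to ~1.2 TB/s per stack,
# consistent with the 1+ TB/s figure above.
```

Multiply by eight stacks on a modern accelerator package and aggregate bandwidth lands in the 5-10 TB/s range, which is the figure that actually gates training and inference throughput.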