Only credible merchant GPU alternative

AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.

**Bottleneck theme:** Custom Silicon
**Focus:** ADVANCED MICRO DEVICES INC ($AMD)

AMD is the only credible merchant GPU alternative to NVIDIA at training and high-end inference scale. The MI300X / MI325X / MI355X / MI400 roadmap, paired with rapidly maturing ROCm software and meaningful HBM3E content per accelerator, has made AMD the procurement backstop for hyperscalers and frontier labs unwilling to be 100% sole-sourced on NVIDIA. The OpenAI agreement to buy 6 GW of MI-series GPUs (announced 2025), combined with Microsoft, Meta, and Oracle MI355X deployments, validates that the bookings are real rather than narrative.

The investment case is share recovery in data-center GPUs plus margin mix expansion, with optionality from EPYC server CPUs continuing to take share from Intel and from the embedded/Xilinx franchise providing baseline revenue. The bear case is structural: NVIDIA's CUDA software moat compounds over time, MI-series gross margins trail NVIDIA's by hundreds of basis points, and AMD faces a capacity-allocation problem, sharing TSMC CoWoS-L and HBM3E supply with NVIDIA in a constrained market. Treat AMD as the highest-quality merchant alternative to NVIDIA, with continued execution required to convert design wins into sustained share.
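For a sense of scale, here is a back-of-envelope sketch converting the 6 GW commitment into rough unit counts. The per-accelerator power figures (all-in, including a share of host CPU, networking, and cooling) are illustrative assumptions, not AMD or OpenAI disclosures:

```python
# Back-of-envelope: convert a 6 GW capacity commitment into rough accelerator
# counts. All per-unit power figures below are illustrative assumptions.
COMMITMENT_GW = 6.0  # from the announced OpenAI agreement

# Assumed all-in datacenter power per deployed accelerator, in kW
# (chip TDP plus a share of host, networking, and cooling overhead).
SCENARIOS_KW = {"lean": 1.2, "base": 1.6, "heavy": 2.0}

for label, kw_per_unit in SCENARIOS_KW.items():
    units = COMMITMENT_GW * 1_000_000 / kw_per_unit  # GW -> kW, then per-unit
    print(f"{label}: ~{units / 1e6:.1f}M accelerators at {kw_per_unit} kW each")
```

Even under the heavy-overhead assumption, the commitment implies roughly three million accelerators over the deployment window, which is why the bookings read as committed demand rather than narrative.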

Focus companies in this thesis (1)

  • ADVANCED MICRO DEVICES INC (AMD)

Supply-chain categories covered

  • AI GPUs — Compute accelerators and GPUs powering AI training, inference, and large language models.
  • Fabless Chip Design — Companies designing chips without owning fabs.
  • Foundry / Fab Services — Contract semiconductor manufacturing — wafer fabrication for fabless and partially-fabless customers, spanning leading-edge logic, mature-node analog/mixed-signal, RF, and specialty processes (BCD, BiCMOS, SiC, SOI).
  • Chips for networking gear and consumer computing.
  • HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack (a rough derivation follows this list). Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
  • Server & System Assembly (ODM/EMS) — ODMs and EMS firms assembling servers, racks, and AI systems from components.
  • Hyperscalers — Major cloud operators (AWS, Azure, GCP, Meta, Oracle, Alibaba, Tencent, Baidu, Naver) and tier-2 / neocloud providers (DigitalOcean, OVHcloud, Rackspace, Kingsoft) tracked as a demand signal across multiple theses (photonics, HBM, AI accelerators, power, cooling). Excludes SaaS apps, telcos, REITs, and IT services firms.
  • AI Training Accelerators — GPU and AI accelerator chips designed for training large AI models; these are the core demand drivers.
  • Advanced Packaging — 2.5D/3D packaging, CoWoS, chiplets, and fan-out wafer-level packaging.
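As a sanity check on the per-stack HBM bandwidth figure above, peak bandwidth is pin data rate times interface width. A minimal sketch, using representative published pin rates (vendor parts vary; the HBM4 entry reflects the JEDEC baseline):

```python
# Peak per-stack HBM bandwidth: (Gbps per pin) * (interface width in bits) / 8
# gives GB/s. Pin rates below are representative generation figures, not
# guarantees for any specific vendor part.
GENERATIONS = {
    "HBM2E": (3.6, 1024),
    "HBM3":  (6.4, 1024),
    "HBM3E": (9.6, 1024),
    "HBM4":  (8.0, 2048),  # JEDEC baseline: slower pins, 2x wider interface
}

for gen, (gbps_per_pin, width_bits) in GENERATIONS.items():
    gb_per_s = gbps_per_pin * width_bits / 8
    print(f"{gen}: ~{gb_per_s:.0f} GB/s (~{gb_per_s / 1000:.2f} TB/s) per stack")
```

On these figures the "1+ TB/s per stack" claim holds from HBM3E onward (~1.23 TB/s), and HBM4's roughly 2 TB/s comes from doubling the interface width rather than raising pin speed.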

Thesis milestones & bottleneck markers

  • AMD MI300 revenue inflection — MI300 series reaches meaningful revenue contribution
  • HBM supply stabilization — HBM pricing normalizes as capacity grows
  • AMD AI GPU market share — AMD captures datacenter GPU share
  • AMD hyperscaler wins — major design wins beyond Microsoft
  • ROCm maturity vs CUDA — AMD software stack gains traction
