
Foundry monopoly for advanced nodes + the entire CoWoS pipeline

AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.

**Bottleneck theme:** Lithography & Fab Tools

**Focus:** $TSM — TAIWAN SEMICONDUCTOR MANUFACTURING CO LTD

TSMC is the foundry monopoly for advanced nodes (N3, N3P, N2, A16) and the entire CoWoS pipeline that makes AI accelerators physically possible. Every NVIDIA GPU, every AMD MI accelerator, every hyperscaler custom ASIC, every Apple SoC, and nearly every other leading-edge design is manufactured there. CoWoS-L expansion has been the gating supply factor on Blackwell and Rubin shipments; TSMC has been adding monthly capacity through 2024-2026, and this remains the single most important infrastructure variable for AI hardware availability. The investment case is the structural foundry monopoly plus pricing power that no alternative ($INTC, $SSNLF foundry) has yet matched. The bear case is geopolitical (Taiwan-China tension is the original tail risk), Intel Foundry's gradual ramp at 18A and 14A, and the multi-year capex commitment to U.S. (Arizona) and Japanese (Kumamoto) fabs that may underperform Taiwan unit economics. Pair with $ASML (the EUV monopoly upstream of TSMC) and $LRCX/$AMAT/$KLAC (the equipment that fills TSMC's fabs).

Focus companies in this thesis (1)

  • TAIWAN SEMICONDUCTOR MANUFACTURING CO LTD (TSM)

Supply-chain categories covered

  • EDA Software — Electronic design automation tools for chip design and verification
  • IP Cores — Licensed semiconductor IP blocks (ARM cores, PHY, SerDes, interfaces)
  • Lithography & Fab Tools — Photolithography systems (EUV/DUV scanners) and wafer-fabrication equipment (deposition, etch, and metrology tools)
  • Foundry / Fab Services — Contract semiconductor manufacturing — wafer fabrication for fabless and partially-fabless customers, spanning leading-edge logic, mature-node analog/mixed-signal, RF, and specialty processes (BCD, BiCMOS, SiC, SOI).
  • HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack. Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
  • Advanced Packaging — 2.5D/3D packaging, CoWoS, chiplets, fan-out wafer-level packaging
  • AI GPUs — Compute accelerators and GPUs powering AI training, inference, and large language models.
  • Hyperscalers — Major cloud operators (AWS, Azure, GCP, Meta, Oracle, Alibaba, Tencent, Baidu, Naver) and tier-2 / neocloud providers (DigitalOcean, OVHcloud, Rackspace, Kingsoft) tracked as a demand signal across multiple theses (photonics, HBM, AI accelerators, power, cooling). Excludes SaaS apps, telcos, REITs, and IT services firms.

Thesis milestones & bottleneck markers

  • HBM3e supply chain maturity — HBM suppliers achieve stable 12-layer production.
  • TSM N2P risk production — TSM — 2nm process enters risk production phase.
  • CoWoS monthly capacity 35K — TSM — Monthly CoWoS capacity reaches 35K wafers.
  • TSM revenue >$100B — TSM — Annual revenue exceeds $100B.
  • TSM gross margin >55% — TSM — Pricing power sustains elevated margins.
