Foundry monopoly for advanced nodes + the entire CoWoS pipeline
AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.
**Bottleneck theme:** Lithography & Fab Tools
**Focus:** $TSM — TAIWAN SEMICONDUCTOR MANUFACTURING CO LTD
TSMC holds a de facto foundry monopoly at advanced nodes (N3, N3P, N2, A16) and runs the entire CoWoS advanced-packaging pipeline that makes AI accelerators physically possible. Every NVIDIA GPU, every AMD MI accelerator, every hyperscaler custom ASIC, every Apple SoC, and almost all leading-edge silicon gets manufactured here. CoWoS-L capacity has been the gating supply factor on Blackwell and Rubin shipments; TSMC has been adding capacity month over month through 2024-2026, and this remains the single most important infrastructure variable for AI hardware availability.
The investment case is the structural foundry monopoly plus pricing power that no alternative ($INTC, $SSNLF foundry) has yet matched. The bear case is geopolitical risk (Taiwan-China tension is the original tail risk), Intel Foundry's gradual ramp at 18A and 14A, and the multi-year capex commitment to U.S. (Arizona) and Japanese (Kumamoto) fabs that may underperform Taiwan's unit economics. Pair with $ASML (the EUV monopoly upstream of TSMC) and $LRCX/$AMAT/$KLAC (the equipment that fills TSMC's fabs).
EDA Software — Electronic design automation tools for chip design and verification
IP Cores — Licensed semiconductor IP blocks (ARM cores, PHY, SerDes, interfaces)
Lithography & Fab Tools — Investment-thesis bucket from bottlenecks.app: photolithography systems (EUV/DUV scanners) and the wafer-fab equipment (deposition, etch, metrology) that fills advanced fabs.
Foundry / Fab Services — Contract semiconductor manufacturing — wafer fabrication for fabless and partially-fabless customers, spanning leading-edge logic, mature-node analog/mixed-signal, RF, and specialty processes (BCD, BiCMOS, SiC, SOI).
HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack. Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
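The "1+ TB/s per stack" figure above follows directly from interface width times per-pin data rate. A minimal sketch of that arithmetic; the per-pin rates below are representative generation-level figures (an assumption for illustration, since actual products vary by vendor and speed bin):

```python
# Peak HBM bandwidth per stack = interface width (bits) x per-pin rate (Gb/s) / 8.
def hbm_stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

# Representative (generation, bus width in bits, per-pin rate in Gb/s) figures.
generations = {
    "HBM2E": (1024, 3.6),
    "HBM3":  (1024, 6.4),
    "HBM3E": (1024, 9.6),  # ~1.2 TB/s per stack, the "1+ TB/s" cited above
    "HBM4":  (2048, 8.0),  # doubles the interface width to 2048 bits
}

for gen, (width, rate) in generations.items():
    print(f"{gen}: {hbm_stack_bandwidth_gbps(width, rate):.0f} GB/s per stack")
```

Note that HBM4 crosses 2 TB/s per stack mainly by widening the interface rather than by raising the per-pin rate, which is why it is co-packaged via through-silicon vias and advanced packaging (CoWoS) rather than routed over a conventional PCB.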