Etch monopoly for HBM TSV + 3D NAND staircase + advanced packaging

AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.

**Bottleneck theme:** Lithography & Fab Tools

**Focus:** $LRCX — LAM RESEARCH CORP

Lam Research is the etch monopoly. For the most demanding etch steps in modern fabs — high-aspect-ratio TSV (through-silicon via) etch in HBM, 3D NAND staircase and channel-hole etch, advanced-packaging via formation, and gate-all-around fin removal — Lam holds 80%+ share and is functionally sole-source. Every HBM stack drives roughly 4-6x more etch tool time than a planar DRAM die, so the HBM3E → HBM4 → HBM4E roadmap maps directly to per-wafer Lam revenue growth. Add 3D NAND moving from ~200 layers to ~400 layers (each added layer means more etch passes) and the secular content tailwind is unambiguous.

Lam's secondary lever is advanced packaging: CoWoS, hybrid-bonding via formation, and the metallization steps around each. The bear case is China NAND/DRAM exposure (export-control risk) and lumpy memory CapEx cycles. But the structural content-per-wafer story compounds: each new memory generation adds more etch passes, and Lam's installed-base service revenue grows with the world's wafer-start count regardless of the cycle.

Pair-trade or peer-comp candidates: $AMAT, $TER, $ASML.
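The content-per-wafer arithmetic above can be sketched as a back-of-the-envelope model. The multipliers below are the rough figures quoted in this thesis (4-6x HBM etch time, ~200 → ~400 NAND layers), not Lam or customer disclosures; treat them as illustrative assumptions only:

```python
# Back-of-the-envelope sketch of the etch content-per-wafer story.
# All numbers are the rough multipliers quoted in the thesis text,
# not company-reported data.

# HBM: each stack drives roughly 4-6x more etch tool time than a
# planar DRAM die.
hbm_multiplier_low, hbm_multiplier_high = 4.0, 6.0

# 3D NAND: moving from ~200 to ~400 layers; etch passes scale
# roughly with layer count.
layers_current = 200
layers_next = 400
nand_content_growth = layers_next / layers_current  # ~2x etch passes per wafer

print(f"HBM etch content vs. planar DRAM: "
      f"{hbm_multiplier_low:.0f}x-{hbm_multiplier_high:.0f}x")
print(f"3D NAND etch content growth (~200 -> ~400 layers): "
      f"{nand_content_growth:.1f}x")
```

The point of the sketch: even holding wafer starts flat, the per-wafer etch multiplier alone roughly doubles NAND etch content across one generation, which is the "content tailwind" the thesis leans on.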

Focus companies in this thesis (1)

  • LAM RESEARCH CORP (LRCX)

Supply-chain categories covered

  • Etch Equipment — Plasma etch systems for patterning semiconductor wafers.
  • Memory Supercycle — Investment-thesis bucket from bottlenecks.app.
  • Foundry / Fab Services — Contract semiconductor manufacturing — wafer fabrication for fabless and partially-fabless customers, spanning leading-edge logic, mature-node analog/mixed-signal, RF, and specialty processes (BCD, BiCMOS, SiC, SOI).
  • HBM — High Bandwidth Memory — 3D-stacked DRAM (HBM2E/HBM3/HBM3E/HBM4) connected via through-silicon vias, delivering 1+ TB/s of bandwidth per stack. Co-packaged with GPUs, TPUs, and custom AI accelerators for datacenter AI training/inference and HPC workloads.
  • NAND Flash — NAND flash memory and solid-state storage.
  • Advanced Packaging — 2.5D/3D packaging, CoWoS, chiplets, fan-out wafer-level packaging.
  • AI Training Accelerators — GPU and AI accelerator chips designed for training large AI models, the core demand drivers.
  • Hyperscalers — Major cloud operators (AWS, Azure, GCP, Meta, Oracle, Alibaba, Tencent, Baidu, Naver) and tier-2 / neocloud providers (DigitalOcean, OVHcloud, Rackspace, Kingsoft) tracked as a demand signal across multiple theses (photonics, HBM, AI accelerators, power, cooling). Excludes SaaS apps, telcos, REITs, and IT services firms.

Thesis milestones & bottleneck markers

  • $MU packaging spend
  • $LRCX HBM revenue inflection
