Largest pure AI-cloud at scale

AI supply-chain thesis — mapping bottlenecks, focus companies, and supply-chain exposure for investors.

**Bottleneck theme:** AI Cloud / Neoclouds
**Focus:** $CRWV — CoreWeave, Inc.

CoreWeave is the largest pure-play AI cloud operator at GPU-cluster scale, built from the ground up around NVIDIA reference architectures, with an engineering culture and customer relationships closer to NVIDIA's than to AWS's or Azure's. The Q4 2025 backlog of $66.8B and approximately 3.1 GW of contracted power give revenue visibility through 2030, with Microsoft and OpenAI as anchor customers and a growing book of foundation-model and enterprise tenants. Where the hyperscalers run AI as one workload among many, CoreWeave is single-purpose: dense, liquid-cooled, NVL72/NVL576-ready capacity sold by the GPU-hour at bare-metal performance.

The investment case rests on three assumptions:

  1. Demand for frontier-model training capacity continues to outstrip the hyperscalers' internal supply.
  2. NVIDIA continues to allocate scarce H100/Blackwell/Rubin product to CoreWeave on favorable terms.
  3. CoreWeave can finance multi-billion-dollar power and data-center capex without destroying equity holders through dilution.

The first two have held so far; the third is the live debate. Treat this as the highest-beta way to own the AI capex super-cycle, with full awareness of GPU-as-collateral leverage risk if utilization disappoints.

Focus companies in this thesis (1)

  • CoreWeave, Inc. (CRWV)

Supply-chain categories covered

  • AI GPUs — Compute accelerators and GPUs powering AI training, inference, and large language models.
  • Data Center Servers — High-density AI servers integrating power and cooling at the rack level.
  • Networking Semiconductors — Ethernet switches, PHYs, NICs, and SerDes for AI clusters and datacenter interconnects.
  • Datacenter Racks & Servers — Rack-scale systems and servers optimized for dense AI GPU deployments.
  • Power Infrastructure — Power delivery systems, PDUs, and cooling for AI datacenters housing training accelerators.
  • AI Cloud / Neoclouds — Investment-thesis bucket from bottlenecks.app.
  • Hyperscalers — Major cloud operators (AWS, Azure, GCP, Meta, Oracle, Alibaba, Tencent, Baidu, Naver) and tier-2 / neocloud providers (DigitalOcean, OVHcloud, Rackspace, Kingsoft) tracked as a demand signal across multiple theses (photonics, HBM, AI accelerators, power, cooling). Excludes SaaS apps, telcos, REITs, and IT services firms.
  • Power & Grid — Investment-thesis bucket from bottlenecks.app.
  • Colocation Facilities — Hyperscale and enterprise colocation data centers consuming power and cooling infrastructure.
  • AI Training Accelerators — GPU and AI accelerator chips designed for training large AI models, the core demand drivers.

Thesis milestones & bottleneck markers

  • $66.8B revenue backlog — CRWV