As of May 1, 2026, Nvidia's dominance in the AI chip market remains largely unchallenged, with recent announcements from Amazon and Alphabet signaling sustained demand. Both tech giants have placed substantial orders for Nvidia's Blackwell B200 GPUs, according to multiple industry reports.
Amazon Web Services (AWS) confirmed plans to deploy Nvidia's Blackwell chips in its cloud data centers, while Alphabet's Google Cloud announced expanded availability of the same hardware for AI workloads. This comes despite both companies developing their own custom AI accelerators (AWS's Trainium2 and Google's TPUs), a sign that Nvidia's performance and software ecosystem remain critical for high-end AI training and inference.
Nvidia's data center revenue reached $115.2 billion in fiscal 2025, a 142% year-over-year increase driven by hyperscaler demand. Analysts project further growth in 2026 as enterprises adopt generative AI. Blackwell, unveiled in March 2024 and ramped through 2025, delivers up to 30x faster inference on large language models than the prior Hopper generation, according to Nvidia's own benchmarks.
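As a quick sanity check on that growth rate: Nvidia reported roughly $47.5 billion in data center revenue for fiscal 2024, and the fiscal 2025 figure implies a prior-year baseline of $115.2 billion / (1 + 1.42) ≈ $47.6 billion, which lines up with the reported base.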
However, competition is intensifying. AMD's Instinct MI400 series accelerators and Intel's Gaudi 3 are gaining traction, while custom chips from Amazon, Google, and Microsoft are reducing dependency on Nvidia for specific workloads. For now, though, Nvidia's CUDA software ecosystem and its NVLink and InfiniBand networking stack provide a moat that competitors struggle to replicate.
Investors are watching Nvidia's upcoming earnings report, expected in late May 2026, for guidance on Blackwell's ramp and potential supply constraints. The company's market cap has fluctuated between $2.5 trillion and $3 trillion in recent months, reflecting both optimism and caution about long-term growth.