Meta Expands Nvidia Deal to Deploy Millions of AI Chips in Largest-Ever Data Center Build-Out
Meta announces a sweeping partnership with Nvidia worth tens of billions, becoming the first company to deploy Nvidia's standalone Grace CPUs at scale while committing $600 billion to US infrastructure by 2028.

Meta has announced a sweeping expansion of its partnership with Nvidia, becoming the first company to deploy Nvidia's standalone Grace central processing units at scale while committing to a massive data center build-out that solidifies its position as one of the world's largest AI infrastructure investors.
The multiyear deal, announced Tuesday, will see Meta deploy millions of Nvidia chips across its data centers, including the chipmaker's new standalone CPUs and next-generation Vera Rubin systems. The partnership is valued in the tens of billions of dollars, according to analysts.
"We're delivering on our commitment to bring personal superintelligence to everyone in the world," said Meta CEO Mark Zuckerberg in a statement. The deal marks the most significant expansion of Meta's AI infrastructure strategy to date.
Standalone CPUs: A First for Nvidia
The most notable aspect of the deal is Meta's deployment of Nvidia's Grace CPUs as standalone chips—a first for the GPU maker. Traditionally, CPUs have been paired with GPUs in AI servers, but Nvidia's Grace CPUs are designed to run inference and agentic workloads independently.
"They're really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack," explained chip analyst Ben Bajarin of Creative Strategies. "Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia's putting across both sets of infrastructure: CPU and GPU."
The next-generation Vera CPUs are planned for deployment in 2027.
$600 Billion US Commitment
The deal is part of Meta's broader commitment to invest $600 billion in US data centers and infrastructure by 2028. The company plans to build 30 data centers, 26 of them in the United States.
Two massive projects are already under construction: the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion facility in Richland Parish, Louisiana.
Meta also announced it would adopt Nvidia's networking technology, including Spectrum-X Ethernet switches to link GPUs together, as well as Nvidia security capabilities for WhatsApp AI features.
Market Reaction
Shares of both Meta and Nvidia climbed in extended trading following the announcement. Meanwhile, Advanced Micro Devices stock sank approximately 4% on concerns about competition in the AI chip market.
In January, Meta announced plans to spend up to $135 billion on AI in 2026 alone—a figure that underscores the massive infrastructure race underway among tech giants.