Cerebras Files for IPO: The $23B AI Chip Company Taking On NVIDIA
Cerebras Systems filed for an IPO on April 18, 2026, at a $23B valuation. Revenue, tech specs, the $10B OpenAI deal, and what it means for the AI hardware market.
On April 18, 2026, Cerebras Systems filed its S-1 registration statement with the SEC. The company is expected to go public in mid-May at a valuation around $23 billion.
This is not Cerebras' first attempt. The company filed for an IPO in 2024 but withdrew after the Committee on Foreign Investment in the United States (CFIUS) scrutinized an investment from G42, an Abu Dhabi-based AI firm. That review concluded. The filing is back. And this time, the numbers look different.
The financials
Cerebras reported $510 million in revenue for 2025, with GAAP net income of $237.8 million. On a non-GAAP basis, the company posted a net loss of $75.7 million after excluding one-time items.
Those revenue numbers are a big jump from where the company was two years ago. In 2023, Cerebras was still primarily a research-stage hardware company selling small batches of its CS-2 systems to national labs and pharmaceutical companies. The OpenAI deal changed that trajectory entirely.
The company has not disclosed how many shares it plans to offer or the price range. The $23 billion valuation comes from its February 2026 Series H round, which raised $1 billion and was led by Tiger Global. Before that, a September 2025 Series G raised $1.1 billion at an $8.1 billion valuation. The valuation nearly tripled in five months.
The OpenAI deal
The single most important fact in the S-1: Cerebras has a multi-year agreement with OpenAI worth more than $10 billion, according to the Wall Street Journal.
This is not a vague partnership announcement. OpenAI is using Cerebras hardware for inference, and CEO Andrew Feldman is blunt about what that means. In a WSJ interview, he said: "Obviously, [Nvidia] didn't want to lose the fast inference business at OpenAI, and we took that from them."
OpenAI had previously considered acquiring Cerebras outright. The deal fell through, but the commercial relationship stuck. For a company that has spent years trying to prove that wafer-scale computing works at production scale, having OpenAI as a customer is the strongest validation possible.
What makes Cerebras different
Most AI chips start as thumbnail-sized processors cut from a 300mm silicon wafer. NVIDIA's H100, for example, is about 814 square millimeters. You get maybe 70-80 of them per wafer.
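That per-wafer count follows from a standard back-of-envelope estimate (a sketch, not vendor data): divide the wafer's usable area by the die area, then subtract a correction for partial dies lost at the round edge.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic dies-per-wafer approximation: gross area ratio minus
    an edge-loss term for partial dies at the wafer's circumference."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# An 814 mm^2 die (H100-class) on a standard 300 mm wafer:
print(dies_per_wafer(300, 814))  # ~63 candidate dies before yield loss
```

The formula is conservative; real counts also depend on die aspect ratio, scribe-line width, and yield, which is why published figures land anywhere in the 60-80 range.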
Cerebras does the opposite. Its Wafer Scale Engine (WSE) uses nearly the entire 300mm wafer as a single chip. Roughly 8.5 inches on each side. 4 trillion transistors. 900,000 compute cores. All on one piece of silicon.
The advantage is bandwidth. In traditional GPU clusters, data has to move between separate chips across interconnects (NVLink, PCIe, networking). That movement is slow and power-hungry. On a single wafer, data travels across on-chip fabric at speeds no inter-chip connection can match.
Cerebras claims 20x faster inference than competing GPU systems. Independent benchmarks are still limited, but the OpenAI contract suggests the performance is real enough for the largest AI deployment in the world.
Other partnerships
OpenAI is the headline, but not the only customer:
- Amazon Web Services: Cerebras has an agreement to deploy its chips in AWS data centers. This gives Cerebras access to AWS customers without building its own cloud.
- G42: The UAE AI company partners with Cerebras to deploy 8 exaflops of compute in India. This was the deal that triggered the CFIUS review in 2024.
- National labs and pharma: Earlier customers include Argonne National Laboratory, Lawrence Livermore, AstraZeneca, and GlaxoSmithKline. These deals proved the architecture worked for scientific workloads before the AI inference market materialized.
Risks
The S-1 filing lists the standard risk factors, but a few stand out:
Customer concentration. If the OpenAI deal accounts for the majority of revenue (the filing has not broken this down yet), losing that contract would be catastrophic. One customer having that much leverage is a real concern for public market investors.
NVIDIA's response. NVIDIA is not going to watch its inference business get eaten without a fight. The company is already working on faster inference-optimized chips and has far more resources for R&D, software ecosystem development, and customer relationships.
A thin profitability track record. The $237.8 million GAAP net income in 2025 looks good, but the non-GAAP loss suggests the underlying business still burns cash. The company has raised over $2 billion in venture funding. Going public changes the scrutiny level.
Prior IPO failure. The 2024 withdrawal is a red flag for some investors, even though the CFIUS issue was resolved. It raises questions about what else might surface.
The competitive landscape
Cerebras is not the only company trying to break NVIDIA's grip on AI hardware:
| Company | Approach | Status |
|---|---|---|
| NVIDIA | GPU-based training and inference | Market leader, $2T+ market cap |
| Cerebras | Wafer-scale computing | IPO filing, $10B+ OpenAI deal |
| Groq | LPU inference architecture | Shipping, fast inference but limited memory |
| AMD | MI300X GPU accelerator | Shipping, growing data center share |
| CoreWeave | GPU cloud provider | Public, $35B+ valuation |
| SiFive | RISC-V based AI chips | $3.65B valuation, NVIDIA-backed |
The common thread: everyone is trying to offer something NVIDIA does not. Cerebras differentiates on raw silicon scale. Groq differentiates on inference speed per dollar. AMD differentiates on price and open software. None of them have NVIDIA's CUDA moat, which is the real battle.
What this means for the AI industry
The Cerebras IPO matters for three reasons:
1. NVIDIA is no longer the only game in town. OpenAI choosing Cerebras for inference proves that one of the largest buyers of AI compute is willing to diversify. If OpenAI can make it work, others will try.
2. Wafer-scale computing is real. For years, Cerebras was an interesting research project. Revenue of $510 million and a $10B customer contract move it into the "proven technology" column.
3. AI infrastructure is the new oil. Cerebras, CoreWeave, and Groq are all reaching public markets or late-stage private funding within months of each other. The market for AI compute is growing fast enough to support multiple billion-dollar companies, not just NVIDIA.
The IPO is expected in mid-May 2026. The ticker symbol has not been announced. Underwriters and price range will be disclosed in an amended S-1 filing, likely within the next two weeks.
Should developers care?
If you build AI applications, the Cerebras IPO probably will not change your day-to-day workflow. You will still call the same APIs, use the same models, and pay the same per-token prices.
But the downstream effects matter. More competition in AI chips means lower inference costs over time. It means NVIDIA has to innovate faster on price-performance. And it means the infrastructure layer of AI is becoming diversified enough that a single vendor failure (or price hike) will not break the ecosystem.
If you work in AI infrastructure or MLOps, pay attention to the AWS-Cerebras deployment. When Cerebras chips become available through AWS, that opens up direct benchmarking against NVIDIA instances. Early tests will set the narrative for the next year of AI hardware.
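Such a head-to-head test mostly reduces to measuring tokens per second on identical prompts. A minimal, vendor-neutral sketch (the provider wrappers are hypothetical placeholders, not real APIs; plug in whichever SDK you actually use):

```python
import time

def tokens_per_second(generate, prompt: str) -> float:
    """Time one generation call. `generate` is any callable that takes a
    prompt and returns the number of completion tokens it produced."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return tokens / elapsed

# Hypothetical wrappers around two providers' SDKs -- not real APIs:
# print(tokens_per_second(call_cerebras, "Summarize the S-1 filing"))
# print(tokens_per_second(call_gpu_instance, "Summarize the S-1 filing"))
```

Run many trials and compare distributions rather than single calls; cold starts, batching policy, and queue depth dominate one-shot numbers.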
Bottom line
Cerebras is going public with real revenue, a massive customer contract, and technology that legitimately challenges the GPU status quo. The $23 billion valuation is aggressive for a company that only recently reached half a billion in revenue, but the OpenAI deal and the wafer-scale architecture justify the premium if the growth trajectory holds.
The risk is concentration. One customer. One key technology. One competitor with 100x the resources. But for the first time in the AI hardware cycle, a credible NVIDIA alternative is going public. That alone makes this worth watching.