Why the Zig Project Banned All LLM-Assisted Contributions — And What It Teaches Us About AI in Open Source
The Zig project's firm anti-AI contribution policy reveals a deeper argument about open source sustainability, contributor growth, and why 'contributor poker' might be the most honest framing of code review ever.
The Zig project has one of the firmest, most clearly articulated anti-AI policies in open source. And after reading the explanation by Loris Cro, VP of Community at the Zig Software Foundation, on Simon Willison's blog, it might also be the most intellectually honest.
The policy is simple: no LLM-authored contributions, period. No exceptions. No "I only used AI to write tests" or "it helped with comments." The line is unambiguous.
The rationale — which Cro calls "contributor poker" — is worth understanding even if you disagree with it.
The Core Argument: You Bet on the Contributor, Not the Code
Cro's central point is this:
> "In successful open source projects you eventually reach a point where you start getting more PRs than what you're capable of processing. Given what I mentioned so far, it would make sense to stop accepting imperfect PRs in order to maximize ROI from your work, but that's not what we do in the Zig project."
Zig actively invests in helping new contributors get their work in, even when those contributors need help. The reason isn't just that it's "the right thing to do" — it's that every PR is an investment in the contributor, not just the code.
The goal of code review isn't to land new features. It's to help grow contributors who become trusted, long-term members of the project. Each review interaction is a teaching moment that compounds over time.
LLM assistance breaks this completely. If a PR was written by an LLM, the time the Zig team spends reviewing it does nothing to grow a contributor. You're not investing in a person — you're processing a text output from a model.
What Is "Contributor Poker"?
Cro uses the metaphor of poker: "You play the person, not the cards."
In contributor poker, you bet on the contributor, not on the contents of their first PR. A new contributor who shows up struggling, learns, iterates, and eventually lands a well-crafted PR — that's someone who becomes a reliable project member over years.
An LLM-assisted PR from that same contributor? Even if the code is correct, you've learned nothing about the person. You can't evaluate whether they're trustworthy, whether they'll stick around, whether they understand the project deeply enough to make architectural decisions.
> "Why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?"
It's a sharp question. If the output is what matters, and an LLM can produce equivalent output, the economics of volunteer code review don't work.
Why This Matters Beyond Zig
This argument isn't just about Zig. It touches on something every open source project and engineering team is wrestling with right now:
When AI generates code, what does code review actually accomplish?
For many teams, the answer has shifted: code review is no longer about catching bugs or improving code quality. It's about knowledge transfer, team cohesion, and ensuring contributors understand what they're building. If an LLM wrote the code, none of those goals are served by reviewing it in the traditional sense.
The uncomfortable implication: if you're a senior engineer and you use an LLM to write a PR, your junior colleague's code review is now performing a different job than it was designed for. You're both going through the motions.
The Counterargument: LLMs Lower the Barrier to Contribution
Not everyone agrees. Critics of blanket bans point out:
LLMs help non-native English speakers participate — Cro actually addresses this directly in the Zig policy: "English is encouraged, but not required. You are welcome to post in your native language and rely on others to have their own translation tools of choice to interpret your words." Translation tools, specifically, are fine. LLMs for code generation are not.
LLMs help contributors who are learning — A junior developer using an LLM to write their first PR might learn more from the code review than from the code itself. Cro's response: the problem isn't the LLM itself, it's what it does to the contributor's growth trajectory. If they're learning, they should be writing the code themselves, without an LLM intermediary.
Bans are unenforceable — Unless you do deep code analysis on every PR, there's no practical way to detect LLM authorship. The Zig policy is a cultural norm, not a technical enforcement mechanism.
The Bun Fork Is the Interesting Test Case
The most concrete example of this tension: Bun — the JavaScript runtime and bundler, written in Zig — operates its own fork of Zig. Bun recently achieved a 4x performance improvement on compilation after adding parallel semantic analysis and multiple codegen units to the LLVM backend.
Bun explicitly does NOT upstream these changes to Zig. Why? Because Zig has a strict ban on LLM-authored contributions, and Bun's internal development uses AI assistance heavily.
This creates an unusual dynamic: the most prominent Zig project is actively diverging from upstream partly because of the AI policy. It's a live experiment in what happens when an open source project draws a hard line.
What This Means for AI Tool Builders
If you're building AI coding tools — like the ones we track at NeuralStackly — this debate is relevant to your product decisions:
Agents that generate code vs. agents that teach — The tools that will win in open source communities are the ones that help contributors grow, not just ship code faster. The value proposition of "ship faster" runs directly into projects like Zig.
Audit trails and attribution — As LLM-generated code proliferates, projects will develop norms around disclosure. The implicit norm right now is: if you used AI to write significant portions of a PR, say so.
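If such a disclosure norm takes hold, the most natural home for it is the commit message itself, where projects already record provenance via trailers like Signed-off-by. A hypothetical sketch (the `Assisted-by:` trailer name and its values are illustrative, not an established convention):

```
Add parallel parsing to the tokenizer

Splits input into chunks and tokenizes them concurrently,
merging results in source order.

Assisted-by: LLM (code generation for the chunk-merge logic)
Signed-off-by: Jane Contributor <jane@example.com>
```

A machine-readable trailer like this would let maintainers filter or flag PRs by authorship style without any detection heuristics, shifting the burden from policing to honest disclosure.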
The sustainability question — Every project will have to answer: are we optimizing for code output or contributor growth? These goals align sometimes and conflict other times. Knowing which one you're optimizing for shapes your culture.
The Takeaway
The Zig anti-AI policy is worth taking seriously because it's not based on LLM-phobia or tech conservatism. It's based on a coherent theory of what makes open source projects work: sustained contributor investment compounds over time.
Whether you agree with the policy or not, "contributor poker" is a useful mental model for thinking about code review, project sustainability, and the real value that human reviewers provide.
The most honest question any open source maintainer can ask right now: "If this PR was written by an LLM, am I still getting value from reviewing it?" If the answer is no, something has shifted in what code review is for.
What do you think about LLM-assisted contributions? Share your perspective with the NeuralStackly community.