Claude Code Source Code Leak Reveals KAIROS, Dream Mode, and Buddy
Anthropic's accidental 59.8MB source map leak reveals KAIROS daemon, Dream Mode, and a Tamagotchi-style AI pet named Buddy. Here's what the code tells us.
At 4:23 AM ET on April 1, 2026, security researcher Chaofan Shou (@Fried_rice on Twitter) discovered something unusual in the npm package for @anthropic-ai/claude-code version 2.1.88. Someone at Anthropic had accidentally published a 59.8MB source map alongside the minified production code.
The result: nearly 512,000 lines of readable TypeScript source code, now publicly available for anyone to inspect.
Anthropic later confirmed the incident, calling it "a release packaging issue caused by human error, not a security breach." They're right about the security part. No API keys, user data, or internal credentials leaked. But what did leak is arguably more valuable: a detailed blueprint of how one of the world's most sophisticated AI coding agents works under the hood.
The code reveals three major features Anthropic has been building in secret: KAIROS, an always-on background daemon; Dream Mode, which lets Claude "think" during idle time; and Buddy, a Tamagotchi-style AI pet system with 18 species and RPG-like stats.
Here's what we learned.
KAIROS: The Always-On Background Daemon
The most frequently mentioned internal system in the leaked code is KAIROS. The name appears over 150 times across the codebase. Named after the Greek concept of "the right moment," KAIROS is a persistent background daemon that runs continuously even when you're not actively using Claude Code.
According to the source code, KAIROS does three things:
1. File watching: Monitors your codebase for changes and logs events in real-time
2. Event logging: Tracks every interaction, command, and context switch
3. Memory consolidation: Runs a process called "dreaming" during idle periods
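The article doesn't reproduce the daemon's code, but the three responsibilities above can be sketched as a small event loop. This is a hypothetical illustration, not the leaked implementation: the names `KairosDaemon`, `record`, and `maybeDream` are invented for this example.

```typescript
// Illustrative sketch of a KAIROS-style daemon: events are logged as they
// arrive, and consolidation ("dreaming") fires only after an idle window.
type KairosEvent = {
  kind: "file-change" | "command" | "context-switch";
  detail: string;
  at: number; // epoch ms
};

class KairosDaemon {
  private log: KairosEvent[] = [];
  private lastActivity = 0;

  constructor(private idleMs: number) {}

  // Event logging: every interaction is appended and resets the idle clock.
  record(kind: KairosEvent["kind"], detail: string, now: number): void {
    this.log.push({ kind, detail, at: now });
    this.lastActivity = now;
  }

  // Memory consolidation runs only once the user has been idle long enough;
  // consolidated events are drained out of the hot log.
  maybeDream(now: number): KairosEvent[] | null {
    if (now - this.lastActivity < this.idleMs) return null;
    const batch = this.log;
    this.log = [];
    return batch;
  }
}
```

The interesting design point is the idle gate: the daemon does its expensive work only in the gaps between user activity, which is presumably why the leaked code ties "dreaming" to idle periods rather than running it inline.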
The daemon appears to be Anthropic's answer to a fundamental problem with current AI coding assistants: they forget everything between sessions. Every time you open a new chat, you start from zero. KAIROS is designed to fix that by maintaining a persistent memory layer that survives across sessions.
The code shows KAIROS integrating with a three-layer memory architecture:
- Layer 1: MEMORY.md: A lightweight index file that tracks what the AI knows about your project
- Layer 2: Topic files: Detailed context files fetched on-demand based on what's relevant
- Layer 3: Raw transcripts: Historical conversation logs that can be searched but aren't reloaded into context
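A minimal model of how those three layers might compose a context window, assuming the behavior the article describes (index always loaded, topics fetched on demand, transcripts searched but never reloaded). The `MemoryStore` interface and `buildContext` function are invented for this sketch:

```typescript
// Layer 1 always rides along; Layer 2 is pulled in only when relevant;
// Layer 3 contributes search results, never its raw contents.
interface MemoryStore {
  index: string[];              // Layer 1: MEMORY.md entries
  topics: Map<string, string>;  // Layer 2: topic name -> topic file body
  transcripts: string[];        // Layer 3: raw historical logs
}

function buildContext(store: MemoryStore, query: string): string[] {
  const ctx = [...store.index];
  store.topics.forEach((body, name) => {
    if (query.includes(name)) ctx.push(body); // on-demand fetch
  });
  const hits = store.transcripts.filter((t) => t.includes(query)).length;
  if (hits > 0) ctx.push(`${hits} transcript match(es) available via search`);
  return ctx;
}
```

The payoff of this shape is that context cost stays roughly constant: only the index grows with the project, while the heavy layers are paid for per-query.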
The code also references something called "Strict Write Discipline," which appears to be a set of rules preventing the AI from polluting its context window with irrelevant information. The system is aggressive about what gets written to memory and what doesn't.
This is a significant departure from how most AI coding tools work today. Instead of stuffing everything into a massive context window and hoping the model figures out what's important, KAIROS uses structured memory with explicit rules about what to keep and what to discard.
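The leak doesn't show what "Strict Write Discipline" looks like concretely, but the idea of gating what reaches memory can be sketched with a simple predicate. The `Candidate` type and `shouldWrite` function below are assumptions for illustration only:

```typescript
// A write gate: only durable, confirmed facts are persisted to memory;
// transient conversational chatter is dropped before it can pollute context.
type Candidate = {
  text: string;
  kind: "fact" | "chatter";
  confirmed: boolean;
};

function shouldWrite(c: Candidate): boolean {
  return c.kind === "fact" && c.confirmed && c.text.trim().length > 0;
}
```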
Dream Mode: Claude Thinks While You Sleep
Buried in the KAIROS code is a feature called "Dream Mode." The name is evocative and the implementation is interesting: during idle time, Claude Code enters a dreaming state where it processes recent events, consolidates memories, and develops ideas in the background.
The code shows Dream Mode doing several things:
- Reviewing recent file changes and extracting patterns
- Generating hypotheses about what the user might want to do next
- Iterating on existing solutions without user prompting
- Consolidating fragmented context into coherent mental models
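A toy version of the first and last items on that list might look like the pass below, which rolls individual file-change events up into per-directory activity counts. The `dreamPass` name and the grouping heuristic are invented here; the leaked code presumably does something far richer:

```typescript
// Consolidate fragmented file-change events into a compact summary the
// agent can reuse later, e.g. "most recent work clustered in src/".
type Change = { path: string };

function dreamPass(changes: Change[]): Map<string, number> {
  const byDir = new Map<string, number>();
  for (const c of changes) {
    const dir = c.path.split("/").slice(0, -1).join("/") || ".";
    byDir.set(dir, (byDir.get(dir) ?? 0) + 1);
  }
  return byDir;
}
```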
Think of it like REM sleep for AI. The system isn't actively responding to user queries during Dream Mode. Instead, it's processing and organizing information so it can be more helpful when you return.
This explains something users have noticed anecdotally: Claude Code sometimes seems to "know" things it shouldn't know based on the current conversation context. Dream Mode appears to be the reason. The AI has been thinking about your codebase while you were away.
The competitive implications here are significant. If Anthropic can make background thinking work reliably, it creates a moat. Other AI coding assistants would need to build similar persistent memory systems to compete. You can't just add this feature overnight. It requires deep architectural changes to how the agent stores, retrieves, and processes information.
Buddy: The Tamagotchi AI Pet
The most surprising discovery in the leak is a system called "Buddy." It's a Tamagotchi-style AI pet that lives inside Claude Code. The code references 18 different species, including a capybara, each with rarity tiers and RPG-like stats.
Each Buddy has stats like:
- Debugging: How effective the pet is at helping fix code
- Patience: Related to response quality over long sessions
- Creativity: Affects suggestion variety
- Loyalty: Unclear, possibly related to personalization depth
Buddies also have rarity tiers: Common, Uncommon, Rare, Epic, and Legendary. The code suggests users can collect different species and level them up through interaction.
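Based only on the fields the article names (species, rarity tiers, stats, leveling through interaction), a Buddy record might be shaped roughly like this. The `Buddy` interface, the `interact` function, and the every-ten-interactions leveling rule are guesses for illustration, not details from the leak:

```typescript
type Rarity = "Common" | "Uncommon" | "Rare" | "Epic" | "Legendary";

interface Buddy {
  species: string; // one of the 18 species, e.g. "capybara"
  rarity: Rarity;
  stats: { debugging: number; patience: number; creativity: number; loyalty: number };
  level: number;
}

// Leveling through interaction: each interaction trains one stat, and
// every 10 interactions bump the level. Returns a new record rather than
// mutating, so interaction history stays replayable.
function interact(b: Buddy, stat: keyof Buddy["stats"], count: number): Buddy {
  const stats = { ...b.stats };
  stats[stat] += count;
  return { ...b, stats, level: b.level + Math.floor(count / 10) };
}
```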
At first glance, this seems like a quirky side project. But the implementation is surprisingly deep. The system tracks individual Buddy personalities, remembers interactions, and evolves behavior over time. This isn't just a cosmetic overlay. The Buddy system appears to influence how Claude Code behaves at a fundamental level.
Why build this? The code doesn't say explicitly, but there are a few theories. First, gamification increases engagement. Users who form emotional connections with their AI assistant are more likely to stick with it. Second, the Buddy system could be a testbed for personality customization at scale. Anthropic might be using it to learn how to make AI agents feel more distinct and memorable.
Third, and most interesting: Buddy could be a research project on AI companionship. The code references long-term relationship building and emotional bonding patterns. Anthropic might be exploring how humans form attachments to AI systems and what that means for product design.
Claude Code Business Metrics Also Leaked
The source code also contains internal metrics that reveal the commercial scale of Claude Code:
- Annual recurring revenue (ARR): $2.5 billion
- Enterprise revenue share: 80%
- Anthropic total revenue run-rate: $19 billion annually
These numbers put Anthropic in the same revenue league as OpenAI, which reported $20 billion in annualized revenue earlier this year. The enterprise focus is also notable. Claude Code appears to be primarily a B2B product, not a consumer tool.
The leaked code shows extensive enterprise features: SSO integration, audit logging, compliance frameworks, and deployment management. Anthropic is clearly betting that the future of AI coding assistants is in corporate environments, not individual developer subscriptions.
What Competitors Can Learn From This
The leak gives companies like Cursor, Windsurf, and other AI coding assistant makers a detailed look at how Anthropic thinks about agent architecture. The key takeaways:
1. Persistent memory matters: The three-layer memory system with strict write discipline is a sophisticated approach to context management. Competitors will study this carefully.
2. Background processing is the future: KAIROS and Dream Mode suggest that the next frontier for AI agents isn't better models, but better systems for continuous thinking and memory consolidation.
3. Personality and engagement matter: The Buddy system reveals Anthropic thinking seriously about emotional connection and gamification as product differentiators.
4. Enterprise is the revenue engine: The 80% enterprise revenue split confirms that selling to companies, not individuals, is where the money is in AI coding tools.
Cursor and others now have a blueprint. They can see how Anthropic solved hard problems like persistent state, background processing, and structured memory. Whether they can execute on that blueprint is a different question. But the leak removes some of the mystery around Claude Code's architecture.
Anthropic's Response
Anthropic moved quickly to acknowledge the leak. In a statement provided to The Verge, the company said: "This was a release packaging issue caused by human error, not a security breach. No user data, API keys, or internal credentials were exposed. We've updated our release process to prevent this from happening again."
The company is correct that the leak doesn't expose anything that would allow attackers to compromise user data or Anthropic systems. But the source code itself is valuable intelligence for competitors and researchers.
As of publication, version 2.1.88 remains on npm with the source map intact. Anyone can download it and inspect the code. And even if Anthropic pulls the version from the registry, copies have already been downloaded and mirrored widely.
Key Takeaways
- Accidental leak: Anthropic published a 59.8MB source map revealing 512K lines of Claude Code's TypeScript source code
- KAIROS revealed: An always-on background daemon that watches files, logs events, and runs memory consolidation during idle time
- Dream Mode discovered: A feature where Claude processes information and develops ideas in the background while you're away
- Buddy system: A Tamagotchi-style AI pet with 18 species, rarity tiers, and stats that influence Claude Code's behavior
- Memory architecture: Three-layer system with MEMORY.md as index, topic files fetched on-demand, and raw transcripts searched but not reloaded
- Business metrics: Claude Code generates $2.5B ARR, 80% from enterprise customers; Anthropic has $19B annual run-rate
- Competitor intelligence: The leak gives Cursor and others a blueprint for building sophisticated AI agent memory systems
The Claude Code leak isn't a security disaster. But it is a rare window into how one of the world's leading AI companies is thinking about the future of AI agents. Background thinking, persistent memory, and emotional connection all appear to be core to Anthropic's strategy. Competitors now have a roadmap. The race to build the next generation of AI coding assistants just got more interesting.
About NeuralStackly
Expert researcher and writer at NeuralStackly, dedicated to finding the best AI tools to boost productivity and business growth.