Clawdbot went from zero to 60,000 GitHub stars over the weekend of January 24-25, 2026. Within 72 hours, security researcher Jamieson O’Reilly of red-teaming firm Dvuln searched Shodan for the gateway’s default HTML title tag and found over 900 unauthenticated control panels sitting wide open on the public internet. Each one was leaking Anthropic API keys, Telegram bot tokens, Slack OAuth secrets, and complete conversation histories in plaintext. By the time the project rebranded to Moltbot on January 27, RedLine, Lumma, and Vidar had already deployed Clawdbot-specific stealer modules targeting the ~/.clawdbot/ config directory.
This was not a sophisticated attack chain. It was a default configuration meeting the real world.
The Localhost Fallacy: How 900 Gateways Ended Up Naked on the Internet
Clawdbot’s gateway uses cryptographic device authentication via challenge-response protocols for remote connections. But it auto-approves any connection originating from 127.0.0.1, because localhost is supposed to be you. That design works fine on a laptop. It falls apart when users deploy the gateway behind a reverse proxy on the same server, which is the standard production pattern for any web-accessible service.
When gateway.trustedProxies stays empty (the default), the gateway ignores X-Forwarded-For headers and reads only the socket address. Every connection forwarded by the reverse proxy arrives from the loopback interface. The gateway sees localhost. Authentication is bypassed. SlowMist warned on January 26 that hundreds of gateways were running this way, exposing everything stored in the configuration files.
The exposed data was not limited to chat logs. Configuration dumps contained:
- Anthropic Claude API keys (each worth hundreds of dollars in usage credits)
- Telegram bot tokens (full control over linked bots)
- Slack OAuth secrets (workspace-level access)
- Discord integration tokens
- Device-pairing metadata and signing keys
Unlike browser password stores, which are encrypted with DPAPI on Windows and the Keychain on macOS, Clawdbot stores credentials in plaintext JSON and Markdown files. One Shodan query returned the keys to the kingdom.
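To see why plaintext storage is so attractive to stealers, consider how little work is needed to harvest it. The sketch below scans a config dump for credential-shaped strings; the regex patterns are simplified illustrations, not the exact formats of any vendor's tokens.

```python
import json
import re

# Illustrative token-shaped patterns (simplified; real token formats vary).
PATTERNS = {
    "anthropic_api_key": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "telegram_bot_token": re.compile(r"\d{8,10}:[A-Za-z0-9_-]{35}"),
    "slack_token": re.compile(r"xox[abp]-[A-Za-z0-9-]{10,}"),
}

def find_secrets(config_text: str) -> dict[str, list[str]]:
    """Scan a plaintext config dump for credential-shaped strings."""
    return {name: pat.findall(config_text)
            for name, pat in PATTERNS.items() if pat.findall(config_text)}

# A hypothetical plaintext config of the kind the exposed gateways leaked.
sample = json.dumps({
    "anthropic": {"apiKey": "sk-ant-api03-EXAMPLEKEYEXAMPLEKEY"},
    "telegram": {"botToken": "123456789:AAExampleExampleExampleExampleExamp"},
})
print(find_secrets(sample))
```

A stealer module needs little more than this plus a file grabber, which is why three malware families shipped one within 48 hours.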
Why This Was Worse Than a Data Leak
Clawdbot agents are not passive data stores. They can actively send messages, run tools, and execute commands across Telegram, Slack, Discord, and WhatsApp. An attacker with access to the control layer could impersonate the operator, inject rogue messages into ongoing conversations, and siphon data through trusted integrations without triggering any anomaly detection. The agent was already authorized. The attacker just borrowed its identity.
Matvey Kukuy, CEO of Archestra AI, demonstrated a prompt injection attack in about five minutes by sending a crafted email to an address monitored by a Clawdbot instance. The agent read the email, executed the injected instructions, and exfiltrated a private SSH key. No vulnerability exploitation required. Just a well-crafted message to a tool that reads everything.
Infostealers Were Faster Than Security Teams
The speed of exploitation was striking. Guardz threat intelligence confirmed that three major infostealer families added Clawdbot-specific modules within 48 hours of the tool going viral:
RedLine Stealer added the ~/.clawdbot/ directory to its collection targets, harvesting configuration files, API keys, and conversation histories alongside its usual browser credential extraction.
Lumma Stealer deployed a module that specifically targeted Clawdbot’s plaintext credential storage, packaging stolen API keys for resale on dark web marketplaces.
Vidar Stealer extended its file-grabber component to include Clawdbot configuration paths on Windows, macOS, and Linux.
VentureBeat reported that these stealer families updated their target lists before most enterprise security teams even knew Clawdbot was running in their environments. The attackers identified the value of the stored credentials faster than defenders identified the risk.
By late January, Censys had catalogued 21,639 publicly accessible instances. Not all were unauthenticated, but the combination of default misconfigurations and plaintext credential storage meant that even a fraction of those instances represented thousands of compromised API keys and tokens.
The Shadow AI Problem Is Not Theoretical Anymore
The Clawdbot incident is the clearest example yet of shadow AI at enterprise scale. Employees downloaded a trending GitHub project, ran it on work machines, connected it to corporate Slack workspaces and email accounts, and nobody in IT or security knew until the breach reports started arriving.
Brandefense’s analysis framed it precisely: this is the evolution from Shadow IT to Shadow AI. The critical difference is scope. When an employee installed an unapproved SaaS tool, the blast radius was limited to whatever data they manually entered. When an employee deploys an AI agent with access to their email, messaging platforms, and cloud credentials, the agent can autonomously access, process, and transmit everything it connects to.
An IBM/Censuswide study from late 2025 found that 80% of employees at organizations with 500+ people use AI tools not sanctioned by their employer. Trend Micro research confirmed that one in five organizations deployed Clawdbot (or its later incarnations) without IT approval.
The detection gap compounds the problem. Clawdbot renamed itself three times in two months: Clawdbot, then Moltbot, then OpenClaw. Each rename changed process names, configuration directory paths, and DNS patterns. Detection rules written for clawdbot process names miss moltbot and openclaw instances entirely. CrowdStrike’s removal content pack explicitly checks for all historical names across Windows, macOS, and Linux, but most organizations do not have CrowdStrike-level detection engineering.
What the 72-Hour Window Teaches About Agent Security
The Clawdbot meltdown compressed an entire category of security failures into three days. Every lesson applies to any AI agent your organization might deploy.
Default Configurations Kill
The gateway’s trustedProxies: [] default was the root cause. Nobody exploited a bug. Nobody cracked a cipher. They connected to a service that was configured to trust them by default. This is the same pattern that exposed thousands of MongoDB instances in 2017, thousands of Elasticsearch clusters in 2019, and thousands of Kubernetes dashboards in 2020. AI agents inherit every infrastructure misconfiguration pattern we have not yet fixed.
Credentials in Plaintext Are Credentials for Everyone
Storing API keys, OAuth tokens, and signing secrets in unencrypted JSON and Markdown files on disk violates every principle of credential management. Even if the gateway had been properly authenticated, a separate application vulnerability, a filesystem traversal, or a compromised backup would expose the same credentials. Agent tooling needs to use OS-level credential stores (Keychain, DPAPI, libsecret) or dedicated secrets managers like HashiCorp Vault or AWS Secrets Manager.
Attacker Toolchains Adapt in Hours, Not Months
RedLine, Lumma, and Vidar shipped Clawdbot-specific modules within 48 hours. The economics are straightforward: Anthropic API keys sell for $50-200 on dark web markets depending on remaining credits. A single successful steal can fund the module development cost many times over. Every new AI tool with stored credentials becomes an immediate target for commodity malware.
You Cannot Secure What You Cannot See
The shadow deployment pattern is the actual threat multiplier. A known, approved Clawdbot instance can be hardened with proper proxy configuration, authentication enforcement, and credential encryption. An unknown instance on a developer’s laptop, connected to the company Slack workspace via a personal API token, is invisible until the damage surfaces.
Hardening Checklist for Security Teams
If Clawdbot, Moltbot, or OpenClaw is running anywhere in your environment, Bitdefender, CrowdStrike, and Lasso Security all published detection and remediation guidance. The common thread:
Scan for all three names. Check for .clawdbot, .moltbot, and .openclaw config directories. Monitor TCP ports 18789 and 18793. Query DNS logs for clawdbot.ai, moltbot.ai, and openclaw.ai.
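The checks above can be triaged locally with a short script. The directory names and port numbers come from the public advisories cited here; everything else (function names, timeout values) is illustrative and should be adapted to your environment.

```python
import socket
from pathlib import Path

# Indicators from public advisories: config dirs for all three project
# names, and the gateway's known TCP ports.
CONFIG_DIRS = [".clawdbot", ".moltbot", ".openclaw"]
GATEWAY_PORTS = [18789, 18793]

def find_config_dirs(home: Path = Path.home()) -> list[Path]:
    """Return any Clawdbot/Moltbot/OpenClaw config dirs under `home`."""
    return [home / d for d in CONFIG_DIRS if (home / d).is_dir()]

def open_gateway_ports(host: str = "127.0.0.1") -> list[int]:
    """Probe the known gateway ports on a single host."""
    hits = []
    for port in GATEWAY_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                hits.append(port)
    return hits

if __name__ == "__main__":
    print("config dirs found:", find_config_dirs())
    print("open gateway ports:", open_gateway_ports())
```

For fleet-wide coverage, the same indicators belong in your EDR queries and DNS log searches rather than a per-host script.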
Set gateway.trustedProxies explicitly. If running behind a reverse proxy, configure the trusted proxy IP addresses so the gateway can read the real client IP from X-Forwarded-For headers.
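A corrected configuration might look like the fragment below. The gateway.trustedProxies key is the one named in the advisories; the surrounding file layout and the example proxy address are assumptions for illustration.

```json
{
  "gateway": {
    "trustedProxies": ["10.0.0.5"],
    "bind": "127.0.0.1"
  }
}
```

With the reverse proxy's address listed, the gateway can distinguish a forwarded remote client from a genuinely local one, and requests from any other origin still require device authentication.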
Rotate every exposed credential immediately. Any Anthropic, OpenAI, Telegram, Slack, or Discord token stored in a Clawdbot config file should be considered compromised. Revoke and regenerate.
Block the ClawHub marketplace. Community skills from unvetted marketplaces are indistinguishable from supply chain attacks until audited.
Update your AI agent policy. If your security policy does not explicitly address employee-deployed AI agents, the Clawdbot incident is the case study that justifies writing one.
The Clawdbot breach was not a one-off. It was a preview. Every AI agent framework that stores credentials, connects to messaging platforms, and executes actions on behalf of users will face the same attack surface. The 72-hour window between viral adoption and active exploitation is now the standard timeline defenders need to beat.
Frequently Asked Questions
What happened in the Clawdbot security breach?
Clawdbot went viral over the weekend of January 24-25, 2026. Within 72 hours, security researchers found over 900 unauthenticated control gateways exposed on the internet, leaking Anthropic API keys, Telegram tokens, Slack OAuth secrets, and conversation histories in plaintext. The root cause was a default configuration that auto-approved localhost connections; combined with reverse-proxy deployments, which made every forwarded request appear to come from localhost, that default bypassed authentication entirely.
How did infostealers target Clawdbot so quickly?
RedLine, Lumma, and Vidar stealer families added Clawdbot-specific modules within 48 hours of the tool going viral. These modules targeted the ~/.clawdbot/ directory structure to harvest API keys, OAuth tokens, and conversation histories stored in plaintext JSON and Markdown files. The economics were clear: stolen API keys have immediate resale value on dark web markets.
What is the shadow AI risk from Clawdbot?
Employees deployed Clawdbot on work machines and connected it to corporate Slack, email, and messaging accounts without IT approval. An IBM study found 80% of employees at organizations with 500+ people use unsanctioned AI tools. Unlike traditional shadow IT where blast radius is limited to manually entered data, an AI agent can autonomously access and transmit everything it connects to.
How can security teams detect Clawdbot in their environment?
Scan for config directories named .clawdbot, .moltbot, and .openclaw (the project renamed itself three times). Monitor TCP ports 18789 and 18793. Query DNS logs for clawdbot.ai, moltbot.ai, and openclaw.ai domains. CrowdStrike, Bitdefender, and Lasso Security have all published detailed detection and removal guidance covering all historical project names.
Is Clawdbot the same as OpenClaw?
Yes. The project started as Clawdbot, was renamed to Moltbot on January 27, 2026 after Anthropic filed a trademark complaint, and finally became OpenClaw on January 30, 2026. All three names refer to the same open-source AI agent. Security teams need to check for all three names because each rename changed process names, configuration paths, and detection signatures.
