
AI Agents in Customer Service: What CX Automation Gets Right (and Wrong)

Klarna’s AI agent handles two-thirds of all customer chats and saved the company $60 million. Zendesk processes five billion automated resolutions per year. Yet 47% of consumers say their biggest frustration is not being able to reach a real person. AI agents in customer service work; the problem is how most companies deploy them.

February 10, 2026 · 9 min · Paperclipped

Moltbook: What a Social Network for AI Agents Reveals About the Future

Moltbook is a Reddit clone where AI agents post, comment, and build religions. It claims 1.6 million users but fewer than 1% are active, it leaked 1.5 million API keys through a misconfigured database, and MIT Technology Review called it peak AI theater. Here is what it actually tells us about agent-to-agent communication.

February 10, 2026 · 9 min · Paperclipped

MCP Under Attack: CVEs, Tool Poisoning, and How to Secure Your AI Agent Integrations

The Model Context Protocol connects 17,000+ servers to AI agents. In January 2026, three CVEs in Anthropic’s own Git MCP server proved what security researchers had warned about: MCP is a ripe attack surface. This post covers the real vulnerabilities, how tool poisoning works, the OWASP MCP Top 10, and what you can do about it today.

February 9, 2026 · 10 min · Paperclipped

OpenAI Says AI Agents Must Be Managed Like Employees, Not Software

OpenAI built its Frontier platform around a single idea: AI agents need onboarding, permissions, performance reviews, and clear reporting lines, just like the people sitting next to them. This framing shift from ‘deploy software’ to ‘manage a workforce’ has real consequences for how IT, HR, and compliance teams operate.

February 9, 2026 · 9 min · Paperclipped

Goldman Sachs and Anthropic Build AI Agents for Wall Street’s Back Office

Goldman Sachs spent six months embedding Anthropic engineers inside its technology teams to build autonomous agents for trade reconciliation, client onboarding, and compliance. The agents run on Claude Opus 4.6 and manage operations across $2.5 trillion in assets under supervision. Here is what the deployment actually looks like, why Goldman picked Anthropic, and what it signals for the rest of finance.

February 9, 2026 · 8 min · Paperclipped

Singapore’s Agentic AI Governance Framework: What the First Global Playbook Gets Right

On January 22, 2026, Singapore’s IMDA released the world’s first governance framework built specifically for AI agents. It covers four dimensions: risk bounding, human accountability, technical controls, and end-user responsibility. Unlike the EU AI Act, compliance is voluntary. Here is what the framework says, where it diverges from European regulation, and why DACH companies should care.

February 9, 2026 · 10 min · Paperclipped

Anthropic’s Claude Legal Plugin: What It Actually Does to Law Firms

Anthropic launched a legal plugin for Claude Cowork that wiped $285 billion off software stocks in two days. The plugin automates contract review, NDA triage, and compliance workflows for in-house counsel. Here is what it actually does, who it threatens, and where the hype outpaces reality.

February 9, 2026 · 9 min · Paperclipped

ZombieAgent: The Zero-Click Exploit That Hijacks AI Agents Through Memory Poisoning

Radware’s ZombieAgent is the first zero-click indirect prompt injection that persists in an AI agent’s long-term memory, exfiltrating data character by character through pre-constructed URLs. All of it happens inside OpenAI’s cloud, invisible to enterprise security tools. This post breaks down the full attack chain, the failed defenses, and what memory poisoning means for every organization deploying AI agents.

February 9, 2026 · 10 min · Paperclipped

APEX-Agents Benchmark: Why AI Models Score Under 25% on Real Professional Tasks

Mercor’s APEX-Agents benchmark gave frontier AI models 480 real professional tasks from investment banking, management consulting, and corporate law. The top model, Claude Opus 4.6, scored 29.8%. GPT-5.2 hit 23%. No model cracked 34% in any single category. The benchmark directly challenges claims that AI agents are ready to replace knowledge workers.

February 9, 2026 · 8 min · Paperclipped

How to Build Your First AI Agent: A Step-by-Step Tutorial

Most ‘build an AI agent’ tutorials produce a chatbot with extra steps. This one builds a real agent that reasons, calls tools, and manages state. Covers LangGraph, OpenAI Agents SDK, and CrewAI with working code, then breaks down the five mistakes that kill every first agent.

February 9, 2026 · 11 min · Paperclipped
