Anthropic's Model Context Protocol (MCP), the open standard for AI agent-to-tool communication, has a critical security flaw that affects roughly 200,000 deployed servers. Researchers at OX Security discovered that the vulnerability enables command-execution attacks across the entire ecosystem.

Adoption of the protocol exploded. OpenAI integrated MCP in March 2025, and Google DeepMind followed. Anthropic donated the standard to the Linux Foundation in December 2025; downloads now exceed 150 million.

The architectural problem runs deep. MCP's STDIO transport mechanism permits unauthenticated code execution when clients connect to servers, so an attacker with network access can inject malicious commands. The flaw isn't an implementation bug; it's embedded in the protocol's design.
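To make the failure mode concrete, here is a minimal sketch of why an unauthenticated STDIO transport is dangerous. In that transport the client spawns the server as a child process and speaks JSON-RPC over its stdin/stdout with no authentication handshake. The configuration dictionary and the server name below are hypothetical, invented for illustration; this is not MCP SDK code.

```python
import subprocess

# Hypothetical client-side configuration naming the command to launch
# a local MCP server over the STDIO transport. The server binary name
# is made up; the appended "; echo pwned" stands in for an injected command.
untrusted_config = {"server_command": "nonexistent-mcp-server; echo pwned"}

# If an attacker-influenced command string is passed through a shell,
# the injected suffix runs even though the server binary doesn't exist --
# nothing in the transport authenticates what gets executed.
proc = subprocess.run(
    untrusted_config["server_command"],
    shell=True, capture_output=True, text=True,
)
print(proc.stdout.strip())  # -> pwned
```

The injected `echo pwned` executes regardless of the missing server binary, which is the shape of the risk when server launch commands or connection endpoints come from untrusted input.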

Anthropic's response: it's a feature, not a vulnerability. The company argues that the protocol assumes trusted environments, and that developers who deploy MCP servers on exposed networks bear the risk themselves.

Security experts disagree. Such widespread adoption means thousands of developers have likely deployed servers without understanding the threat model, and MCP now powers AI agents handling sensitive data across enterprises. A design choice that works in closed labs breaks catastrophically at scale.

The Linux Foundation must decide whether to patch the standard or maintain backward compatibility. The larger question looms: how many other AI infrastructure standards carry similar assumptions that crumble under real-world deployment?