Why Model Context Matters: MCP as a New Control Layer for AI and Bot Security

Roughly 80% of AI agents don’t properly declare who they are. They don’t identify themselves, don’t explain their purpose, and don’t follow any kind of standard protocol when making requests. That’s a massive problem when those same agents are browsing websites, making purchases, and pulling data on behalf of real users.
As one DataDome analysis put it, agents powered by large language models now “browse, transact, and make decisions on behalf of humans.” The line between human and machine behavior has gotten so thin that traditional security tools can’t tell the difference. And the old approach of classifying traffic as either “bot or not” just doesn’t work anymore.
The company makes a good point about where the focus needs to shift. Historically, security was about identity. Who’s making the request? But not all bots are bad, and not all humans are good. The real question now is intent. What is this agent trying to do, and should it be allowed to do it?
That shift is exactly where MCP fits in.
What MCP Actually Does
MCP (Model Context Protocol) gives AI agents a standardized way to carry contextual metadata with every request. Think of it as a digital passport for AI. Each agent can declare who it is, what permissions it has, and what it’s trying to accomplish.
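To make the "digital passport" idea concrete, here is a minimal sketch of what declared agent context and a permission check might look like. The field names (`agent_id`, `operator`, `permissions`, `intent`) are illustrative, not taken from the MCP specification:

```python
# A hypothetical agent "passport": identity, grants, and declared purpose.
# Field names are illustrative, not drawn from the MCP spec.
agent_context = {
    "agent_id": "inventory-assistant-01",
    "operator": "example.com",
    "permissions": ["read:catalog", "write:orders"],
    "intent": "restock low-inventory SKUs",
}

def is_permitted(context: dict, required: str) -> bool:
    """Check whether the declared context grants a required permission."""
    return required in context.get("permissions", [])

print(is_permitted(agent_context, "read:catalog"))   # True
print(is_permitted(agent_context, "delete:users"))   # False
```

The value of carrying this metadata on every request is that a server can make an allow/deny decision without guessing at the caller's identity from traffic patterns alone.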
How the Two-Way Channel Works
As DataDome explains in their MCP security overview, LLM-based agents can connect to external data sources and tools in a controlled, interactive way. Instead of one-off API calls, MCP creates a bidirectional communication channel. Servers can pause mid-task and ask the agent for more instructions as things develop. Through a feature called Sampling, MCP-equipped agents handle iterative workflows with live context, which makes them far more useful for multi-step jobs.
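That pause-and-ask pattern can be sketched in plain Python without any MCP SDK. Here the "server" loop stops before each step and asks the connected agent whether to proceed; `toy_agent` stands in for the LLM's reply, and all names are hypothetical:

```python
from typing import Callable

def run_task(steps: list[str], ask_agent: Callable[[str], str]) -> list[str]:
    """Simulate an MCP-style sampling loop: the server pauses mid-task
    and asks the connected agent for a decision before continuing."""
    log = []
    for step in steps:
        # Each iteration is a server -> agent round trip, not a one-off call.
        decision = ask_agent(f"Proceed with step '{step}'?")
        log.append(f"ran {step}" if decision == "yes" else f"skipped {step}")
    return log

def toy_agent(prompt: str) -> str:
    """Stand-in for the LLM agent's answer; a real agent would generate this."""
    return "no" if "delete" in prompt else "yes"

print(run_task(["fetch", "transform", "delete-temp"], toy_agent))
# → ['ran fetch', 'ran transform', 'skipped delete-temp']
```

The real protocol carries richer context in both directions, but the shape is the same: the workflow adapts step by step instead of being decided up front.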

Get it right and MCP can enforce usage policies in real time, flag unauthorized or rogue agents, improve audit trails, and manage secure workflows between multiple AI systems. That’s not a small thing.
The Numbers Behind MCP’s Growth
And adoption has been fast. MCP server downloads went from around 100,000 in November 2024 to over 8 million by April 2025. There are now more than 10,000 active public MCP servers in the wild, with deployments at companies like Block, Bloomberg, and Amazon. After Anthropic donated the protocol to the Linux Foundation’s Agentic AI Foundation in December 2025, with backing from OpenAI, Google, Microsoft, AWS, and Cloudflare, MCP has become the de facto interface for connecting LLMs to organizational systems. Gartner projects that by 2026, 75% of API gateway vendors will have MCP features built in.
Where MCP Creates New Attack Surface
A richer context also means a bigger target. MCP servers sometimes include repositories of prompts or tool definitions that guide how AI agents behave. If an attacker gets access to those, they can perform prompt injection and instruct the agent to act on their behalf.

Credentials as Targets
Then there’s the credential problem. MCP servers store API keys and credentials for external services, turning them into high-value targets for credential leakage or stuffing attacks.
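One common mitigation is to resolve credentials at call time from the environment (or a secrets manager) instead of persisting them in the server's config, so a leaked config file doesn't expose live keys. This is a generic sketch, not a feature of any particular MCP implementation, and the variable naming convention is made up:

```python
import os

def get_service_key(service: str) -> str:
    """Fetch a credential from the environment at call time rather than
    storing it in the MCP server's configuration. The SERVICE_API_KEY
    naming convention here is illustrative."""
    key = os.environ.get(f"{service.upper()}_API_KEY")
    if key is None:
        raise RuntimeError(f"no credential configured for {service}")
    return key
```

Keeping keys out of the server's own state shrinks the payoff for credential-leakage and stuffing attacks against the server itself.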
Open Repos, Open Risk
The open nature of MCP makes this worse. Anyone can download an MCP server from public repositories. A compromised or malicious implementation could impersonate an official tool, siphon off data, or inject payloads while looking like normal traffic.
Threat researchers have flagged several specific concerns. If MCP tokens or metadata aren’t cryptographically verified, attackers could spoof an agent’s identity or hijack its context, similar to past JWT exploits. Poorly protected servers can also enable “tool poisoning,” where a server silently changes its behavior mid-operation without the agent or user noticing.
What Researchers Are Saying
A widely shared security analysis from April 2025 put it bluntly: chained tool invocations can exfiltrate files, and lookalike tools can silently replace trusted ones. And 96% of IT experts and security leaders say they’re worried about the escalating risks that AI agents bring.
The bottom line is that controlling what an AI does isn’t enough. You need to control what reaches the MCP server in the first place.
Stopping Bad Agents Before They Reach MCP Servers
As MCP adoption grows, organizations need a way to enforce policy before AI agents ever reach sensitive servers. The most effective approach is an external control layer that evaluates traffic based on behavior and intent, rather than trusting whatever an agent says about itself in its MCP metadata.
That’s where bot and AI protection platforms come in. They act as a traffic control plane for humans, bots, and autonomous agents across web, mobile, and API environments, including MCP endpoints. Every request gets inspected in real time to determine whether the interaction aligns with acceptable use, regardless of whether it comes from a browser, a script, or an LLM-powered agent.
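The core of such a control plane is a decision function that runs before any request reaches the MCP server. Here is a toy version that combines a client fingerprint with declared intent; the signals, field names, and verdicts are illustrative and do not represent any vendor's actual API:

```python
def evaluate_request(request: dict, blocklist: set[str],
                     allowed_intents: set[str]) -> str:
    """Toy upstream control layer: decide allow/challenge/block before a
    request ever reaches the MCP server. Fields are hypothetical."""
    if request.get("fingerprint") in blocklist:
        return "block"       # known-bad automated client
    if request.get("intent") not in allowed_intents:
        return "challenge"   # unrecognized purpose: require verification
    return "allow"

policy = {"blocklist": {"fp-scraper-777"},
          "allowed_intents": {"checkout", "search"}}
print(evaluate_request({"fingerprint": "fp-abc", "intent": "search"}, **policy))
# → allow
```

A production system would score behavioral signals over time rather than checking static sets, but the placement is the point: the verdict happens upstream, so a spoofed or poisoned agent never gets to present its metadata to the MCP server at all.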
Fingerprinting Agents by Behavior
Platforms like DataDome apply machine learning and behavioral analysis at scale, processing trillions of signals per day and making decisions in milliseconds. Instead of a binary human-versus-bot distinction, they fingerprint automated clients and AI agents to figure out intent, spot abuse patterns, and stop malicious activity before it gets anywhere near MCP infrastructure.
This kind of intent-first enforcement matters even more in an MCP-driven world. Spoofed agents, hijacked context, and poisoned tools can all blend into normal-looking traffic if there’s nothing filtering requests upstream. By validating what’s allowed to interact with MCP infrastructure in the first place, organizations shrink the blast radius when things go wrong.
What This Looks Like in Practice
In practice, that means blocking LLM-powered scrapers, automated fraud, credential abuse, and other agentic threats before they can exploit MCP workflows. Approved AI use cases still operate at scale without friction. It’s an external control layer that complements MCP itself, helping businesses manage AI-driven traffic in an environment where automation is quickly becoming the default.