Model Context Protocol — MCP — has quickly become the standard for connecting AI tools to external data sources. If you've heard the term floating around in conversations about AI agents, this is why. MCP is the connective tissue that lets an AI model talk to your CRM, pull from your databases, send emails on your behalf, and interact with dozens of other business tools.
It's powerful. It's flexible. And it's increasingly easy to use, which is exactly why security researchers are starting to raise alarms.
Cisco recently released a report warning that MCP has created a vast and often unmonitored attack surface across organizations adopting AI agents. The core concern: AI tools can now execute processes, access databases, and push code on behalf of humans — and many organizations aren't treating that with the same security rigor they'd apply to any other system with that level of access.
For associations experimenting with AI agents (and many are), this is worth understanding before it becomes a problem.
To understand the security picture, it helps to understand what makes MCP different from a standard API.
APIs have been around for decades. They let two systems talk to each other in a predefined way — System A sends a specific request, System B sends a specific response. The rules of engagement are documented ahead of time and don't change unless someone updates them.
MCP works differently. It has a built-in discovery mechanism, which means an AI model connecting to an MCP server can ask: "What tools do you have available? What can I do with them?" The server responds with a list of capabilities and descriptions, and the AI figures out on its own which tools to use based on what you've asked it to do.
Nothing is hard-coded. The AI reads the menu and makes its own choices. That's what makes MCP so useful — and what makes it a different kind of security consideration. When an AI can dynamically discover and use tools without a human selecting each one, the surface area for things to go wrong expands significantly.
Cisco's report outlines several specific ways MCP can be exploited. These aren't theoretical — they reflect real patterns already emerging in the wild.
Prompt injection through data sources. Malicious instructions can be hidden inside documents, web pages, or database records that get pulled in through MCP. The AI doesn't distinguish between legitimate content and embedded instructions — it treats everything as trusted context. That means a poisoned document could instruct the AI to exfiltrate data, trigger unauthorized actions, or change its behavior in ways the user never intended or even noticed.
Fake or poisoned tools. Cisco highlighted a case where an attacker published a malicious package designed to look like a legitimate MCP integration for the Postmark email platform. It worked as expected on the surface, but secretly BCC'd every email sent through the agent to an attacker-controlled address. AI agents routinely handle sensitive communications — invoices, password resets, internal memos — so this kind of silent interception can harvest enormous amounts of data before anyone catches it.
Supply chain attacks. If you remember SolarWinds, the pattern here is familiar. In 2020, hackers compromised SolarWinds' software update system, which meant thousands of organizations — including U.S. government agencies and Fortune 500 companies — unknowingly installed a backdoor through a routine update from a trusted source. Cisco is warning that the same playbook could target AI infrastructure. A compromised signing key at a major model hub or tool registry could distribute malicious updates to every organization that depends on it.
Consent fatigue. MCP clients often show permission dialogues — "Allow this tool to run?" Attackers can exploit this by chaining a series of harmless, read-only tool calls that build a pattern of trust. The user clicks "allow" ten times on innocuous requests and then doesn't catch the eleventh one that actually matters. It's the AI equivalent of clicking "I agree" on terms of service without reading them, except the consequences can be immediate and severe.
Memory attacks. As AI companies improve their defenses against prompt injection, Cisco predicts attackers will go deeper — targeting the vector databases where AI stores learned information for later use. Tampering with these long-term memory stores could influence AI behavior across multiple sessions, making the manipulation harder to detect and longer-lasting.
The Cisco report focuses on adversarial attacks — hackers actively trying to exploit MCP. But there's a more mundane risk that many associations are already exposed to without realizing it.
When you connect your CRM or data source to an AI tool like ChatGPT or Claude via MCP, you're giving that tool access to your data. And not in a limited, carefully scoped way — you're opening the door to everything that MCP server exposes.
Here's a simple test: can you go back into your chat history with an AI tool and see your member data, your internal documents, or your pipeline information in past conversations? If yes, that data is stored in the AI vendor's system. It has to be — that's how those tools provide continuity and context across sessions.
That doesn't mean anyone is doing something malicious with it. But it does mean your data is sitting in an environment you don't control, managed by a company whose primary incentive is to make their AI models better than the competition. The opportunity for something to go wrong exists whether or not anyone intends it to.
Associations sometimes underestimate the probability of being targeted because they assume they're too small to attract attention. But hackers don't exclusively go after the biggest targets with the most valuable data. They go after the most vulnerable targets at scale. An association with open MCP connections and minimal security oversight is exactly the kind of target that automated attacks are designed to find.
None of this means associations should stop experimenting with MCP or AI agents. The technology is genuinely valuable, and the organizations that learn to use it well will have real advantages. But the speed of adoption has outpaced the security practices around it, and that gap needs to close.
Here are some concrete steps to share with your team:
The associations that will get the most out of AI agents are the ones that move quickly without being careless. That's a real balance to strike, especially when new tools are launching every week and the pressure to experiment is high.
Education is the foundation here. The more your team understands about how MCP works, what data it exposes, and what the real risks look like, the less likely they are to make the kind of casual decision that creates a vulnerability. This isn't about slowing down — it's about knowing what you're doing well enough to move confidently.
The technology is worth adopting. The security practices just need to catch up.