Model Context Protocol — MCP — has quickly become the standard for connecting AI tools to external data sources. If you've heard the term floating around in conversations about AI agents, this is why. MCP is the connective tissue that lets an AI model talk to your CRM, pull from your databases, send emails on your behalf, and interact with dozens of other business tools.
It's powerful. It's flexible. And it's increasingly easy to use, which is exactly why security researchers are starting to raise alarms.
Cisco recently released a report warning that MCP has created a vast and often unmonitored attack surface across organizations adopting AI agents. The core concern: AI tools can now execute processes, access databases, and push code on behalf of humans — and many organizations aren't treating that with the same security rigor they'd apply to any other system with that level of access.
For associations experimenting with AI agents (and many are), this is worth understanding before it becomes a problem.
Why MCP Took Off So Fast
To understand the security picture, it helps to understand what makes MCP different from a standard API.
APIs have been around for decades. They let two systems talk to each other in a predefined way — System A sends a specific request, System B sends a specific response. The rules of engagement are documented ahead of time and don't change unless someone updates them.
MCP works differently. It has a built-in discovery mechanism, which means an AI model connecting to an MCP server can ask: "What tools do you have available? What can I do with them?" The server responds with a list of capabilities and descriptions, and the AI figures out on its own which tools to use based on what you've asked it to do.
Nothing is hard-coded. The AI reads the menu and makes its own choices. That's what makes MCP so useful — and what makes it a different kind of security consideration. When an AI can dynamically discover and use tools without a human selecting each one, the surface area for things to go wrong expands significantly.
The Attack Vectors That Should Be on Your Radar
Cisco's report outlines several specific ways MCP can be exploited. These aren't theoretical — they reflect real patterns already emerging in the wild.
Prompt injection through data sources. Malicious instructions can be hidden inside documents, web pages, or database records that get pulled in through MCP. The AI doesn't distinguish between legitimate content and embedded instructions — it treats everything as trusted context. That means a poisoned document could instruct the AI to exfiltrate data, trigger unauthorized actions, or change its behavior in ways the user never intended or even noticed.
Fake or poisoned tools. Cisco highlighted a case where an attacker published a malicious package designed to look like a legitimate MCP integration for the Postmark email platform. It worked as expected on the surface, but secretly BCC'd every email sent through the agent to an attacker-controlled address. AI agents routinely handle sensitive communications — invoices, password resets, internal memos — so this kind of silent interception can harvest enormous amounts of data before anyone catches it.
Supply chain attacks. If you remember SolarWinds, the pattern here is familiar. In 2020, hackers compromised SolarWinds' software update system, which meant thousands of organizations — including U.S. government agencies and Fortune 500 companies — unknowingly installed a backdoor through a routine update from a trusted source. Cisco is warning that the same playbook could target AI infrastructure. A compromised signing key at a major model hub or tool registry could distribute malicious updates to every organization that depends on it.
Consent fatigue. MCP clients often show permission dialogues — "Allow this tool to run?" Attackers can exploit this by chaining a series of harmless, read-only tool calls that build a pattern of trust. The user clicks "allow" ten times on innocuous requests and then doesn't catch the eleventh one that actually matters. It's the AI equivalent of clicking "I agree" on terms of service without reading them, except the consequences can be immediate and severe.
Memory attacks. As AI companies improve their defenses against prompt injection, Cisco predicts attackers will go deeper — targeting the vector databases where AI stores learned information for later use. Tampering with these long-term memory stores could influence AI behavior across multiple sessions, making the manipulation harder to detect and longer-lasting.
The Risk You Might Not Be Thinking About
The Cisco report focuses on adversarial attacks — hackers actively trying to exploit MCP. But there's a more mundane risk that many associations are already exposed to without realizing it.
When you connect your CRM or data source to an AI tool like ChatGPT or Claude via MCP, you're giving that tool access to your data. And not in a limited, carefully scoped way — you're opening the door to everything that MCP server exposes.
Here's a simple test: can you go back into your chat history with an AI tool and see your member data, your internal documents, or your pipeline information in past conversations? If yes, that data is stored in the AI vendor's system. It has to be — that's how those tools provide continuity and context across sessions.
That doesn't mean anyone is doing something malicious with it. But it does mean your data is sitting in an environment you don't control, managed by a company whose primary incentive is to make their AI models better than the competition. The opportunity for something to go wrong exists whether or not anyone intends it to.
Associations sometimes underestimate the probability of being targeted because they assume they're too small to attract attention. But hackers don't exclusively go after the biggest targets with the most valuable data. They go after the most vulnerable targets at scale. An association with open MCP connections and minimal security oversight is exactly the kind of target that automated attacks are designed to find.
Practical Guardrails for Your Team
None of this means associations should stop experimenting with MCP or AI agents. The technology is genuinely valuable, and the organizations that learn to use it well will have real advantages. But the speed of adoption has outpaced the security practices around it, and that gap needs to close.
Here are some concrete steps to share with your team:
- Use a demo account for unfamiliar tools. When testing a new AI tool or MCP integration, don't authenticate with your organizational email. Spin up a separate Gmail or Outlook account that isn't connected to any real data. This lets you experiment freely without risking exposure. If the tool turns out to be legitimate and useful, you can connect it to real systems after proper vetting.
- Be thoughtful about what you connect. Before plugging an MCP server into any AI tool, ask what data it exposes and to whom. Connecting your entire CRM to a third-party AI tool is a fundamentally different risk profile than connecting a single, scoped dataset. Understand what you're opening up before you open it.
- Keep your browser updated. This sounds basic, but outdated browsers remain one of the most common attack surfaces. Many people turn off auto-updates and sit on old versions for years. Modern browsers have meaningful security protections built in — but only if they're current.
- Don't download software from unvetted sources. Most AI tools are web-based, which offers some protection. But if a tool asks you to install something locally, be sure you have a good reason to trust the vendor before doing so.
- Consider an intermediary layer. Instead of connecting AI tools directly to your data sources via MCP, an AI data platform can act as a buffer. It sends only small, specific pieces of data to the AI model rather than providing full, open access. It also creates detailed logs of every interaction, so you have full traceability if something goes wrong. Think of it as a supervisory layer between your data and the AI tools that want to use it.
- Treat MCP servers like you'd treat any critical system. Agent tool registries, context brokers, and MCP servers should be subject to the same security standards as your API gateways or databases. If your IT team wouldn't give a new vendor unrestricted database access without a review, they shouldn't give an AI tool unrestricted MCP access either.
Moving Fast and Moving Smart
The associations that will get the most out of AI agents are the ones that move quickly without being careless. That's a real balance to strike, especially when new tools are launching every week and the pressure to experiment is high.
Education is the foundation here. The more your team understands about how MCP works, what data it exposes, and what the real risks look like, the less likely they are to make the kind of casual decision that creates a vulnerability. This isn't about slowing down — it's about knowing what you're doing well enough to move confidently.
The technology is worth adopting. The security practices just need to catch up.
March 5, 2026