There's a version of the current AI story where associations aren't really in the picture. The disruption is happening at the enterprise level, the budgets are enormous, the stakes are someone else's problem. That version is comfortable, but it's not accurate.

AI has changed cybersecurity in ways that matter to organizations of every size, including yours. The tools bad actors are using have gotten faster, more convincing, and more accessible. A small group with limited technical expertise can now execute what previously required a full team of hackers. The attacks are more automated, the phishing emails are harder to spot, and voice cloning technology has made impersonation over the phone a genuine threat. Understanding what's actually happening — and what practical steps are available — is worth the time.

What the Numbers Reflect

The data on AI-assisted attacks is worth looking at directly. According to J.P. Morgan Wealth Management research, 16% of enterprise cyberattacks are now AI-generated, and those attacks produce 24% more damage on average than traditional methods. IBM's 2025 data breach report puts the global average cost of a breach at $4.4 million. Perhaps most striking: 97% of companies that experienced AI-related security incidents didn't have adequate protections in place beforehand.

Those are enterprise figures, and associations operate at a different scale. But the exposure is real. Your organization holds member data, financial records, and in many cases the professional credentials of entire industries. That's meaningful information, and it sits inside systems that are often under-protected relative to the sensitivity of what's stored there.

J.P. Morgan projects global cybersecurity spending will reach $240 billion in 2026, with AI-driven security tools growing three to four times faster than the broader market. The security industry is scaling up its response precisely because the threat has escalated. That's the environment associations are operating in, whether they've registered it yet or not.

The Human Element Is Still the Weak Link

The most sophisticated AI-powered attacks in the world still rely on the same fundamental vulnerability: people. Not because people are careless or unintelligent, but because humans aren't built to be as consistently vigilant as machines. Bad actors understand this and design their attacks around it.

Phishing emails generated by AI are measurably harder to catch than their predecessors. They don't have the spelling errors or awkward phrasing that used to make suspicious messages easier to identify. They can be personalized at scale using publicly available information about your organization, your staff, and your leadership. A convincing email that appears to come from your board chair or your bank isn't a hypothetical anymore.

Voice cloning is worth understanding specifically. Audio of your executive director speaking is almost certainly publicly available — conference recordings, webinars, podcasts. That's enough source material for AI to generate a convincing imitation. A call from what sounds like your CEO asking someone to process a wire transfer is the kind of scenario that has already played out at organizations across multiple sectors. The production cost for that kind of attack has dropped significantly.

This is why security conversations that focus only on software and firewalls miss something important. Technology protects systems. Humans protect organizations. Both matter.

A Low-Tech Defense That's Surprisingly Effective

One of the more practical responses to voice impersonation and executive fraud doesn't involve any software at all. Consider establishing a verbal passcode system with your leadership team.

The idea is simple. Gather your leadership team in person, with no recording devices running. Agree on three or four non-obvious keywords that rotate quarterly. Write them on paper and keep them somewhere offline — not in a document on your computer, not in a shared folder. When someone calls with an unusual or high-stakes request, regardless of how confident you are it's who they say it is, ask them to verify the code before acting.

If the real person is on the other end and gets asked for the code, they'll understand. If they're not, the request stops there. What makes this work is that it's entirely offline: there's no database to breach, no password to reset, no system to compromise. The information exists only in the heads and wallets of the people who were in the room.

For situations where a code system isn't yet in place, the same principle applies more informally. Ask a verification question that only the real person would know — something specific to your relationship with them, not something that might turn up in a public record or a social media profile. Most automated attacks are operating at scale with limited personalized information. A specific question is usually enough to break the pattern.

The New Risk: Moving Fast With AI Tools

There's a category of cybersecurity risk that's newer and worth naming directly, because it's showing up more frequently as AI adoption accelerates.

People are connecting new AI tools to sensitive systems without fully vetting the source. The value proposition of many of these tools is so immediate and compelling that due diligence gets abbreviated. An MCP server — a type of integration that lets AI tools access and act on your data — can end up with far more access to your systems than you might expect. Connect the wrong one to your SharePoint, your financial platform, or your AMS, and you've created an opening that didn't exist before.

This isn't an argument against experimentation. Trying new tools is how organizations learn what works. The smarter approach is sandboxing: testing new tools in an isolated environment before giving them any connection to your real systems or your real data. Ask basic questions before connecting anything. Who built this? What company is behind it? How established are they? What data does this tool actually touch? These aren't complicated questions, and they catch most of the obvious risks.
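If someone on your staff or at your IT partner is comfortable with a command line, the "what data does this tool actually touch" question can be made concrete before anything gets connected. The sketch below is a minimal example assuming the official MCP Python SDK (the `mcp` package); the server package name is a hypothetical placeholder for whatever tool you're evaluating. Run it in a throwaway environment with no credentials available, and it simply asks the server to list the tools it wants to expose.

```python
# inspect_mcp.py: connect to an MCP server in isolation and list what it exposes.
# Assumes the official MCP Python SDK: pip install mcp
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical placeholder: launch the server you're evaluating.
# Do this in a throwaway VM or container, with no API keys or
# credentials available for the server process to pick up.
server_params = StdioServerParameters(
    command="npx",
    args=["-y", "some-mcp-server-under-review"],
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask the server which tools it wants to expose to an AI model.
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```

If the list includes write or delete capabilities you didn't expect, or tools that reach systems the vendor never mentioned, that's your cue to slow down before the tool ever meets your real data.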

The early adopters of new AI tools tend to be the most enthusiastic, which is valuable, but enthusiasm and caution don't have to be in conflict. The organizations that experiment thoughtfully are going to be better positioned than the ones that either avoid new tools entirely or adopt them without asking any questions at all.

Using AI to Assess Your Own Security

One underused option available to associations right now is using AI tools to walk through a basic cybersecurity audit. Claude, ChatGPT, and Gemini can all guide you through the process — what to look at, where to start, what questions to ask about your own setup. They can't access your systems directly, and you wouldn't want them to, but they can help you understand what a reasonable security posture looks like and where common gaps tend to appear.

For associations running on Microsoft 365, there are built-in security tools in the Azure portal that many organizations simply haven't used. An AI tool can walk you through where to find them and what the outputs mean. That's a meaningful starting point that doesn't require a large budget or an outside consultant.
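If you want a concrete number to anchor that exercise, Microsoft exposes your tenant's Secure Score through the Microsoft Graph API. The snippet below is an illustrative sketch, not official Microsoft guidance: it assumes an app registration in your tenant with the SecurityEvents.Read.All application permission (admin consent granted), and the tenant, client, and secret values are placeholders. It fetches the most recent score so you have a baseline to track over time.

```python
# secure_score.py: fetch your Microsoft 365 Secure Score via Microsoft Graph.
# Assumes: pip install msal requests, plus an Azure app registration with
# the SecurityEvents.Read.All application permission (admin consent granted).
import msal
import requests

TENANT_ID = "your-tenant-id"       # placeholder
CLIENT_ID = "your-app-client-id"   # placeholder
CLIENT_SECRET = "your-app-secret"  # placeholder; keep out of source control

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "Authentication failed"))

# Most recent Secure Score snapshot for the tenant.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/security/secureScores?$top=1",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
score = resp.json()["value"][0]
print(f"Secure Score: {score['currentScore']} / {score['maxScore']}")
```

The score itself matters less than the list of improvement actions Microsoft pairs with it in the portal, but a single number is a useful thing to put in front of a board or a finance committee.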

The broader point is that you have access to genuinely useful guidance on this topic without having to hire a firm or build an internal IT team. The tools are available. The main thing required is deciding to use them.

What Associations Should Take Away

Associations aren't the primary target of most sophisticated cyberattacks. But they hold valuable data and tend to be among the more lightly defended organizations, a combination that draws attention from bad actors looking for accessible openings rather than the biggest possible score.

Most of the practical steps available to associations right now are low-cost, low-complexity, and meaningful. Establishing a passcode protocol with your leadership team takes an hour. Running through a basic security audit with an AI tool takes an afternoon. Reviewing what new tools have access to your systems is something any staff member can do.

The organizations that handle cybersecurity well in this environment aren't necessarily the ones with the biggest IT budgets. They're the ones that take it seriously before something goes wrong.

Post by Mallory Mejias
March 17, 2026
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.