
Trust Is Becoming an Uncommon Commodity — And That's an Association Problem

Written by Mallory Mejias | Apr 17, 2026

Associations have always been in the trust business. Members come to them for credentialing, education, industry standards, and reliable information about their professions. When someone needs to know what's happening in their field — what's changing, what matters, what's credible — the association is where they turn. That trust has been built over years, sometimes decades, of consistent, reliable delivery.

But trust operates differently now than it did even a few years ago. In a world where AI can clone a voice in seconds, generate a convincing video of someone who doesn't exist, and produce synthetic content at a scale that's flooding every platform we use, the baseline assumption that what you're seeing, hearing, and reading is real has fundamentally shifted. For organizations whose entire value proposition rests on being trustworthy, that shift demands attention.

When 90% of What You See Online Is Synthetic

Eric O'Neill, former FBI counterintelligence operative and one of the country's leading cybersecurity voices, puts it starkly: trust has become an uncommon commodity. By his estimate, roughly 90 percent of what we encounter online is now synthetic in some form: edited, altered, or entirely generated by AI.

Scroll through any social media feed and the evidence is hard to miss. AI-generated images, videos, and text have become so prevalent that major platforms are struggling to manage the volume. The term "AI slop" has entered the vocabulary for a reason. But the problem goes deeper than cluttered feeds. When synthetic content is indistinguishable from authentic content, every piece of communication — every email, every video call, every document — carries a question mark it didn't used to carry.

For associations that rely on digital communication with their members, this creates a new kind of risk. Your newsletters, your emails, your virtual events, your social media presence — all of it exists in an environment where members are becoming increasingly skeptical of what's real. That skepticism isn't personal. It's a rational response to an environment that has become genuinely harder to navigate. But if your members can't immediately distinguish your authentic communications from the noise surrounding them, the trust you've built starts to erode — not because you did anything wrong, but because the environment changed around you.
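One concrete, low-cost step on this front is standard email authentication: SPF, DKIM, and DMARC records let receiving mail servers verify that a message claiming to come from your domain actually did. As a minimal illustration (not a full deliverability audit), the Python sketch below checks whether a domain publishes SPF and DMARC records. It assumes the third-party dnspython package, and the domain name is hypothetical.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records,
# two standard email-authentication mechanisms. Assumes the third-party
# dnspython package (pip install dnspython); the domain name is hypothetical.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return the TXT record strings published at `name`, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    # A TXT record may be split into multiple character-strings; rejoin them.
    return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]

def check_email_auth(domain: str) -> None:
    spf = any(r.lower().startswith("v=spf1") for r in txt_records(domain))
    dmarc = any(r.lower().startswith("v=dmarc1")
                for r in txt_records(f"_dmarc.{domain}"))
    print(f"{domain}: SPF {'found' if spf else 'missing'}, "
          f"DMARC {'found' if dmarc else 'missing'}")

check_email_auth("example-association.org")  # hypothetical domain
```

Publishing these records won't stop deepfakes, but it closes one easy impersonation channel: email spoofed to look like it came from your domain.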

Deepfakes Are Weaponizing Trust Inside Organizations

The external trust problem is concerning enough. But deepfakes are also creating internal vulnerabilities that many associations haven't yet grappled with.

Voice cloning technology has reached the point where a convincing replica of someone's voice can be generated and deployed in real time. There are documented cases of AI-generated voice calls impersonating a CFO and pressuring a finance staffer to wire money immediately. The caller sounds exactly like the boss. The request is urgent. The employee complies. The money is gone.

It gets more personal than that. There are cases of families receiving calls from what sounds exactly like a child or spouse claiming to have been kidnapped — followed by a stranger's voice demanding ransom. The emotional pressure is designed to bypass rational thinking entirely.

Video deepfakes are catching up to voice. Criminals have used AI-generated video on Zoom calls to impersonate leadership and authorize fraudulent transactions. In one widely reported case, a finance employee was convinced to transfer $25 million after a video call with what appeared to be multiple company executives — all of them AI-generated.

And then there's the hiring pipeline. Organizations have discovered fake employees — entire identities built by AI, complete with fabricated resumes, professional histories, and LinkedIn profiles — who passed the hiring process and operated remotely for months. In some cases, these were operatives for nation-state actors, collecting salaries and stealing intellectual property simultaneously. In the era of remote work, verifying that a new hire is a real person doing real work has become a genuine challenge.

The through-line in all of these examples is the same: trust is the attack surface. Criminals aren't breaking through firewalls. They're exploiting the assumption that the person on the other end of the line, the screen, or the application is who they say they are.

The Human Is the Vulnerability — and the Defense

Here's the tension that association leaders need to sit with: AI-powered cybersecurity tools are getting remarkably good at stopping machine-to-machine attacks. Malicious code, network intrusions, automated scanning — the technology side of the equation is increasingly well-defended. And that's precisely why criminals have pivoted to targeting people.

Social engineering, urgency, emotional manipulation, impersonation — these are human exploits, not technical ones. The most sophisticated AI security software in the world can't stop an employee from trusting a convincing voice on the phone. It can't prevent someone from paying a fake invoice that looks exactly like one from a known vendor. It can't override the social pressure of an apparent request from the CEO.

Which means the defense is also human. Staff need to understand what modern attacks look like — not in abstract terms, but through concrete examples of the invoice scams, the spoofed video calls, the impersonation tactics that are actually being used right now. They need to be trained to pause when something feels urgent, to verify through a second channel, and to question requests that don't sit right — even when those requests appear to come from someone senior.

That last part is cultural, not technical. If employees feel they can't call the CEO back to confirm a wire transfer request, the organization has a culture problem that no software can fix. Leadership has to set the tone: verification is expected, not an overstep. A two-minute confirmation call is infinitely cheaper than a successful attack.
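One way to make that tone concrete is to build the pause into the process itself. The sketch below is a toy illustration, not a real control framework: payment requests over a threshold, or to a new payee, stay on hold until someone records a confirmation made through a second channel, such as a callback to a number already on file. The threshold, field names, and workflow are all invented for the example.

```python
# Toy sketch of "verification is expected" encoded as a finance-workflow rule.
# The threshold, fields, and workflow are illustrative, not a real control set.
from dataclasses import dataclass, field

CALLBACK_THRESHOLD = 5_000  # illustrative dollar amount

@dataclass
class PaymentRequest:
    requester: str                      # who appears to be asking
    amount: float
    new_payee: bool                     # first payment to this account?
    confirmations: list[str] = field(default_factory=list)

    def record_callback(self, note: str) -> None:
        """Log a confirmation made through a second channel."""
        self.confirmations.append(note)

    def may_release(self) -> bool:
        """High-risk requests need at least one out-of-band confirmation."""
        high_risk = self.amount >= CALLBACK_THRESHOLD or self.new_payee
        return not high_risk or bool(self.confirmations)

req = PaymentRequest(requester="apparent CEO, via video call",
                     amount=25_000, new_payee=True)
assert not req.may_release()            # held: no callback on record yet
req.record_callback("Phoned CEO at the directory number; request confirmed.")
assert req.may_release()                # released only after verification
```

The point isn't the code; it's that pausing to verify becomes a required step in the process rather than an act of individual courage.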

Going Analog to Fight AI

Some of the most effective defenses against AI-powered deception are decidedly low-tech. And there's something satisfying about that.

O'Neill calls it a "sign of life," a term borrowed from his counterterrorism days. Before deploying undercover, operatives would leave behind a sealed envelope containing a specific code phrase. If an operative was captured and the captors demanded a ransom, the phrase was the only way to confirm the operative was actually alive and communicating freely.

The concept translates directly to the corporate and association world. Organizations are establishing passphrases — agreed upon in person, never stored digitally, changed periodically — that can verify identity during a suspicious call or video. If a request comes in that feels off, the passphrase is the test. AI can clone a voice. It can't produce a code word it's never had access to.

Even without a formal passphrase system, there are simple real-time tests that work. On a video call, ask everyone to hold up a pen. Ask them to put three fingers in front of their face. Ask a question only the real person would know, something personal and specific, like where you had coffee together last week. A pre-recorded clip can't respond to a spontaneous request at all, and today's real-time deepfakes still struggle to render unexpected physical actions convincingly. These tests aren't foolproof, and as the technology improves they will need to evolve. But right now they work, and they cost nothing to implement.

The broader principle is worth internalizing: when digital identity becomes unreliable, analog verification becomes a strategic asset. It sounds counterintuitive in a world obsessed with digital transformation, but sometimes the most effective security tool is a conversation that happened in a room with no screens.

Why Associations Are Uniquely Positioned Here

In a landscape where trust is eroding and digital identity is increasingly suspect, associations hold a card that many organizations don't: they bring people together physically.

Conferences, annual meetings, regional events, committee gatherings — these are all moments where every person in the room is verified by their physical presence. No deepfakes. No AI-generated profiles. No synthetic identities. In an era where that kind of verification is becoming rare and valuable, the ability to convene in person is a genuine strategic asset, not just a programming tradition.

This is also an opportunity for associations to model trust practices for their members. Demonstrating verification protocols at events. Incorporating cybersecurity awareness into professional development programming. Showing members — through action, not just messaging — that the association takes the integrity of its communications and data seriously. When an association invests visibly in protecting trust, it reinforces the very thing members come to them for.

There's a version of the future where the organizations that thrive are the ones that figured out how to be trustworthy when trust was scarce. Associations are already in that business. 

Trust Is Now an Active Practice

Trust used to be something associations could build and then maintain through consistent quality. If you did good work, delivered reliable information, and served your members well, trust followed naturally.

That's still true — but it's no longer sufficient. In an environment where communications can be spoofed, identities can be fabricated, and content can be manufactured at scale, trust requires active defense. It requires training your staff, securing your systems, verifying your communications, and educating your members about the threats they're facing in their own professional lives.

The associations that will maintain member trust through this era aren't necessarily the ones with the biggest budgets or the most advanced technology. They're the ones that recognized early that trust is no longer a passive asset — it's something you have to protect, invest in, and demonstrate every day.