Your AMS vendor just announced an AI assistant module. It's an additional $10,000 per year. Your LMS now offers AI-powered content generation for a premium. Your event platform promises AI-driven engagement features.
Your staff try these tools. Then they find that simply pasting the same work into ChatGPT produces better member communications, more relevant content suggestions, and more useful analysis than the embedded features do.
This is an uncomfortable situation. You're paying for enterprise AI features that underperform a free consumer tool. And your staff, being rational people, keep using ChatGPT on the side.
If the AI features embedded in your platforms can't outperform what staff accomplish with general-purpose tools, staff will gravitate toward whatever works better. This isn't insubordination or resistance to change. It's common sense.
There's a useful analogy here. Imagine going to an expensive restaurant with impressive credentials—Michelin stars, critical acclaim, months-long waitlist. You sit down, order the tasting menu, and find yourself underwhelmed. The food is fine, but it's not delivering the experience you expected for the price. So you walk outside and grab something from a street vendor. A $6 crepe. And you're finally satisfied.
This happens constantly with software. Associations implement sophisticated centralized systems, and then the events team maintains critical data in Excel spreadsheets anyway. Not because they're trying to violate policy, but because entering data in the official system takes 30 minutes while dropping it into a spreadsheet takes 30 seconds.
People go where friction is lowest and value is highest. Always.
Most AI features added to existing association technology platforms are bolt-ons. They're responses to market pressure—"we need an AI story"—rather than deeply considered integrations that solve specific problems.
The vendors building these features haven't always found the right intersection of process, data, and user behavior that creates genuine value. They're adding chat interfaces and summarization tools because those are technically achievable, not because they've identified the unique problems their platform is positioned to solve.
There's also the wrapper problem. If a vendor builds an AI feature that's essentially a thin layer on top of a foundation model, the next ChatGPT or Claude update might leapfrog whatever they've built. The vendor can't keep pace with the rapid improvement cycles of the major AI labs. So their $10,000 add-on becomes obsolete before you've finished implementing it.
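To see just how thin that layer can be, here's a hypothetical sketch of what a vendor "AI summarization feature" might amount to under the hood. The function, model name, and prompt are illustrative assumptions, not any real vendor's code; the only concrete dependency assumed is the OpenAI Python SDK with an API key in the environment.

```python
# A hypothetical "AI member-email summarizer" feature, reduced to its core.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_member_email(email_body: str) -> str:
    """One prompt template plus one API call: the whole 'feature'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # whichever foundation model the vendor wraps
        messages=[
            {"role": "system",
             "content": "Summarize this member email in three bullet points."},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content
```

Every line of value here comes from the foundation model. When the next model update ships, the free consumer interface gets it immediately; the wrapper's only differentiation is a prompt and a user interface.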
Your staff love ChatGPT. They use it to draft emails, analyze basic data, brainstorm ideas, and speed through routine tasks. Individual productivity gains are real and significant.
But there's a gap between personal productivity tools and what high-stakes association work requires. Certification decisions need audit trails. Accreditation standards require version control. Regulatory compliance demands defensible reasoning that can withstand scrutiny.
Generic consumer tools can't provide these controls. You can't point an auditor to your staff's ChatGPT history as documentation of how a credentialing decision was made.
The challenge is that many enterprise AI features can't provide these controls either—at least not yet. You're caught between consumer tools that work well but lack governance, and enterprise tools that promise governance but don't work as well.
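To make "these controls" concrete, here's a minimal sketch of the kind of record an audit trail would need to capture for each AI-assisted decision. The field names and structure are illustrative assumptions, not a compliance standard or any particular platform's schema.

```python
# A minimal sketch of an audit record for an AI-assisted decision.
# Field names and structure are illustrative assumptions, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(prompt: str, model_version: str, output: str,
                             reviewer: str, decision: str,
                             path: str = "ai_audit_log.jsonl") -> None:
    """Append one record per decision to an append-only JSONL log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # exactly which model produced the output
        "prompt": prompt,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,      # who signed off, not just what the model said
        "final_decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Consumer chat tools capture none of this, and many enterprise add-ons capture only some of it. That gap is exactly what you should be probing in vendor evaluations.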
If you're evaluating AI features from your existing technology partners, raise your expectations.
Ask for live demonstrations with realistic scenarios, not polished slide decks. Request access to pilot programs so your staff can test the tools against their actual workflows. Find out what their AI roadmap looks like beyond the current feature set. Are they building toward something substantive, or checking a box?
Most importantly, evaluate whether their AI solves your actual business problems. Does it address pain points your staff and members experience? Or does it solve a generic problem that sounds relevant but doesn't match how your organization operates?
The fact that a vendor has AI features doesn't mean those features are worth paying for. You have the leverage to demand more.
Should you wait for your existing vendors to catch up? The answer depends on several factors.
If your vendor has a credible roadmap for AI features that address your specific business problems, and they have a track record of delivering on their promises, patience may be warranted. Rebuilding from scratch what a vendor will deliver in six months doesn't make sense.
But if you're in the middle of a platform selection process—choosing a new AMS, for example—make AI capability a central criterion rather than an afterthought section at the end of your RFP. Think about what your processes should look like in an AI-enabled environment, and evaluate platforms against that vision.
And consider what problems your current vendors will never be positioned to solve. An AMS might eventually offer excellent AI features for member data analysis, but it's probably not going to become your organization's knowledge management platform. Knowing where vendor capabilities end helps you plan where to invest independently.
The goal here isn't to pit enterprise vendors against ChatGPT in some kind of grudge match. The goal is to ensure that the tools you pay for deliver value your staff can't easily replicate elsewhere.
If an AI feature costs $10,000 per year, it should save more than $10,000 worth of time or produce more than $10,000 worth of improved outcomes. If staff can get 80% of the value from a free tool in a fraction of the time, the math doesn't work.
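Here's a back-of-the-envelope version of that math. All figures below are made-up assumptions for illustration, not benchmarks:

```python
# Back-of-the-envelope ROI check for a paid AI feature.
# Every figure here is an illustrative assumption, not a benchmark.
annual_cost = 10_000          # vendor's AI add-on, per year
hours_saved_per_week = 4      # time the feature saves across the team
loaded_hourly_rate = 45       # fully loaded staff cost per hour
free_tool_fraction = 0.80     # share of that value staff get from a free tool

gross_value = hours_saved_per_week * 52 * loaded_hourly_rate
incremental_value = gross_value * (1 - free_tool_fraction)  # value beyond the free tool

print(f"Gross annual value:     ${gross_value:,.0f}")
print(f"Value beyond free tool: ${incremental_value:,.0f}")
print(f"Worth ${annual_cost:,}/year? {incremental_value > annual_cost}")
```

With these illustrative numbers, the feature generates under $2,000 of value beyond what the free tool already delivers, nowhere near the $10,000 fee. The vendor has to beat the free alternative by a wide margin, not merely match it.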
Your vendors should be racing to prove their worth, not relying on contractual lock-in to justify their fees. Hold them to that standard.