AI can generate photorealistic videos. Write entire novels. Pass the bar exam. These capabilities make headlines and dominate conference keynotes.
But the organizations actually succeeding with AI right now? They're automating receipt scanning. Processing documents. Screening conference proposals against rubrics. The tedious, repetitive work nobody wanted to do anyway.
And they're building years of competitive advantage while everyone else waits for AI to get "ready."
The 20-Year Trajectory
Sarah Guo, founder and partner at Conviction VC, recently shared an insight that should change how associations think about AI adoption: it will mirror cloud computing's 20-year trajectory.
Large private equity firms are already working on five-year AI transformation timelines. They're not rushing. They're planning methodically for a long-term shift.
For associations, this creates an opening. Start experimenting now. Learn what works in your specific context. Build institutional knowledge about AI capabilities. Accumulate years of advantage. Or wait until the technology is "finished" and find yourself starting from zero while your peers are running sophisticated AI-assisted operations.
The organization on its tenth AI experiment understands far more about what works than the organization attempting its first.
Why Boring Wins
SAP, the enterprise software giant, reports 34,000 organizations using its AI capabilities. Not for flashy applications. For document processing. Receipt scanning. Mundane automation.
These implementations succeed because they slide into existing operations without friction. Staff don't need extensive training. The use cases are immediately clear. When automation handles receipt processing, finance teams can focus on analysis instead of data entry. Not exciting. Valuable.
The pattern holds: "boring" AI gets adopted first because it solves actual problems without requiring massive behavior change. Flashy demos make great conference presentations. Mundane automation makes operations better.
Your event planning team may not need an AI system that reimagines the entire conference experience. They need something that handles the repetitive parts of session scheduling so they can focus on curating content and connecting with speakers. That's boring. That's also what they'll actually use.
The $20 Conference Proposal Automation
Let's get concrete with an example many associations face. Your association runs a professional conference. You receive somewhere between 500 and 2,000 proposals annually to fill maybe 200-300 session slots. Someone—probably multiple someones—spends hours reading through every submission for initial screening.
Does this proposal address one of our six content pillars? Does the submitter meet basic speaker qualifications? Is the abstract actually complete, or did they submit a half-finished draft?
Straightforward questions. Hundreds of proposals. Enormous time sink.
Here's what you need to automate this:
- Your rubric or guidelines document
- File storage you already use (Box.com, Google Drive, Dropbox)
- Around $20/month for AI/automation tools
- Zero developers
The process: Proposals land in a designated folder. AI reads each proposal against your guidelines. Based on the rubric, it sorts proposals into "needs review" or "rejected" folders. For rejections, it drafts an email explaining specifically why the proposal didn't meet requirements.
You can set this up in a tool like Zapier right now as a business user. No coding. No developer requests. No IT tickets. Connect your file storage to Zapier, add an AI step that reads the proposal against your rubric, then set actions based on the result—move the file, draft an email, update a spreadsheet.
Your staff reviews the "needs review" folder and makes final decisions. You spot-check rejections to catch AI errors. If you notice mistakes, you adjust the prompt or rubric and improve the system.
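The no-code route is the whole point here, but if you're curious what the AI step is actually doing, the same screen-sort-draft loop fits in a short script. What follows is a minimal sketch under assumptions that aren't part of the example above: proposals sit as plain-text files in a local inbox folder, the rubric lives in a rubric.txt file, and the AI call happens to use the OpenAI Python client. Treat every name in it as a placeholder.

```python
# Minimal sketch of the screen-sort-draft loop described above.
# Assumptions (placeholders, not prescriptions): plain-text proposals in a
# local folder, a rubric in rubric.txt, and the OpenAI Python client as the
# AI step. Swap in your own storage and model.
import json
import shutil
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = Path("rubric.txt").read_text()
INBOX = Path("proposals/inbox")
NEEDS_REVIEW = Path("proposals/needs_review")
REJECTED = Path("proposals/rejected")
for folder in (NEEDS_REVIEW, REJECTED):
    folder.mkdir(parents=True, exist_ok=True)


def screen(proposal_text: str) -> dict:
    """Score one proposal against the rubric and return a decision plus reason."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works; this is only an example
        messages=[
            {
                "role": "system",
                "content": (
                    "You screen conference proposals. Apply the rubric strictly. "
                    'Reply with JSON: {"decision": "needs_review" or "rejected", '
                    '"reason": "one short paragraph"}.'
                ),
            },
            {"role": "user", "content": f"RUBRIC:\n{RUBRIC}\n\nPROPOSAL:\n{proposal_text}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


for path in INBOX.glob("*.txt"):
    result = screen(path.read_text())
    if result["decision"] == "rejected":
        # Draft (never send) a rejection email for a human to spot-check.
        (REJECTED / f"{path.stem}_draft_email.txt").write_text(result["reason"])
        shutil.move(str(path), str(REJECTED / path.name))
    else:
        shutil.move(str(path), str(NEEDS_REVIEW / path.name))
```

Whether you express it in Zapier or in code, the pieces stay the same: your rubric as the instructions, an AI step that returns a decision and a reason, and a "needs review" folder where a human makes the final call.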
This won't make headlines. Nobody's writing case studies about proposal screening. But it saves your team 20-30 hours during proposal season. Multiply that across multiple programs and events. The time adds up. More importantly, it's a learning opportunity. You discover what AI can reliably handle. You learn how to write clear instructions. You build comfort with reviewing AI outputs. You develop judgment about where automation adds value and where human expertise remains critical.
That knowledge transfers to the next automation project. And the one after that.
Failed Experiments Are Part of Learning
Not every automation will work. Some will fail spectacularly. You'll discover edge cases you didn't anticipate. You'll find that your rubric wasn't as clear as you thought. You'll learn that AI misinterprets certain phrasing.
The experimentation phase is where early movers build advantage. Failed pilots teach you what doesn't work and why. That information becomes valuable when you revisit the same use case six months later with a better model.
Keep a document of failed experiments. Note what you tried, why it failed, what you learned. When GPT-6 or Claude 5 releases, pull out that document and test those failed use cases again. Maybe they work now. Maybe they still don't, but you understand why more clearly.
The organization attempting its tenth AI experiment operates at a completely different level than the organization attempting its first. They know how to scope projects appropriately. They understand realistic expectations for accuracy. They've developed processes for human review and quality control. They can move from idea to implementation faster because they've built that muscle.
Your peers waiting for AI to be "ready" will eventually adopt finished, polished products. The technology will work great. Their challenge will be change management, process integration, and building institutional knowledge you've been accumulating for years.
What Early Adoption Actually Looks Like
Early adoption doesn't mean betting your operation on unproven technology. It means identifying low-stakes opportunities to experiment.
Document routing is perfect for this. Many associations have staff manually moving information between systems. Export data from the AMS, manipulate it in Excel, import it somewhere else. This takes time, introduces errors, creates bottlenecks.
Automating this doesn't require replacing your systems. It requires having AI execute the same manual steps. The stakes are low because you verify output before using it. If the automation produces bad data, you catch it before it causes problems.
Event management is another good testing ground. Loading session information into your event app. Processing speaker submissions. Sending confirmation emails. Repetitive, well-defined, easy to verify. Also time-consuming enough that automation creates real value.
Member communications offer opportunities too. Not the creative strategy work—the execution. Personalizing email content based on member segments. Scheduling follow-ups. Tracking engagement. The boring operational stuff that takes time but doesn't require human creativity.
The pattern across successful early adoption is to start with processes that are:
- Clearly defined with documented steps
- Repetitive enough that automation saves meaningful time
- Low stakes if something goes wrong
- Easy to verify and review
- Based on stable systems unlikely to change dramatically
Avoid processes that are mission-critical, involve sensitive data, require nuanced judgment, or have high costs of failure. Save those for later after you've built more experience.
Your Starting Point
You probably have a process right now that makes everyone groan. Not particularly difficult, but tedious. Takes too long. Happens too frequently. Nobody enjoys it, but someone has to do it.
That's your starting point.
Not the most important process. Not the most visible one. The most annoying one that's repetitive, well-defined, and low-stakes.
Take 30 minutes to document exactly how it works currently. List every step. Note where decisions get made. Identify what could go wrong. Take screenshots if it involves software.
Then explore whether automation makes sense. Could AI handle some or all of these steps? What would need human verification? How would you know if the automation produced incorrect results?
You might discover automation isn't feasible yet. Document why and revisit in six months. You might find a partial solution that handles 60% of the work. Still valuable. You might find a complete solution that just works. Even better.
The point is building experience with AI automation in a low-risk context. Learn what these tools can do. Develop judgment about where they add value. Build institutional knowledge while the technology evolves.
The associations figuring this out early won't just have better operations. They'll have years of accumulated learning about how to leverage AI effectively. That advantage compounds in ways that are difficult to overcome later.

October 8, 2025