This post is inspired by Ethan Mollick’s article “Making AI Work: Leadership, Lab, and Crowd” and by the practical lessons we’re learning at MSTA as we chart our own AI journey.
As association leaders, we’re all hearing the same thing: AI is changing the way organizations work. But how do we make AI work for our teams, our members, and our missions? Like most of you, I’ve read countless articles promising AI-powered transformation—and found few that speak directly to the realities of association management.
That’s why Ethan Mollick’s “Leadership, Lab, and Crowd” framework resonated with me. His core message is this: The path to real AI adoption isn’t about buying the right tool or copying someone else’s playbook. It’s about building a culture that learns quickly by combining leadership vision, hands-on experimentation, and the creativity of every team member. At MSTA, we’ve started to put these principles into practice, and I want to share what we’re learning (sometimes the hard way).
A Framework for AI Adoption: Leadership, Lab, and Crowd
Mollick’s approach starts with a simple but powerful insight: We are all figuring this out together. The companies and associations that will win with AI are not those waiting for perfect answers—they’re the ones willing to learn and adapt, even while the ground is shifting beneath them.
Here’s how that looks at MSTA.
1. Leadership: Start with Vision, Not Perfection
As executive director, I realized early that simply talking about AI wasn’t enough. Our staff needed to see that I—and the board—were ready to learn alongside them. That’s why we opened Sidecar’s AI Learning Hub training to everyone, regardless of role. I went through the training myself, and I invited the board to do the same. Out of 43 full-time staff, 29 are now Association AI Professional (AAiP) certified, and eight directors are working through the same program.
We don’t have all the answers. But by putting ourselves in the same learning environment, we’re sending a clear message: AI isn’t a threat or a “tech thing”—it’s an opportunity for everyone to grow. That vision, grounded in participation rather than proclamations, helps ease anxiety and opens the door to experimentation.
2. The Lab: Making Space for Experimentation
One thing Mollick stresses is that there’s no AI instruction manual that fits every organization. You have to create your own “lab”—not a physical space, but a mindset and a set of processes where people are free to experiment, make mistakes, and share what they discover.
At MSTA, staff who complete the training get access to our ChatGPT Team. This isn’t just a perk—it’s an open invitation to try new things, together. We recently created our first custom GPT as a proof of concept. Now we are launching an AI Collaboration Team in Microsoft Teams so staff can swap ideas, favorite prompts, and support each other.
Some experiments have turned into real wins. Last year, we launched Tillie, our AI-powered knowledge agent. We expected it to handle member questions, but we didn’t anticipate how quickly staff and members would embrace it, or the creative ways they’d use it. That kind of rapid feedback—both the surprises and the “failures”—has been invaluable.
We’re also piloting a common data platform (CDP) with Member Junction, bringing together data from our AMS, Tillie, and public data sources. The goal isn’t to chase the latest trend, but to build a foundation for whatever the next AI project might be—like Skip, an AI agent that will allow us to analyze data in plain language.
3. The Crowd: Harnessing Everyone’s Ideas
Perhaps the biggest lesson—one Mollick drives home—is that AI innovation doesn’t just come from the top or from a tech team. It comes from the “crowd”: people on the front lines who figure out how to use AI to solve real problems in real time.
That’s why we’ve worked to make AI experimentation visible and safe. Our AI Collaboration Team is a place where staff can share prompts, workflows, and wins without fear. When someone figures out a shortcut, or a new way to serve members, we celebrate it. The result? More people willing to try, and more ideas making their way to the rest of the organization.
Not all of us are AI “super users” yet. Some people are still skeptical; some are still learning. That’s okay. We’re not aiming for 100% adoption overnight. We’re aiming for a culture where it’s normal to learn, experiment, and share what works (and what doesn’t).
What We’ve Learned (So Far)
- Lead from the front. If leaders don’t engage directly, it’s hard to expect staff to take risks.
- Training is just the start. The real value comes from building spaces—like our ChatGPT Team and AI Collaboration Team—where staff can support each other as they learn.
- Quick wins matter. Projects like Tillie give people confidence that AI can make a difference.
- Celebrate sharing, not just success. Some of our best ideas came from staff who tried something and then told us what didn’t work.
- Don’t wait for certainty. The pace of change in AI is only accelerating. Acting now, even if you’re learning as you go, puts your organization in a much stronger position.
Final Thoughts: Join the Experiment
If there’s one thing I hope other association leaders take away, it’s this: you don’t need a perfect plan to get started. The real secret is to lead openly, experiment boldly, and create room for everyone to contribute. We’re not AI experts—we’re AI learners. And in this moment, that’s exactly what our members, our staff, and our missions need.
https://www.oneusefulthing.org/p/making-ai-work-leadership-lab-and

June 20, 2025