There's a question lurking in the back of many association leaders' minds right now: Who am I to lead an AI initiative?
You're not a data scientist. You didn't study computer science. Maybe you can't explain how large language models actually work. And every time you read about AI, there's a new acronym or framework that makes you feel further behind.
Here's the thing: that feeling of inadequacy might be completely misplaced.
Bruce Reed recently made the Time AI 100 list. He previously coordinated AI policy across federal agencies in the White House, and he now serves as head of AI at Common Sense Media, helping shape how families and organizations navigate this technology responsibly. By any measure, he's one of the most influential voices on AI policy in the country.
And he openly admits he's not a tech expert.
In a recent conversation on the Sidecar Sync Podcast, Reed shared why technical credentials matter far less than most leaders assume—and what actually makes someone effective at leading through this moment. His perspective offers a useful reframe for association leaders who feel underqualified to guide their organizations through AI adoption. The barrier to entry isn't what you think it is.
There's a natural instinct to defer to experts when facing something new and complex. AI certainly qualifies. The technology is sophisticated, the terminology is dense, and the pace of change is disorienting.
But Reed points to a real danger in over-relying on technical expertise: organizations and even countries go wrong when they put too much faith in people who seem to know what they're doing but don't necessarily understand how their decisions affect ordinary people.
The most valuable leadership skill in moments of uncertainty isn't technical mastery. It's the ability to keep your eyes open, observe what's actually happening, apply common sense, and explain problems in ways that real people can understand.
Association leaders do this constantly. You translate complex industry issues for members at different career stages. You synthesize policy implications for boards with varying levels of engagement. You make abstract challenges concrete and actionable.
That skill transfers directly to AI leadership.
Part of what makes this moment different is the nature of the technology itself.
Previous waves of digital transformation required genuine technical knowledge to participate meaningfully. Building a website in 1998 meant understanding HTML. Implementing a CRM in 2008 meant working closely with IT specialists who could configure databases and integrations. The barrier between "business leader" and "technical implementer" was real and often insurmountable.
AI has changed that equation. What's happening under the hood is staggeringly complex, but the interface is intuitive. You interact with AI through natural language. You ask questions, give instructions, and refine outputs through conversation.
This accessibility means that building competence doesn't require years of study. It requires curiosity and practice. The technology meets you where you are.
That's a meaningful shift for leaders who've historically felt locked out of technical decision-making. You can now form your own opinions about what AI does well, where it struggles, and how it might serve your members.
If technical expertise isn't the prerequisite, what is?
Firsthand experience. You cannot lead effectively on something you haven't touched. Playing with ChatGPT once and declaring you've "checked the box" doesn't count. Meaningful competence comes from applying AI to real work challenges, not just asking it trivia questions.
This applies across levels. Whether you're fresh out of college or a seasoned CEO, you need hands-on exposure to develop informed judgment. The goal isn't to become a prompt engineer. It's to understand the technology well enough to ask good questions, evaluate vendor claims, and make sound strategic decisions.
Comfort with uncertainty. AI is evolving faster than anyone can track. New models, new capabilities, and new risks emerge constantly. Leaders who need complete information before acting will find themselves paralyzed.
Effective AI leadership means being open about what you know and what you don't. It means making decisions with incomplete information and adjusting as you learn. That kind of intellectual humility builds trust with teams and stakeholders who are also navigating uncertainty.
Translation ability. The leaders who create the most value in this moment will be those who can bridge the gap between technical possibilities and organizational realities. They'll translate AI capabilities into member benefits. They'll connect abstract potential to concrete use cases. They'll help boards and committees understand what's at stake without drowning them in jargon.
This is the work association leaders already do on countless issues. AI is simply the newest domain requiring that skill.
Associations have a structural feature most businesses don't: volunteer leadership.
Your board members and committee chairs are often highly accomplished professionals in their own fields. Doctors, lawyers, engineers, architects, executives. They're accustomed to being the smartest people in the room on matters within their expertise.
But expertise in surgery or litigation or structural engineering doesn't automatically translate to AI fluency. And the traditional dynamic where staff defers to volunteer expertise can create problems when volunteers don't yet understand the technology well enough to guide strategy.
This means association staff may need to lead upward in ways that feel uncomfortable. You may need to educate board members who are used to educating others. You may need to push for decisions that feel premature to volunteers who prefer certainty.
That's not overstepping. That's responsible leadership. And it requires staff to build their own AI competence first, so they can bring volunteers along from a position of informed confidence.
When associations look to hire or designate someone to lead AI initiatives, the instinct is often to seek technical credentials. A background in data science, perhaps, or experience building AI products.
Reed's example suggests a different approach: look for someone who understands business issues and thinks big picture. Technical skills can be developed or hired for specific projects. Strategic judgment and organizational fluency are harder to teach.
The best AI lead for your association might already be on staff. They might be the person who asks the sharpest questions about member needs, who sees connections across programs, who can communicate complex ideas simply. Those capabilities may matter more than a computer science degree.
None of this means AI leadership is easy. The technology raises genuine questions about privacy, accuracy, job displacement, and ethical use. There are real risks to getting it wrong.
But waiting until you feel fully qualified means waiting forever. The technology won't slow down to let you catch up. And the cost of inaction compounds as competitors, partners, and members move forward without you.
The association leaders who will thrive in this environment are those willing to learn in public, make mistakes, and adjust. They're the ones who recognize that perfect information isn't coming and that informed action beats paralyzed expertise.
Reed's journey from policy generalist to AI 100 honoree didn't happen because he went back to school for a technical degree. It happened because he stayed curious, engaged with the technology directly, and focused on the human implications of what he was seeing.
That path is available to anyone willing to walk it.
The AI moment doesn't require your association to hire a team of technologists. It requires your current leaders to lean in with curiosity and build competence through practice.
Start by using the tools yourself. Not casually, but with intention. Apply AI to a real project. See where it helps and where it falls short. Form your own opinions based on experience rather than headlines.
Then bring your teams and volunteers along. Share what you're learning. Create space for experimentation. Normalize the reality that everyone is figuring this out together.
The leaders who will guide their organizations successfully through this transition won't be the ones who understand the algorithms best. They'll be the ones who understand the stakes, stay close to member needs, and refuse to be intimidated by complexity they don't yet fully grasp.
Technical expertise is (sometimes) overrated. Curiosity, judgment, and the courage to act are not.