This post draws on Wharton professor Ethan Mollick's recent article, "The IT department: Where AI goes to die," which we analyzed on a recent episode of the Sidecar Sync. The core argument—that organizations are squandering AI's potential by handing it to IT and treating it like normal enterprise software—is his. What follows applies his thinking to the realities association leaders are navigating right now.
In many associations, the reflex to new technology is predictable. When a powerful new tool emerges, the executive leadership team looks toward the IT department and says, “This sounds important; you handle the rollout.” For decades, this was the correct move. Whether it was implementing a new Association Management System (AMS), migrating to a cloud-based server, or deploying a new Learning Management System (LMS), IT was the natural home for technology ownership. These were infrastructure projects that required technical expertise, risk mitigation, and long-term stability.
However, applying this same logic to artificial intelligence is a fundamental mistake. In fact, handing sole ownership of AI strategy to a department built around risk elimination is a category error that often leads to what Mollick describes as the place where AI goes to die. While IT plays a critical role in AI's success, it should not be the department driving the strategy. For AI to be truly transformative, it must be led by business owners—the people responsible for membership, marketing, events, and education—who are focused on outcomes rather than infrastructure. To move forward, association leaders must adopt a new framework for AI governance that balances the need for stability with the necessity of rapid experimentation.
To understand why AI strategy often stalls in IT, we have to look at the core mandate of the department. The primary responsibility of an IT professional is to minimize risk, ensure system stability, and maintain cybersecurity. We want our IT teams to be cautious. We want them to ensure that the email system doesn't go down, that member data is secure, and that the firewall is impenetrable. Their success is often measured by the absence of problems—the lights stay blinking in the server room, and the systems remain stable.
AI, however, demands the exact opposite of this mindset. AI is not a traditional software deployment; it is a mysterious and often unpredictable tool that requires constant experimentation to find value. To get the most out of AI, an organization must be willing to “break” things in a controlled environment, test unproven ideas, and iterate rapidly. When you hand this technology to a department whose job is to prevent instability, they will naturally attempt to “de-weird” the AI. They will try to sand down its strange, transformative edges to make it fit into existing enterprise software management models.
As Mollick puts it, treating AI like a standard software rollout is like receiving a mysterious, powerful artifact and immediately using it as a paperweight. It might be safe, but you’ve squandered its potential. When AI is managed solely through the lens of risk mitigation, the result is often a series of restrictive policies that stifle innovation before it begins. The goal of AI governance should not be to eliminate risk entirely, but to create a sandbox where experimentation can flourish without compromising the organization’s core security.
If IT shouldn’t own the strategy, who should? The answer lies with the business owners—the department heads who understand the organization’s most pressing challenges. A membership director knows why members are churning. A marketing director knows which segments are unresponsive. An education lead knows where the gaps are in the professional development catalog. These are the individuals who can identify the high-value use cases for AI.
In a business-led model, the membership or marketing department identifies a specific business objective, such as increasing member retention by 25%. They then look for AI tools—perhaps a conversational assistant like Claude, or an image or design generation tool—to help them achieve that goal. The focus remains on the business outcome, not the technical specifications of the model. This prevents the “KPI trap,” where organizations measure success by how many people are using a tool rather than by what value that usage is creating. When usage becomes the primary metric, you end up with “work slop”—endless extra memos, PowerPoints, and summaries that nobody asked for and that add zero value to the mission.
By putting business owners in the driver’s seat, the association ensures that AI is being used to solve real problems. IT’s role shifts from gatekeeper to strategic partner. They provide the guardrails, the security protocols, and the technical support, but they do not decide which experiments are worth running. This model reinforces that you don't need to be a tech expert to lead AI initiatives effectively, empowering staff to see AI as a tool for dramatic productivity gains in their specific domain, rather than just another piece of software they are being forced to adopt.
One of the most effective ways to balance speed and security is to encourage the development of “Swiss cheese” prototypes. These are early-stage AI solutions built by non-technical staff to prove a concept. For example, an education lead might use a conversational AI to build a custom agent that helps draft course outlines or generate quiz questions based on existing webinar transcripts.
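To make the quiz-question example concrete, here is a purely illustrative sketch of what the first slice of such a prototype might look like. Everything in it—the function name, the prompt wording, the sample transcript—is hypothetical, and the actual call to a conversational AI is deliberately left out; the point is only that a non-technical staffer (or a staffer with modest scripting help) can get a working draft of the idea in an afternoon:

```python
# Hypothetical "Swiss cheese" prototype: assemble a quiz-generation prompt
# from a webinar transcript. All names and wording here are illustrative;
# the step of sending the prompt to an AI model is deliberately omitted.

def build_quiz_prompt(transcript: str, num_questions: int = 5) -> str:
    """Build a prompt asking a conversational AI for quiz questions."""
    return (
        "You are helping an association's education team.\n"
        f"From the webinar transcript below, draft {num_questions} "
        "multiple-choice quiz questions, each with an answer key.\n\n"
        f"Transcript:\n{transcript}"
    )

if __name__ == "__main__":
    sample_transcript = "Today's webinar covered three member-retention tactics..."
    prompt = build_quiz_prompt(sample_transcript, num_questions=3)
    # The resulting prompt would then be pasted into (or sent to) a
    # conversational AI such as Claude.
    print(prompt)
```

Note how many holes this has: no error handling, no data governance, no integration with the LMS. That is exactly the point—it proves the concept cheaply, and hardening comes later.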
These prototypes are called “Swiss cheese” because they have holes. They might not be scalable, they might not be production-grade, and they certainly wouldn't pass a rigorous cybersecurity audit if they were connected to the association’s core database. However, they are incredibly valuable because they prove that a specific AI application can move the needle on a business objective. They allow the team to move fast, test ideas, and find the “sweet spot” of AI value without waiting for a six-month IT review process.
In this stage, the organization should give staff the freedom to experiment in a sandboxed environment. This means providing access to professional-grade frontier models like Claude while clearly defining what data can and cannot be uploaded. By allowing for these imperfect, non-scalable experiments, the association can quickly separate the transformative use cases from the ones that don't work. The goal is to find the prototypes that taste good—the ones that show clear business value—before investing the resources to fill in the holes.
Once a “Swiss cheese” prototype has proven its value, the relationship between the business owner and IT enters its most critical phase: production-grading. This is where IT’s strengths in stability and security become the organization’s greatest asset. Instead of IT trying to invent the AI solution, they take the proven concept from the business owner and “harden” it for enterprise use.
Production-grading involves several key steps. First, IT reviews the logic and the outputs of the prototype to ensure they are reliable and accurate. Second, they address the “holes” in the Swiss cheese—ensuring that the tool meets the organization’s cybersecurity standards and data privacy policies. Third, they look at scalability, determining how the tool can be deployed across the entire department or organization. Finally, they integrate the tool into the association’s existing technical ecosystem, such as connecting it to the AMS or a centralized data platform.
This hand-off creates a powerful teamwork dynamic. The business owners provide the creative spark and the domain expertise, while IT provides the professional-grade infrastructure. This model respects the mandate of both groups. It allows the association to move at the speed of AI innovation while maintaining the security and reliability that members expect. It transforms IT from a potential “graveyard” for AI ideas into a launchpad for production-grade solutions that drive long-term value.
For this framework to work, association leadership must address a cultural hurdle: the speed of decision-making. Many associations are naturally consensus-driven and committee-based, which can lead to a slow, cautious approach to change. In the age of AI, where model capabilities are jumping significantly every few months, a slow decision-making process is a major liability.
Leaders need to lead. This means making choices faster, even when not everyone is fully bought in. If a typical decision takes a month, challenge the team to make it in a week. If a project usually requires a year of planning, see what can be prototyped in a month. This doesn't mean being reckless; it means being willing to act, pay attention to the results, and change course when needed. AI is a magnifying glass that will find the weak points in an organization’s culture. If the leadership is indecisive, the AI strategy will reflect that indecision.
By cutting cycle times for decisions, leaders create an environment where the business-led, IT-partnered model can thrive. They signal to the staff that experimentation is encouraged and that the organization is committed to moving forward aggressively. The focus should always remain on the trajectory of the technology. It is getting better, faster, and cheaper at an exponential rate. Organizations that wait for perfect consensus or perfect security before they start experimenting will find themselves left behind by those who are willing to build, test, and refine in real-time.
Moving AI strategy out of the IT department and into the hands of business owners is not about diminishing the role of technology professionals. On the contrary, it allows them to focus on what they do best: building secure, scalable, and reliable systems. When IT is no longer burdened with the impossible task of “owning” the creative and strategic application of AI, they can become the essential partners who turn messy experiments into professional-grade tools.
For association executives, the path forward is clear. Stop treating AI like a software rollout and start treating it like a business transformation. Empower your department heads to build “Swiss cheese” prototypes that solve real member problems. Create a clear hand-off process where IT can production-grade the winners. And most importantly, increase your own decision-making velocity to match the speed of the technology. When you balance the need for stability with the necessity of speed, you move beyond the “work slop” of basic adoption and toward a future where AI truly enhances the value you provide to your members.