Stop De-Weirding Your AI: Why Enterprise Logic Kills Innovation

This post draws on Wharton professor Ethan Mollick's recent article, "The IT department: Where AI goes to die," which we analyzed on a recent episode of the Sidecar Sync. The core argument—including the concepts of "de-weirding" AI, "work slop," and "secret cyborgs"—is his. What follows applies his thinking to the realities association leaders are navigating right now.

Imagine finding a mysterious, glowing artifact in your association’s lobby. It doesn’t come with a manual, but it seems to have the ability to solve complex problems, draft entire strategic plans in seconds, and predict member churn with unsettling accuracy. What is the first thing you do? If you follow the standard playbook of modern enterprise management, you would likely hand it to the IT department, ask them to ensure it complies with the existing firewall settings, and then set a mandatory KPI requiring every staff member to touch the artifact at least three times a week.

In doing so, you would be guilty of "de-weirding" the most transformative technology of our generation. Many organizations are currently treating artificial intelligence as if it were simply Microsoft Office 2.0 or a slightly more advanced version of their Association Management System (AMS). This approach is a fundamental category error. By attempting to sand down what Mollick calls the "weird," unpredictable, and non-linear nature of AI to make it fit into traditional software deployment models, association leadership risks squandering the technology's actual potential. Instead of achieving a breakthrough in member value, these organizations often end up with "work slop": a mountain of AI-generated noise that serves no one.

The Fallacy of the Traditional Software Rollout

For decades, the path to adopting new technology in the association world has been deterministic. When you implement a new Learning Management System (LMS) or a financial database, you expect a specific input to result in a predictable output. The logic is linear: you buy the software, IT configures it, staff are trained on its specific buttons and menus, and the organization moves forward with a slightly more efficient version of its previous self. This is the traditional software rollout model built on risk mitigation and stability. It is designed to ensure that systems do not break and that every dollar spent results in a measurable, albeit incremental, improvement.

AI, however, is not deterministic. It is probabilistic. It does not behave like a calculator; it behaves more like a highly capable, slightly eccentric colleague. When associations try to manage AI through the lens of traditional IT deployment, they effectively turn a powerful engine of innovation into a glorified paperweight. The "de-weirding" process begins when leadership demands that AI be predictable, safe, and easily categorized within existing departmental silos.

When you hand sole ownership of AI strategy to a department built around risk elimination, which is the core mandate of most IT departments, you are essentially asking them to kill the very thing that makes AI valuable: its ability to experiment, fail, and find novel solutions. IT's job is to keep the lights on and the data secure. AI's job is to challenge the status quo. These two mandates are in constant tension, and if enterprise logic wins, innovation dies. This doesn't mean IT shouldn't be involved; they are critical partners for governance and security. But they cannot be the owners of the vision. The vision must belong to the business owners who are responsible for member outcomes, rather than being dictated by whatever infrastructure gap happens to be holding your AI strategy back.

Avoiding the KPI Trap and the Proliferation of Work Slop

One of the most visible symptoms of de-weirded AI is the "KPI trap." In an attempt to show progress to boards and stakeholders, many association leaders set compliance-based metrics for AI adoption. They might mandate that 90% of the staff use a specific AI tool weekly or that every department must submit three AI-generated reports per month. On a dashboard, these numbers look fantastic. In reality, they often produce what can only be described as "work slop."

Work slop is the digital equivalent of empty calories. It is the ten-page summary of a meeting that no one attended, the thirty-slide PowerPoint deck that restates obvious facts, and the endless stream of internal memos that exist only because an AI could generate them in seconds. When employees are incentivized to show usage rather than value, they will use AI to create more work for everyone else. This creates a paradox where the organization feels busier than ever, yet the actual mission-critical output remains stagnant.

True AI organizational strategy requires a shift from measuring activity to measuring outcomes. Instead of asking how many people are using a chatbot, leaders should be asking how AI is moving the needle on member retention or how it has shortened the development cycle for a new certification program. If an AI tool allows a staffer to complete a forty-hour task in four hours, the goal shouldn't be to fill the remaining thirty-six hours with AI-generated memos. The goal should be to reinvest that time into high-value human activities, like direct member engagement or long-term strategic thinking. If you don't define what that high-value work looks like, your staff will default to producing slop to satisfy the usage metrics.

The Secret Cyborg Problem and Misaligned Incentives

When associations fail to rethink their organizational structure around AI, they inadvertently create a culture of what Mollick calls "secret cyborgs." These are the employees who have figured out how to use AI to achieve massive productivity gains, perhaps doing their entire week's work by Tuesday afternoon, but who have every reason to hide this fact from leadership. In a traditional enterprise model, the reward for high efficiency is often just more work, or worse, the fear of being deemed redundant.

Consider the "automation reflex." When many leaders hear that AI can provide a 30% boost in productivity, their first instinct is to look for 30% workforce cuts. This is the ultimate de-weirding move. It treats AI as a cost-cutting tool rather than a value-expansion tool. When staff perceive that AI is a threat to their job security, they will not share their breakthroughs. They will use AI to finish their work quickly and then spend the rest of the week performing "busy work" to maintain the appearance of a forty-hour workload.

To unlock the true power of AI, association leadership must move from an automation reflex to an expansion reflex. What could your organization achieve if every staff member could produce 100 times more output? Instead of cutting a department of five people down to three, what if those five people could serve ten times as many members? What if they could provide personalized career coaching to every single member of the association, a task that was previously impossible due to scale? When you align incentives so that AI-driven productivity leads to more impactful work rather than just more tasks, the secret cyborgs come out of hiding and start driving the organization forward.

Rebuilding the Organization Around 100x Output

To stop de-weirding AI, leaders must be willing to be "unreasonable" with their timelines and expectations. If the technology allows for a massive increase in output, the old schedules no longer apply. A project that used to take six months—like a comprehensive market analysis or the development of a new educational track—might now be possible in six days.

This requires a level of decision-making speed that many associations are not accustomed to. The traditional committee-based, consensus-driven model of association management is often too slow for the age of AI. If you wait three months for a committee to approve a pilot program, the underlying technology will have already changed twice. Leadership must empower staff to build "Swiss cheese" prototypes—solutions that are imperfect and perhaps have a few holes, but that prove a concept quickly. This requires a sophisticated understanding of the exploration-exploitation trade-off in organizational strategy.

Once a prototype proves its value in the real world, it can then be brought back into the enterprise fold to be "production-graded" by IT. This creates a healthy cycle: the business units experiment and move fast to find value, and IT provides the governance and stability to scale those successes. This is the opposite of the "IT as a graveyard" model, where new ideas go to die because they don't fit the existing security protocols on day one. By allowing for a stage of "weird" experimentation, you ensure that the organization is actually solving problems rather than just deploying software.

Conclusion: Embracing the Transformation

The most successful associations of the next decade will be those that resist the urge to make AI feel normal. They will be the ones that lean into the strangeness of the technology and use it as a catalyst to rethink everything from their membership models to their internal workflows. They will recognize that AI is not a tool to be managed, but a force to be harnessed.

Stop asking how AI fits into your current organization. Start asking what your organization would look like if it were built today with AI at its core. When you stop de-weirding the technology, you stop limiting its potential. You move past the work slop and the KPI traps, and you begin the real work of digital transformation. The artifact in your lobby isn't a paperweight; it’s the key to your association’s future. It’s time to start using it like one.