A lot of associations spent the past year or two making what felt like a big decision: ChatGPT, Claude, Gemini, Copilot. Teams were assembled. Budgets were approved. Policies were written. And for many organizations, it felt like the hard part was done.
It was an important step. But the landscape has shifted faster than most expected, and the thing that felt like the big choice — which AI model to go with — is rapidly becoming the least differentiating part of the equation.
The real question now isn't which model you picked. It's what you're building around it.
How Models Became Commodities
Rewind just a couple of years and there was a clear hierarchy in AI models. GPT-4 was the benchmark, and nothing else came close. For a long time, the open source community was playing catch-up. Claude wasn't in the same league. Google had Bard, and it was rough. The question people kept asking was: when will someone else have a model as good as GPT-4?
That gap has closed remarkably fast. Today, multiple models from different providers perform at comparable levels for most tasks. Open source models have narrowed the distance dramatically. Claude, Gemini, GPT — they each have relative strengths, but for the majority of what association professionals are doing day to day, any of them will get the job done well.
The model providers know this. There are very smart people at these companies, and they recognize there's no lasting moat around the model itself. A model is increasingly a plug-and-play commodity. What keeps users on a platform isn't the raw intelligence of the model — it's everything else.
Where the Value Actually Lives
In the AI world, the infrastructure and tooling wrapped around a model is called the "harness." And that's where real differentiation now sits.
Can the tool run in a loop and execute multi-step tasks without you babysitting every action? Does it log what it does so you can audit later? Can it integrate with your existing systems — your CRM, your file storage, your communication platforms? Does it manage permissions in a way that gives you control without creating so much friction that nobody uses it? Can you communicate with it remotely, from your phone, through the channels your team already uses?
These are harness questions, not model questions. And they're the ones that determine whether AI becomes genuinely useful across your organization or stays a novelty that a few enthusiasts play with on the side.
Think about what makes tools like Claude Code or Claude Cowork valuable. It's not just that Claude is a good model (though it is). It's the permission system that lets you control what the agent can access. It's the ability to interact with it through Slack or Teams. It's the logging that tracks what the agent did and why. It's the sandboxing that keeps experiments contained. Take that same harness and plug in a different model of comparable quality, and you'd get very similar results.
The harness is the product. The model is the engine inside it — important, but increasingly interchangeable.
What This Means for Your AI Vendor Decisions
If the model is becoming a commodity, then locking yourself into a single provider based on the model alone is a strategic risk. The flexibility to swap models — or use different models for different tasks — is becoming an advantage worth protecting.
Here's one practical implication: don't sign long-term enterprise agreements with AI providers if you can avoid it. Most associations are small enough that they're on standard online plans that can be cancelled anytime, which is actually the ideal position. You want the ability, at the end of a 12-month stretch with one provider, to evaluate whether you should switch.
These companies understand this dynamic, too. They're making it easier to switch on purpose — at least for now. Claude can import your ChatGPT conversations and memories. The onboarding experience is designed to lower the barrier to switching. That's partly competitive strategy, but it also reflects the reality that users have options and providers can't afford to make leaving painful.
Over time, expect that to shift. Platforms will try to build walled gardens — accumulating your data, your workflows, your organizational context in ways that make switching costly. It's the same pattern we've seen in every technology cycle. The data formats we all use now, like .docx and .xlsx, are open standards — but they didn't become open because Microsoft wanted them to be. They became open because the market demanded interoperability.
The same dynamics will play out in AI. Open standards for data portability are still emerging, but organizations that maintain flexibility now will have more leverage later. When you're evaluating AI platforms, ask about model agnosticism and data export capabilities. Those questions matter more than which model is powering the tool today.
The Copilot Question
A lot of associations defaulted to Microsoft Copilot because it was already in their ecosystem. That's a reasonable starting point, and this isn't an anti-Microsoft argument — they're doing a lot of things right directionally.
But Copilot's pace of evolution has been noticeably slower than competitors. The features and capabilities available in Claude or ChatGPT today are often months ahead of what Copilot offers. Microsoft's recent partnership with Anthropic to bring Cowork into the Copilot environment will help close some of that gap, but the platform is still moving at a different speed.
If Copilot is your organization's primary tool and your team is comfortable with it, that's fine. But make sure at least a handful of people have access to other tools so you can continuously benchmark. You don't need to switch every quarter, but you do need someone paying attention to whether the tool you chose 18 months ago is still the best fit for where you're headed.
Think Infrastructure, Not Just Interface
The associations that will be best positioned for the next phase of AI aren't the ones that picked the "best" model two years ago. They're the ones thinking about infrastructure — even if they're not building it themselves.
That means evaluating tools based on how they handle permissions, logging, and audit trails. It means asking vendors whether their platforms are model-agnostic or locked to a single provider. It means paying attention to agent frameworks — systems that can run AI in a loop, manage tool access, and track every action — because that's the layer where the most meaningful work is going to happen.
Enterprise-grade agent platforms do things like limit how many times an agent can run, how long it can operate, and how much it can spend. They monitor every prompt and every tool call. They can even have secondary systems reviewing the logs for anomalies. That kind of infrastructure is what separates a controlled, auditable AI deployment from someone running an open-ended agent on their laptop with no guardrails.
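As a rough illustration of what those guardrails look like in practice, here is a toy agent loop with explicit step, time, and spend limits, logging every action. The class and its parameters are invented for this sketch; real platforms implement far richer versions of the same idea.

```python
import time
from dataclasses import dataclass, field


@dataclass
class GuardedAgent:
    """Runs a step function in a loop under explicit limits, logging every action."""
    step: callable  # hypothetical task: takes step index, returns (action, done)
    max_steps: int = 5
    max_seconds: float = 30.0
    max_spend_usd: float = 1.00
    cost_per_step_usd: float = 0.10
    log: list = field(default_factory=list)

    def run(self) -> str:
        start = time.monotonic()
        spent = 0.0
        for i in range(self.max_steps):
            # Guardrails are checked before each step, not after the fact.
            if time.monotonic() - start > self.max_seconds:
                self.log.append("STOP: time limit")
                return "stopped: time"
            if spent + self.cost_per_step_usd > self.max_spend_usd:
                self.log.append("STOP: budget limit")
                return "stopped: budget"
            action, done = self.step(i)
            spent += self.cost_per_step_usd
            self.log.append(f"step={i} action={action} cost=${spent:.2f}")
            if done:
                return "finished"
        self.log.append("STOP: step limit")
        return "stopped: steps"


# A toy task that never declares itself done, so the step limit trips.
agent = GuardedAgent(step=lambda i: (f"draft section {i}", False), max_steps=3)
print(agent.run())       # stopped: steps
print(len(agent.log))    # 4 (three logged steps plus one stop entry)
```

The open-ended agent on someone's laptop is the same loop with the limits and the log deleted; the infrastructure is what makes the difference auditable.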
You don't have to build this yourself. But you should understand that it exists, that it's where the industry is heading, and that the decisions you make now about platforms and vendors will determine how easily you can get there.
The Model Will Keep Changing. Build Around That.
Two years ago, picking the right AI model felt like a high-stakes decision. Today, it's closer to choosing which brand of cloud storage to use — it matters, but it's not the thing that defines your strategy.
The model will keep changing. Six months from now, there will be something newer and arguably better. That's the nature of a field where capability doubles every few months. The organizations that handle this well won't be the ones chasing every new model release. They'll be the ones that built flexible infrastructure — or chose vendors who did — so that when the model changes, everything else keeps working.
Pick a primary tool. Let your team learn it deeply. But keep a few people experimenting with alternatives, stay away from contracts that lock you in, and pay attention to the harness, not just the engine. That's where the real strategic decisions are now.
March 31, 2026