I recently bought a home, and like anyone staring down empty rooms and a budget that isn't infinite, I went deep — books, YouTube channels, design blogs, the whole rabbit hole. And one concept kept coming up that surprised me. The best designers don't dread a room with an awkward layout or a window in a terrible spot. They welcome it. A blank slate, it turns out, can be your worst enemy. When everything is possible, nothing pushes you toward something interesting. But when you have a constraint — an odd angle, a load-bearing wall you can't move, a room that's too narrow — suddenly your creativity has somewhere to go. The restriction becomes the starting point for something you never would have designed otherwise.
That idea has stuck with me, and not just because I'm still figuring out what to do with a weirdly placed hallway closet. It applies directly to something I keep hearing from association professionals: the belief that they can't meaningfully innovate with AI because they don't have the resources of a major tech company.
That belief is worth challenging.
We recently sat down with Benjamin Rosman, professor of computer science and applied mathematics at the University of the Witwatersrand in Johannesburg, founding director of the MIND Institute, and one of TIME's 100 Most Influential People in AI for 2025. Rosman has spent years building the African AI ecosystem from the ground up: launching the largest machine learning summer school in the world, building a natural language processing company, and pulling together researchers across disciplines, from evolutionary science to neuroanatomy to the creative arts. His perspective on innovation, constraints, and what it actually takes to build technical capacity reshaped how we think about what's possible for organizations working with limited resources.
Because some of the most interesting AI innovation happening in the world right now isn't coming from organizations with unlimited budgets and massive engineering teams. It's coming from people who have to think differently precisely because of their constraints.
Consider what's happening across Africa. In Kenya, mobile money platforms like M-Pesa emerged years before phone-based payments became mainstream anywhere else, not because of superior infrastructure, but because traditional banking wasn't available to most people. The constraint created the opening.
In South Africa, a major health insurance company built its entire model around behavioral nudges — incentivizing members to exercise regularly and drive safely to lower premiums across the board. They even offer a service where members can call to have potholes repaired, because it turns out fixing roads is cheaper than paying out car insurance claims. That's not the kind of innovation that comes from having more resources. It comes from looking at a problem sideways because the straightforward approach isn't available.
In rural healthcare settings across the continent, clinics without access to specialists have found ways to improvise diagnostic solutions, connecting directly with technologists who can help and bypassing the long institutional chains that would slow things down in a more established system.
These are stories about constraints creating the conditions for solutions that more comfortable environments never would have produced.
For associations — many of which operate with lean teams, tight budgets, and limited technical staff — this framing matters. You're not at a disadvantage because you can't throw millions at AI infrastructure. You're in a position where the solutions you build will be sharper, more practical, and more tailored to your members because you don't have the luxury of being generic.
There's a useful way to think about how organizations approach AI adoption right now. Picture the old prospecting days — a wide open field with people digging in different spots, looking for gold. Someone strikes it rich in one area, and the crowd swarms. Everyone starts digging in the same place.
The problem is obvious. That one spot has a limited amount of gold. The more people who crowd in, the more they're competing for diminishing returns. Meanwhile, the rest of the field — potentially with richer deposits — goes unexplored.
This is what's happening across industries with AI. Everyone is rushing toward the same tools, the same use cases, the same implementation strategies. Chatbots for customer service. Content generation. Basic automation. These aren't bad applications, but when every organization is digging in the same spot, it's hard to find something that genuinely differentiates your value to members.
Associations have a unique advantage here. You are closer to your industry and your members than almost any other type of organization. You understand the specific pain points, the workflow bottlenecks, the information gaps that generic AI solutions don't address. That proximity is your map to a different part of the field — the part where fewer people are digging and the deposits might be richer.
The key is resisting the gravitational pull of whatever everyone else is doing and asking instead: what problem do our members actually have that AI could address in a way nobody else is offering?
Most association leaders have encountered the idea of data sovereignty by now — the principle that you should own and control your organization's data. It's an important concept, and it's gotten a lot of attention as organizations move more of their operations into cloud platforms and third-party tools.
But there's a related concept that hasn't gotten nearly enough attention, and it might matter even more: algorithmic sovereignty.
Data sovereignty asks: who owns the data? Algorithmic sovereignty asks a harder question: who controls the systems that turn that data into decisions?
Here's why this matters for associations. If your entire tech stack is built on external platforms and tools that you don't fundamentally understand, you're exposed in ways that aren't immediately obvious. What happens when a vendor changes their pricing model and your budget can't absorb it? What happens when a tool you've built workflows around gets acquired or discontinued? What happens when a new AI capability hits the market and you can't evaluate whether it's genuinely useful for your members or just well-marketed noise?
This isn't an argument for building your own AI models. For most associations, that doesn't make sense. But it is an argument for having some degree of technical understanding within your organization or your network — enough that you can make informed decisions rather than dependent ones.
Think of it this way. Every successful technology your members use today exists because of a long pipeline of innovation. The health and fitness app on someone's wrist only works because other people put GPS satellites into space. Those satellites only deliver accurate positions because someone else worked out the physics of relativity; without relativistic corrections to their clocks, GPS readings would drift off by kilometers every day. The flashy end product depends on deep foundational work at every level of the chain.
If you're only investing in the end of that pipeline — buying finished tools and plugging them in — you're building on a foundation you don't understand and can't control. You don't need to invest at every level, but you need enough depth to know what questions to ask and when something doesn't add up.
So what does building this kind of resilience actually look like for an association with limited resources?
It starts with people, not platforms. You need someone in your orbit — on staff, on your board, in your member community, or in a trusted advisory relationship — who understands AI at a level deeper than which tools are trending. Not someone who can build a large language model from scratch, but someone who can help you evaluate whether a vendor's claims hold up, whether a new tool is genuinely relevant to your members, and whether your current tech stack has dependencies that could become liabilities.
It means investing in literacy before tools. Before your organization commits budget to the next AI platform, invest in helping your team understand the basics of how these systems work. Not at a PhD level — at a level where they can participate meaningfully in strategy conversations and vendor evaluations. That understanding is what separates organizations that adopt AI intentionally from organizations that adopt it reactively.
And it means treating your constraints as design parameters, not limitations. Your small team means decisions happen faster. Your tight budget means you can't afford to chase hype — which is actually a strategic advantage, because it forces you to focus on what genuinely matters. Your deep connection to a specific industry or profession means you understand problems that no general-purpose AI company is going to solve for you.
Back to that oddly shaped room. The designer who dreads the awkward window placement is the one who ends up with a generic space. The designer who works with it — who lets the constraint guide the creative decision — ends up with something you couldn't have planned on a blank canvas.
The same is true for your association's approach to AI. Your constraints aren't obstacles standing between you and innovation. They're the starting point for the kind of innovation that no one with a bigger budget and a blank slate would ever think to build.