
Your association just spent six months building a new member portal. The features are impressive. The interface is clean. Leadership loves the demo. Three months after launch, usage sits at 11%.

Sound familiar? The problem isn't your technology. The problem is that you built for your ideal scenario instead of your actual reality.

In a recent conversation on the Sidecar Sync podcast, Shekar Sivasubramanian, Head of Wadhwani AI, shared a philosophy that flips the script on how we approach technology projects. His organization has deployed AI solutions to over 100 million people across India. They didn't get there by building the most sophisticated technology. They got there by obsessing over deployment reality before writing a single line of code.

His maxim: "Deployment first, day one."

The Checkerboard Nobody Has

Here's a story that illustrates the problem perfectly. Wadhwani AI built a solution to measure infant health metrics using an eight-second video. The technology worked brilliantly. Accurate measurements of weight, length, and head circumference from a simple video clip. Revolutionary stuff.

There was just one catch: it required a checkerboard pattern as a reference object for sizing. Makes sense from a technical standpoint. A checkerboard gives you known dimensions, consistent spacing, high contrast for the camera. Perfect. Except in rural Indian communities, nobody had access to checkerboards.

The ultimatum came down: "Either we solve it with a ruler or we throw the solution out. There is no other option." They redesigned for a ruler. A single, simple ruler. Something that costs almost nothing, fits in a bag, and every community health worker already carries one. The solution worked. Not because the technology was fancier, but because it matched the deployment environment.

Now think about your association. How many member tools require desktop computers when your members work from job sites? How many assume stable internet in areas with spotty coverage? How many demand 30 minutes of focused attention from people who work in 5-minute increments?

You're building for checkerboards. Your members have rulers.

What "Deployment First" Actually Means

When someone brings a technology problem, the first question usually sounds like: "What's the best platform for this?" or "Which vendor should we use?" or "Can AI do this?" Wrong starting point.

The first question should be: "Tell me your deployment environment." Where will this be used? By whom? Under what conditions?

Get specific. What devices do they actually have access to? (Not what you wish they had.) What's their connectivity situation? (Not your office wifi.) What's their technical literacy level? (Not your most tech-savvy member.)

What does adoption actually cost them? Time, money, learning curve, changing existing workflows. And critically: what happens when it fails or makes a mistake?

These aren't obstacles to work around. They're design parameters. Most technology projects fail because we treat constraints as annoyances instead of as the foundation of good design. We build what's technically possible, then act surprised when adoption tanks.

Prescriptive vs. Feature-Rich

Let's explore prescriptive solutions versus feature-rich solutions. When working with people trying something new, you need prescriptive tools. Two choices. Clear paths. Guardrails that prevent wrong turns.

Feature-rich tools give users nine options, six radio buttons, four dropdown menus. They transfer all the decision-making responsibility to the user. When something goes wrong, you can shrug and say, "Well, they chose the wrong setting."

Prescriptive tools take responsibility for outcomes. They say, "We've thought this through for your specific situation. Here's what you should do."

Think about your member portal. Is it a Swiss Army knife when they need a hammer? All those customization options you're proud of? They might be preventing adoption. Your members don't want flexibility. They want to accomplish a task and move on with their day.

The Cost Constraint Nobody Talks About

An AI reading assistant was designed to help children across India improve their oral fluency. The technology had to work at 5 paise per inference (one-twentieth of a rupee, roughly $0.0006 per use).

Why does that number matter? Because at that price point, adoption happens fast. Schools can afford to let every student use it. Data collection accelerates. The model improves quickly. You hit a virtuous cycle.

If the cost is 10x higher, deployment slows to a crawl. Data trickles in. The model takes years to improve instead of months. You never reach scale. The technology was designed around an economic constraint, not in spite of it.
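To make the economics concrete, here is a back-of-envelope sketch of that cost constraint. The exchange rate is an assumption (roughly 83 INR per USD), not a figure from the article:

```python
# Back-of-envelope math for the 5-paise inference target.
# INR_PER_USD is an assumed exchange rate (~83 at time of writing).
INR_PER_USD = 83.0

def usd_per_inference(paise: float) -> float:
    """Convert a per-inference price in paise to US dollars."""
    rupees = paise / 100.0          # 100 paise = 1 rupee
    return rupees / INR_PER_USD

cost = usd_per_inference(5)         # the target from the article
print(f"${cost:.4f} per use")       # roughly $0.0006

# The same budget covers 10x fewer uses at 10x the price,
# which is why the constraint shapes the whole design:
budget_usd = 1000.0
print(round(budget_usd / cost))        # uses affordable at target price
print(round(budget_usd / (10 * cost))) # uses affordable at 10x price
```

At the target price, a modest budget funds well over a million uses; at ten times the price, the same budget funds a tenth as many, which is the difference between a virtuous data cycle and a trickle.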

What's the "cost" for your members to adopt your solutions? I'm not just talking about subscription fees. What's the time investment? The learning curve? The disruption to existing workflows? The psychological cost of trying something new and potentially looking foolish if it doesn't work?

If those costs are too high, your adoption numbers will never move. You can't fix this after launch. You have to design for it from day one.

Meeting Members Where They Are

When agricultural specialists were explaining their pest-detection app to farmers, they led with AI terminology. Model accuracy. Neural networks. Computer vision. The kind of language that impresses funders and technical audiences.

The feedback was immediate and clarifying. The farmers didn't need to understand the technology. They needed to understand that someone cared enough to solve their actual problem in a way that worked for their actual lives.

Technology adoption in the real world hinges on trust. Trust comes from proximity. From showing up. From understanding someone's daily reality well enough to build something that fits into it. User research isn't running a survey. User research is spending days with your members. Watching what they actually do, not what they say they do. Understanding their workflows. Seeing where they get stuck. Noticing what tools they already use and trust.

That pest app ended up with a feature nobody planned initially. In rural areas, not everyone has a smartphone. But farmers talk to each other. They congregate at local shops. Farms are geographically clustered.

So the system got redesigned around that reality. When a farmer with a smartphone (a "lead farmer") photographs a pest problem and gets a diagnosis, the system registers that nearby farmers need to be informed. The lead farmer then alerts neighbors through existing social networks—conversations at the shop, stopping by adjacent fields.

One smartphone, one point of technology access, suddenly serves ten farms. The app didn't try to reach everyone digitally. It worked with how information already spreads in rural communities.

That only happens when you understand the deployment environment. When you're there. When you're paying attention.

What This Means for Associations

Designing AI to improve infant health outcomes or literacy rates for millions of children carries far higher stakes than building a member portal. When you're working on social impact at scale, lives literally depend on getting deployment right.

But here's what we can learn. These projects succeed not because they cut corners or build bare-bones products. They succeed because they practice ruthless focus on what actually matters for adoption. Every feature, every choice, every complexity serves a specific purpose tied to deployment reality.

The lesson isn't "build less." The lesson is "build with intention." Your member portal doesn't need fewer features because you're cutting costs. It might need fewer features because each additional option creates friction, reduces clarity, and lowers adoption. That's a strategic choice, not a budget constraint.

When stakes are high, you can't afford to build things people won't use. Neither can your association.

How to Apply This Tomorrow

Here's a framework you can use:

Name your deployment environment precisely. Not "members" or "young professionals" or "small firm owners." Get specific. "Members on construction sites between 7am-3pm, using phones with limited data plans, checking during breaks."

List every constraint. Technical limitations. Budget restrictions. Time pressures. Organizational politics. Competing priorities. Write them all down. These are your design parameters, not your excuses.

Design to the tightest constraint first. If some members have limited connectivity, everyone gets the low-bandwidth version. If some members have minimal technical literacy, everyone gets the simple interface. This isn't dumbing down. This is being responsible.

Build prescriptive, not comprehensive. Two clear paths beat nine flexible options. Take responsibility for guiding people to success instead of giving them freedom to fail.

Calculate the true adoption cost. Add up everything: learning curve, behavior change, time investment, workflow disruption, psychological risk. If that total is too high, redesign.

Plan to ride shotgun. Budget for 2-3 years of active support, iteration, and improvement. Technology adoption isn't a launch. It's a journey you take with your members.
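The "calculate the true adoption cost" step above can be sketched as a simple scoring helper. The factor names, 0-3 scale, and redesign threshold are illustrative assumptions, not a published rubric:

```python
# A hypothetical adoption-cost checklist, scored 0-3 per factor.
# Factors and threshold are illustrative, not an established framework.
FACTORS = ["learning_curve", "behavior_change", "time_investment",
           "workflow_disruption", "psychological_risk"]

def adoption_cost(scores: dict) -> int:
    """Sum the 0-3 scores across every adoption-cost factor."""
    return sum(scores[f] for f in FACTORS)

def needs_redesign(scores: dict, threshold: int = 8) -> bool:
    """Flag a design whose total adoption cost exceeds the threshold."""
    return adoption_cost(scores) > threshold

# Example: a portal with a steep learning curve and disruptive workflow.
portal = {"learning_curve": 3, "behavior_change": 2, "time_investment": 2,
          "workflow_disruption": 2, "psychological_risk": 1}
print(adoption_cost(portal), needs_redesign(portal))  # 10 True
```

The point isn't the specific numbers; it's forcing the team to put every hidden cost on the table before launch, when redesign is still cheap.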

The Ruler, Not the Checkerboard

The best technology doesn't win. The most deployable technology wins. One organization reached 100 million people not by building the most sophisticated models, but by designing for reality from day one. They asked different questions. They prioritized different things. They measured success differently.

Your members don't need the most sophisticated tools. They need tools that work in their actual environment. So before your next technology investment, before the vendor demos and the feature comparisons and the pilot programs, ask one question: Do we have the ruler, or are we assuming they have the checkerboard?


🎧 Want to hear the full conversation about deployment-first design and AI implementation? Listen to the complete Sidecar Sync podcast episode with Shekar Sivasubramanian.

Post by Mallory Mejias
October 1, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.