
Riding Shotgun with AI: Why You Can't Set It and Forget It

Written by Mallory Mejias | Oct 2, 2025

Your association just launched an AI assistant to handle member questions. The vendor demo was impressive. The pilot went well. Leadership expects it to run autonomously within a month.

That timeline misunderstands how AI works at a fundamental level.

We're used to software that follows a familiar pattern. Build it, test it, launch it, move on to the next project. Maybe you come back for updates every quarter or patch a bug occasionally. But mostly? It runs itself.

AI doesn't work that way. In a recent Sidecar Sync conversation, Shekar Sivasubramanian from Wadhwani AI put it bluntly: you need to be willing to "ride shotgun with AI" for two to three years before it starts driving reliably on its own. 

That changes everything about how we budget, staff, and commit to AI projects.

AI Is Only 3% of the Solution

Wadhwani AI has deployed numerous AI solutions across India, and they've learned something most organizations miss. The AI model itself represents roughly 3-5% of what makes the project successful.

The other 95%? Ecosystem building. User education. Relationship management. Continuous model retraining. Investigating edge cases. Explaining errors. Adjusting deployment based on feedback. The unglamorous, time-intensive work of staying committed when things don't work perfectly.

This has huge implications for how associations approach AI. If you're budgeting for AI like you budget for software licenses, you're already setting yourself up for failure. If you're staffing it like a platform subscription that runs itself after setup, same problem.

Success with AI comes from staying committed while the technology learns, not from having the perfect model on day one.

The Journey from 82 Grams to 10 Grams

Wadhwani AI built a tool to measure infant health metrics using a simple video captured on a smartphone. Weight, length, head circumference. Critical measurements that typically require specialized equipment and trained staff.

When they first deployed, they had somewhere between 25,000 and 30,000 measurements in their dataset. Scientifically, they knew they needed more. Much more. But you have to start somewhere.

Their initial error rate? 82 grams (about 3 ounces) on weight measurements. Not terrible when you consider the alternative in resource-constrained environments might be no measurement at all, or equipment that's equally imprecise. But not good enough.

So they stayed with it. Kept testing. Kept comparing against physical measurement devices. Kept collecting data from real deployments. Kept retraining the model.

The error rate dropped to 41 grams. Then to 12 grams. Then under 10 grams. Consistently beating the alternative measurement methods available.

This didn't happen in a month. This happened over years of active partnership with the communities using the tool. Years of investigating why certain cases produced errors. Years of incremental improvement.

You can't rush accuracy. You earn it through commitment.
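
If you want to picture what "kept comparing against physical measurement devices" means in practice, here's a minimal sketch of tracking mean absolute error across model versions. The version names, thresholds, and measurement pairs are invented for illustration; this is not Wadhwani AI's actual evaluation pipeline.

```python
# Illustrative only: tracking mean absolute error (MAE) for each retrained
# model version against reference measurements from a physical device.
# All numbers below are made up for the sake of the example.

def mean_absolute_error(predicted, reference):
    """Average absolute difference between model output and trusted measurement."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical paired infant weights in grams: (model estimate, physical scale)
evaluations = {
    "v1_initial":   [(3120, 3200), (2850, 2760), (3410, 3330)],
    "v2_retrained": [(3160, 3200), (2800, 2760), (3370, 3330)],
    "v3_retrained": [(3195, 3200), (2768, 2760), (3338, 3330)],
}

for version, pairs in evaluations.items():
    predicted = [p for p, _ in pairs]
    reference = [r for _, r in pairs]
    print(f"{version}: MAE = {mean_absolute_error(predicted, reference):.0f} g")
```

The mechanics are trivial. What takes years is collecting enough trustworthy reference measurements from real deployments to make that comparison meaningful.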

The Transparency Paradox

Conventional wisdom says you hide your weaknesses. You don't advertise that your AI makes mistakes. You smooth over the rough edges in your marketing materials and hope users don't notice the imperfections.

Wadhwani AI did the opposite, and it's one of the reasons their deployments succeed. They're completely transparent about error rates. They tell government partners and users exactly where the technology stands. "Our error was 82 grams. Now it's 41 grams. Now it's 12 grams."

They run parallel systems. The AI measurement runs alongside traditional methods. They keep humans in the loop for critical decisions. They don't automate away accountability until the technology has proven itself repeatedly over time.

The paradox: this transparency builds trust faster than perfection. When you acknowledge limitations openly, people understand they're partnering with you in improvement. When you hide limitations and they discover them anyway, trust collapses.

For associations, this means rethinking how you communicate about AI tools to members. The instinct is to launch when it's "ready," presenting it as a finished product. The smarter approach might be launching earlier with clear expectations: "This is learning. Here's how accurate it is today. Here's how we're improving it. You're helping us make it better."

What Riding Shotgun Actually Means

What does this active partnership look like day-to-day? Because "riding shotgun for three years" sounds abstract until you understand the actual work involved.

The Wadhwani AI team investigates significant errors to understand what went wrong. They're not just logging issues in a dashboard. They're analyzing patterns, retraining models as new data comes in, and adjusting deployment based on what they learn.

They explain to users what happened when the AI made a mistake. Not defensive explanations. Clear, honest communication about what went wrong and what they're doing to fix it.

They stay present through the learning phase. Testing retrained models. Comparing performance against previous versions and against alternative methods. Tweaking interfaces based on real-world feedback.

The uncomfortable part: staying accountable when the technology fails. Because it will fail. The question is whether you're there to help people through those failures or whether you've moved on to the next shiny project.

Understanding Your Data Reality

Wadhwani AI faced a specific data challenge: they needed 200,000+ measurements but started with 25,000-30,000. They could only close that gap through active deployment and usage, even when the model wasn't perfect yet.

For associations, the data situation probably looks different. You likely have lots of data—member interactions, event registrations, content engagement, certification completions. 

But having data and having the right data for your AI use case are different things. Maybe you have member profiles and event attendance records, but you're trying to build a networking recommendation engine that needs professional specializations and interest areas you've never systematically captured. Maybe you track certification completions but you're building a system to predict member churn that needs engagement signals across email, community forums, and event participation—data that lives in three different systems you've never connected.

The question becomes: do you have the data your specific AI project needs, structured in a way the model can learn from? And if not, what's your plan for getting it?

Unlike Wadhwani's scenario where they needed massive scale to improve accuracy, you might discover you need different data, not just more of what you already have. That might mean instrumenting new tracking, restructuring existing databases, or being strategic about which AI projects are realistic given your current data landscape.
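
To make the "three systems you've never connected" problem concrete, here's a minimal sketch of assembling engagement signals into one table a churn model could learn from. It assumes hypothetical exports from your email platform, community forum, and event system keyed by a shared member ID; the file names and columns are placeholders, not a reference to any specific AMS.

```python
# Illustrative sketch: assembling churn-model features from systems that were
# never connected. File names, columns, and member IDs are hypothetical.
import pandas as pd

email  = pd.read_csv("email_engagement.csv")   # member_id, opens_90d, clicks_90d
forums = pd.read_csv("forum_activity.csv")     # member_id, posts_90d, replies_90d
events = pd.read_csv("event_attendance.csv")   # member_id, events_12m

# Join on a shared member identifier; missing activity becomes zero, not an error.
features = (
    email.merge(forums, on="member_id", how="outer")
         .merge(events, on="member_id", how="outer")
         .fillna(0)
)

# Only now does a churn model see engagement across all three systems.
print(features.head())
```

The code isn't the hard part. The hard part is whether a shared member ID exists across those systems at all, and whether anyone has been capturing the signals you need.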

>> Related: Rethinking the Tech Roadmap: Build Your AI Data Platform Before Replacing Your AMS

Why Associations Struggle With This

There are structural reasons why the "ride shotgun for years" model clashes with how most associations operate.

Budget cycles typically run annually. Projects get funded, launched, and evaluated within 12-month windows. Nobody's budgeting for three years of active AI management when they approve a project.

Project management culture reinforces "launch and move on." You ship version 1.0, celebrate the milestone, reassign the team to the next priority. The idea of babysitting a project for years after launch feels inefficient.

Vendors often overpromise on autonomy timelines because that's what closes deals. "It'll basically run itself after the first month" sounds better than "you'll need dedicated staff managing this for 2-3 years."

Staff turnover means the person who understood the model's quirks and limitations leaves, taking that institutional knowledge with them. The replacement is starting from scratch or relying on documentation that never captures everything.

Board expectations don't include multi-year technology supervision. They want to see results, impact metrics, ROI. "We're still in the learning phase" is a hard sell in year two.

None of these are insurmountable. But they require changing how we think about AI projects from the start.

A Different Framework

If you're serious about implementing AI in your association, the commitment looks like this:

Budget for the long haul. Not three months of implementation and then maintenance mode. Plan for active management, iteration, and improvement. That includes staff time, not just platform costs. How long? Depends on the complexity and stakes of what you're building.

Assign clear ownership. Not "the team" or "the department." One person who's accountable for riding shotgun. Someone who's tracking errors, communicating with users, coordinating improvements, and advocating for necessary resources.

Set transparent expectations with members from day one. The tool is learning, and they're learning alongside it. You're not selling them a finished product. You're inviting them into a partnership where their feedback directly shapes how the tool evolves.

Create tight feedback loops. Make it easy for members to report issues. Show them how their input drives specific improvements. Close the loop by communicating what changed and why.

Measure improvement over time, not perfection at launch. Track error rates declining. Monitor user confidence growing. Document the journey from "this is rough" to "this actually works." That progression tells the real success story.

Keep parallel systems running longer than feels comfortable. Let members choose between the AI tool and traditional methods until the AI proves itself consistently. Don't force adoption before trust is earned.

Plan for the human safety net. Critical decisions shouldn't be fully automated until the AI has demonstrated reliability across edge cases and unexpected scenarios. Human review isn't a failure of AI. It's responsible deployment. (A simple sketch of what that routing can look like follows below.)
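
Here's a rough sketch of that routing idea: the AI answers on its own only when it's confident, and everything else lands in a staff review queue. The 0.85 threshold and the record fields are assumptions for illustration, not a prescribed design.

```python
# Illustrative human-in-the-loop gate: confident answers ship directly,
# everything else waits for staff review. Threshold and fields are assumptions.

CONFIDENCE_THRESHOLD = 0.85

def route(question: str, ai_answer: str, confidence: float) -> dict:
    """Decide whether an AI answer goes out directly or to human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"question": question, "answer": ai_answer, "status": "sent"}
    return {"question": question, "answer": ai_answer, "status": "needs_human_review"}

print(route("When does my certification expire?", "December 31.", confidence=0.62))
# -> routed to staff, because the model isn't sure enough to answer on its own
```

Start with a conservative threshold and loosen it only as the tool earns trust over time.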

Matching Commitment to Stakes

Let's be clear about something important: not every AI project requires the same level of commitment.

Wadhwani AI is measuring infant health metrics where accuracy directly impacts medical decisions. That demands years of rigorous testing, continuous improvement, and unwavering commitment. The stakes are life and death.

Your member chatbot answering common questions about certification requirements? Different stakes entirely. If it makes a mistake, someone follows up with your staff. Nobody's health is at risk.

The framework is the same—transparency, feedback loops, active management, continuous improvement. But the timeline and intensity scale with complexity and consequences.

A simple content recommendation system might need three months of active tuning. A sophisticated member matching algorithm might need a year. A tool that makes high-stakes decisions about member credentials? That deserves the full multi-year treatment.

The mistake isn't committing too much or too little. The mistake is committing without understanding what you're actually signing up for. Match your investment to the reality of what you're building.

Before starting an AI project, ask: What happens if this fails? Who does it impact? How critical is accuracy? How complex is the problem we're solving? Your answers should shape your timeline, budget, and staffing plans.

AI is increasingly accessible. The use cases for member engagement, content delivery, and operational efficiency are real and compelling. You don't need to be intimidated into inaction.

You just need to be realistic about the journey. Some AI projects are sprints. Some are marathons. Know which one you're running before you lace up.

Want to hear more about what long-term AI partnership actually looks like? Listen to the full Sidecar Sync podcast conversation with Shekar Sivasubramanian from Wadhwani AI.