Summary:
In this special interview edition of the Sidecar Sync, Mallory Mejias sits down with Shekar Sivasubramanian, Head of the Wadhwani Institute for Artificial Intelligence, to explore how AI can be deployed at scale for social good. Shekar shares powerful stories of AI in action—from an oral reading fluency assistant helping millions of Indian children, to smartphone-powered pest detection for rural farmers. This episode dives into designing for deployment (not just innovation), building trust with limited data, and why humility and human-centered thinking are vital for transformative impact. It’s a conversation full of heart, hope, and hard-earned wisdom.
Shekar Sivasubramanian brings nearly four decades of experience in technology and business leadership, having scaled multiple startups to billion-dollar valuations. Now Head of the Wadhwani Institute for Artificial Intelligence, he leads a team delivering AI solutions in agriculture, healthcare, and education to underserved populations across India. An alum of IIT Bombay and the University of Missouri–Kansas City, Shekar also maintains deep technical expertise and a research affiliation with Carnegie Mellon University.
👥 Provide comprehensive AI education for your team
https://learn.sidecar.ai/teams
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
Wadhwani AI ➡ https://www.wadhwaniai.org
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00] Intro: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence, and associations.
[00:00:14] Mallory: Hello everyone and welcome to today's episode of the Sidecar Sync Podcast. My name is Mallory Mejias, and I'm one of your co-hosts along with Amith Nagarajan, and today we've got a special interview edition of the Sidecar Sync lined up for you. Today we're joined by Shekar Sivasubramanian, Head of the Wadhwani Institute for Artificial Intelligence.
[00:00:35] Shekar has nearly 40 years of experience leading large tech and business organizations, taking multiple companies from inception to billion-dollar valuations. Today he leads a team of more than 250 professionals at Wadhwani AI, deploying solutions in agriculture, health, and education with a mission to bring AI to a billion people and beyond.
[00:00:58] In our conversation today, [00:01:00] Shekar shares how an oral reading fluency assistant could transform learning for tens of millions of children across India, why designing for deployment first, not technology first, is the key to scaling AI for social good, what it means to ride shotgun with AI when data is limited, and why trust matters more than flashy models.
[00:01:23] And finally, we'll talk about how associations, like governments and NGOs, can learn to meet people where they are to drive real impact. We've interviewed many people on the Sidecar Sync Podcast, and I've thoroughly enjoyed all of our interview episodes, but I've gotta tell you all, this has been one of my favorite interviews so far.
[00:01:43] I left this conversation feeling inspired and hopeful, and I think you will too. Please enjoy this interview with Shekar.
Shekar, thank you so much for joining us on the Sidecar Sync Podcast, from so far away too. I'd like to first kick off the episode [00:02:00] by allowing you to talk a little bit about yourself and your background, and then also the incredible work that you all are doing at Wadhwani AI.
[00:02:06] Shekar: Uh, thank you very much for the opportunity. My name is Shekar Sivasubramanian, and I head up the Wadhwani Institute for Artificial Intelligence. I started in the tech industry in India when it was at its early stages, so I've got about 40 years of experience. I worked in global organizations, corporate setups in the tech space, and using tech, we created multiple business opportunities.
[00:02:35] I was also involved in various research activities. And then about five years ago, I started my association with Wadhwani. I've known one of the founders for literally the last 35 years, so I understand the context of the organization. Our mission and goal at our institute is [00:03:00] to be able to deliver solutions using AI for social impact and social good. The work lies in problem definition: the ability to uncover a problem, understand the ecosystem and the overall operations, define the problem, solve it, understand the data at the beginning and work your way out as you start implementing the solution, understand the user groups that we work with, and understand how we make sure that the impact is measured.
[00:03:43] All of this is in India. We started off by embedding ourselves in working with the government, identifying problems and defining problems. Today we have started our distribution network by establishing ourselves in multiple states in India. So we've got about 25-plus solutions in [00:04:00] AI that are at various stages of deployment, efficacy, and impact.
[00:04:06] And over the last four to five years, we have been humbled by what it took to get the [00:04:13] whole thing up and running. We're the largest organization of its kind in India. We have over 250 full-time employees and close to 200 consultants. So it's a huge organization that is completely committed for only one reason. We exist for only one reason: to make AI count for the individuals who need it the most in the social pyramid.
[00:04:35] And that's the reason for our existence.
[00:04:38] Mallory: That's incredible, Shekar. Thank you for sharing that context and background. We spend a lot of time on the podcast talking about AI for good, certainly, but particularly AI in the context of professional associations and how they can use it internally within their business, but also kind of downstream to bring the most value to their own members and hopefully impact their members' professions and [00:05:00] industries.
[00:05:00] I'm curious, and this might be a tough question for you: of all of the solutions that you've deployed, or even the ones that you're currently working on, do you have a standout, one that you always lead with if someone's asking you about the organization: "Let me tell you about this one project that we worked on"?
[00:05:17] Shekar: Yes, and it's a recent one. We have multiple, but it's one that I would say is arguably one of the most important, at least for me personally. And that is the oral reading fluency assistant, which is the capability to provide an assessment to a child where they read a paragraph aloud. It establishes the foundational skill within a child, as an AI assistant, to ensure that they can improve their spoken fluency.
[00:05:48] And when you do that multilingually in India, and if you do that, say, at the age of six or seven, and you add on to that by providing a [00:06:00] whole set of very non-intrusive, very simple supports in the learning exercise for children, I believe that it'll make a generational difference to the learning efficacy of the entire child population and adult population in India.
[00:06:20] In other words, take, as an example, a child at the age of, say, seven or eight, who's starting to get comfortable: they're given a paragraph, they make some mistakes in it, and then as they get to 10 or 12 they get even better at it, and over a period of time they start realizing, you know what?
[00:06:39] This helps my entire education. It's personalized. It's in the language I speak. It establishes communities which are comfortable for me. It doesn't scare me. It's very supportive, and most important, there are limited dangers in such an overture. You don't need to [00:07:00] worry about excess risks and complications on occasion, given the fact that AI will make a mistake.
[00:07:06] The penalty of mistakes in this environment is very low, so it's almost designed to be safe; it's going to be a safe, secure environment. We'll make an occasional mistake. It's going to happen with children; it's going to happen with all of us. But you can systematically improve fluency, which, as is widely researched and understood, improves learning ability, and we can support it with not just assessment but diagnostics and remediation.
[00:07:37] There are various tools in India.
[00:07:47] The nature of spoken languages is so rich in India that you can take some things from each of the languages and just improve yourself in the way you speak. Your diction can improve, your confidence can improve. It's important even otherwise. [00:08:00] I also look at it as being a tool for adults in India. So let's say there are people in the trade of being a plumber in a certain city, and they find it difficult to speak a set of phrases, which causes them to get a lesser amount of money because the customer feels, you know, they're not very good in their subject. Well, make them better in the way they speak. Let them speak the terms very clearly, so that the counterparty understands them with a great amount of clarity. If that happens, they make more money.
[00:08:26] It's a revolutionary kind of foundational skill. And I would argue something like this: say I wanna go to Germany or France and I don't know the language very well; if I practice it, it'll help me with that. So I feel that this is one solution whose AI may be nothing special, but the impact is gonna be tremendous if adopted pan-India.
[00:08:49] And our goal, which we're hoping to achieve by 2027: by December 2027, we would like to get at least 50 to 80 million children [00:09:00] to adopt it.
[00:09:01] Mallory: Wow.
[00:09:02] Shekar: In India, all our numbers are in the several millions. Even today, I touch a hundred million people using AI solutions. Not all of them have been done in the perfect way.
[00:09:13] Not all of them meet the satisfaction of a scientist saying, you know, this has got to be better. But that's just the journey. You have to be willing to stay with AI. As you know very well, AI is not like software. You don't just release it and walk away. When you do your version zero release, or version one release, that's not the ending point.
[00:09:32] You have to be willing to stay with it for at least two to three years before you see the impact and the benefits of AI, which is one of the greatest challenges that people do not understand about AI. Think of the one in 10,000 or one in 100,000: what happens to that one person who got the wrong result?
[00:09:49] If you cannot think like that, then you're not being responsible with AI. The fact that AI will make a mistake is something that's part of your design thinking in the way you deploy AI, [00:10:00] particularly in environments where the people who will use it may not necessarily conform to a pattern.
[00:10:15] You may not even have sufficient guardrails, because you would not even have thought of those kinds of problems. So you have to be willing to ride shotgun with AI for a period of time, till it starts actually, you know, driving the car in the right manner. You cannot leave it alone. That's the hard part for people in a world where they're used to delivering and then getting the results quickly, whatever it is, financial or non-financial benefits. Here, you've gotta stay the course.
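[Editor's note: the episode doesn't detail how Wadhwani AI's assistant works under the hood, but a standard way to score oral reading fluency is words correct per minute (WCPM): align a speech-recognition transcript of the child's reading against the reference passage and count the matches. A minimal illustrative sketch in Python; the function and scoring below are our own simplification, not the institute's model.]

```python
from difflib import SequenceMatcher

def words_correct_per_minute(reference: str, asr_transcript: str, seconds: float) -> float:
    """Score oral reading fluency as words correct per minute (WCPM).

    Aligns the ASR transcript of the child's reading against the reference
    passage and counts matching words. Illustrative simplification only.
    """
    ref_words = reference.lower().split()
    read_words = asr_transcript.lower().split()
    matcher = SequenceMatcher(a=ref_words, b=read_words)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct / (seconds / 60.0)

# A child reads a 12-word passage in 20 seconds, misreading one word.
passage = "the little bird flew over the tall tree and sang a song"
reading = "the little bird flew over the tall tree and sing a song"
print(round(words_correct_per_minute(passage, reading, seconds=20)))  # 33
```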
[00:10:45] Mallory: That is incredible, Shekar. Thank you for sharing all that. Education was certainly one part I wanted to dive into with you, because it's highly relevant for associations, who oftentimes provide really core education for their [00:11:00] memberships. And I think the idea of having, you know, a personalized AI learning assistant is exciting. But also, I really wanna double down on what you said about the downstream impact.
[00:11:11] So, having a plumber, potentially, who can communicate better and earn a better living for himself and his family members, in the same way that providing better education to a healthcare professional or to a teacher through their professional association is an impact on that individual, but also an impact on the communities that they serve as well.
[00:11:29] Have you seen any, I know you said this is a more recent project, any early results from this, uh, AI assistant?
[00:11:36] Shekar: Yes, yes, it's starting. So as you know, the scale of adoption tends to be pretty large in India, and that's kind of good once you get adoption. We have released it in one state, where 5 million kids have used it.
[00:11:52] So it's not small. The advantage of that is you start getting the volume of data, and it's integrated into their [00:12:00] assessment system. So the AI is already built in, and it has already started reducing the load for teachers on those types of tasks. And I would like to state something here.
[00:12:13] I'll give you another example of a far more innovative aspect of AI.
[00:12:18] Mallory: Mm-hmm.
[00:12:18] Shekar: But my personal belief is, when we discuss AI, my preference is, you know: how do you use AI to solve a problem? In that statement, I don't want to focus on AI; I want to focus on the problem. In other words, in that statement, the most important word is problem, not AI.
[00:12:38] How do you use AI to solve a problem? That means you can take even a relatively straightforward problem and make it count. I don't look for great innovation in AI; I'm happy to do something very simple, but something that moves the needle for society, [00:13:00] because I'm focused on societal change, and AI is one thing that will certainly help.
[00:13:07] But there are several other things that we have found have a much greater impact on the final outcome. So in a very crude sense, in our world, we think AI is about three to five percent of problem solving, because the rest of it is the ecosystem: the education, the getting people together, making it count, spending time with it, letting people make mistakes, and
[00:13:36] retraining models constantly and improving them. This journey is what finally causes the result, not a super-great AI model. And it's extremely important to understand the context in which it is being used: a society where individuals may be experiencing technology itself for the first time, not just [00:14:00] AI.
[00:14:01] Individuals may be holding a cell phone or a smartphone for the first time. So what do you do? We give what I often call prescriptive, not feature-rich, solutions. Prescriptive. This, again, is very important in the social sector. See, when you are working with a set of individuals... I often call it India and Bharat. India is the urbanized people who, you know, can search everything on the web.
[00:14:28] Bharat is individuals who live all over. It's a big population, it's highly aspirational, there's an enormous amount of talent, but they are working their way up right now to access technology, to learn things. In that space, you need to provide what I call prescriptive solutions. So here, what that means is: if I were to give a feature-rich application to a person, I would be giving that person lots of choices.
[00:14:52] In a certain sense, not irresponsibly, but in a certain sense, I can walk away saying, you know what? That person selected the wrong choice. [00:15:00] That's why they got the wrong outcome. I do not have that choice in this space. I'm responsible for their actions as well. That means I have to govern their actions within guardrails.
[00:15:12] I cannot allow open-ended applications which allow nine different choices, six radio buttons, four dropdowns. Not at all. Give them something with two choices.
[00:15:25] Mallory: Mm-hmm.
[00:15:25] Shekar: And if they do something wrong, I have to literally handhold them through the application and ensure the benefits. That's far more important than a feature-rich application; that time will come later.
[00:15:43] Once people graduate to getting the benefits: time benefits, quality benefits, some benefit that a human being can relate to, that makes their life better, then they'll open their minds to other forms of learning, which will [00:16:00] make it more interesting for them to use it. Suddenly expecting people to make the dramatic jump, purely based on faith or trust?
[00:16:08] It's not possible. You have to help them reach that decision-making clarity, saying: no, I'm comfortable now using this AI. I know it'll work for me. It's reliable for me. I like it. It helps me. They need to reach that point, and it's a very emotive and simple point. The problem is, in a world where we have very limited attention spans, we can quickly gloss over it.
[00:16:34] And in an urbanized setting, it's easy to gloss over it. But in the settings we work with, some of these things are very important for people. So say I release an application, for instance, which counts the number of pests found in a trap for a farmer. If it's below a certain number in a [00:17:00] farm, it means everything is good.
[00:17:02] They do nothing. If it's in between two numbers, say between four and ten, it's okay; they can wait and watch. Above that, that individual will now get up, go to a pesticide shop, buy the pesticide, and spray, and that has a direct cost.
[00:17:25] It costs money, and their belief that it's gonna help them puts us in a very responsible position: we cannot make a mistake in that. That's fundamental. And there are other types too, where whatever it's telling them, they've gotta believe it. We've been able to be successful across that spectrum. We've spent a lot of time training people, educating them.
[00:17:53] There's a world of difference between an application and the [00:18:00] complexity of, you know,
[00:18:05] implementation of AI among common people in a large, diverse, multicultural country like India, which is what makes it both exciting and very rich and very complex. But it requires a steady heart and mind to be on it.
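[Editor's note: a minimal sketch of the "prescriptive, not feature-rich" design Shekar describes: the app turns a pest-trap count into one clear instruction rather than a menu of options. The thresholds and wording here are hypothetical placeholders, not CottonAce's actual calibration.]

```python
def pest_advisory(pests_in_trap: int, low: int = 4, high: int = 10) -> str:
    """Map a pest-trap count to a single prescriptive instruction.

    The low/high thresholds are made-up placeholders; the design point is
    that the farmer gets one action, not nine choices and four dropdowns.
    """
    if pests_in_trap <= low:
        return "All good. Check the trap again next week."
    if pests_in_trap <= high:
        return "Watch closely. Check the trap again in two days."
    return "Infestation risk. Spray the recommended pesticide now."

for count in (2, 7, 15):
    print(count, "->", pest_advisory(count))
```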
[00:18:23] Mallory: Man, this is already shaping up to be such a good conversation. I know only a few people in my life who are very much anti-AI: anything to do with it, they don't wanna hear about it.
[00:18:34] It's bad for the environment, it's making people lazy, so on and so forth. And then, hearing this conversation with you, I feel like there's just no doubt that we can utilize AI for social impact. Um, I did wanna ask you about the app that you just talked about. I think that is called CottonAce, is that correct?
Yes, the pest app. So you talked about solving the problem, with artificial intelligence only kind [00:19:00] of being a small percent of the whole picture, and that we need to look at the ecosystem: getting people to use the technology, to learn about it. So what was the process like for you all to get, presumably, rural farmers to use an app to determine, you know, the pest count in their crop?
[00:19:17] Shekar: So, uh, what happens is you start off with small implementations in a highly handheld, very much assisted kind of mode, and it takes time. So I'll give you an example, um, in this cotton pest case. We came up with a nomenclature saying that we have somebody called a lead farmer and a cascade farmer.
[00:19:42] These are just nice terms. So what's a lead farmer? A lead farmer is a person who has a smartphone; a cascade farmer does not. Okay. So in our first implementation, the way we saw it, we only targeted individuals who had smartphones. Okay, [00:20:00] that's a very small number. But in a village or in a rural setup where there are multiple farms co-located, if there is an infestation in one of the farms, there's a very high likelihood that the other farms will also be infested.
[00:20:15] Earlier, our target population was only smartphone owners, so we had their numbers. We'd get in touch with them, they would take the photo, and we would tell them. Then we realized that's not enough. These individuals meet at the local shop; they meet with one another, they talk to one another. But we did not have reach across to the individuals who didn't have smartphones, because they were not taking photographs.
[00:20:36] So now you need to do something else. You've gotta extend it, and the multiplier is five or ten times: for every one person, you've got five to ten other farmers you can reach. So suddenly, starting from 15,000 farmers, we got to a hundred thousand farmers. Not because a hundred thousand farmers suddenly had smartphones, but because we extended the application to say: [00:21:00] now target all farmers.
[00:21:02] The moment you get a farmer who sent a photograph that is interpreted as a problem, you register all the other farmers and inform them to look at their, you know, to look at their farms, to investigate, even though theirs haven't shown it yet. So one spotted problem quickly propagates to a nearby area. This is something you would not do unless you think about the problem differently.
[00:21:26] Mallory: Yeah.
[00:21:27] Shekar: Second, when we went to the farmers, there's this kind of a funny anecdote. We had our scientists go there, and naturally our folks were very excited, and they wanted to tell the farmers, you know, we've done this, we've done that, and they would speak in AI terms. It's kind of funny, because the farmer doesn't understand any of this. But the scientists explained things to the farmer: this, that, this, that.
[00:21:52] So finally the farmer just, you know, stopped them cold in their tracks with a simple statement, and all [00:22:00] that the person said was: look, I don't understand anything you're saying, but I understand one thing. You care about us. That's it.
[00:22:13] In that simple statement is the basis of trust for us, which places an enormous responsibility on us as an institution to not violate that trust. That means we've gotta stay with the problem until we're fully confident; we have to sit down with the people and attend to it. Some of our user design research team, which happened at that time to be an all-women team, would spend days living with the farmers.
[00:22:39] They'd ask: what does your day look like? When do you go to your farm? How often do you go to your farm? What is the difficulty you face? Unless you understand the problem at that level, chances are you'll not solve it well. And that's where, you see, there are two ways you can do problem solving. One is you create a solution, a nifty, [00:23:00] snazzy solution, and go out there and sell it.
[00:23:02] Another is you embed yourself in the ecosystem, live with them, understand what the problems are, and then solve them. We followed option two. In the social space, option two is what will work, because you need to have that trust with you in a solution which may have a residual sense of uncertainty, which AI has. So if you're not willing to stay with them at the human level, they'll feel very betrayed when the errors take place.
[00:23:26] So you have to stay with them, reassuring them: don't worry, sometimes this makes mistakes, so don't worry too much; we're there to help you. I'll give you another example: the anthropometry solution, where we take a five-to-eight-second video of a baby. What we did there is we take an eight-second video of a small baby, a newborn infant between zero and 42 days old.
[00:23:46] And using the technology, we're in a position to be able to assess its critical parameters, which are weight, length, head circumference, and so on.[00:24:00]
[00:24:03] So initially, what happened was that the individuals who solved it, solved it like this: you need another device in the frame, of a known dimension, against which you can size the baby. You need a prop. And so they came up with a two-dimensional prop, similar to a chessboard, called a checkerboard. Okay? So they tried solving it with that.
[00:24:27] I told them: look, in India, where is it that people are gonna get access to this animal called a checkerboard? Nobody will. You don't get that quite as easily. In rural communities, what you do get, what you do get, is a ruler. A simple, you know, ruler? Something like three rupees; affordable. It's easy for the ASHA worker, the lady who carries it; it's very easy to carry that.
[00:24:54] So I told them: either we solve it with the ruler, or we throw the solution out. There is no other [00:25:00] option. So they solved it with the ruler, and it's effectively used. Again, a learning. See, these are the kinds of learnings you learn from a deployment environment, where the constraints of the environment have to be honored and respected; you cannot ignore them.
[00:25:13] Otherwise, if people are solving it from the inside out, we'll get stuck in our ideology and tell them: look, the error rate is so low, let's use this, it doesn't matter. It's not gonna work. So a lot of our work starts from deployment. That's why I have a maxim inside our institute: I call it deployment-first, day one. When you come with an AI problem, tell me your deployment environment, tell me how it works.
[00:25:39] Work backward from that space to your AI problem solving; don't do it the other way around. And even if the AI doesn't work quite as well, that's fine; don't worry too much about it. Like the joke they tell, you know: if a bear is chasing two people, you just need to run faster than the other person. You just need to be slightly better than what the environment has today.
[00:25:58] And then build from [00:26:00] that. You don't need to be so far ahead that nobody can even see you, because you're the most brilliant person on earth. Nobody cares. The system: you have to carry the system with you. That's difficult, and that requires a lot of people to stand down from their, you know, excitement about the subject, be highly humble, and realize you're really solving human problems.
[00:26:23] That's the transition that is the most difficult one for the institute, and one that we've been able to make to a very large extent.
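[Editor's note: the core trick behind the ruler story is calibration against a reference object of known length in the same video frame. A toy sketch, assuming a hypothetical 15 cm ruler; the deployed anthropometry solution obviously involves far more than this one conversion.]

```python
def pixels_to_cm(measured_px: float, ruler_px: float, ruler_cm: float = 15.0) -> float:
    """Convert a pixel measurement to centimetres using a ruler of known
    length visible in the same frame (reference-object calibration)."""
    return measured_px * (ruler_cm / ruler_px)

# The ruler spans 450 px in the frame; the baby's crown-to-heel span is 1440 px.
print(f"{pixels_to_cm(1440, ruler_px=450):.1f} cm")  # 48.0 cm
```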
[00:26:29] Mallory: Wow. I like the example that you shared about the farmer saying: I don't understand anything you're talking about right now, but I understand that you care. And I think that, mixed with the example of using the ruler as the scale object, is really an example of meeting people where they are.
[00:26:47] So not assuming we understand. With associations, I'm sure, you'd say our membership, right? But that's a lot of people, potentially, or a lot of organizations. So I really wanna double down on the [00:27:00] concept of meeting people where they are, understanding the problem, which we all think we do, right?
[00:27:05] We think: oh, sure, I know the problem. We've gotta provide this course, we've gotta have this conference, we've gotta have this topic. But really getting in there, and starting from deployment first and working your way backwards: I mean, that is just a really unique way of thinking about it. Shekar, I wanna talk a little bit about something you said at the beginning of this conversation, which was that oftentimes you're deploying these solutions with limited data.
[00:27:30] And I think for a lot of our listeners, and me as well: if I were to think about deploying a solution at Sidecar, I would want it to be great. I would want it to be perfect, or as close to perfect as I could get. I would wanna have a lot of data to inform the decisions we were making. So can you talk about, especially in the social sector, where the outcomes are so important, how you go about creating a solution when you don't even have a ton of data to begin with?
[00:27:56] Shekar: The way you have to start this is: when you begin [00:28:00] in specific areas, for instance, where the amount of data is not enough, you explain to the ecosystem the approach that you're using. Let's use the anthropometry example. I can say: when I get to 200,000 babies, my error will be low enough that it can be quickly standardized. Okay? Today I have something like 25,000 or 30,000 babies.
[00:28:25] It's the largest data set of its kind in the world. Even then, I know from a scientific standpoint that I need more data. However, look at the error that I'm providing: we keep testing against equivalent devices, whether a digital device or a mechanical device, and consistently we're beating them. Are we beating them to the level that we want to be satisfied with?
[00:28:49] No. So that's the first point.
[00:28:54] In systems which are complex, there is a residual error rate that [00:29:00] may be there, which you have to just beat consistently. That's not easy, but it's doable. That is your first point of adoption and trust: look, we're able to get to this level. And using that as a hook, you start telling people: start using it.
[00:29:16] Don't base your critical decisions on it yet. Use a human in the loop. Keep your alternate systems going. Now suddenly the data goes from ten to a hundred to a thousand, and when that happens, you keep retraining your models, and they keep getting better. So you run parallel with that for a period of time, and then you tell them: look, our error was 82 grams, but now it's 41 grams.
[00:29:45] And now it's 12 grams. People say: wow. And you stay completely open. There's nothing hidden; you can look at any instance of anything you want to, [00:30:00] because given the complexity, we don't wanna run away from accountability. That is the secret. The fallibility of not having enough data is countered by the openness of acknowledging that we're vulnerable right now.
[00:30:21] And the way we get past this vulnerability is by working together till we reach a safe spot together. Because we have two choices. Either we do not do it, which means we'll never get the solution, or we work together until we get to a point which we're both comfortable with, and there's gonna be a period of time, it'll be one month, six months, one year, two years, where there is this uncertainty.
[00:30:47] It's part of climbing the peak in order to reach a level of comfort. Until then, we are not gonna claim we have done something great. We're just gonna say we're still walking; it's a work in progress. It's the [00:31:00] design of the AI system in a certain sense. And as and when the data increases, in almost all instances in our experience, we have consistently beaten whatever is there. We've been able to consistently deliver better outcomes.
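[Editor's note: the parallel-run phase Shekar describes boils down to tracking the AI's error against a trusted reference that keeps running alongside it, for example as mean absolute error in grams. An illustrative helper with made-up numbers, echoing the kind of reporting he quotes.]

```python
def mean_absolute_error_grams(ai_estimates: list[float], reference_device: list[float]) -> float:
    """Mean absolute error (in grams) of AI weight estimates against a
    reference device run in parallel during the human-in-the-loop phase."""
    assert len(ai_estimates) == len(reference_device)
    diffs = (abs(a - r) for a, r in zip(ai_estimates, reference_device))
    return sum(diffs) / len(ai_estimates)

# Three babies weighed by both the AI and a reference scale (made-up data).
print(f"{mean_absolute_error_grams([3100, 2840, 3550], [3060, 2900, 3525]):.0f} g")  # 42 g
```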
[00:31:11] So, just as a small example on the oral reading fluency: one of the critical parameters we have is the cost. And in India you've got the rupee, and one hundred paise make one rupee.
[00:31:27] Mallory: Mm-hmm.
[00:31:29] Shekar: And I think today, with respect to a dollar, one US dollar is about 85 rupees.
[00:31:34] Okay. So, 85 rupees to the dollar, and one rupee has a hundred paise. The solution that we deliver is at five paise per inference. Five paise per inference. If you do the math, it's close to zero: twenty inferences make a rupee, times 85, it's about 1,700 inferences per dollar. That's a constraint.[00:32:00]
[00:32:00] That's the economic constraint you have to respect if you want to scale fast. Normally people get a little too happy with the problem-solving efficacy, but when you use it in real-world situations, what matters is what the ecosystem will pay in order to scale it up. Because if I were to say it's X amount, five paise versus 50 paise versus five rupees, the pace of deployment will change.
[00:32:30] It has a direct outcome on the pace of data collection and the pace of improving the model. Priced wrongly, it'll fail. If you make it inexpensive enough that people will use it, your data will come in quickly, your model will improve, and then more data comes. It's the game of scale: you reach a certain economic stability level, and everything starts working out.
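[Editor's note: the unit economics above, worked through as a back-of-the-envelope check. The five-paise price and the roughly 85-rupees-per-dollar rate come from the conversation; everything else is arithmetic.]

```python
# 100 paise = 1 rupee; ~85 rupees = 1 US dollar; 5 paise per inference.
PAISE_PER_RUPEE = 100
RUPEES_PER_USD = 85            # approximate rate quoted in the episode
COST_PAISE_PER_INFERENCE = 5

inferences_per_rupee = PAISE_PER_RUPEE / COST_PAISE_PER_INFERENCE  # 20
inferences_per_usd = inferences_per_rupee * RUPEES_PER_USD         # 1700
print(f"about {inferences_per_usd:.0f} inferences per US dollar")
```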
[00:32:50] They're difficult problems. They're not so intuitive. I can say it after doing it; I could not have said it before doing it. I always joke with people: all [00:33:00] strategies are retrospective.
[00:33:07] Somewhat random movements in order to reach an approximate goal that I made up in my mind, jostled and pushed around a little bit, compromising and doing all kinds of funny things. Then you reach a certain point, and then you reflect on it, and you say: well, most of this seems to make sense now, because we've got about 20, 25 solutions.
[00:33:25] Plus, now we're able to see the repeatable patterns. There is one example of a lung disease called silicosis in some of the states in India, where an individual who has the disease can get a certain financial benefit by claiming that they have the disease. Now, they have to demonstrate it with an X-ray.
[00:33:46] So in India, an X-ray may cost some rupees, and it has already been taken by another system. If this individual has to make the claim, they have to be able to show it. But when they take a photograph of an X-ray, it's free. So our AI model interprets the [00:34:00] photograph of the X-ray. The government system may have taken the X-ray, and they may have it with them.
[00:34:04] So this individual who has to make the claim can, within whatever process is there, provide a photograph of the X-ray. That individual will go take a photograph, the AI model interprets whether they have silicosis, and you can check it out. The state won an award for this innovation.
[00:34:25] It was the way they thought about it; we were just providing the entire background of the solution for them, and they get the full credit. You know, in context, the reason I say AI is 5% is not just a statement. It really is so, because it is only when government organizations, state governments, central governments, or large-scale bodies move and push that you start getting the big benefits.
[00:34:49] In a country like India, with its scale, you'll never even make a mark unless you start touching the hundred-million [00:35:00] mark. You know, you have to be in the millions before people start noticing: hey, something is happening around us. You cannot do anything when you're saying ten. Nobody cares. And truth be told, they should only care when you can make a material benefit, a material change, in the environment. So that's how you solve the data problem: by exposing your vulnerability.
[00:35:23] Explaining it, working together, and reaching a point of acceptance, when the solution becomes very well designed. It's not easy. There are no easy ways.
[00:35:33] Mallory: It's kind of a paradox, what you said: being open about your error rate and your vulnerabilities makes people trust you more. And I think of that even in my own life.
[00:35:42] When I hear a health expert on a podcast talking about all things health, and then they admit: maybe there's a question I don't know the answer to, or actually, we don't have enough data on that. It almost makes you trust them more, because someone's saying: hey, I don't have all the answers.
You can appreciate that transparency. [00:36:00] Shekar, I wanna talk about how Wadhwani AI is embedded in the government ministries, because I think that whole model is neat. And I can also imagine, and I don't really know anything about the government structure in India, but when we think government in the US, we think slow.
[00:36:18] We think bureaucratic. We think lots of steps to get where you need to go. So I'm curious how you all have been able to make such incredible impact in a few years, um, and kind of how that ties into this embedded government structure.
[00:36:35] Shekar: So, um, that's one of the first changes. I come from a corporate environment, so for me, all along, especially in a business environment, I have a huge amount of understanding and respect for a customer who puts his hand, her hand, in the pocket, takes out some money, and has a choice.
[00:36:58] Because of something [00:37:00] that they believe. So I deeply believe in that. I've worked internationally, around the world, even in the US. In fact, just as a small anecdotal aside, I was there for 10 years; for the first five, between 1990 and 1995, I was responsible for overseeing a team working with a prime contractor for the US Department of Agriculture, where we automated the entire USDA.
[00:37:28] Across 90 counties, we rolled out software for county automation. I was the manager overseeing that whole thing. We delivered that, and we managed their entire telecom network with a team of hundreds. So I have met and worked with US government officials as well. We built out certain crop-centric applications for them.
[00:37:49] Mallory: Mm-hmm.
[00:37:50] Shekar: And I have learned only one thing: nobody cares how good your product is unless they really know how much you care about them. And the [00:38:00] care comes at a very human level. You have to be located with proximity and have a lot of dialogue. Human beings have to talk to one another regularly, at least at the beginning, before you can start solving problems.
[00:38:17] Therefore, we made a very principled change. We actually have our folks within arm's length of the decision makers. And if you understand any government around the world, there are certain times when government officials will have a little more time to think of and do things differently. If you are not physically close to them at that time, you've missed the opportunity.
[00:38:44] So I needed to be there, close enough that when they whisper our name, somebody comes running and says: hey, we're here, what do you want? Secondly, understand, and this is probably true around the world, the age demographics of individuals at senior levels and decision-making [00:39:00] levels. We also put people there who are much younger. It's a very simple technique, but we found it very effective, because the officials' own children will be around the age of the people who are working for us.
[00:39:12] So when you have a 25-year-old person reach out to an individual who's a very senior official, at about 55 or 60, in the evening at six, and tell them: I've just come to meet you. How are things? We do all these things in AI. Do you have two minutes? Can I show you some things?
[00:39:29] They'll listen, and then you start cultivating the relationship, and then they'll come by and say: hey, what are you doing? You show them. Oh, you're doing all this nice stuff; I'd like to know more. So you build that relationship through conversation and dialogue, by taking on the listening role, the accommodating role.
[00:39:52] And you present it in the most non-judgmental manner to the counterparty. [00:40:00] Not because you know something more, not because you know something about them. Nobody cares. I've told my team, it's a mantra we have: no complaints. I don't want a single complaint about the world, because the attitude we work with says: all problems are mine, not everybody else's.
[00:40:19] The counterparty immediately sees that in you. People are intelligent. They'll immediately, literally, sift the wheat from the chaff. They know: these are people who come and complain; these are people who are trying to solve the problem; these are people who are in front of me and helping me; these are people who are far away and will throw a contract at my face.
[00:40:39] And we're a not-for-profit. That helps us tremendously as well. We have no agenda; our goal is helping. The government sees that, and we behave like that, because we believe that's the only way to be if you wanna be in this space. Service and serving people is our core mission in life. [00:41:00] There's a Hindi term, seva, which means service.
[00:41:03] It's a term which says service is the biggest thing. It's the biggest act of goodness that you can do. When you embody that, and you casually speak to people and create the connections, and you create them over a period of time, then you'll find, slowly and steadily, those moments where you go out to people at six in the evening and say: you know, there are a couple of contracts we've been working on.
[00:41:29] If you can just send out these letters to the state governments, it'll help us in our implementation. Sure. And it's sent in five minutes. Can you write it up? It's finished off right then. But you need to be there for that type of intervention. You cannot be far away. So that's how it's been, and we have taken it to the next step now: now we're present in 11 states.
[00:41:48] I've got a team of about 50 to 60 people who are now distributed across states. We have followed the same approach of sitting inside state ministries. Just like in the United States, where some are federal matters and some are state [00:42:00] matters, it's the same in India: some are federal or central matters, and some are state matters.
[00:42:04] So some decisions will get made at the state level and some will get made at the central level. Education, as an example, may be decided within states. So, you know, Colorado could decide slightly differently, or, you know, Massachusetts could, whatever. It's the same in India. And in India it's particularly so: if you wanna get stuff implemented in India, you have to be present across states, and we are. So we create all the connections.
[00:42:31] We're the glue. We're literally invisible, but we'll make sure things work. We'll speak to the central ministries and make sure that they help us. We'll speak to the state ministries and make sure that they support us, and we'll work with both of them. We'll cross-pollinate all the ideas and keep pulling things forward.
[00:42:50] After some time, people see: hey, these guys are well-intentioned, they're competent, and they'll move this forward. So the Government of India, when we had the G20 in India in [00:43:00] 2023, they were kind enough to invite us and told us, you know, you can have your stall. And we were there, 10 or 12 of us, representing the work, truthfully, as work being done by the ecosystem and the Government of India.
[00:43:16] So that pushed the agenda even further. It made it more internationally visible; it made it more appropriate. So we've gotta do all these things and continue to constantly work towards the mission, which is to make sure that you get the government comfortable and that they keep moving, which they do, working with us, which they do.
[00:43:39] And if at all there are any inadequacies, I'm being very truthful, they're at our end. I keep pushing my people, saying: can you think better? Can you connect dots that are not visible? Because that's the nature of our work. Nobody's gonna ask you, you know, what is the area of this rectangle; they don't need us for that.
[00:43:56] Mallory: Mm-hmm.
[00:43:56] Shekar: They'll give you a tricky space and ask you what the area of that funny thing [00:44:00] that's there is. You have to solve that problem. You have to solve problems amid ambiguity, in the absence of material information. If you don't have the discerning part, if you cannot listen well, if you cannot think without bias, if you feel that you're gonna take your past experience and layer it onto the new problem without thinking, you're gonna fail. You have to be very good in that process.
[00:44:24] And when you do that constantly, surprisingly, over a period of time, you learn a lot more. Your muscles start building that recall: this is how I should work in this kind of a situation. It's like Jerry Rice being the wide receiver who's head and shoulders above anybody else, just because he tried just that bit harder.
[00:44:44] Mallory: Well, my key takeaway from what you just said is how important proximity is, and making sure that you're in the right place at the right time when those conversations are happening. That could be with your boards within the association structure: being sure you're there if there's an [00:45:00] idea of, well, how could we implement AI in this way?
[00:45:02] You wanna be there. Um, also cross-functional teams having like younger staff interact with more senior staff. I love that. Um, another thing, you know, that's huge for associations is in-person events, annual conferences. Those times when you can connect with your members are so powerful. I also think while nothing can substitute for physical proximity, I think there's also something to be said about digital proximity too.
[00:45:26] And so if someone is online trying to address or solve a problem that they have in their profession, being sure that your organization is there with resources, with links that they can go to. Um, Shekar, I wanna talk about... we've spent an incredible amount of time talking about AI for good. I want to talk a little bit about, let's say, your healthcare example, using the AI to look at the X-rays.
[00:45:53] Um, that was something that didn't exist prior. But I'm wondering, do you all take into account the thought [00:46:00] of displacing professionals who would normally read X-rays with AI, kind of like human job displacement, while you're implementing AI for good?
[00:46:11] Shekar: Right now we don't. Fortunately we don't have that problem, and I don't expect it to be there.
[00:46:16] And I'll tell you why. We're in a resource-constrained environment. When you're in a resource-constrained environment, you're already stretching the human resource to do a lot more. Okay? Any assistance they get that reduces their load is good, because the demand far exceeds the supply capacity in almost everything we do in India.
So you are not gonna be in a position, yet or for a long time, to say: you know what? Now I've got oversupply. I've got too many doctors for my patients. I've got too many teachers for my students. The population of India plays in the favor of this. When you have this level of population, the need for [00:47:00] services is very high.
[00:47:02] The second piece is: as society evolves, as requirements go up, as needs go up, the scale of societal needs is orders of magnitude more than the fulfillment capacity. I remember back in the 1980s, when the first computer systems came in, displacing typewriters, and everybody thought everybody was gonna lose their job.
[00:47:27] Guess what? 35, 40 years later, jobs morphed. They never got lost. Skill levels will change; that's totally clear.
[00:47:43] The mix of things is gonna be different, but you do need people, and you do need to get systems in that will actually be effective, [00:48:00] for the foreseeable future at least, because we have not reached a complete level of equivalence, you know, complete robotic capacities. That's still a ways away. You know, there have been a lot of people who have come and said: oh, we'll displace this type of work, this skill of work. No, not in a reliable manner. Not in the foreseeable future. Not for a country as diverse as India. When you have 30-odd languages and 700-odd dialects, you know, believe me, you're not close to making that statement yet.
[00:48:35] On a different note, we're constrained by the capacity for solving with AI, not by the number of opportunities for AI. If I walk out, I'll give you another AI problem. You know, we do have this slight problem in India: we've got street dogs. Dogs are there on the street, maybe around the world, but they're there in India.
[00:48:54] So I joke with people, saying: I'll define an AI problem for you. Simple, one [00:49:00] line. Tell me if the street dog will bite me or not. That's it. So look at it. Take its photo, find out if it's growling, find the socioeconomic environment it's in, see if its mouth is, you know, foaming, find out if it's barking.
[00:49:14] There's a whole set there. So it's a single problem: a rifle-shot problem with a rifle-shot solution, precise like that. Or if I just walk around... I was once standing at a university where I had taught. There was a big glass window, and there were kids riding motorcycles outside. I was teaching AI.
[00:49:35] I told them: here's a problem for you. There are lots of kids riding motorcycles in society. Can you guess which one of them may have an accident? That's an AI problem for you. If you allow the human side, and if you can sit like the Buddha in the corner, without internet, without anything, and just contemplate problems with AI, in about five hours, between the two of us, we'd come out with about a hundred problems [00:50:00] that will help society.
[00:50:01] So the magnitude is huge, and these are point problems and point solutions. Then they'll interconnect. Then we'll realize we were a bunch of, you know, not very intelligent people about these problems, so we'll throw it out and we'll redo it. We're at that stage. We're still on the border between experimentation and discovery, and wherever possible, we push it out.
[00:50:28] We are not at the stage of being cocky and self-assured: sure, this is gonna work, so we can get rid of this labor pool, white-collar or otherwise. Nowhere close. That is gonna take a long time, and doing it hastily may be more dangerous.
[00:50:42] Mallory: Wow. Well, I'm gonna leave this conversation with a lot of hope and enthusiasm and joy for kind of all the problems that one day humanity will solve in the world, um, and using artificial intelligence as a means to do that.
[00:50:56] Shekar, can you let our audience know where they can keep up with Wadhwani AI? [00:51:00] Follow any projects that you're working on? Is it the website, or LinkedIn, or what's best?
[00:51:09] Shekar: The website that we have.
[00:51:10] Mallory: Okay. And we'll put those in the show notes.
[00:51:13] Shekar: Yes. I have an unfortunate personal admission: I'm not that active on social media. I don't do anything on any of that. I'm kind of a monk. I still very much enjoy... in fact, I challenge my team, and sometimes they laugh at me. I tell them: I still believe that the best way to solve a problem is to have no electronics in front of you.
[00:51:34] Sit in front of a whiteboard, or with pen and paper.
[00:51:41] Because if you need fact-based information, go to it; I'm not expecting you to know those kinds of things. But for problem solving, you need to exercise your brain, which is a muscle. And if you've not done it long enough, guess what? Let's sit and think; let's do the [00:52:00] good old thinking. Now, this is slightly different from the modern-age phenomenon, which makes me a dinosaur
[00:52:07] of sorts. But I still encourage people: no, try that. It's very fulfilling. It's a lot of fun, because there's no AI computer in the world that can beat you if you master that capability of just being alone and thinking. You can think so much faster; you can make connections faster than with any device where you've gotta use your hands.
[00:52:35] Don't type. No: think. Think.
[00:52:49] Mallory: And it sounds so easy, right? When you're like: hey, just think! Well, I'm trying, I'm trying my best. Um, do you have any upcoming Wadhwani AI projects that you are most excited [00:53:00] about? Anything on the horizon that you can speak about?
[00:53:03] Shekar: Uh, they're still fairly straightforward. We wanna see the influence, you know, grow.
[00:53:08] Okay. We wanna make sure that... right now we're in the next phase. What happened was, we had many projects, but they were not all scaling up. So now we're a little more focused on getting them scaled up. And already I'm having some climate-oriented projects that are coming in. We've got some interesting projects on disease burden estimation that will be coming in.
[00:53:29] So there's a whole set of new projects that will come in. They're all fairly straightforward, as it turns out, at least in my scheme of things. I have not yet seen something that is so exciting, other than a lot of the advancements that have certainly been made just in terms of processing; that's brilliant.
[00:53:53] I understand that. But in our world, in the social space, we are still at [00:54:00] very precise, very exact problem definition and problem solving. And right now, like I said, there's something that'll come on climate, there'll be something on disease burden, and there'll be some more things we're looking forward to, just in terms of the problems associated with pregnancy risk.
[00:54:15] Maybe there may be something there. We had some interesting projects that we wanted to do, but they were ahead of their time, so we're gonna wait a little bit until we're confident that the environment is ready for them. For instance, there's something where, when a woman is delivering, nurses measure a variety of parameters, but they do it on a set of graphs.
[00:54:36] We wanna provide a voice-based interface for it, so that their burden in all that is reduced. We wanna do analysis of conversations between a doctor and a patient, where you separate the two speakers and extract the information out of all that, in multiple languages, in India. We wanna make sure that any fraud is caught. So there's a whole set of ideas we have.
[00:54:55] So I tell people: brilliant, very good, just wait a little. Let's, [00:55:00] you know, get some stuff implemented in a massive way, and then we'll take on these new ideas. And it's just tons of work. It's just tons of work. We just don't have enough capacity.
[00:55:10] Mallory: Well, you all are doing tons of incredible work, truly. It's been an honor to have you on the podcast.
[00:55:16] We will all be rooting for Wadhwani AI and watching what you all do in the future. Um, and we appreciate your time. So thank you for being here.
[00:55:24] Shekar: Thank you very much for considering our organization, as well as me, for this evening. Thank you very much. It's been very nice. Thank you.
[00:55:34] Intro: Thanks for tuning into the Sidecar Sync podcast.
If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.