
Summary:

In this episode of the Sidecar Sync, Amith Nagarajan and Mallory Mejias dig into the startling findings of a recent MIT study: 95% of enterprise AI initiatives are failing to deliver measurable impact. Why are so many organizations investing in AI only to watch their efforts fizzle? Mallory and Amith unpack 10 core reasons for these failures—ranging from building chat-first features instead of end-to-end workflows, to skipping the unglamorous data cleanup that powers real results. Drawing from Vishwas Lele’s insightful LinkedIn post and their own experiences in the association world, the duo shares practical guidance for leaders ready to move beyond flashy demos and into lasting value. Plus, they explore why your staff still prefer ChatGPT over that expensive new AMS add-on, and what associations can do to shift from AI experiments to real transformation.

Timestamps:

00:00 - Zombie Apocalypse & Beverage Goblins
02:41 - Why 95% of Enterprise AI Projects Fail
04:30 - The System vs. Feature Problem
09:03 - Workflow-First Implementation
10:45 - The Tail Wagging the Dog: When IT Leads AI Strategy
16:43 - Consumer-Grade Tools vs. Enterprise Expectations
22:49 - Should Associations Wait on AI from Their Vendors?
27:19 - Partnerships, Not Products: Rethinking Implementation
31:43 - Governance, Change & Measuring What Matters
37:46 - Wrapping Up

 

 

👥 Provide comprehensive AI education for your team

https://learn.sidecar.ai/teams

📅 Find out more about digitalNow 2026:

https://digitalnow.sidecar.ai/digitalnow2026

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification

https://learn.sidecar.ai/

📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

ChatGPT ➔ https://chat.openai.com

Claude ➔ https://claude.ai

pWin.ai ➔ https://www.pwin.ai

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00:14 - 00:00:09:17]
Amith
 Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.


[00:00:09:17 - 00:00:27:07]
Amith
 …associations and the crazy world of artificial intelligence. My name is Amith Nagarajan.


[00:00:27:07 - 00:00:29:05]
Mallory
 And my name is Mallory Mejias.


[00:00:29:05 - 00:00:40:16]
Amith
 And we're your hosts and we have a pretty interesting topic today. Not that we cover boring things here at the Sidecar Sync most of the time, but we have something pretty fun today so I'm excited about it. How are you doing Mallory?


[00:00:40:16 - 00:00:58:22]
Mallory
 You know, Amith, I'm doing pretty well. I was actually just thinking when we record these podcasts, both you and I put our phones on Do Not Disturb, we put Teams on Do Not Disturb, we don't look at email. So I realized there could be a zombie apocalypse happening right now. And while we're recording the Sidecar Sync, you and I have no idea.


[00:00:58:22 - 00:01:02:17]
Amith
 Yeah, this could be the last episode. We'll find out. I hope that's not the case.


[00:01:02:17 - 00:01:05:08]
Mallory
 I hope it's not. I hope this is not the last episode.


[00:01:06:21 - 00:01:17:15]
Mallory
 Amith, I've never really asked you this, but what is your podcast setup? Do you like to have a coffee nearby, a snack with you? You're kind of just always locked in. So what's the setup for you?


[00:01:17:15 - 00:01:24:17]
Amith
 I have this giant bottle, this Yeti bottle. I think it's one and a half liters. I've got a lot of water in here and that's about it.


[00:01:26:19 - 00:01:28:24]
Amith
 So pretty straightforward for me. How about you? You got coffee there?


[00:01:30:07 - 00:01:51:06]
Mallory
 You know, I don't know if you've ever heard the phrase beverage goblin, but it's like something that's gone around on social media. I know one of my colleagues, Sophie says she's the same way, but I like to have a lot of beverages. So I've got water. Sometimes I have tea and coffee and maybe like a sparkling water. So I don't know. I like to keep things different.


[00:01:51:06 - 00:01:55:20]
Amith
 So do you actually drink the tea and coffee at the same time?


[00:01:55:20 - 00:02:05:19]
Mallory
 I don't. I will say I don't. Yeah, no, I don't go back and forth. That sounds odd. No offense if anyone listening does that, but I typically do one at a time. I prefer coffee.


[00:02:05:19 - 00:02:07:23]
Amith
 I have no problem offending people if they do that. It's really weird.


[00:02:07:23 - 00:02:10:21]
Mallory
 It's odd. You know, maybe this isn't the podcast for you.


[00:02:10:21 - 00:02:12:07]
Amith
 I mean, I like weird. So, you know, but


[00:02:12:07 - 00:02:21:19]
Mallory
 no, I typically do one at a time, but I do like to have like the water mixed in with the coffee with the sparkling water. I don't know. I just, it keeps me interested, you know, keeps me engaged.


[00:02:21:19 - 00:02:30:05]
Amith
 Yeah. Well, it's kind of like going to a ball game. And if you have one of those plastic hats that has a beer on each side, you could put like a coffee and a tea on each side and drink that while you're doing this.


[00:02:30:05 - 00:02:34:00]
Mallory
 You might catch me on the next episode with one of those hats, Amith. Don't be surprised.


[00:02:34:00 - 00:02:39:05]
Amith
 That'd be pretty cool. People will have to go to the YouTube channel instead of listening over audio for that one.


[00:02:39:05 - 00:02:40:01]
Mallory
 Exactly.


[00:02:41:10 - 00:03:59:23]
Mallory
 Well, everybody, thank you for tuning into today's episode. Today we're diving into an MIT study that found 95% of enterprise AI initiatives aren't making it to production with measurable impact. That's kind of a shocking one to digest. We came across a great LinkedIn article by Vishwas Lele, founder of pWin.ai and a Microsoft Regional Director, that outlined 10 specific reasons why this happens. And so we want to translate these failure points for associations and discuss how you can get into that 5% of successful enterprise AI initiatives. So a little bit more info about the MIT project. It's called the NANDA study and it revealed some fascinating insights. So while individual employees are getting tremendous value from personal AI tools like ChatGPT, enterprise implementations are failing at an astronomical rate. The gap between personal AI success and enterprise failure tells us the technology works fine. It's the organizational implementation that is broken. So you have employees using ChatGPT to draft their emails, to analyze basic data, to speed up their routine work. But when organizations try to formalize AI adoption, they hit a wall that has nothing to do with the AI itself.


[00:04:01:01 - 00:04:29:07]
Mallory
 Here's what's happening at a high level. We're going to dive into this. Organizations are solving narrow problems and calling it a system. They're building chat interfaces instead of workflow integration. They're measuring demos instead of business outcomes. And they're trying to bolt AI onto existing processes without rethinking how work actually gets done. So let's walk through the 10 failure points for enterprise AI initiatives and what they mean for associations.


[00:04:30:08 - 00:04:38:17]
Mallory
 So we've got 10 reasons and we grouped them together based on what they were about. So this first group, we're going to call the system versus the feature problem.


[00:04:39:18 - 00:04:44:12]
Mallory
 The first reason, you're solving a narrow problem and then calling it a system.


[00:04:45:13 - 00:05:07:09]
Mallory
 Organizations build one isolated AI feature, like a chatbot or maybe a document summarizer, and expect transformation. But these tools often don't retain context between sessions, can't learn from user feedback, and don't connect to other organizational systems. So users end up having to paste the same information repeatedly, re-explain their situation every time, and eventually they abandon the tool.


[00:05:08:12 - 00:05:40:13]
Mallory
 The next reason: you're thinking chat first instead of workflow first. Too many implementations start with "let's add a chatbot" rather than thinking about end-to-end processes. A chat interface alone is not a system, it's just a conversation. Real value comes from orchestrating complete workflows, like classification, processing, review, approval, and follow-up, with AI enhancing each of those steps. The third and last reason in this group is that domain depth beats demo depth.


[00:05:41:13 - 00:06:03:06]
Mallory
 Generic AI vendors can create impressive demos, but if they haven't worked in your specific domain, the solution breaks when it hits real-world complexity. For associations, this means not understanding chapter hierarchies, certification pathways, governance structures, or industry-specific regulations. The demo might look perfect, but it fails on day one of actual use.


[00:06:04:09 - 00:06:18:14]
Mallory
 So Amith, three reasons here. Do you think organizations building isolated AI features that don't connect to other systems are wasting their time, or is that perhaps a good step in the right direction for experimentation?


[00:06:19:23 - 00:07:28:01]
Amith
 My view is it depends. I think that if you look at it as a way of getting started, it's great. I think you're right that the nature of a lot of the individual wins that we've had and you compare them to team based wins, of course it's harder. I mean, if you think about it, it's not really surprising to me that most of these projects are failing because they're not, first of all, if you just zoom back out and say, well, what's happened with enterprise technology deployments since the beginning of enterprise technology, right? Forget about AI for a minute and you say, well, think about something like network migrations or operating system upgrades or an even bigger one would be something like going from one AMS to another. These are excruciating projects for people. They require an enormous amount of change management, team coordination, buy-in, all these other factors that are not necessarily technology, but you're right. People do tend to focus on the bits and pieces that can provide individual productivity and they don't necessarily look at the overall team workflow. To me, I think the chat first mindset instead of the workflow first mindset out of that group is the one that really, really jumps out as so critical to understand.


[00:07:29:07 - 00:09:02:08]
Amith
 Workflows and agents actually overlap a lot, and people think about the idea of agents as somewhat of a mystical creature in the world of AI, right? Like an agent, oh, that's really cool. It's super sizzly and exciting and I have no idea what it is. It's somewhat intentional, I think, in the way AI vendors tend to work; they want to say really cool words. But really all an agent is, is taking a workflow and making the AI do portions of it for you, right? So certain portions of the workflow were just fine as they were. Good old fashioned classical computers could make a lot of decisions in a lot of classical workflows, but some things required human judgment or human processing. Those are the things where AI could potentially be transformational. You mentioned classification, that's a common one, right? So I have 5,000 documents that are uploaded to my website as proposals for my journal. How do I classify those? Well, I have a committee of people that do that. Well, that potentially could be pre-processed by an AI to make it a lot easier, but that requires exactly what you said. It requires a workflow mindset and then it also requires true systems thinking, right? Rather than just solving the narrow problem of "I know how to drop the text into a chatbot and get an output," I need to think about how to drive that all the way through the process. And of course that ties to your third point as well: if you don't understand the domain, then you're probably not going to get to the solution. You might have the bits and pieces, but it all kind of comes together from these three points.


[00:09:03:11 - 00:09:12:22]
Mallory
 What does a workflow-first implementation look like for you, Amith? What kinds of questions should leaders ask themselves as they're mapping out AI experimentation?


[00:09:12:22 - 00:10:44:18]
Amith
 First question I'd ask is why? Why are you bothering to do this? What value creation are you shooting for? Is there a pain point, right? Is this something that just sounds cool or is there actually a problem here? And a lot of people shoot for solutions that don't actually solve a problem. It sounds totally ridiculous to say that. And whenever I say that, people are like, "Are you serious? Is that real?" And I'm like, "Well, look at the things people are trying to build." They're trying to build chatbots that nobody uses and the chatbot worked just fine, but people didn't come to it. And by the way, that isn't a statement to say that chatbots aren't useful in general. It's just some of the chatbots people build don't make sense. So someone builds a custom chatbot to answer HR questions and they have 30 employees and there's not that many questions. That doesn't make sense, right? That's a really ridiculous example. Probably nobody would do that. There are similar examples where people build things that nobody visits, right? It's like saying, "Hey, I'm going to create a retail store that sells some specific product that nobody wants." It's kind of the same thing. So I would study the problem a little, but I'd ask the why question a whole bunch of times and asking why is a great way of getting to the root cause or the root problem. So I'd start with the pain and then look for a pain point that I could solve and something that is non-trivial. So it has to have a meaningful enough impact if I can solve the problem. So I think people fail to do that. And because they're not asking those questions in the front end more critically, they end up building a lot of things that aren't necessary and end up getting thrown out.


[00:10:44:18 - 00:11:12:07]
Mallory
 Mm-hmm. And I think what you're getting at too is just the idea of being so member-centric when you're designing these AI pilots. And you've mentioned on the pod before, Amith, the idea, I think it was Amazon, that would leave an empty chair in their meeting rooms for the customer, like always having that chair there, whether mentally or physically in the room, for your member. Because you can create the best AI solution, but if it's not serving any purpose or addressing any pain for your members, kind of, you know, why are you doing it?


[00:11:12:07 - 00:11:35:10]
Amith
 Yeah. And for Amazon, most of the time I believe that chair is indeed empty. It's just a representation of the customer. In the case of associations though, your members oftentimes are involved in your work as volunteers. So that's advantageous and it can be disadvantageous as well, I suppose. But it really is an advantage to have that insight coming from the outside and bring it in. And so that's, I think, quite helpful if you engage it the right way.


[00:11:36:10 - 00:12:54:13]
Amith
 The other thing I would say, and I think it's related to all three of these points and some of the things we'll cover momentarily, is that there's a bit of a tail-wagging-the-dog effect. And what I mean by that is IT deciding all of this stuff. So associations have historically, I wouldn't even say delegated their decision making on technology to IT, I would say they've abdicated it, meaning that at most associations, senior leaders have said, "Well, that's the IT department's choice." The IT director, the CIO, the IT manager gets to make that choice. And this is not a negative thing about IT folks at all. I'm one of them and, you know, we love our IT community. The point, though, is that the IT team represents one voice, not the entire voice. And the reason it's the tail wagging the dog is that the strategy needs to be set based on the business priorities. And the business priorities may not be ones the IT organization either has visibility into and understands, or wants to solve. So most of the IT folks I know that are, you know, capable individuals, they like solving complex technical problems and they love working with cool new technology. So what I've seen is a trend. I haven't studied this with a lot of data, but I've just seen this over and over with people I know in the community. The smarter the IT people that you have, the more likely they are to say, "Hey, I'm going to just build my own thing."


[00:12:56:03 - 00:14:07:21]
Amith
 Over and over and over. I've seen this for the last three years where the most capable IT people, right? These are the most brilliant IT folks in the association community. They're the ones who are saying, "This isn't that hard. I'll just go build it myself." And the association CEO says, "Oh, well, my IT person is brilliant. I'm just going to trust them." And you should trust your IT person, but you should also pay attention a little bit and say, "Is that really the best use of our resources? Could we move faster? Do we have a better option?" Building your own doesn't necessarily make sense. It can make sense. It's not a bad idea at face value, but the tail wagging the dog thing is, to me, a thing you just need to keep remembering that. You as the CEO, you as the membership director, you as the marketing director, you have to be involved in this decision. And that's where that domain depth comes from that you pointed out because IT, of course they're in domain. They understand the association well, but from that lens and you have a different lens. And so you have to drive at least part of the decision. If you leave that alone and just say, "Hey, it's technology. It's AI. I'm not going to get involved. IT will figure it out," you're probably going to have some challenges, partly because you're just actually not serving your IT team. You're leaving them alone to figure something out that you really need to be involved in.


[00:14:07:21 - 00:14:37:19]
Mallory
 Mm-hmm. That's a really good point, Amith, because I think the idea is that if all of your staff are experimenting with ChatGPT and Claude and getting all of these gains, but 95% of enterprise AI initiatives fail, 95%, that tells me there needs to be more cohesion, right? A really strong sense of leadership and cohesion and collaboration across the organization. And you as the leader, you've got to lead that. So kind of saying, "IT, you handle the tech stuff," I don't think is serving the organization in the best way.


[00:14:37:19 - 00:15:17:14]
Amith
 Totally. And the data in the MIT report and our experiences with associations are aligned. I mean, one of the leaders at the Blue Cypress HQ level was formerly a senior-level leader at a big consultancy that worked primarily with Fortune 500 companies. And he has told me a number of times that he's seen the exact same behavior that's reported in the MIT study with respect to senior leaders at large corporations saying, "Hey, I don't know what to do about AI. Let's build a chatbot." And they have no idea what the problem is that they're trying to solve. And of course, those kinds of things lead to a lot of challenges. There are other issues, which we'll come to shortly. But ultimately, not knowing what you're trying to solve for is a big problem with any initiative.


[00:15:18:20 - 00:16:04:14]
Mallory
 Segueing into group two of those issues that you just mentioned, Amith, we're going to call these "quality and competition with consumer tools." So the first reason here is output quality isn't clearly better than ChatGPT. This is an interesting one. So let's say your AMS vendor adds an AI assistant module for another $10,000 per year. Your LMS has AI-powered content generation for an additional fee. Your event platform promises AI-driven engagement. But when your staff try these add-on features, they find that copying and pasting into ChatGPT produces better member communications, more relevant content, and more useful insights. So if the AI features baked into your platforms can't beat the stuff that staff get from ChatGPT, they'll likely keep using ChatGPT on the side.


[00:16:05:14 - 00:16:24:06]
Mallory
 The second reason here is mixing up consumer wins with enterprise readiness. Your employees love ChatGPT, but for high-stakes association work like certification decisions, accreditation standards, or regulatory compliance, you need specialized tools with audit trails, version control, and defensible reasoning.


[00:16:25:08 - 00:16:27:13]
Mallory
 Generic tools cannot handle this complexity.


[00:16:28:20 - 00:16:30:03]
Mallory
 So Amith, my question for you.


[00:16:31:08 - 00:16:44:06]
Mallory
 Associations are seeing AI features, as I mentioned, in their AMS or their LMS, but staff still tend to opt for their personal AI tools. How do you kind of, I don't know, what's your response to that?


[00:16:44:06 - 00:17:52:11]
Amith
 I don't know. I mean, I think all of us as members of a team, myself especially, we're kind of a pain in the butt in some ways because we all have our preferences and we all like to do things our way. That's what, of course, makes the fabric of humanity so spectacular and diverse and interesting, but at the same time, it's difficult to corral when it comes to making choices as a business and then enforcing them. Output quality is definitely an issue. If you can get something better somewhere else, you're going to go do it, probably. It's kind of like one time a few years ago, I took my wife to Paris for a trip and we were at this super fancy French restaurant and it had all these Michelin stars or whatever, which I kind of thought was cool, but I don't know. The food kind of sucked in my mind, at least for my limited capability to understand it. I literally ate at a street cart right afterwards. There was a crepe stand right next to it and it was 10 p.m. or something in Paris and I'm like, "Hey, for like $6, I can get a crepe." I was like, "All right, now I'm satisfied." I went outside of the enterprise platform, which was the fancy French restaurant, and got myself something I actually wanted to eat from the really cheap street vendor.


[00:17:53:14 - 00:18:50:22]
Amith
 That happens all the time with software. In the AMS world, I know many of our listeners will feel this pain that you implement this big fancy complex centralized system and then the meeting person still maintains some of the most critical data for their event in a spreadsheet in Excel. Why is that? Are they trying to be non-conformist? Are they trying to violate policy? No, of course not. Most of the time, it's because it's just easier. They don't have the fields or it takes them like 30 minutes to enter the data in the AMS and the various places they're supposed to put it versus just dropping into the spreadsheet. You have this proliferation of these little systems. Sometimes they're just spreadsheets. Sometimes they're little databases. Microsoft Access used to be like a really big thing. There was this explosion of Microsoft Access databases. It was a Windows-based desktop database that made it really easy to spin these things up. You'd have dozens of these little databases all over the business. Again, it's the same kind of idea. People are going to go to not only the lowest friction, but where they can create the most value for themselves.


[00:18:52:04 - 00:20:05:14]
Amith
 That's actually why you go back to the first part of this conversation and say, "What's the problem? What problem are we trying to solve? How do we solve that problem?" You're not trying to build something that's so comprehensive, like an AMS or an LMS, and just boil the ocean. Pick a problem that actually is shared by a number of your staff that you can solve better. If you're just building a wrapper around a particular model and the new version of ChatGPT the next week becomes better, you're setting yourself up for failure. That's part of the issue with people building a lot of these custom AI models for knowledge retrieval or whatever: they can't keep up. No matter how brilliant they are, they cannot keep up and their models are going to quickly be out of date. You have to think about that and say, "I have to plan for that immediate obsolescence and build in a way where there actually is an accrual of value in my tool by doing something custom." Something where the process takes the latest, greatest model, whatever it is from whichever vendor it comes from, combines it with your data, and therefore provides so much value that the user cannot replicate it with other tools, because you've created novel value. If you're only saying to use this tool because I say so, you're going to have a problem. I think that's a lot of what you're seeing. That's the classical enterprise IT adoption strategy: it's the force-feeding thing, and it just doesn't work.


[00:20:07:24 - 00:20:24:02]
Mallory
 I'm laughing at your France story because we actually have the same story from our honeymoon in Mexico City. We went to this really fancy restaurant. It wasn't terrible, Amith, but it was honestly not good. I can understand the consumer wins versus the enterprise value with the Michelin star restaurant, because it was fun.


[00:20:24:02 - 00:20:26:22]
Amith
 Then you got some churros afterwards on the street.


[00:20:26:22 - 00:20:47:02]
Mallory
 Basically, except the Michelin restaurant was just outrageously expensive. We were like, you know that moment where you... I don't know if you're like this, Amith, but sometimes you don't want to say it's bad in the moment. You're just taking the bite and you're like, "Oh, hmm. This is interesting. It's like a leaf on a cracker." I'm like, "Oh yes, this is exactly what I wanted."


[00:20:47:02 - 00:20:57:09]
Amith
 When I think about it, I'm a student of business and I think that I think about it from the viewpoint of the proprietor of that establishment. They're hanging out in the back. They got like a webcam going. They're just laughing at everyone.


[00:20:57:09 - 00:20:59:07]
Mallory
 I think it's a joke. It's like a social experiment, right?


[00:20:59:07 - 00:21:23:09]
Amith
 Totally. I know that I'm sure some of our listeners are big foodies and they understand this stuff. I understand that I don't appreciate it enough and I'm sure there's a whole other layer of appreciation of this. It's also not the most common thing, right? If you're building for people who have to have that incredible palate and that rarefied understanding of fine cuisine, you're going to have a problem. You have to build for the masses, at least for your first few solutions in AI.


[00:21:25:02 - 00:21:51:21]
Mallory
 I wanted to ask you a question, Amith, because I feel like we see this a lot too at Sidecar, where the platforms that we're using, the vendors that we're using, have lots of new AI features that are popping up. Do you think it would be the wrong move for an association to sit back and think, "Well, we'll just wait until our AMS has a new AI feature. We'll wait until our LMS does, our event platform. And then once all of those roll out, we'll be getting enterprise value from AI"?


[00:21:51:21 - 00:23:47:18]
Amith
 Listen, I regularly connect with quite a few of the people who are in the AMS and LMS world as vendors, as partners, et cetera. And I know that people are working frantically to add capabilities. I'm really excited about that. I'd love to see some of the major technology platforms that are used by associations get refreshes that include a lot of rich AI features. But unfortunately, what we've seen thus far hasn't been particularly compelling. And I say that generically; I'm not pointing at any one particular product. But most of it has been kind of bolt-on stuff where it's exactly what you just said, where it's like, "Why would I use this? This is not even as good as ChatGPT." They haven't found the right intersection of process, data, and the way people use their system to actually produce novel value. And so I'm super excited about it because, having been an AMS vendor for over 20 years, I can think of a number of really cool use cases that only the AMS could solve because it has that data resident within it. And it owns those processes, where you could dramatically streamline the workflow. You can improve the member experience. There's so much you could do, but you have to dig pretty deep. And that, of course, does require time. So I think we have to give these vendors a little bit of grace. We're also a few years into this AI transformation, so I'd like to see them hurry up at the same time. But my point would be, to answer your question, it's not so much that you should wait, but you should really dig deep and find out what your vendors are doing. You should talk to people there. You should understand their roadmap. You should ask for live demos of the software rather than just PowerPoint slides. And you should evaluate your alternatives in terms of where you're going to do your AI workloads. And some of your AI workloads don't necessarily fit your classical systems anyway. So I wouldn't say that you should just wait and see what happens.
 I do think that if the AMS vendor has a bona fide, legit roadmap for AI features that do solve your business problems, and you feel confident that they have a track record of delivering stuff like this, and you feel confident you're going to get that stuff fairly soon, then for sure, you shouldn't go build that from scratch. That doesn't make any sense.


[00:23:49:02 - 00:24:12:00]
Amith
 The other thing you should do, if you're in the midst of that seemingly never-ending quest for a new AMS that associations go through (they get themselves back on the hamster wheel every five to ten years or so, it seems), is make this a core part of your selection strategy. It shouldn't just be one extra section at the end of your massive RFP template that says "AI features." You should be rethinking what the processes should be in the AI world.


[00:24:13:03 - 00:24:53:08]
Amith
 That's before you go look for an AMS. You need to think about your to-be state, the what-do-you-want-to-be-when-you-grow-up kind of question, in the AI world. Then that's what you should go look for in an AMS or CRM or that type of system. There are many AI-native systems coming out, not necessarily in the association world, but in the CRM world, for example, there are people coming after the HubSpots and the Salesforces, which has happened since the beginning of time. Every time there's a tech shift, people say, "Oh, well, we're the CRM for mobile," or, "We're the CRM for whatever," and now people are saying the same thing for AI. I think all this innovation is going to ultimately be good, but whether you wait or whether you invest in your own things right now is a really nuanced question.


[00:24:53:08 - 00:25:26:15]
Mallory
 Our next group of reasons that you might not be getting enterprise value from your AI initiatives is the implementation and partnership approach, which is a really good segue. But the data is clear. If you're treating vendors like shelfware suppliers and not like partners, you're going to have some trouble. Externally partnered deployments succeed roughly twice as often as internal builds. That's 67% success versus 33%. Success comes from treating your vendors as co-builders who evolve with you, not one-time suppliers who deliver and disappear.


[00:25:27:16 - 00:25:50:03]
Mallory
 The next reason in this group is that you're skipping the unglamorous data work. Before AI can work, get your data in order. This means metadata extraction, document classification, deduplication, and standardizing formats across systems. Without this foundation, AI retrieval fails and quality suffers. But the good thing is, Amith, there's a solution for that. Can't AI help?


[00:25:50:03 - 00:26:45:18]
Amith
 Sure can. And I think on the first point too, about the partnerships with your vendors, it's really important. One thing to think about is this whole thing we've learned about for quite a number of years now called specialization of labor. As an economy grows and there's more education available and all this other stuff happens, you end up with a high degree of diversification, and specializations come out of that. It's kind of like this: the last time you were on Delta Airlines or American Airlines, were you flying on a jet that they made? They certainly know a lot about aviation. Their pilots know a lot about flying the plane, but are they the best people to build the plane? Are they the best people to do every single part? The people who build the plane may not be the people who fly the plane. And so similarly, the association may run the organization brilliantly with the best technology, but may not be the best place to build their own tech, and they have to look at a partner who is an expert in that, a partner who does nothing but that. So I think partnerships with your vendors,


[00:26:46:24 - 00:27:03:10]
Amith
 it's really, really important. And you have to look at it as a long-term relationship. You have to look for people that know the domain, of course, but you also have to look for people who have been around a while and who hopefully are going to be with you for a good bit of time, because if you look at it transactionally, it's very hard to get value from it.


[00:27:04:15 - 00:27:14:20]
Amith
 I think the unglamorous work, that's the stuff without the sizzle. AI can certainly help you with the preparation, normalization, and cleansing of data.


[00:27:15:20 - 00:27:59:13]
Amith
 But you're right that if your data is garbage by the time the AI tries to use it for something, you're not going to get a great experience. So if you have a whole bunch of duplicate member records and you put a customer service agent on top of it and that customer service agent is trying to help a member and there's six Mallory records in the database, it's not going to be a whole lot more effective than a human having to parse through all that. So there's a lot of work to be done there. But to your point, exactly to your point, Mallory, AI can help you with that. It's not a manual process anymore to clean your data up. That was the thing everyone always said. Data governance, one of the first things is let's clean the data and have a plan for making sure data always stays clean. It never happens because it's basically an unmanageable task no matter how big your team is. That's no longer true now that AI is here.
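As a purely illustrative sketch of the duplicate-record problem Amith describes (the field names, the sample records, and the fuzzy-match threshold here are all invented; real AMS deduplication would be far more involved), a first pass at flagging likely duplicates might look like this:

```python
from difflib import SequenceMatcher

# Hypothetical member records -- the "six Mallory records" problem.
records = [
    {"id": 1, "name": "Mallory Mejias",  "email": "mallory@example.org"},
    {"id": 2, "name": "Mallory Mejias",  "email": "mmejias@example.org"},
    {"id": 3, "name": "Malory Mejias",   "email": "mallory@example.org"},
    {"id": 4, "name": "Amith Nagarajan", "email": "amith@example.org"},
]

def similar(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical after lowercasing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records, threshold=0.9):
    """Flag pairs whose emails match exactly or whose names match fuzzily."""
    pairs = []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            if r1["email"] == r2["email"] or similar(r1["name"], r2["name"]) >= threshold:
                pairs.append((r1["id"], r2["id"]))
    return pairs

print(find_duplicates(records))  # pairs of likely-duplicate record ids
```

In practice this is exactly the step where AI helps: a language model can adjudicate the flagged pairs ("same person, merge" vs. "different people, keep both") instead of a human reviewing every candidate by hand.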


[00:27:59:13 - 00:28:10:04]
Mallory
 Amith, you've worked with many, many associations on some very innovative technology implementations. What do you think a successful association vendor partnership looks like in your experience?


[00:28:10:04 - 00:30:16:24]
Amith
 It's got to be rooted first and foremost in openness. You've got to have an environment, a culture of the partnership, where you can trust each other. And so that means that for the association, you have to believe that the vendor is telling you the truth, and you have to have alignment of incentives. One of the most important things is how you structure your agreements. You shouldn't be looking to try to eke every penny out of the vendor. You should be looking to have a partnership where the vendor is profitable, but also has an incentive to deliver extraordinary value to you on time and on budget. There's lots of ways to do that, but a lot of times it's adversarial from the beginning in terms of the way you set the incentives up. If the vendor's incentive is to deliver the least expensive implementation in the fastest way possible, are they potentially going to cut corners? It'll probably happen at times. Maybe not always, but it'll happen at times. And the other thing I'd say that's important about the truthfulness and the honesty in the environment is that you have to be open in both directions. A lot of times on the association side, there's so much concern about hitting budget and timeline on big projects that the staff are preconditioned to try to blame the vendor for everything. Sometimes it's fair, right? But a lot of times it's not. So when a vendor feels like the people that they're partnering with, and "partnering" in that context is in air quotes, are never open to accepting any responsibility for things that go wrong, that's a problem, because the vendor quickly distrusts the association, right? It goes in both directions. A lot of associations are only looking at it from this myopic view that the association's right about everything.
Nobody really believes that, but when you get into the heat of the battle, if you don't have the preexisting establishment of that trust, a willingness to take mutual accountability, and aligned incentives, you've got a problem. And what I'm describing is actually uncommon. Most of the time, these things are set up in an adversarial incentive structure to begin with. So that's a big problem. If you're doing a project that's more than three months or more than $100,000, you need to think through some of this stuff. For really small projects that are quick, maybe what I'm describing isn't as relevant. But if you're going after a bigger project, it's really, really important that you have that alignment.


[00:30:18:09 - 00:31:27:06]
Mallory
 Our last group of reasons falls under governance and measurement. So, one reason: you're treating governance, trust, and compliance as an afterthought. In regulated environments, you can't scale AI without security controls, auditability, role-based access, and clear data boundaries. These are not optional add-ons but day-one requirements. For associations in healthcare, finance, or law, this is especially critical. You can't have AI giving medical or legal advice without proper controls. Next one: you're measuring demos and not business outcomes. Organizations get excited about benchmarks and cool features, but leaders need concrete metrics: time to first draft reduced by X hours, external consulting spend down Y dollars, member satisfaction scores up by Z percent. And then finally, perhaps most important: you're ignoring change management. Pilots try to bolt AI onto existing processes without redesigning roles, adjusting incentives, or establishing success metrics. You need executive sponsorship, formal training programs, clear communication plans, and defined ownership. Without addressing the human side, your pilots will stall.


[00:31:28:09 - 00:31:38:06]
Mallory
 All right, Amith. For associations in heavily regulated environments, what things do you think they should be keeping in mind as they're trying to get enterprise value from AI?


[00:31:39:07 - 00:32:21:08]
Amith
 I think that all associations, whether you're in one of the fields you described like healthcare or finance or law or not, should consider ground truth to be critically important. And so you have to look for solutions that combine the power of AI with correct answers. So the answers have to be accurate. You can't put something out there that's 90% accurate or 95% accurate. It has to be at least as accurate as the best human expert, right? And that's a very high bar, because there are some amazing human experts out there. Now, human experts are not 100% correct on anything, none of us ever are, but we can be 99.9% or some number like that. And then you have to measure it. You have to establish a goal, and you actually have to measure your AI to see if it meets that bar.


[00:32:22:13 - 00:33:25:23]
Amith
 So you should establish those benchmarks. You should make them clear to everyone. And then you should make sure that you're actually testing to see if you're above that bar or not. And then of course you should assess, before you get started in the project, whether that's even possible with current AI, depending on the use case that you have in mind. For basic knowledge retrieval and for answering questions as a knowledge agent, that problem has been solved now for a couple of years. But for other categories where you're doing more reasoning or other more nuanced things that go beyond knowledge, where you're getting into problem solving in domain or something like that, you might want to do some prototyping there to make sure those requirements are actually being met. And I will also say that the other piece of this is security and data privacy: making sure that you know exactly where your data is going, where it's stored, and who has access to it. Those are all pieces of the governance puzzle that are critical in terms of downside-risk mitigation, but also in terms of the accrual of value. So if you have a third party that is actually housing your data and they're capturing all the insight, do you have access to that? Can you leverage that to benefit your association? That'd be a question you might want to ask.
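The establish-a-bar-and-measure-against-it idea could be sketched as a tiny eval harness. Everything here is hypothetical (the question set, the canned stand-in assistant, and the exact-match grading are all invented for illustration; the 99.9% bar echoes Amith's comment), but it shows the shape of defining a target and actually testing the AI against ground truth:

```python
# Minimal eval-harness sketch: score an assistant against a labeled
# ground-truth set and check it clears an agreed accuracy bar.
# All questions and answers below are invented sample data.

GROUND_TRUTH = [
    ("What year was the association founded?", "1952"),
    ("What is the member renewal deadline?", "March 31"),
    ("How many CE credits does the annual meeting offer?", "20"),
]

def assistant(question: str) -> str:
    """Stand-in for the real AI system under test."""
    canned = {
        "What year was the association founded?": "1952",
        "What is the member renewal deadline?": "March 31",
        "How many CE credits does the annual meeting offer?": "15",  # wrong on purpose
    }
    return canned[question]

def accuracy(answer_fn, dataset) -> float:
    """Fraction of questions where the assistant matches ground truth exactly."""
    correct = sum(1 for q, expected in dataset if answer_fn(q).strip() == expected)
    return correct / len(dataset)

TARGET = 0.999  # the "at least as accurate as the best human expert" bar

score = accuracy(assistant, GROUND_TRUTH)
print(f"accuracy={score:.3f}, passes bar: {score >= TARGET}")
```

A real harness would use a much larger question set and fuzzier grading than exact string match, but the discipline is the same: the bar is written down before the project starts, and the system is measured against it rather than judged by demos.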


[00:33:25:23 - 00:33:34:20]
Mallory
 Philosophical question, but do you think the bigger risk is moving too slowly on AI and becoming irrelevant or moving too fast without proper governance?


[00:33:36:19 - 00:34:59:09]
Amith
 I would say it's a combination of the two. I'm kind of evading the question a little bit intentionally, because the reality is they're both big problems. If you don't handle either of those things, you're going to have a problem. You've got to move fast. You've got to do a lot of experiments, but again, going after low-stakes stuff first to build those organizational reps, to build that capability, and then certainly planning out and then carefully executing the bigger projects that have higher stakes. Again, going back to something like building an airplane: you certainly don't want someone who's never built an airplane to build the plane that you're flying on. You'd like to have someone that has a lot of experience, and same thing with the person who's flying your plane. But at the same time, at some point, someone has to get started in their career to do one of those things. You have to have the right process to ramp people up, and it starts off with small things that are low stakes and gets bigger and bigger. You have to think of AI projects the same way. You start off with small things and you go to bigger things. People a lot of times try to go after the big project without all the other steps you mentioned. You have to have formal training programs. That's super critical. A lot of associations still have not mandated, I'll say that again, mandated, that their employees all be trained on AI. Pick an AI training program and go do it. Don't make it one of these "if you're interested in this, you can go do it" things. Go to them and say, "This is the most critical thing you have to learn right now, and it's required. You have to go do it. If you want to keep working here, your job description is to go do this."


[00:35:01:00 - 00:35:08:14]
Amith
 The last thing I'll quickly squeeze in that I think is super critical goes back to the beginning of what we discussed Mallory in terms of change management.


[00:35:09:17 - 00:35:55:00]
Amith
 People first and foremost will always look to themselves and say, "How am I affected by this change?" If we're not doing a good job with change management, we're ignoring that basic fact. You have to embrace the fact that people are in fact thinking through their own personal risk and their own personal opportunities as the very first thing, whether it's automatic or whether it's intentional, it's something that happens to all of us. You have to look at that very, very carefully. If what you're doing potentially replaces some job functions, you need to be clear about that. You need to be open about that. You need to look ahead and say, "Is this going to result in a reduction in force or is this going to result in a retraining program?" If you don't do that, people are just going to assume the worst and then they'll freeze up or they'll start looking for another job.


[00:35:56:04 - 00:36:20:24]
Amith
 That's just not a healthy situation. AI is this thing that a lot of people are very much afraid of. They see what it can do and intuitively grasp the idea that a reasoning machine can in fact do much of what they do. That is a big part of the problem with change management. That's been true since the beginning of technology projects. In everything I've ever been involved with, people have had this concern. That concern is amplified, to say the least, when it comes to AI.


[00:36:22:14 - 00:36:29:09]
Mallory
 Basically, what Amith is saying is: be a human in your leadership. Address the fears. Be straightforward. Be transparent.


[00:36:30:09 - 00:36:34:11]
Mallory
 Everybody listening, we want more than anything for you to be in that 5%.


[00:36:34:11 - 00:36:40:08]
 (Music Playing)


[00:36:51:01 - 00:37:08:00]
Amith
 Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.


[00:37:08:00 - 00:37:11:06]
 (Music Playing)


Post by Mallory Mejias
January 12, 2026
January 12, 2026
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.