Summary:
In this special 100th episode of Sidecar Sync, co-hosts Amith Nagarajan and Mallory Mejias take a celebratory stroll down memory lane, unpacking the journey from the show’s inception in 2023 to today. They reflect on five distinct podcast eras, distill the top 10 AI insights echoed across episodes, and get candid about the surprising speed of tech transformation in the association space. You’ll hear how Mallory used Claude and Google Colab to distill all 99 episode transcripts, which past guests left a mark, and what the future holds for episode 200. Plus, an exclusive giveaway for loyal listeners and one golden use case every association should adopt today.
Timestamps:
02:10 - How Sidecar Sync Got Started
04:48 - The ChatGPT Moment & Podcast Launch
06:11 - Transcripts, Claude & Colab: Behind-the-Scenes
08:07 - Five Phases of the Podcast Journey
13:28 – The Evolution of AI Adoption
16:51 - Tiny Models, Big Disruption in Software Development
24:42 - Top 10 AI Insights from 99 Episodes
37:08 - One AI Use Case All Associations Should Implement
40:32 – Rapid Fire!
43:48 - Real-Time Avatars, World Models & What’s Next
47:25 - A Thank You to Our Listeners
👥Provide comprehensive AI education for your team
https://learn.sidecar.ai/teams
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
Claude ➡ https://claude.ai
Google Colab ➡ https://colab.research.google.com
Zapier ➡ https://zapier.com
Groq ➡ https://groq.com
Sidecar AI Learning Hub ➡ https://learn.sidecar.ai
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
- Sidecar Blog
- Sidecar Community
- digitalNow Conference
- Upcoming Webinars and Events
- Association AI Mastermind Group
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
Read the Transcript
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.
[00:00:14] Greetings, everyone. Welcome to the Sidecar Sync, your home for content at the intersection of artificial intelligence and the world of associations. You are in for a treat today. This is the hundredth episode of the Sidecar Sync. My name is Amith Nagarajan,
[00:00:30] Mallory: and my name is Mallory Mejias.
[00:00:33] Amith: And we are your hosts for the hundredth time.
[00:00:36] Mallory, how do you feel about that?
[00:00:38] Mallory: I'm feeling really excited, Amith. I've been looking forward to this episode probably since episode one. Uh, a hundred seemed so far away at that time, but now it's here. I feel like we've had a, an incredible two year run. I was looking back at our old Teams messages. I'm very sentimental.
[00:00:55] I mean, you might know this. I always like, like looking back and having mementos. So I went back and [00:01:00] found the message. Uh, it was July 21st, 2023. You reached out to me, you said, Hey, I have this idea, thinking about doing a podcast for Sidecar and I wanna co-host it with you. And I remember being, you know, part excited, part terrified because I'd never done anything like that, and I can't believe that we are in September of 2025, a hundred episodes later.
[00:01:22] How are you feeling about it?
[00:01:24] Amith: I am stoked. It's just so cool. I've never done a podcast before this one. I've been, I've been interviewed on a bunch of them in the past, but never had been a podcast host. So, um, that's been a really fun journey and it's been exciting to hear so many people reach out and say, I learned this on the pod.
[00:01:44] It's really helpful to me. I shared it with my friend, I shared it with a colleague. It's been super, super rewarding. So I love it. It's, it's fantastic. It's a great way to stay engaged with our community. I think we're delivering some really interesting insights to folks. I think the interview series that you primarily do has been really well [00:02:00] received when we bring in experts with different backgrounds to supplement our usual conversations.
[00:02:05] So, um, it's been a lot of fun. I really enjoyed it and it's been a lot of fun hosting it with you.
[00:02:10] Mallory: Well, thank you Amith. Thank you for inviting me to be a part of the Sidecar Sync Podcast. I was gonna ask you, I figured you had been on many podcasts before, but what was the trigger for you from just being a guest on podcasts to thinking, I wanna launch one?
[00:02:24] Amith: It wasn't, yeah, I think for me it was not the experience of being a guest. I've always, I, I mean, I've always enjoyed doing that. It's, uh, but for me it was more about the moment in time that we were at, I guess you said that was July of 23. And at that moment in time, um, you know, we had been out there talking about AI already for about six or seven years, pretty actively in the association market.
[00:02:47] Um, I have an interest personally in AI that spans back almost two decades. Um, but I, I started getting active with AI from a software development perspective and really getting deep into the guts of AI right around [00:03:00] 10 years ago, so around 2015, maybe a little bit before. So, um, and we got into.
[00:03:07] Really driving the message forward with the association community in the late 2010s. And for a lot of years it kind of felt like we were these weirdos kind of, you know, shouting from the rooftops and no one was listening and people were like, wow, whatcha talking about killer robots for? You're really weird, you know?
[00:03:23] And like, I am really weird, but I'm not wrong. You know, this is gonna be important for you guys. Um, you should really pay attention to this and if you get involved sooner, it's gonna be cool. So fast forward through COVID and the pandemic and all the craziness that that was. Um, then, you know, uh, along comes the ChatGPT moment in late 2022, and in early 2023, we had this experience where we decided to run a webinar.
[00:03:46] Um, and this webinar was the first time we titled something "generative AI." We had done tons of AI education prior to that, but in April of 2023, we ran a webinar. And we promoted it along with our friends at ASAE and we ended [00:04:00] up with 1,700, uh, people attending that webinar, which is like an order of magnitude bigger than, you know, typical webinars.
[00:04:07] I mean, these days we routinely run intro to AI webinars that have many, many hundreds of people, four or five, 600 people each time attend those, uh, regularly, which is really cool. But back then, um, we had never had a webinar that had that many people. And they were really engaged. I mean, people who attended this thing, um, were asking tons of questions.
[00:04:23] We ran out of time. We had follow-ups, and then we had tons of demand right out of that for, uh, doing these AI boot camps, which were these synchronous, instructor-led, um, I think they were four or five week long boot camps, um, and intensive ways of getting people up to speed on AI, and we were oversubscribed, we couldn't keep up with the demand for that.
[00:04:42] So all throughout 2023, we kept scaling that up, trying to figure out how to provide AI education. And um, really, uh, in going back to the pod in July of 2023, it just occurred to me that there was demand for a conversation. At this intersection, as we always say, of [00:05:00] all things associations and AI, where we could really try to make AI make sense for the association community.
[00:05:07] It's not about the technology. It's how you apply it to better advance your mission, to serve your team. To do all the great things that you do as an association, but to do them better, to do them faster, to do them at a different scale than you've been able to. So my idea coming back to you in July of 2023 was, Hey, this could be a really cool medium.
[00:05:24] Let's try it out. Let's see what happens. And so I think you and I maybe started chatting about it that summer and that fall and took us, took us a minute to figure out exactly what we wanted to do, but then we launched it later that year. Right.
[00:05:34] Mallory: Yeah, we launched it, I think the first episode I have it somewhere was October the, like October 23rd, 2023.
[00:05:40] So from July to October we worked on it. We kind of honed in on the idea. I built up the courage to to co-host a podcast, and then the sidecar sync was born. Since then, we've had tens of thousands of downloads and I've gotta say. Thank you to everyone who's listening, who's watching on YouTube. We would [00:06:00] not have this podcast without you, without your feedback, your insights, your stories.
[00:06:04] So I've just gotta say, as excited as we are to be here, we're so appreciative that you choose to join us every week as well. So what are we gonna talk about in today's episode, Amith? It's gonna be a fun one. I've cooked up a really fun episode today. We are going to look back at the previous 99 episodes.
[00:06:22] I was in a bit of a predicament. I had all of the audio files of all 99 episodes locked up in SharePoint and I had, this is on me, I have transcripts, but they're kind of in disparate places. They're on the Sidecar website and it would not have been the most efficient thing for me to go and like copy every transcript from the website into Claude, for example.
[00:06:42] So Amith had the idea, well, why don't you just run a program to do it for you? I said, okay, Amith. He said, we'll use AI and then we'll talk about it on the podcast. So that's what I did. I went to Claude. I, I didn't really know what I was doing at all. So I asked Claude, how would one write a program and run it?
[00:06:59] So it, it was a little bit [00:07:00] meta, and then Claude kind of instructed me, well, here's all these options you have. Here are these platforms you could use. I thought about using Hugging Face, but it seemed a bit complicated. So I used Colab, Google Colab notebooks. I dunno if you ever, yep. I mean it's like, duh, easy stuff.
[00:07:15] And Claude wrote the code for me and then really instructed me like, here's where you put it. Here's what you do. I got an error message. Okay, here's how we fix that. I ran it. It did take kind of a long time to transcribe ninety-nine episodes, but all in all, it took just like 30 minutes of setup and then a few hours of waiting, and I had all of them.
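For anyone who wants to try the same kind of workflow, here is a minimal sketch of what a Colab transcription cell might look like. Mallory doesn't say which model or folder layout she used, so the Whisper model choice and the directory names below are illustrative assumptions only.

```python
# Minimal Colab-style sketch: batch-transcribe podcast audio with Whisper.
# Assumes the episode MP3s have already been downloaded into ./episodes/
# (the SharePoint export step is omitted). Model and paths are illustrative.
#
# In a Colab cell you would first run:
#   !pip install -q openai-whisper

from pathlib import Path
import whisper

AUDIO_DIR = Path("episodes")        # hypothetical folder of downloaded .mp3 files
OUTPUT_DIR = Path("transcripts")
OUTPUT_DIR.mkdir(exist_ok=True)

model = whisper.load_model("base")  # small enough to run on a free Colab runtime

for audio_file in sorted(AUDIO_DIR.glob("*.mp3")):
    out_path = OUTPUT_DIR / f"{audio_file.stem}.txt"
    if out_path.exists():
        continue  # skip episodes already transcribed on a previous run
    result = model.transcribe(str(audio_file))
    out_path.write_text(result["text"], encoding="utf-8")
    print(f"Transcribed {audio_file.name} -> {out_path.name}")
```

From there, the resulting plain-text files can be gathered up and dropped into Claude for the kind of summarizing and theming Mallory describes next.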
[00:07:33] So with those 99 episode transcripts, I dropped them into Claude and did a lot of brainstorming and reflection over the past 99 episodes. So what we're gonna be looking at is the journey to a hundred episodes. We're gonna be talking about the top 10 insights that come up over and over on this podcast.
[00:07:52] If you've been listening since episode one, I would be willing to bet you can probably guess all of those 10 insights. We will be reflecting amongst me [00:08:00] and Amith, you know, lessons learned, things that we are looking forward to in the next hundred episodes, and then wrapping up. So let's first start with the journey.
[00:08:10] So we launched right before digitalNow in 2023, when AI was still pretty theoretical for associations. Uh, Claude calls this our foundation setting phase. This was episodes one through 20, so October 2023 through February 2024. In episodes one and two, we talked about AI agents, but it was still very much, agents are coming.
[00:08:32] It wasn't like agents are here in this moment. In episode 10, we had our first AI predictions for 2024. We talked about multimodal models, which sounds like kid stuff now. We talked about the rise of open source, which again we talk about even still two years later, and our fundamentals episode became one of our most downloaded episodes ever.
[00:08:53] Phase two. We'll call this the tipping point. This is episodes 21 through 40. In spring of 2024, [00:09:00] we started to see associations go from watching to doing. We had episodes on healthcare ai, a really popular technical vectors episode, unstructured data, deep dives. We also had some interview episodes that showed real implementations happening and AI seemed to become a little bit less scary and a little bit more practical.
[00:09:21] Phase three, episodes 41 through 60, we're calling this the acceleration. This was the time period leading up to digitalNow 2024 in DC, which, I don't know about you, Amith, but felt kind of like a completely different energy from 2023. We went from talking about what AI could do to like, what will it do today?
[00:09:40] Do you agree with that?
[00:09:41] Amith: Totally. Yeah. DC was a moment in time, the DC conference, and we kind of are flip-flopping between DC and Chicago, uh, to bring the conference to both, both of these major association hubs. So we're in Chicago, um, in a couple months. And in any event, yeah, DC last year really did feel like another, uh, inflection point [00:10:00] and moment of acceleration in terms of the actual tangible use cases that people were talking about compared to the year before in, in, um.
[00:10:07] In Colorado, where it was really more of people being really curious, which is great. Um, but in, in DC people were showcasing what they had built and that was cool.
[00:10:16] Mallory: Yep, we had that association showcase on the Wednesday of the event, and we also had some breakout sessions from association leaders that were showing us work they're actually doing with AI.
[00:10:26] Um, also a note we had talked about the Sidecar AI Mastermind group, and we started having participants in that group present their own projects from their associations. So that was really kind of an inflection point, as you said. Phase four. That was episode 61 through 80. So we're getting more recent. We had our 2025 predictions episode, and we got a little bit bolder.
[00:10:47] We said we thought small models would be beating large ones and kind of changing the economics, which I think has held true. We've also talked about how voice AI and AI avatars would become production ready. Definitely [00:11:00] true, and we saw associations asking about governance, not just tools. So where are we now?
[00:11:06] This is what we're gonna call the new normal, episodes 81 through 99, or I guess a hundred now. We've been talking about artificial general intelligence or AGI, neuromorphic computing, physical AI and robotics entering association conversations. Really shifting from how can AI help, to AI readiness is the survival of your association.
[00:11:27] And I really like this, and I don't think we planned it this way, we didn't, but how episode 99 with Jackson Boyer was almost a full circle moment. Like, we've spent the past nearly a hundred episodes talking about technology with, of course, humanity sprinkled in there. But I thought the episode with Jackson, if you all haven't listened to it, was really great, talking about the importance of, of influence and human connection in the age of AI.
[00:11:50] And now we're at episode a hundred. So Amith, what do you think about, uh, the five phases of the Sidecar Sync we've seen thus far?
[00:11:58] Amith: I think Claude is really good at [00:12:00] distilling information and categorizing it and coming up with really cool, catchy like. Phrases and titles and things for stuff. So it's, it's, that's pretty amazing by itself that, um, you know, in that 99 prior episodes, or in that timeframe I should say, in just under a couple years, um, we've gone from a lot of theory to a lot of action uhhuh, and I think that's true across the board, even, like, think about how you prepared for this episode and what you described.
[00:12:27] Which, I'm not familiar with Google Colab, uh, by the way. I'm, I've heard of it, but I haven't ever, I haven't ever used it. So you'll, I'll show you one bit. Tell us more about that. Don't worry. Yeah, you have to tell us more about that. But the fact that you could go in there and build a program that you could run on your own.
[00:12:39] And do the kind of, um, you know, digestion of unstructured data, manipulate it in a whole bunch of different ways, and come up with all the stuff you just described is pretty remarkable for a two year span of time, because that would not have been even close to possible two years ago. That would've been a total pipe dream two years ago.
[00:12:57] That's pretty remarkable by itself.
[00:12:59] Mallory: I've [00:13:00] been thinking the same thing too, because we've talked about on the pod recently, but we are working on the third edition of Ascend and even just how much better Claude is. We've used AI to assist us in writing the books from the beginning, but I mean, the quality of the writing that you can get out of these tools now versus two years ago was just honestly insane.
[00:13:20] And I'm, as a side note, very excited for everyone to see Ascend third edition. It is going to be the best one yet by far, but we've, we've come a long way.
[00:13:28] Amith: Yeah, we sure have. And you know, I think that, um, when we think about the theory of constraints, that would be kind of bounding in what people's productivity is or organizational output would be, um, that's what's changing so rapidly, right?
[00:13:40] When you think about constraints, you think about labor, you think about capital, you think about, in the case of some businesses, land, the ability to produce physical goods, things like that. That last category is uncommon for this sector. But, um, the point would typically be that labor and capital are the constraints that would prevent people historically from doing a wide variety of things.
[00:13:59] Uh, [00:14:00] technological capability, of course, is another constraint. You know, even if someone had the idea and plenty of, uh, labor and plenty of capital, if the constraint was that the tech, the science simply hadn't yet been invented to do certain things, that'd be a problem. We're entering this remarkable and kind of weird new era where the new normal is very weird, where we have no idea what not to expect.
[00:14:20] Which means we have no idea what to expect. And so that's exciting. But it also means that we have to be more nimble and we have to be more, um, willing to accept that we were wrong about what we thought even yesterday, but certainly a year or two ago in terms of what's possible, what's not possible. One of the first things I tell association CEOs when I'm having a little bit deeper conversation, uh, about AI is, uh, just make sure you reevaluate your assumptions regularly.
[00:14:47] So if people say, well, I tried that with ChatGPT, it didn't work, and so I haven't gone back, I'm like, well, when did you try that? They're like, oh, maybe right after it came out. Like, well, that is like comparing, you know, you say, Hey, I used a [00:15:00] computer back in the 1960s with punch cards, and so it was awesome.
[00:15:04] Yeah, it was, it was just fantastic. It was a great experience back then. Um, but you know, in reality we've gone through such a generational shift in such a short period of time with the tech. So, um, I guess my point would be that it's, it's hard for even those of us that are spending all of our time thinking about this stuff.
[00:15:20] It's hard to keep up. Uh, not so much in just like the stats and the data and all the different model releases and all that, but the mindset you have to have and you have to kind of retrain your brain constantly, that just because you think something isn't possible doesn't mean that that's true anymore, right?
[00:15:35] It's, it may not have been true to begin with, but it's, it's very likely untrue, um, only moments after you form the opinion, which is really strange and, and I find it ultimately very exciting because there are no upper bounds.
[00:15:48] Mallory: It is exciting. It is strange. It also, we've said this on the pod before, kind of breaks your brain a bit because I've known for a while now that I could create software using ai.
[00:15:59] Right? But [00:16:00] doing it is a whole different thing. And so now that I know that I can actually do it. It's, you're right, there's no upward limit. It's kind of like, well, Mallory, what are you gonna create? Because you can't use the excuse that you're not technical, so what's next for you? And it's very scary, but as you said, very exciting too.
[00:16:15] Amith: But isn't it, doesn't it give you a, uh, like an empowering kind of a feeling, right? Oh, sure. To be able to create things on your own and, and maybe even create stuff that not only is for your own personal use, but that other people could benefit from.
[00:16:27] Mallory: Right, and it's just something I've never thought about.
[00:16:30] You know, we all look ahead five, 10 years at what we think our lives would look like. Well, mine has never included creating software that could benefit me or others. I just knew it was probably not in my wheelhouse. So now that's changed. So now it's, okay, I've gotta kind of reevaluate what I knew about myself, what I knew about my future.
[00:16:46] So I'm sure a lot of listeners can relate to that. But I think, well, I think the
[00:16:50] Amith: association, I think the association community at large can relate to that because for the longest time, associations have considered technology and software development to be their [00:17:00] Achilles heel. You know, they say, well, no, we don't wanna get into that because that's gonna require a custom app, and I don't have the technical means to maintain that or spend the money.
[00:17:09] Now the barriers are coming down, the costs are decreasing, the maintainability is increasing, and, um, you should really, you should reevaluate those thoughts. Right? And, and I'm not suggesting that everything should be custom software. There are actually some folks in the AI world, uh, who are so incredibly focused on what you just described, that they're saying that the world of SaaS is gonna go away, that the likes of Salesforce and HubSpot will cease to exist.
[00:17:34] Over the next coming decades because it's so easy for Mallory and anyone else to say, Hey, I want a CRM. I want a marketing automation. I want this. I want that. I personally don't agree with that. And, and of course I'm biased because for 30 years I've been building SaaS products and, and preceding that, uh, enterprise software products.
[00:17:51] Um, but the reason I think that, my point of view is, is really based on simple economics, which is, um, it is true that software is going to become more abundant. [00:18:00] It is true that more and more users are going to be able to create their own software, but I think it's also true that the competitive forces will drive quality improvements and price reductions across
[00:18:09] a variety of broader horizontal software tools like generic CRM and so forth. I'm hopeful that it'll help, uh, specialty markets, like the association vertical, as well. Um, but I think what'll happen is, is that even though you could create a generic CRM or marketing automation, you'll probably still choose to use something out of the box because, you know, it just works and you don't have to think about it.
[00:18:29] And so the overall cost to you isn't just the cost of the software, but it's. The knowledge that that software just works and plugs into everything and does a certain number of things quite well. Uh, my perfect example for this would be financial software. Even though I could today, and even with, you know, Claude version 10 or whatever, create great financial software, I have zero interest in doing that.
[00:18:49] I'd much rather pay QuickBooks or even a NetSuite or something like that, um, whatever their fee is in order to eliminate that conversation, because I know that that type of software is just gonna work really well [00:19:00] with a battle tested piece of software. So I think it's gonna be a mix, but, um, I do think what's really exciting too is it's, it's the glue that connects systems that oftentimes has been limiting.
[00:19:09] You know, one of the tools that I think has made a remarkable impact with business users is this tool called Zapier. I think you've used Zapier a fair bit over the last few years, right? Oh,
[00:19:18] Mallory: yeah, yeah. I've, I've dabbled for sure.
[00:19:21] Amith: And that's an amazing tool, even pre AI because it allowed you to connect thousands of applications with each other.
[00:19:26] And you know, in a fairly straightforward way, users can click and drag and drop and connect, you know, one technology to another. And when something happens over in HubSpot, to automatically move that data to another system. And you know, it wasn't for enterprise caliber integrations typically, but it could empower business users to solve the last mile problem, which is that last bit of functionality.
[00:19:45] They were all, they always wanted to automate, but really couldn't. I think with MCP and ai, um, you're gonna be able to fill in those gaps and have the glue, the connectivity between different major systems as well as the custom pieces of functionality you need between them. [00:20:00] And that's exciting as well.
[00:20:00] So I think my whole long, uh, point about this is that that which we once considered to be a weakness no longer needs to be that. And I think that's exciting too.
[00:20:11] Mallory: Amith, you have a knack for looking ahead, for hitting the nail on the head in terms of what's coming with innovation, with technology, and you've done that quite a bit on the podcast.
[00:20:22] If you could go back to your episode one self, what do you think that version of you would be most surprised about for your episode a hundred? Like in terms of what's changed?
[00:20:34] Amith: I think what I'd be most surprised by, quite, quite happily, is the speed at which associations have been actually going out there and experimenting with and adopting AI.
[00:20:43] Now, I'm not suggesting to you that I'm satisfied with it because I'm never satisfied. I always wanna push things harder and faster and all that, but the number of associations that are out there that are actually doing things with AI is a much higher percentage than what I would've guessed two years ago.
[00:20:58] Um, now that [00:21:00] doesn't mean that associations should, should say, yep, check that box. Let's move on to the next topic. They should keep going really hard, but I'm quite proud of this market for moving and, and experimenting. It's, it goes against the grain of the culture of this space in terms of moving rapidly and having a willingness to experiment.
[00:21:16] I think that's generally important cultural shift for not just for ai, but just it's really good to be of that mindset I think. But, um, I'm really proud of this market for moving along and, you know, we've, we've obviously played some role in that and I'm proud of that as well. But, um, from a technology perspective, I would say to you outta all the things that I would be surprised by is how extremely powerful tiny little small models are.
[00:21:40] Even though we saw the trend line even before we started recording the pod, it was very clear what was happening in terms of the doubling in model capability and the halving that came with that in terms of cost. We started reporting on this very early, that smaller models were increasingly as capable as, you know, last year's frontier model and, [00:22:00] and on and on, and that kept happening.
[00:22:01] But even though I knew the math worked, just seeing what you can do on a phone or on a PC now with a local model. I do this all the time, running local models on my MacBook, and yeah, I haven't done this on a phone, but I know you can run some models on your phone and they're as good as GPT-3.5, the original model that, that shipped with, uh, ChatGPT.
[00:22:21] In fact, a lot of the models that I run on my MacBook Pro are better than GPT-4 Turbo, which was, at the time we launched the pod, the state of the art, you know, model that cost OpenAI hundreds of millions of dollars to train. That's remarkable that that happened in such a short period of time. So I don't think I would've predicted that speed, even though I, directionally, I knew that was happening.
[00:22:41] I might have said something like, oh, we'll have a GPT-4 Turbo equivalent model that can run on a PC by the end of the decade, or something like that. But it's, we're already there, so that's amazing. Wow.
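To make the local-model point concrete, here is a minimal sketch of chatting with a small open-weight model running entirely on a laptop. Amith doesn't name the tool he uses, so Ollama and the Llama 3.2 model name here are assumptions chosen purely for illustration.

```python
# Minimal sketch: query a small open-source model running locally via Ollama.
# Assumes Ollama is installed and the model has already been pulled, e.g.:
#   ollama pull llama3.2
# Tool and model name are illustrative; the episode doesn't specify either.

import ollama  # pip install ollama

response = ollama.chat(
    model="llama3.2",  # a few-billion-parameter model that fits on a laptop
    messages=[
        {
            "role": "user",
            "content": "Summarize why small local models matter for associations.",
        }
    ],
)

print(response["message"]["content"])  # the model's reply, generated offline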
[00:22:51] Mallory: So the speed is even surprising you, Amith, if you think of the you from two years ago. That's
[00:22:55] Amith: shocking. Yeah. I mean our, our species, which most of the time I'm generally included [00:23:00] within, is, you know, essentially
[00:23:02] A linear species, right? Yeah. We think, we think in that, in those terms. We don't, we're not good at thinking exponential. We're looking for our next meal, not our next thousand meals, you know? Um, so I think that that's a hard thing to go against millions of years of evolution. And so we are not trained to think in terms of exponentials, and it's very difficult to constantly recalibrate our brains and say, well, no, no, no.
[00:23:24] We can't think about the trend line we've experienced. We've gotta think a new order of magnitude every single time there's a doubling.
[00:23:30] Mallory: I wanna move ahead to what Claude helped me call the Wisdom Harvest. So we're gonna look at kind of the wisdom, the insights that we can pull out of our past 99 episodes.
[00:23:41] Amith, I don't know if you've gotten to this part of our outline yet, but can you guess what the first insight is? The thing we say the most on the podcast might be,
[00:23:50] Amith: I am, I haven't looked at it yet, but I'm guessing that it's something like, the first thing you should do is learn. Get off, go and learn [00:24:00] something pretty much just like that.
[00:24:01] Is that, is that what Claude picked up on?
[00:24:03] Mallory: Almost, almost nailed it. Get educated, training is critical. I don't think that was an exact quote either. So you probably did, uh, deliver it as you, Amith, would have. But that is the first insight. You can't govern what you don't understand. I feel like we, we go over that all the time, and it's very reaffirming to know that Claude has gotten the same sense from 99 episodes.
[00:24:23] The second insight is what we just talked about. The pace of change is only accelerating. What feels fast now will feel slow next year, maybe in six months. Number three, you have to experiment. Start small, test, scale. Bullets, then cannonballs. That only makes sense to some of you who listened to that episode. Amith,
[00:24:43] Can you give us a, a quick recap of bullets and cannonballs?
[00:24:47] Amith: Sure. It's one of my favorite metaphors in business, and it comes from Jim Collins, one of my favorite business authors. He had a book called Great By Choice, uh, which covered, uh, a number of interesting lessons. But there's a particular chapter in there, I forget the chapter [00:25:00] number, uh, where he essentially covers this concept.
[00:25:02] And the visual metaphor is to paint in your mind the idea of being out at sea, uh, in a, uh, you know, foggy, dark environment. So you cannot see in front of you or behind you, but you know you have an enemy ship out there, and you have essentially a limited amount of a scarce resource, which is gunpowder.
[00:25:24] Um, you've got a lot of bullets and you have one cannonball, and you have enough, uh, you have enough gunpowder essentially to fire one cannonball, uh, and quite a few bullets. And so the idea is, is that, um, you know there's an enemy ship out there, somehow you know this, but you dunno directionally where it is.
[00:25:41] So what are you gonna do? Well, hopefully you're not gonna just load up a cannonball and say, you know, hopefully we're gonna get this right. That, that may not, it's a strategy, it may not be a successful one. Um, instead, what a lot of people would choose to do is to say, okay, well let's test this out. Let's just.
[00:25:57] See, like, let's kind of divide the, [00:26:00] the space we think that the ship might be in and fire a bullet and listen, because we can't see. So we're gonna listen for a little ping and, oh, uh, we didn't hear anything. Let's fire another bullet, maybe in another, you know, slightly different direction. Keep adjusting, keep calibrating the direction you're pointing.
[00:26:14] And then you hear ping, you hear, oh wait, I actually think I hit my mark. And at that point though, this is the key lesson. You have to do something. You have to say, ah, okay, well, that other ship isn't like hanging out saying, oh, someone just hit my hull, I'm just gonna kick back and chill right here.
[00:26:33] It's just, let's listen to some Jimmy Buffett and have a margarita. Not really likely, they're probably gonna keep moving, right? So, um, you might want to immediately load that cannonball and fire away. That's when you gotta use your scarce resource and go for it. And so what Collins talks about with his, uh, you know, he has this amazing, uh, peer data set where he reviews all the work that is, is deeply researched, that's based on this, you know, peer matching concept of companies that essentially had similar [00:27:00] circumstances but chose, chose differently.
[00:27:02] And what he talks about is that there's a lot of companies that do indeed experiment. They do run lots of experiments, they do fire lots of these little bullets, but what they fail to do consistently is to load the cannonball when they have. Clear, compelling data, not necessarily incontrovertible, not necessarily perfect data, but clear data that suggests strongly that they're on the right track.
[00:27:25] That's where they fail to fire the cannonball and therefore they don't win. Um, and so many companies have actually run these experiments and they found the graphical user interface at Xerox PARC, or they found the digital camera at Kodak. On and on and on. The stories are out there of people discovering ideas and these just innovation ideas, but lots of things like that you find, but then you're like, oh, there's all these reasons why I can't fire the cannonball.
[00:27:51] I don't wanna be wrong. I don't wanna lose my job. What if it's not actually the right thing that I'm gonna hit? You know, if it's something else, and on and on and on. There's [00:28:00] always a thousand reasons why you shouldn't do something. So. In that context, we need to really step back and say to ourselves, it's not just experimentation, it's the willingness to take action and go big once a small experiment actually works.
[00:28:13] That's the, that's the essence of the bullets and cannonballs concept.
[00:28:18] Mallory: Yep. But if you're not experimenting, start, and if you are experimenting, you don't just wanna have a graveyard of experiments, you wanna find the ones that work and then take action, shoot your cannonball.
[00:28:28] Amith: What's cool about that metaphor is you understand immediately what you're, what you're talking about.
[00:28:31] It's, it's in that situation, it's a life and death scenario. You either, you know, do what you need to do or you're not gonna be around for very long to, you know, to figure it out. You have one shot, literally. Um, and here you kind of have the same thing going on. Even if you don't realize it, you don't have a lot of opportunities here.
[00:28:49] All of us, all of our business models are going to need to change. And, um, experimentation by itself is a great starting point, but you have to be willing to go after the opportunity when you [00:29:00] find something that works. You know, for example, a lot of people are finding success with knowledge assistants.
[00:29:04] You know, this concept of loading your association's knowledge into a knowledge base and then putting it out there and seeing great results, and you're seeing, oh my gosh, people are coming to the knowledge system and spending 7, 10, 15, 20 minutes at a time per session instead of the typical website visit, which is under a minute.
[00:29:21] And so what's going on? There's something happening here that's deeply rewarding to the consumer of that experience. It's not just the association that's excited. Well, what do you do? You're like, okay, cool. I'm done. Check that box. We're done. Let's move on to the next thing. No, you say, how the hell can I exploit that to its absolute greatest potential?
[00:29:38] How can I take that knowledge assistant and embed it into every corner of the universe in my industry? Make it the defacto standard that people are begging for. Right? How do you go absolutely massive. Take a giant swing at that particular opportunity. And some associations are taking that step. Others are just like, Hey, this is a really cool AI experiment.
[00:29:55] It's really successful. Certainly from our perspective, uh, that's great as a [00:30:00] starting point. But take that big swing if you see something that's right, you know, right in front of you.
[00:30:05] Mallory: Mm. We, I'm thinking of Ascend three again, just 'cause that's what I'm working on currently. But we have several case studies that are incredible in this edition, and I'm thinking of Ascend four already and wondering if maybe in that version we'll see more associations taking those big cannonball moments. To be determined.
[00:30:24] Alright, number four. And it's funny 'cause Amith, you kind of just said this, our fourth most mentioned insight is: this is not optional. AI readiness isn't a nice to have, it is survival. Fifth insight: member expectations have permanently changed. They're expecting Netflix-level personalization. Sixth insight:
[00:30:43] personalization at scale is finally possible. One-to-one engagement is achievable. Seventh insight: AI won't replace you, but someone using AI will. It's about augmentation, not replacement. Eighth insight: your unstructured data is gold. [00:31:00] PDFs, content, recordings. That's where the value lives. Our ninth insight: move from theory to practice. Stop talking about AI and start doing AI.
[00:31:10] And then number 10, Claude, I'll let you know, Claude said this should be your, um, your saying: exponential everything. It's not just AI, it's every technology accelerating. So you must say that a lot as well. Of these 10 insights, Amith, which I think are pretty spot on, which of these do you, have you seen associations being most willing to accept, and which of these do you think you see the most resistance on?
[00:31:37] Amith: Getting educated. I think people are, um, it might actually be the answer to both of your questions there. People are, um, naturally willing to accept that. That, that makes sense because to really do something, you have to understand it. Somebody else can't really understand it for you, on your behalf, and go do it for you.
[00:31:55] So conceptually, that seems to make sense to everyone. Also, associations are in the education business. [00:32:00] So, uh, educating seems to be something that they're willing to do for themselves. I'd say that the other side of that, though, is actually getting people to comply with doing the work. Sometimes it's a bit of a challenge where you might have an association leader say, Hey, we're gonna go and, uh, sign up for the Sidecar AI Learning Hub, or, or whatever they're doing, and then not mandate it, to make it optional, to make it more of a, a carrot rather than a stick type of thing.
[00:32:24] Um, you know, I don't think it's a carrot or stick thing. I'd say have one in each hand. And go for it in both ways, in the sense that you should certainly reward people who are early adopters for learning. But you should also let people know who aren't doing AI training, that if they would like to continue being employed by your association, they will be AI educated.
[00:32:44] They have to be, there's no choice. Uh, I've seen more and more association leaders step up and do exactly that. It tends to go a little bit against the grain at a lot of associations, where the culture is a little more collaborative, less directive. But, um, you know, I think that's really important. So that's the part that I think's been a little bit slower than [00:33:00] I would've liked.
[00:33:01] Mm-hmm. Uh, again, going back to my earlier point about what I've been most surprised with over the last hundred episodes is there has been a lot of adoption and, and a big part of that is the leading edge of it has been learning first, and there's a lot of truly knowledgeable people in the association market at, at a very deep level on, on AI.
[00:33:17] And it's, that's exciting. And that, that comes from the investment they've made in really going after it, not spending a hundred hours in a row typically, I mean, some people may have done that, but typically chipping away at it and just spending a, a little bit of time every day, every week, and getting better and better and better.
[00:33:31] And we're seeing that all the time. So to me it's both sides of that coin, that people get it, but then it's actually getting people to do the work. So beyond the people who are naturally motivated, who carve out the time, who are curious about it, it's getting everyone on board. What I tell leaders is that they have a leadership imperative to force, not just suggest, but force all of their employees and probably most of their volunteers, the close-in volunteers anyway,
[00:33:59] To become AI [00:34:00] proficient. And the reason it's an imperative isn't because it's what's serving the organization, although it certainly does that, it's to serve the people. Because as a leader, your number one goal is to grow your people. Your product is the rate of growth of your team. And if you grow your team, you'll have unbelievable success in your business, whatever that business is for-profit, nonprofit, it doesn't matter.
[00:34:20] Um, if you don't grow your team, you're not leading, um, you're essentially letting your people go stale. And in this environment that is malpractice. And we say that very strongly. It's upset some people when I've said that, but I really don't care. Um, it is leadership malpractice to not demand that your entire team learn AI and learn it now, so.
[00:34:39] Um, that's probably the area that I get frustrated most when I see a little bit too much of a laissez-faire kind of a mindset about the speed at which people should learn this stuff.
[00:34:48] Mallory: Mm-hmm. And to go back to that bullets and cannonball metaphor, I don't think the captain of that ship is kind of like everyone, can we all agree?
[00:34:57] Do we wanna shoot the, this one bullet here? [00:35:00] I don't know. Maybe they are. Maybe there's different kinds of ships out there, but. Speaking of the imperative to grow your people and educate your team on ai. If you're listening still to the episode right now, we wanna let you know about a giveaway we're doing in honor of a hundred episodes of the Sidecar Sync.
[00:35:19] If you've been enjoying listening to our episodes, we would like you to go to LinkedIn and make a post about one of your top favorite episodes of the Sidecar Sync Podcast. Tag Sidecar, Amith, and me. We will be selecting one individual to gift one year of the AI Learning Hub to. I know a lot of you are already in the AI Learning Hub, so you can also, if you're selected, gift that to someone else on your team.
[00:35:45] So this is kind of a little Easter egg. We wanted to mention it now for all of our superfans who are still tuned in. So please make your posts on LinkedIn and may the best post win. Okay. Moving to our next section in the Wisdom [00:36:00] Harvest, uh, which is if we could only tell associations one thing, uh, and we divided this out into CEOs, IT leaders, membership teams, education and content teams, and boards. For CEOs and executive directors, your role isn't to be the AI expert.
[00:36:14] It's to ensure your organization and your people have AI expertise. For IT leaders, stop trying to control AI, start enabling it safely. For membership teams, every member interaction without AI augmentation is a missed opportunity. For education and content folks, your content plus AI equals new revenue streams. And for boards, if you're not asking about AI strategy, you're not doing your fiduciary duty.
[00:36:44] Claude said it, not me, Amith. How have your conversations with leaders changed over the last two-ish years? You
[00:36:53] Amith: know. There's some very consistent elements, which I've already mentioned in terms of education, getting started, running [00:37:00] experiments, uh, going bigger once you find something that works, that, that I think will be consistent in the, in the next, you know, two years.
[00:37:06] Um, what I think has changed in the last couple years in conversation is the amount of choice that's out there has really grown tremendously. Choice in terms of AI models, providers, where you can run the models, what's called inference, which we talk about a fair bit. Um, there are many options out there and a lot of the associations that are out there doing stuff just kind of default to using OpenAI, some using Claude or Gemini, but mostly OpenAI, which, OpenAI is definitely the leader in the market in terms of market share and arguably, depending on who you talk to, in terms of model capability, um.
[00:37:40] There are so many different choices to be had that are out there. Uh, in terms of providers, models, things you can run, there are literally hundreds of different models you can run on dozens of different legitimate providers, some of which offer fast inference, much, much faster than what OpenAI or Azure or anybody like that offers.
[00:37:57] So I think it's important to think a little bit more deeply about [00:38:00] this and don't just go down, you know, kind of robotically, the OpenAI path. Again, if you think OpenAI is great, by all means, use them. But, um, consider your alternatives. And that wasn't a thing two years ago. There really, there were, there were other choices two years ago, but not nearly
[00:38:13] the depth of opportunity that there is now. So that makes it really exciting. Uh, a lot of people also assume that cost is going to be a major hurdle for them, uh, because the cost of some of the major providers, uh, in particular like Anthropic's Claude, is extremely expensive to use. So if you're like, oh, well,
[00:38:31] if I use the API for Claude Opus 4.1 to process all my content, it's gonna cost me $800,000 or something like that. And that may be true, but you also might be able to process your content and do the things you're looking to do with a model that's a tenth the size or even smaller and, and runs for a hundredth or a thousandth the cost, you know, using open source on a variety of inference platforms.
[00:38:56] So that choice didn't exist two years ago at the level it does now. [00:39:00] That requires a little more critical thinking.
[00:39:03] Mallory: Alright, we're moving toward the end of the episode, Amith. I wanna ask some, I guess we can do it kind of rapid fire, questions, but some of these are worth a discussion as well. Uh, if there was one use case from the past 99 episodes that we have discussed that you could snap your fingers right now and all associations would have it implemented immediately, what would the one use case be?
[00:39:24] You have to pick one thing.
[00:39:26] Amith: That's really tough. As you know, I have a hard time picking one thing and I have a hard time saying anything quickly, but I'll do my best. I would say that the one thing that I would like to see associations snap their fingers and do is implement personalization because it's easy and it works and it's inexpensive, and you can get it done so, so fast.
[00:39:49] It's low risk. Um, and frankly, if you don't do it, you're gonna seem pretty dinosaur pretty soon, if not already. Um, and the value is enormous. I mean, if you [00:40:00] personalize your content, your members are starting to get what they actually want. It's just an opportunity that's just so incredibly obvious to me.
[00:40:07] So I think that that's out of all the things I wish people could all do, that's universally important. There are others out there we talk about here, but uh, that would be the one that my brain goes to right away. Of course, it's the category of AI that I've been working with the longest for over a decade.
[00:40:21] Mm-hmm. I've been deep into personalization weeds, but, uh, through that experience, I know how hard it used to be and how expensive it used to be, and how many millions of dollars you used to have to throw these personalization engines just to get them running. And now how quickly you can get a wide array of personalization technologies going, um, very, very inexpensively and incredibly effectively.
[00:40:41] If you're not doing it, you're really missing out. That'd be my point.
[00:40:45] Mallory: That was a pretty quick answer. And you said one thing, I'm impressed.
[00:40:51] Amith: It kind of hurts to have to limit it, but I, I must stop right there. That's, that's why I
[00:40:54] Mallory: wrote it that way. Uh, which guest changed how you think about AI or [00:41:00] perhaps maybe one of your favorite guests?
[00:41:01] I know we've had a lot on the podcast and I didn't prep you for this, so if you can think of one right now, who would you say?
[00:41:08] Amith: Yeah, you know, I'd probably point to our conversation with Ian Andrews of Groq. He's the Chief Revenue Officer of Groq, longtime colleague and friend of mine, I've known Ian for, for decades now.
[00:41:19] And, uh, really smart guy, really insightful, uh, revenue leader across a variety of companies over the course of decades in his career. And, you know, the way he described the demand for inference at
[00:41:37] GroqCloud. These guys are one of several fast inference providers, so their platform is proprietary hardware that literally delivers about a 10x improvement in speed compared to OpenAI and Claude and Gemini, running, running open source models. And, um, I'd say, I'd say the insight that, um, I took from our conversation with him was that because, you know, he's talking to some of the largest [00:42:00] investors and corporations in the world about their platform,
[00:42:03] the kind of consumption and growth and demand they're seeing in inference, it's just kind of impossible to put your brain around it. Even for those guys, you know, in that space, the speed at which demand is growing is, is unbelievable. So to me, that moment in time was interesting just in conversations with them outside of the pod leading up to that.
[00:42:22] And since then, it's just reinforced it. But, um, the, the demand for this stuff is just absolutely incredible. You don't really see that in the association market as directly. So talking to someone who's immersed in that world was fascinating for me.
[00:42:35] Mallory: That's a great answer. What do you think we'll be talking about in episode 200?
[00:42:40] That sounds like sci-fi today.
[00:42:44] Amith: I think in the next hundred episodes, which should be roughly two years, right? Um, right around digitalNow 2027, completely nuts. Um, what will that, what will the world look like? We will have real-time video avatar [00:43:00] interaction that's, you know, trained against our knowledge base and our content, um, you know, capable of, of doing some pretty incredible stuff.
[00:43:08] So think about like live real time, perfect caliber, like tutoring against all of your knowledge, uh, professional assistants that are, that there with you. And the video side of it might sound gimmicky to some people, but the degree of, you know, kind of the dimensionality that gives you the interactive nature, not only the, you see the avatar, but the avatar sees you.
[00:43:29] Understands you better because we communicate so much of what we have to say and how we're feeling through our expressions, through our hand gestures, um, all that. So I think that that part of what we're dealing with is gonna just absolutely blow up. We've talked about world models a little bit recently.
[00:43:45] You had Thomas Altman on, I know you guys talked about that. We talked about world models in prior episodes before that, uh, that's going to give AI a new level of kind of, uh, understanding of the world, right? Uh, physical understanding of the world. Coupled with increasing levels of knowledge, [00:44:00] there's gonna be some kind of a, a breakthrough
[00:44:03] in fundamental model architecture that really changes the game. Um, some of the limitations we have in the current architectures we work with are, um, you know, they're being overcome bit by bit, piece by piece on a regular basis through advancements. But the fundamental architecture, the Transformer that I'm speaking of, is, is still the fundamental architecture, and that's now, you know, almost 10 years old.
[00:44:24] Uh, a hundred episodes from now. Transformers will be exactly 10 years old. I believe we'll have some significant breakthroughs by then that will change things like, you know, context windows won't even be a conversation potentially. We'll just be effectively in the world of infinite context windows with extraordinary instruction, following with long context.
[00:44:43] And I think that's gonna be a really big thing. And I guess the last thing I'd say, since you didn't say I had to pick one, I'm going on and on next time, on the 200th,
[00:44:50] Mallory: I'll be more specific. Yep.
[00:44:52] Amith: Yeah, you gotta, you gotta remember that with me. But, um, the, the last thing I would say, I think in a hundred episodes from now, um, we will have [00:45:00] hundreds of AI agents deployed by hundreds of associations doing really, really rich, meaningful work.
[00:45:06] We're at that tipping point right now for agentic AI. AI is being used by most organizations at some level, but it's primarily what I'd call consumer grade use cases. What I mean by that is not derogatory at all, it's simply that it's an individual user doing work with Claude or with an assistant, asking one question at a time, doing one step at a time, and agentic AI is simply the ability to stitch that together.
[00:45:27] The glue in the middle is sometimes AI reasoning, sometimes it's good old classical workflows like step A, then step B, then step C, what you could model out on a whiteboard with a, a good old fashioned flow chart, right? And so that's becoming more and more and more accessible. And so we're gonna be spending tons of time talking about agents, probably even more than we have in the past, with
[00:45:49] hundreds of associations that will have deployed, you know, many hundreds of agents, maybe thousands of agents, in the next hundred episodes. So I'm, I'm really pumped about that because the value from that is gonna make the value [00:46:00] we received thus far from AI seem pretty minuscule.
[00:46:03] Mallory: Mm. And I think it's very likely that the association leaders listening to this podcast will be part of that group that has implemented hundreds of agents, everybody.
[00:46:13] Thank you so much for tuning into our hundredth episode of the Sidecar Sync. I can't even believe I'm saying that. It has been a beautiful ride over the past two-ish years. We so appreciate you tuning in. We love having these conversations and, and learning new things alongside you all. So everybody, we'll see you all for episode one oh one.
[00:46:34] Amith: Thanks for tuning into the Sidecar Sync Podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in depth AI education for you, your entire team, or your members, head to sidecar.ai.

September 18, 2025