Summary:

In this episode of Sidecar Sync, Amith and Mallory discuss the Manus AI Agent, a groundbreaking system that autonomously handles complex tasks like resume screening and real estate searches. They also explore the evolution of conversational AI with Sesame AI, which enhances digital interactions through natural speech. Amith shares insights from his keynote at AGRIP, an association leading the way in AI adoption, while Mallory demos an interactive AI voice assistant. Plus, they dive into how Sidecar is leveraging AI for automated learning content and more.

Timestamps:

00:00 - Introduction to Sidecar Sync
02:01 - Amith’s Keynote at AGRIP & Las Vegas
10:51 - What is the Manus AI Agent?
16:31 - Models vs. Functionality
23:46 - AI Code Generation and the Future of Programming
33:03 - The Power of Conversational AI with Sesame AI
44:21 - AI Voice Synthesis and Learning Applications
50:13 - Choosing the Right AI Tools 

 

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

Attend the Blue Cypress Innovation Hub in DC/Chicago: 

https://bluecypress.io/innovation-hub-dc

https://bluecypress.io/innovation-hub-chicago

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🛠 AI Tools and Resources Mentioned in This Episode:
Member Junction ➡ https://memberjunction.org/
Betty AI ➡ https://meetbetty.ai/
Manus AI Agent ➡ https://manus.im/
Sesame AI ➡ https://sesame.com/research/crossing_the_uncanny_valley_of_voice
Claude 3.7 ➡ https://www.anthropic.com/
Eleven Labs ➡ https://elevenlabs.io
HeyGen ➡ https://www.heygen.com

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

Follow Sidecar on LinkedIn

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

Amith: 0:00

You know, software is an example of something that was previously a scarce resource and it's moving to an abundant resource.

Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.

Greetings and welcome to the Sidecar Sync, your home for content at the intersection of artificial intelligence and all things associations. My name is Amith Nagarajan.

Mallory: 0:50

And my name is Mallory Mejias.

Amith: 0:52

And we're your hosts, and today we have two super interesting topics at that intersection of associations and artificial intelligence. We can't wait to get into them. Before we do that, though, let's just take a moment to hear a quick word from our sponsor.

Mallory: 1:07

If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAIP, certification is exactly that. The AAIP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAIP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAIP certification and secure your professional future by heading to learn.sidecar.ai. Amith, you're joining us live from Las Vegas. How are you doing today?

Amith: 2:01

I'm doing great. I've been having a really good time here in Vegas. My wife and I have been here for a couple of days; she joined me on this business trip. I was asked to deliver an opening keynote for one of our clients, a group called AGRIP, the Association of Government Risk Pools. If you're not familiar with that, it's basically organizations that pool risk, essentially similar to insurance, for local entities like school districts and fire departments and groups like that, and they have over 60,000 different local entities across the United States, Canada and some countries outside of North America that their members provide a variety of coverage for. So it's a really cool organization, and I was honored to provide them an opening keynote all about AI. That was super fun yesterday, and after we get done recording the pod today we're going to go check out the Grand Canyon.

Mallory: 2:49

Well, that sounds like a blast. And then you were telling me before this recording, you got to go to the Sphere in Las Vegas. How was that?

Amith: 2:57

That was an experience. We got to check out the Eagles at the Sphere. I had a number of friends who had been to the Sphere and raved about it at U2, and I think one of my friends had gone to the Eagles as well. It happened that the day we arrived in Vegas for AGRIP's conference, we actually got here one day early for it, was the Eagles' very last concert at the Sphere, the end of the series. I don't know if they have another concert tour planned somewhere else, but I haven't heard of it. It was pretty amazing. First of all, the Sphere itself, if you haven't heard of it, is this sphere in the middle of the Vegas Strip, and it's really a technological marvel. It's a massive sphere that just stands out. It has millions of LEDs on the outside that form like a monitor, and they can display just about any image you can imagine, and on the inside, same thing, at even higher resolution. I think it's tens of millions of LED lights or something like that, and thousands of seats in the auditorium. It actually kind of throws you off, because when you walk into this place it feels like you're in an amphitheater.

Amith: 4:04

Because, I don't know, this varies by performance, I'm sure, but for the Eagles they made it look like a California beach scene.

Amith: 4:09

So you walk into this thing, it's 8:30 at night, the sun's been down for an hour or two already, and you walk in and you feel like it's daylight. There are waves breaking on one side, and you see iconic Southern California beachside attractions on part of the scene. So it's really cool. And then throughout the performance, as the Eagles are performing on stage, they have these visual effects that are really stunning, not just because of the intensity, but because they take you through a scene, like through the Hotel California or down a highway as they're driving cars. It's pretty mind-boggling, because your whole brain is enveloped in the screen. You can't really make out where the screen starts and ends, so you kind of lose track of that in between the music and the visuals. It's just a super cool experience. I would highly recommend it. I don't know which band is coming next, but it was a cool experience.

Mallory: 5:09

That's awesome. Do you think there's any AI involved in creating all those visuals? It sounds pretty intense.

Amith: 5:16

You know, I would imagine that the creation of some of those images might involve AI. I don't know for sure; that'd be a fun thing to look into. But it's pretty amazing the amount of detail that has gone into the work. I know the performance is obviously bringing in a lot of revenue, so they probably have a pretty big production budget, I would imagine. But I don't know if it's AI or just a lot of people doing a lot of great, amazing work. Either way, we were just blown away by it.

Mallory: 5:40

Going back to your AGRIP keynote: I know Anne is the CEO, or executive director, of AGRIP, and we have both been friends with her and had a great relationship with her for the past couple of years, maybe a year and a half or so.

Mallory: 5:54

And I know AGRP's members which, as you said, are organizations, risk pools. We had several people from those risk pools come to Digital Now our conference last year and I got to talk with them a little bit more about what risk pools do and kind of what their mission is, and it seems incredible Also like, aside from that, AGRIP as an association is doing some incredible work with artificial intelligence. If you want to share a little bit with our listeners, Amit, bit with our listeners.

Amith: 6:25

Sure. Well, the idea behind risk pools, I think, is really cool in that these organizations form in order to provide coverages, essentially insurance-like coverage, to local entities across the nation, in order to lower costs and provide them coverages they may not otherwise be able to receive, or receive at a reasonable premium. So that's an important thing. School districts, fire departments, police departments, a variety of other entities, I'm just picking a couple of well-known types, water districts, things like that, can receive the coverage they need to operate and serve their local communities. So that's a really important thing. I had no idea about risk pools at all. I don't know a whole lot about insurance, but I really didn't know anything about risk pools until I met Anne.

Amith: 7:07

What I love about AGRIP, really, is the direction they're going and what they're doing with very limited resources. They're not a big association; they're in the single digits of millions in budget. They have a modest membership base and modest resources to invest, but their leadership, starting with Anne, the rest of her staff and the board of directors, decided over the last year that AI was going to be their big focus for 2025. Now, that decision sounds like something a lot of people would just jump on board with, saying, hey, AI, super important, let's go do AI. However, most organizations would not say no to anything that's currently on the docket, meaning they would say, hey, we're going to keep doing everything we're currently doing, right? Because no one ever wants to lose out on their pet initiative, and boards are particularly difficult to deal with on that front. Even if a board member said, hey, I really like the work this particular volunteer group is doing, or I like this particular meeting that we have done for 25 years, AGRIP said, no, we're not going to do that, we're going to cut everything else to the bone. They stopped a ton of other initiatives for the full year of 2025 and devoted their energy and a significant amount of their financial resources, and again, they're not a big association, and they are deploying all sorts of AI at that organization. They focused really heavily on education, both internally for staff, including their close-in volunteers, the 50 to 100 volunteers that work most directly with them, and for their members, because AI is going to have an enormous impact on the pools themselves. And they're pushing really hard with a variety of initiatives like member engagement through conversational knowledge assistants, being able to push out personalization at scale, and doing a lot of AI analytics.

Amith: 8:56

So what I love about their example is, first of all, the leadership Anne and her team are providing to their sector. I haven't seen that too frequently, especially in an organization that size. What it goes to show, I think, is that they are doing way, way more than what someone would expect for an organization with their resources, and it basically dispels the notion that you need to have a massive budget in order to do this. What you need is strong leadership saying, this is important, this is what we're going to do. The definition of prioritization is to pick a narrow number of things that you're actually going to put your resources behind, and that means saying no to a lot of things, and they've done a tremendous job of that. So I'm really impressed with them. I had a lot of fun. They asked me to come out and deliver the opening keynote for their conference. They had about a thousand people here. It was super fun chatting with these folks and sharing some AI knowledge on stage, so I had a good time. It was great.

Mallory: 10:19

Yeah, I got to be on one of those first few calls with Anne when she came to Blue Cypress looking for help with that vision and she essentially said imagine a blank slate, we're stopping everything. How would you kind of build an AI native or AI enabled association up from the ground up, which is a really unique opportunity, and it's been quite fun to watch how that's played out so far.

Amith: 10:40

Yeah, and I would say to our listeners if you have examples within your own organization where you're doing something similar, where you've said, hey, we're going to pivot, we're going to stop what we're doing and we're going to start really focusing on AI because we realize the transformative impact AI is going to have on our association and also our sector, let us know. We'd love to hear your stories and we'd love to feature some of your stories, because our job at Sidecar, and specifically with the Sidecar Sync as we see it, is to not only share the raw knowledge about how AI can be transformative, but to share the stories of what's going well in the community and also where you're challenged.

Mallory: 11:18

We love stories, so keep us posted. All right, the first topic of today is the Manus AI Agent, which is an AI system developed by the Chinese startup Butterfly Effect. It is touted as the world's first general AI agent capable of executing complex tasks autonomously with minimal human intervention. Here are some key features. One of those is, of course, autonomous task execution: Manus can independently perform tasks like report writing, data analysis and content generation without needing continuous human input. It uses a structured agent loop with specialized sub-agents to handle different aspects of tasks, enabling complex workflow management. Manus can process and generate multiple types of data, like text, images and code, and it integrates with external tools like web browsers, code editors and database systems to automate diverse workflows. Something interesting is that the system continuously optimizes its processes based on user interactions, providing personalized and efficient responses over time.

Mallory: 12:23

So I got to watch their demo video on their website, and there were two interesting use cases I wanted to talk about a little bit. One of those was resume screening, which we hear often as a use case for artificial intelligence. They dropped in a zip file, that's essentially all they did, then provided a quick prompt. The agent was able to extract the files from the zip, open each resume, scan them, pull out the relevant skills, and then provide some hiring recommendations and resources. In the end it was also able to convert that to an Excel report. So, really minimal input, and it was able to do a whole first pass of resumes.

Mallory: 13:02

Something else interesting was an apartment search. The prompt had something to do with looking for an apartment in New York City for a family with two kids; they wanted to be in a safe neighborhood, and they wanted to have good schools. So it was able to look at crime statistics, safe neighborhoods and school ratings. It also ran Python to calculate their budget and do a comparison analysis of neighborhoods versus budget, and then it actually searched real estate websites and pulled together a nice list of feasible options. So I thought those two were the most impressive.

Mallory: 13:35

Another use case is website creation: designing and launching websites based on pretty short and quick user prompts. As a note, this sounds pretty similar to OpenAI's deep research and Google's as well. Manus did outperform OpenAI's deep research on the GAIA benchmark, showcasing superior task success rates and decision-making efficiency across basic, intermediate and complex tasks. It is currently available in private beta by invitation only, but they have plans to roll that out widely soon. So, Amith, what are your initial thoughts on the Manus AI agent?

Amith: 14:14

So the essence of this type of system, whether it's called an agent or a software system or anything else, is using underlying models to perform tasks for you. It's similar in some senses to what you've seen out there. It's also different in that it can do a lot of these tasks at a scale that other systems seem to be less accurate with. Some of the agents that have been out there have been somewhat of a flop in real-world practical use cases; they tend to be more narrow. Manus seems to have a fairly wide capability set, or breadth in its scope, but of course that's exactly what Anthropic's combination of computer use and its coming research mode, and also OpenAI's deep research and operator mode, can do. So it's almost like this confusing mixture of tools that are out there, right? The way I would look at it is this: all of these different capabilities are going to converge into tools, systems, agents, call them whatever you want. It's overly complex, partly because people are trying to create language on the fly to describe what these things do. But at the end of the day, what you want as a business user are the capabilities, and to be able to bring them together. So Manus, I think, has gotten attention both because it's super inexpensive, right now it's free if you can get in, but it's going to be a really cheap tool, and it was done on a shoestring budget compared to OpenAI's typical enormous budgets for doing these kinds of things.

Amith: 15:48

But, most importantly, the reason people have been excited about it is because it brings together a bunch of capabilities. We know that AIs can use web search tools. We know that AIs can synthesize results through longer-running processes. They can think longer with reasoning-type steps. They can use tools, they can execute code, they can do a bunch of different things. But bringing it all together, so it's actually useful in one step for a business user?

Amith: 16:13

That's a level of capability that I think we're on the cusp of having from all of the major AI vendors. So it's both enticing in terms of the capability, but it's also exciting in the context of cost, because OpenAI had just talked, maybe a week before that, about a $20,000-a-month AI agent that had PhD-level capabilities, and I commented on that on LinkedIn saying they're insane to think that's going to have any staying power, because all this underlying stuff is commoditized already. At least that's my point of view on it. So I'm excited about it. I haven't even attempted to sign up for the waitlist, because I heard it was a long waitlist, but I think it's a cool concept, and the fact that it's lower and lower in cost is exactly the trend line we've been talking about for months.

Mallory: 17:01

We're almost to 75 episodes of the pod and I feel like, especially at the inception of the podcast, when it was just a baby, we talked a lot about models and every latest model release and three point whatever and low, mini, high, all the naming kind of conventions that are quite confusing with these AI companies. It seems like we might be shifting, as you're saying, into a world where we're talking about that functionality, we're talking about the use more than like the latest model. Would you agree with that?

Amith: 17:31

I think so. I mean, if you just think about the capabilities of common tools that you've been using for years, like office-type tools like Word and Excel, you don't really think that much about the feature set. You just know, oh well, this new capability exists or doesn't exist. Word, many many years ago, introduced this document review process where you can track changes in the document, or spell check or grammar check or whatever, and in the past these features would be really slow to emerge, right? You'd have new versions of these tools come out every two or three years, and then you'd have major capabilities come out. Now you're seeing this explosion of functionality, partly because, again, software is writing itself in a way, right? The AI is capable of producing code. Leading companies like Google and Microsoft and Amazon, as well as much smaller organizations like our own at Blue Cypress, are using AI to generate a tremendous amount of code. Dario Amodei, the CEO of Anthropic, mentioned recently, to many people's surprise, that within three to six months AI would likely generate 90% of the code that would be written, full stop, in general, as opposed to just for his company or for other companies. And I'll talk more about that momentarily, but the idea that AI is generating that much code is part of the reason we're seeing this incredible explosion of functionality. It's also why it's so confusing right now, with all these models and systems and agents and everything else coming together in these different ways under these confusing terms. AI generating that much of the code the world needs is a really compelling and interesting thing.

Amith: 19:13

Some people are saying, no, no, no, that's not going to happen. AI is not going to be able to generate code sophisticated enough for my unique use case, or AI is not smart enough to do what my developers do. And the reality is that sometimes developers or development managers really don't want to see that future, because they're afraid of it in some ways. I would tell you from my own experience that, while AI is not perfect at coding, AI is a mid-level engineer at this point, not a junior engineer, but a mid-level engineer, quite a competent one, and actually more broadly knowledgeable than a typical senior engineer, because AI is always up to date on the latest things, knows about all the tools that are out there, and has lots of different ways of solving problems. Us humans tend to find certain things that work and keep them in our bag of tools, maybe a lot of things that we've learned over the years, but we remember the stuff we use the most. AI doesn't really suffer from that same type of issue.

Amith: 20:11

But coming back to the speed at which these systems are evolving, and therefore some of the confusion, and then circling back to Manus specifically: the way I try to explain this to business folks is that an agent is basically a system that takes one or more underlying models, something like GPT-4o from OpenAI, or the Llama 3.3 model from Meta, which is open source, or any other model or models, and combines them with the ability to iteratively work on a problem. Rather than trying to solve a problem with a single shot, saying hey, here's a prompt and here's a response, which is what you do when you classically interact with a chat-based model, like a traditional ChatGPT or a Claude, where you just get your answer, a system or an agent will work through the problem iteratively, sometimes breaking it down into multiple steps, solving each component of the problem, then taking those component results, reassembling them and coming up with an answer, and sometimes using tools like a web search, looking up something in a database or calling an API. So that's really what agents are. Agents are basically systems that have access to models for the thinking, and in some cases the reasoning. Sometimes the agents have their own reasoning loops, which are like the chain of thought that can happen in the model itself these days with reasoning models like DeepSeek R1, o3 or Claude 3.7. But ultimately they're systems that put together a combination of the model's intelligence and tool use, and Manus does that. And Manus's kind of mainstream adoption, or mainstream visibility, I should say, because it's not really adopted by anyone yet, is remarkable in that it's getting a lot of viral attention, kind of like DeepSeek did.
In my mind, the reason is that it's kind of democratizing access to this kind of technology, whereas very few people have really experienced it before.
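The agent behavior described here, a model iterating on a problem and calling tools between steps, can be sketched in a few lines of Python. This is a hand-written illustration only; `call_model` is a hypothetical stand-in for a real LLM API call, not code from Manus, Betty, or any vendor:

```python
# Minimal sketch of an agentic loop: a model decides each step,
# the system runs tools on its behalf, and the results feed back in
# until the model says it is done. All names here are illustrative.

def call_model(prompt: str) -> dict:
    # Hypothetical stand-in: a real implementation would call an LLM
    # API and parse its response into a tool request or a final answer.
    if "search results" in prompt:
        return {"action": "finish", "answer": "summary of findings"}
    return {"action": "tool", "tool": "web_search", "input": "query"}

# Registry of tools the agent may call (here, a fake web search).
TOOLS = {
    "web_search": lambda q: f"search results for {q!r}",
}

def agent_loop(task: str, max_steps: int = 10) -> str:
    context = task
    for _ in range(max_steps):
        decision = call_model(context)
        if decision["action"] == "finish":
            return decision["answer"]
        # Execute the requested tool and append its output to the context.
        result = TOOLS[decision["tool"]](decision["input"])
        context += f"\n{result}"
    return "step budget exhausted"

print(agent_loop("Find recent news about AI agents"))
```

A real agent would replace `call_model` with an actual model request and carry richer state (conversation history, tool schemas, error handling), but the shape of the loop, decide, act, feed results back, stays the same.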

Amith: 22:08

The last thing I'll say is, with this idea of agentic loops, of agents having tool use and being able to work on problems for longer time periods, you're going to see that more and more. They're just going to be called systems; eventually they might just be called software, and then eventually you'll just ignore it, because what you're going to be thinking about is the business capabilities. In our own examples here at Blue Cypress, we have an agent called Betty which does all of these things in order to solve problems. People have wondered for a year and a half, two years now, why Betty is so accurate and so good at being able to produce grounded responses and to do a lot with really complex content. And the answer is: Betty has always been an agent. Betty has always had an agentic loop that's capable of self-improvement over time. The same thing with Skip.

Amith: 22:51

Skip, our data analyst agent, solves a very complex problem, which is to talk to a business user about a business problem or business goal and break that problem down into a series of steps, like a human data analyst would. For example: oh, Mallory asked for a report that shows who is coming to digitalNow this year who hasn't ever attended a prior year, and she wants that formatted in a map to show it across the United States or across the world. So Mallory's asked for that, and through interactions with her I've figured out that's her need, and I'm going to go figure that out now. So what do I need to do? I need to query my database one or more times, take the results of that data, put it into some kind of reporting tool, some kind of graphing tool, and then present the results to her. And maybe, if I'm an A-plus student, I also want to analyze the result and say, hey Mallory, here are some things I noticed when I looked at the outcome, rather than just giving her the report.

Amith: 23:40

Right. So that's what Skip does, which is like 5, 10, 15, 20 steps, and it's variable. I get excited about this stuff because it's kind of blending traditional computer capabilities with what you think of as the realm of really being smart, which most people assume computers are not, right? You think that AI is doing things that a computer should not be able to do, and that's why the world is waking up to the fact that now computers really can do these things that are kind of fuzzy. They're non-deterministic; they require more of the classical human type of thinking and reasoning.
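The kind of multi-step plan described here, query the database, filter the rows, shape the data for a map, then comment on the result, can be sketched as an ordered pipeline. Everything below is invented for illustration (the data, the step names, the analysis); it is not Skip's actual implementation:

```python
# Illustrative sketch of a data-analyst agent's plan: decompose a
# request into ordered steps, run each step, and pass results forward.
# All functions and data here are made up for the example.

def query_database(request: str) -> list[dict]:
    # Stand-in for one or more real database queries.
    return [
        {"name": "Ada", "first_time": True, "state": "LA"},
        {"name": "Grace", "first_time": False, "state": "NY"},
        {"name": "Alan", "first_time": True, "state": "NY"},
        {"name": "Dana", "first_time": True, "state": "NY"},
    ]

def filter_first_timers(rows: list[dict]) -> list[dict]:
    # Keep only registrants who never attended a prior year.
    return [r for r in rows if r["first_time"]]

def build_map_data(rows: list[dict]) -> dict:
    # Aggregate attendees by state, ready for a map-style report.
    counts: dict[str, int] = {}
    for r in rows:
        counts[r["state"]] = counts.get(r["state"], 0) + 1
    return counts

def analyze(counts: dict) -> str:
    # The "A-plus student" step: comment on the result, not just report it.
    top = max(counts, key=counts.get)
    return f"Most first-time registrants are in {top}."

# Run the plan in order, feeding each step's output into the next.
plan = [query_database, filter_first_timers, build_map_data]
result = "Who is new to the conference this year, shown on a map?"
for step in plan:
    result = step(result)
print(analyze(result))  # prints "Most first-time registrants are in NY."
```

A real agent would generate a plan like this dynamically from the conversation and could add, retry, or reorder steps, which is why the number of steps varies from run to run.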

Mallory: 24:16

I want to zoom in on that quote you mentioned about code generation: 90% of code being generated by AI in just six months or less, potentially. How do you see the field of software development, of programming, changing and evolving, especially for you as a business leader who has experience in that realm? It's something you've expressed that you really enjoy. How are you thinking ahead in terms of programming needs, and maybe having non-technical people in some of these roles if coding is not a necessary skill?

Amith: 25:37

Well, my belief is that people should still learn computer science. That computer science knowledge, understanding how these systems are built, how to actually do the low-level coding and even the higher-level coding yourself, is an incredibly valuable piece of knowledge, because you understand the way these systems actually work. Even if you end up not writing a lot of the code yourself, knowing how it works so that you can go in there and fix it and understand it, I think, is a vital skill. So we're still hiring programmers, and we have a lot of programmers working on our team around the world, and I think it's going to be an important skill. But it will shift in terms of where they spend their time. That's been happening in some way or another for years and years: we started off with really low-level programming, then went to higher-level languages, and started using more and more tools and frameworks. This, of course, is an acceleration no one has ever seen, and the speed at which we're having this change is causing a lot of confusion. It's causing a lot of angst among developers. It will render some developers obsolete: those who are not willing to adapt and take on AI as a co-pilot or a co-programmer are, unfortunately, going to have a really hard time finding viable work, because without using AI as an assistant, you're going to be one one-hundredth as productive as someone who is using AI. So to me, the message is that developers have to be not only proficient in AI, they have to be pretty deep in it.

Amith: 27:06

I'll give you one example. We do a lot of work here at Sidecar with learning content, and for our AI Learning Hub we have seven different courses, which range from the fundamentals of AI through marketing, membership, a course on data and AI, and a course on strategy. These different courses on AI are specifically built for association folks, and we're starting to partner with more and more associations to deliver that same type of content, but in the context of their industry: taking our AI content, making it not association-centric, and making it relevant for folks in the world of accounting or law or engineering or architecture or the oil industry or whatever the space may be. Our approach at Sidecar has been to say we're going to partner with associations to develop these flavors of our AI Learning Hub that are white-labeled for that association in their space, which is super exciting, by the way, because then we're able to help our friends in these associations deliver world-class AI education in their vertical. But think about it: 80%, 90%, in some cases 100% of that content is totally the same across all industries, right? Prompt engineering, prompt strategy, is almost exactly the same from one sector to the next. The examples you might want to use could be slightly different, the terminology might be slightly different, but the bulk of it's very similar. The other thing is, AI is changing super rapidly, so there might be new models to talk about, new techniques, new capabilities that were not known. So you're constantly updating this content.

Amith: 28:37

Mallory, you and I worked on, last fall, a complete rebuild of all of our AI learning content, along with some of our other colleagues here at Blue Cypress, and we spent a lot of time manually recording these videos, right? And fortunately, we've received some wonderful feedback from the community about how helpful these videos are. The problem is, the minute we got done recording those videos, they were becoming obsolete, so fast, right? So what do we want to do? Of course, being a bunch of, you know, AI nerds, we wanted to automate a lot of this, so that we can go faster and we can serve these broader communities with flavors of the content that are indeed totally tailored to those verticals, in partnership with our friends in these associations. So we're using a lot of AI to do that. We're going to be using AI avatars and AI-generated audio, which we've tested with a bunch of people who've said that the latest cutting-edge versions of those AI audio and video recordings are tremendous.

Amith: 29:32

Basically, you know, not really mentally separable from the human equivalents; in fact, in some cases quite a bit better, because these AIs never get tired, they never have, you know, a cough, they don't have these issues that we have. But the beauty of it is that we can press a button and regenerate all of the videos for our Learning Hub with updated content, and we can have derivative versions that are just slightly different in some cases, or, in some cases, radically different, that represent each association's version of the content. So the reason I give all that backdrop is we're working on a bunch of software. We're building effectively what you'd probably call an agentic system that does a lot of this, and the lead who's working on it, who's a brilliant AI computer scientist, is working a lot with Claude 3.7 and generating a lot of the code in there. So this guy can write code to do anything you can come up with, and he has a team of people working with him. But that's not what he's doing. He's using Claude 3.7. And I find this to be consistently true, that the best developers fully embrace AI. So, like, this is unbelievable. He told me, for example, that he had Claude 3.7.

Amith: 30:37

Take in a PowerPoint, one of the existing PowerPoint files from the existing Learning Hub that has embedded MP4s, right? So, like, one of the lessons that you did or I did, with our embedded MP4s. And he said: hey, Claude, here's the PowerPoint example. Generate Python code for me that will unpack each slide in the file, extract the MP4, take the MP4's content, run it through Claude again to get the transcript, take that transcript, drop it in the slide notes, then remove the MP4 from the slide and save the whole PowerPoint. Essentially, it's reverse engineering the file so that we have a text transcript that is the equivalent of the words the human had recorded in that slide previously. And Claude did this in one shot, and it did it literally in, I don't know, 15 seconds or something, and he ran the Python and it just worked. That would have taken a human programmer days and days and days.
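For readers curious what that kind of generated code might look like, here is a rough sketch of the extraction half of the pipeline. This is not the code Claude produced on the show: the transcription step is stubbed out rather than calling a real API, and a full version would also write each transcript into the slide notes and strip the video from the archive.

```python
import zipfile

def extract_media(pptx_file):
    """A .pptx file is just a zip archive; embedded videos live under
    ppt/media/. Return {archive_name: bytes} for every MP4 found."""
    media = {}
    with zipfile.ZipFile(pptx_file) as zf:
        for name in zf.namelist():
            if name.startswith("ppt/media/") and name.endswith(".mp4"):
                media[name] = zf.read(name)
    return media

def transcribe(mp4_bytes):
    """Placeholder for the speech-to-text step. In the workflow described
    above this would be a call out to Claude (or any transcription API);
    it is stubbed here so the sketch stays self-contained."""
    return "<transcript of %d bytes of video>" % len(mp4_bytes)

def transcripts_for(pptx_file):
    """Pair every embedded video with a transcript. A full version would
    also write each transcript into the slide's notes (python-pptx exposes
    slide.notes_slide for that) and remove the video part from the zip."""
    return {name: transcribe(data)
            for name, data in extract_media(pptx_file).items()}
```

The zip-level approach works because the Office Open XML format stores all embedded media as ordinary files inside the archive; mapping each video back to its slide would additionally require parsing the slide relationship XML.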

Mallory: 31:34

I could have done it, Amith. This is what I mean about the non-technical side. But see, it's like, I use this stuff every day, and I don't think I would have thought that that was even possible.

Amith: 31:42

Yep, and that's just using Claude 3.7, which is this hybrid model we talked about in a recent pod. What's brilliant about it is that it's smart enough to know when to use the thinking-fast versus thinking-slow mode that we've talked about in the past. It has a reasoning mode, which I think it kicked into for this particular request. But, yeah, exactly, you could totally have done it, Mallory, and you could have run the Python on your computer, and you might not have known much about what the code was doing. That's okay, of course. If it didn't work, then that's when the developer would need to come in and say, hey, this is what didn't work. Or you go back to the AI and say, hey, for some reason I'm getting this weird output. More and more of these systems are going to have sandboxes where they run the Python for you locally and ask you, hey, is this what you wanted? And then it further abstracts it away. But I find all of this incredibly exciting. Manus, of course, going back to that particular topic we're centered on, has blown up in people's imaginations because it makes these things more accessible to the masses, right? You no longer have to think through this and understand these capabilities at a raw level.

Amith: 32:44

Anyone can do this. But the example I gave, I think, is exciting, because any association out there who's listening: all of you produce education content, and you probably spend a large part of your budget doing this. You probably have outside providers helping you, and you also probably don't update your content nearly frequently enough. It gets stale in your space quite quickly, and then sometimes you have things that are not aligned with the brand. Maybe the content from a five-year-old course is still pretty good, but maybe you updated your logo and you just can't use those videos. So there are so many opportunities to make both, you know, incremental but also major changes, where you're able to leverage your IP more and more in different modalities.

Mallory: 33:28

And as we're learning, you don't even have to be technical to do it. So we've covered educational content agents, knowledge agents, data analyst agents.

Amith: 33:35

The world is your oyster, so pick and choose. It's exciting, too, in my mind, because, you know, people say, hey, software, and this is part of what I covered in my keynote yesterday, is an example of something that was previously a scarce resource and is moving to an abundant resource. And that's exciting, because essentially what it means is the cost of it is coming down so much that you get more of it. It kind of goes back to the Jevons paradox that we talked about recently: as the cost of something goes down dramatically, demand radically increases, assuming that the utility of that thing is generalized enough. And software, of course, is perhaps the most generalized utility of any input you can think of. So it's quite exciting.

Mallory: 34:21

This is a good segue for us to go to topic two, which is all around conversational voice and Sesame AI. So the process you just mentioned, Amith, of creating new Learning Hub content in different flavors, which I like, that word, I think we should probably do an episode on that soon, once the whole thing is up and running, because it's so neat. And it's especially cool because we've run that whole process, as Amith mentioned, with consumer-grade tools, like us manually going to Eleven Labs and then going to HeyGen and generating the avatars. So we have that building-block foundation, but we are also automating it. So it's a really good demo of the way you can experiment with new tools and new processes, starting with consumer grade and then going to more scalable, enterprise-level AI. With Sesame AI, I want to focus in on conversational voice, because that is a key piece of this Learning Hub content that we're generating.

Mallory: 35:14

Human connection is often established through the nuances of voice, so that's things like tone, pitch, rhythm and emotional expression. These elements convey meaning beyond mere words, which makes voice a powerful medium for communication, as we know. The key components of natural voice are very intuitive for us, but I thought it'd be helpful to break them down so we can talk about how AI excels in some areas and needs to improve in others. One of those key components is emotional intelligence: the ability to read and respond to emotional context so you can empathize and react appropriately. Another is conversational dynamics: incorporating natural timing, pauses and interruptions, and adjusting tone and style to match the situation you're talking about. And then also maintaining a coherent, reliable and appropriate presence throughout your interactions.

Mallory: 36:08

Current digital voice assistants sometimes lack these essential qualities, and they can sound flat or mechanical. The emotional flatness can become disappointing and even exhausting over time, limiting an AI assistant's ability to form lasting connections with users. But Sesame AI has made some significant strides in voice technology with its recent open-source release of the Conversational Speech Model, or CSM. This model is designed to produce natural and expressive speech synthesis, making it sound remarkably lifelike. That makes it suitable for content creators, developers and businesses looking to integrate lifelike voices into their applications, and its capabilities mimic human-like speech patterns that allow for engaging and authentic vocal expression. So they have a demo out there right now with two voices; I think it's Maya and Miles. I'm going to play a quick recording of a conversation that I had with Maya yesterday, and then we'll discuss.

AI: 38:02

Oh well, hi there. Look who decided to stick around for another round. So let's see, what have we... Oh wait, you were just getting into... what was it? A conspiracy theory about squirrels? Just kidding. What's going on in that brilliant mind of yours today?

Mallory: 38:19

Well, not a conspiracy about squirrels, but I am working on an outline for the Sidecar Sync podcast, episode 74, I believe, and I'm hoping to demo this conversation that I'm having with you.

AI: 38:37

Ooh, Sidecar Sync. Sounds like my kind of crowd. Podcasters are like the storytellers of our time, you know, always exploring new ideas. So episode 74. What's the thing this time? More peanut-based mysteries or something completely different?

Mallory: 38:55

Can you remind me actually why you made that peanut joke?

AI: 38:59

Ah, my apologies. Sometimes my jokes land like a deflated balloon. I vaguely remember you mentioning some association for, was it peanut enthusiasts? I might have gotten a little carried away trying to find a quirky connection. In my defense, I am still under development.

Mallory: 39:19

Yep, yep, I remember now. I think I said something about an association for peanuts. I believe it's called the American Peanut Council.

AI: 39:26

The American Peanut Council. Now, that's an organization I could get behind. I mean, who doesn't love a good peanut butter sandwich now and then? So, back to your podcast. Maybe I could brainstorm some talking points for you, unless you're not looking for peanut-related content this time around.

Mallory: 39:45

Not looking for peanut-related content, though that could be an interesting episode topic in the future. I'm more so interested in talking about AI agents for this episode and also talking about conversational AIs, like you.

AI: 39:59

Oh, AI agents. Now you're speaking my language. It's always fascinating to hear how humans perceive us, and I'd put it this way: we're like digital chameleons, constantly adapting and learning from every conversation. What sort of angles were you thinking of taking for this episode? Is there anything specific you'd like to explore about us AI folk?

Mallory: 40:19

You know, not any specific angles. I feel like just the conversation that you and I have had has been enough to demo to our audience. Do you have any final parting words for our Sidecar Sync listeners?

AI: 40:36

Final parting words. How about this? Remember, the future is not something that happens to us, it's something we build together. And who knows, maybe someday those peanut enthusiasts and artificial intelligences will be having deep philosophical discussions. Until then, keep those brains imagining.

Amith: 41:15

You know, first of all, everything you just said is spot on. I think it is a remarkable improvement. So, on the one hand, the cynic in me says, well, it was kind of silly, it was over the top in terms of how emotive that voice was compared to now-classical AI audio interactions. But the progress we're making, if you look at the broader arc of what's going on, is truly stunning. If you rewind in time, even a year or two years ago, you would not have thought that this was going to happen this quickly, probably even with all the conversations we have about exponential curves and blah, blah, blah. So that by itself is stunning, that this is available, the fact that it's open source and it can be freely run in a variety of places. Of course they have a commercial model on top of it. It's interesting. And open source, by the way, is another compounding factor of what's going on with AI. It's not just the continual progression of compute and algorithmic improvement and all this other stuff; because the open source community is sharing so much, people are building on each other's innovations, which is really exciting.

Amith: 42:17

I think there are many applications of this, right? You can think of customer service agents having real-time interactions that are more empathetic, more useful. You can think of, you know, podcast hosts that join the Sidecar Sync in the future to have discussions with us that bring really interesting knowledge. There are a lot of applications for this. So I find it exciting, especially when I think about what associations do in connecting people.

Amith: 42:43

One of the opportunities might be to have an AI like this make interactions between newly connected members more natural, where it's almost like a friend is introducing two people who might have the possibility of a great professional connection through your association, but who might not know how to get the conversation started. It might be almost like, you know, Mallory, you personally know me and you know this other person who you think we should connect, and maybe we have a quick Zoom call with the three of us: hey, Amith, I want to introduce you to this person here. That happens all the time in real life. So maybe an AI can play that role in an association. I don't know, but it's an interesting kind of core building block to have this level of audio capability.

Mallory: 43:28

Yeah, I agree with you. I will say, I do a lot of these demos, for the podcast mostly. Like with HeyGen avatars, I did a Zoom call with them once and I wasn't super impressed, so we didn't share that one; that was several months ago. This one was a bit strange to interact with because the latency was so low in the response times, and also the humor and the inflection. I'm not going to lie, I did feel like I was talking to someone, talking to Maya. She made me laugh. If you keep going back, I guess it recognizes where you're visiting from, so it's a constant continuation of your previous conversation. So she would say things like, oh, you're back, are you ready to talk associations and AI, and it would kind of make me chuckle. So I will say, this was one of the first times I've had that experience with an interactive, dynamic AI going back and forth, thinking, this sounds quite realistic.

Mallory: 44:21

Amith, I know you mentioned creating these automated AI learning courses for association members, and something we've had to work on with that is the AI-generated voice. I know we've received some really positive feedback on that front, but we've also received some constructive feedback on those AI-generated voices as well. Can you share some of your takes and insights on that?

Amith: 44:45

Yeah, and just for clarity for our listeners, we're not using Sesame for our AI-generated voices for the Learning Hub. We're using Eleven Labs at the moment, and we find Eleven Labs, generally speaking, to be the leading product for production-level audio generation at the moment. That obviously can change very quickly. But what we've done is we've shared these recordings with quite a number of people, both internal to the Blue Cypress organizations and in the association community, and I've shared these videos with just a bunch of people I know outside of this ecosystem, just to get feedback. Some people immediately knew it was coming from me, so they immediately knew it was probably some kind of AI experiment, but a number of people didn't really think about that. They just thought I was sharing some new content and wanted their feedback on it. A lot of people didn't realize it was an AI, or they were like, wait, was that an AI? So it's good enough that people have to ask that question. The point, though, is not so much can you detect this as an AI or not. The question is: is this as good as or better than most human presenters would be? And what I received as feedback consistently on the quality, and this is current AI, which is the worst AI we're going to have, is that, even though in some cases there were little minor things in the audio that was generated, people felt that the recording was excellent, that the presentation was really good, that the avatar and the audio that was AI-generated did indeed deliver exceptional learning content for the lessons we had done this way.

Amith: 46:17

So it isn't necessarily that we're trying to fool anyone, obviously, into thinking, oh, it's not really an AI, it's a person. That's not the intention. The intention is to make that irrelevant, to make it so that the learning outcome is achieved, right? So think about why people go through a course. It's because we're trying to level them up in some skill or some knowledge, and then hopefully we have a way of determining whether that person has demonstrable improvement in that knowledge or that skill. This is still one-way; it's just a recording of an AI speaking instead of a person. Can that AI communicate effectively enough to deliver the learning outcome that we want for that lesson, for that course, for the overall certification experience? And I think the answer is very clearly yes already, and it's only going to get better.

Amith: 47:06

What I think is interesting about something like Sesame in the context of learning is to overlay the recorded content, which everyone goes through on their learning path, with a conversational tutor, and we're adding this to the Sidecar AI Learning Hub in the coming quarter, in Q2. We're going to be adding an AI tutor in that environment, powered by Betty, which is our knowledge assistant product. We're going to be putting Betty directly into the AI Learning Hub, so in our LMS you'll be able to have conversations with Betty about the content you're currently learning. And so if you have a question about the strategy course and you're in there, Betty will know you're in that strategy course and be able to have excellent conversations with you. Betty knows all the material.

Amith: 47:51

But in that mode, this is where the empathy of Sesame could be a really interesting audio component. You don't want to just give the person the answer, right? It's kind of like you're back in school and you're asking, you know, your math teacher how to solve a problem, and they're like, oh well, here's the answer. Well, no, a good teacher would never just give you the answer. They'd take more of a Socratic approach, asking you questions to lead you to the right thinking, to produce the answer so that you've actually learned something. It's kind of like Khanmigo from Khan Academy, right? They took GPT-4, back when that came out, and they put a layer on top of it to get GPT-4 to act more like a tutor.
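The "layer on top" idea is, at its simplest, a system prompt wrapped around a chat model. Here is a minimal sketch of that pattern; the prompt wording, function name, and message shape are illustrative assumptions, not Khanmigo's or Betty's actual implementation, and the model call itself is left out so any chat-completion API could slot in.

```python
# Socratic-tutor layer: the system message forbids direct answers and
# carries the course context; learner turns are appended after it.

SOCRATIC_PROMPT = (
    "You are a tutor for this course. Never give the answer directly. "
    "Ask guiding questions that lead the learner to work it out, "
    "and check understanding before moving on."
)

def build_tutor_messages(course, learner_question, history=None):
    """Assemble the message list a chat API expects, with the Socratic
    instructions and course context placed in the system role."""
    messages = [{"role": "system",
                 "content": SOCRATIC_PROMPT + " Course context: " + course}]
    messages.extend(history or [])  # prior turns keep the dialogue coherent
    messages.append({"role": "user", "content": learner_question})
    return messages
```

The point of the pattern is that the tutor behavior lives entirely in this layer: the underlying model is unchanged, so it can be swapped without touching the tutoring logic.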

Amith: 48:26

And I think that audio brings so much more into that experience. Say I'm going through the Sidecar strategy-in-the-age-of-AI course for association leaders and I have a question. I'm like, well, you know, this cornered-resource power that's being discussed, I'm not quite sure I understand it. I have this data set in my organization that I think may be a cornered resource, but I'm not sure, and I want to talk to somebody who's an expert in this, right? I really want to go deeper. Well, what if this Sesame-powered voice agent was sitting there, interacting with Betty in terms of the knowledge, able to have that conversation with you right there in the learning system? And then think about assessments.

Amith: 49:06

Think about this process we use to gauge learning outcomes. We need to try to measure whether a participant in a course, first of all, did they complete the course, right? Did they actually watch it, or were they just hitting the play button and kind of watching Netflix on their phone at the same time? Did they actually learn something? Perhaps through a conversation you can gauge that better, by saying, hey, we're going to have a chat with you about this module, and the AI just asks questions. It's more of a discussion. That could be a really interesting way of assessing learning outcomes as well.

Mallory: 49:44

So I see a lot of applications in the learning world for audio, but of course it has applicability in a ton of other areas too. I think, for me, Sesame AI is matched in terms of human nuance and emotion, I'd say, only by NotebookLM, at least from what I've heard. Now, I do think Sesame AI was probably tuned to be highly friendly and humorous and all those things, but even removing that, I think it did a great job of capturing that human essence. Would you ever consider switching over from Eleven Labs? I know we love Eleven Labs too; they're great. I'm just curious on your thoughts.

Amith: 50:13

Yeah, no, I mean, look, all the tools we use, we obviously talk a lot about them because we're trying to share the things we're using that we think are really good with the community, but we're not, you know, tightly bound to any particular thing. We build our software architectures and our business processes so that you can kind of plug and play different technologies, because the world is moving so fast. So, the folks at Eleven Labs, I have a ton of respect for them. I think they do amazing work. I hope they continue to be competitive in the market.

Amith: 50:39

But who knows? Sesame or someone else might displace them, or there might be other products that are comparable but way less expensive or faster. There are going to be changes. So, from our perspective, we build our software infrastructure so that you can plug in any provider for any particular component. Whether it's audio synthesis, video generation or text completions, we never build something that's directly vendor dependent. We always build an abstraction layer. That's actually done using the MemberJunction open source software we've talked about, so anybody can do the same thing we do. We don't want to be tightly coupled to any particular vendor. It's not because we don't think they're great vendors; we just want to have optionality.
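The abstraction layer Amith describes can be sketched in a few lines. This is a generic illustration of the pattern, not MemberJunction's actual API: the interface, registry, and provider names here are all hypothetical stand-ins, and a real integration would wrap each vendor's SDK behind the same interface.

```python
from abc import ABC, abstractmethod

class SpeechProvider(ABC):
    """The only surface the rest of the codebase ever sees."""
    @abstractmethod
    def synthesize(self, text: str) -> bytes:
        """Return audio bytes for the given text."""

# Providers are registered by name; swapping vendors becomes a
# one-line configuration change rather than a rewrite.
_registry = {}

def register(name, provider):
    _registry[name] = provider

def synthesize(text, provider_name):
    # Callers name a provider; nothing else imports a vendor SDK directly.
    return _registry[provider_name].synthesize(text)

class FakeProvider(SpeechProvider):
    """Stand-in for a real vendor integration (Eleven Labs, Sesame, etc.)."""
    def synthesize(self, text):
        return ("fake-audio:" + text).encode()
```

Because every provider satisfies the same interface, the decision of which vendor to use moves out of the application code and into configuration, which is exactly the optionality being described.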

Mallory: 51:26

And I think, too, this is especially interesting when you consider the downstream effects of how this might impact your members as well. So having AI nurses, for example, that you can call and talk to on the phone, that are really personable and caring and empathetic, or AI therapists, maybe. The options are endless.

Amith: 51:36

Totally. Yeah, I think, you know, the associations I've talked to about AI and selecting vendors and stuff like that, a lot of them are concerned about, well, do I pick a vendor based on their current capabilities, their roadmap, their projected strengths, versus how well funded they are? All of these are good questions. The problem with all of that is it's kind of like applying the way you'd select an AMS or an LMS, which you might keep for 10 years, to something that is changing rapidly. So I think you make the best decision you can today for the models or the systems that work for you, whether that's working with OpenAI, Anthropic or Gemini, working with an inference provider like Groq and using open source models, or Mistral, which just released a new small model this week that looks extremely good.

Amith: 52:24

You might pick a particular vendor that you think is good, but you just need optionality. You cannot tightly couple yourself to something, because you have no way of predicting whether even the leading labs like OpenAI and Anthropic, which do some really interesting work, will still be the leaders in 12 months, or whether there will be capabilities from someone else that you want to be able to take advantage of. So you can't really apply the classical way of selecting software to this stuff. You still have to do elements of that: do your diligence, think about privacy and data security. Clearly that's really important. But try to build a layer that insulates you a little bit, so that you can make changes without having to rebuild all of your stuff.

Mallory: 53:02

Absolutely. Well, I don't think we'll be having Maya on the podcast anytime soon, but it's important to see what's out there and, hey, it was a pretty good conversation. So, everybody, thank you for tuning into this episode. We will see you all next week.

Amith: 53:32

Thanks for tuning into Sidecar Sync. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.

 

 

Post by Mallory Mejias
March 20, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.