
Summary:

In this episode, Amith Nagarajan and Mallory Mejias dive into the world of AI agents, hackathons, and the secret cyborgs in your office. Fresh off a high-intensity AI agent-building retreat, Amith shares how association leaders—regardless of technical skill—can use open-source tools to build powerful, secure AI assistants. Then, the duo explores a framework by AI researcher Ethan Mollick to explain why organizations are lagging in AI gains despite skyrocketing individual productivity. They unpack the importance of visionary leadership, encouraging experimentation within your team (and your members), and building your own “lab” to prototype the future—even with limited resources. This episode invites you to chart your course and navigate AI’s uncharted waters with confidence and creativity.

Timestamps:

00:00 - Introduction
03:11 - MemberJunction’s New Agent Framework
07:00 - AI’s Impact on Developer Productivity
09:35 - Exploring Ethan Mollick’s Leadership-Lab-Crowd Framework
16:27 - Bullets and Cannonballs
26:01 - Secret Cyborgs: The Hidden AI Users
28:16 - Encouraging Innovation Through Incentives
39:51 - Building Your Organization's AI Lab
48:19 - Competing with Yourself: Innovating Before You're Disrupted

 

 

🎉 Thank you to our sponsor

https://meetbetty.ai/

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

MemberJunction ➡ https://memberjunction.org

Claude ➡ https://claude.ai

GitHub Copilot ➡ https://github.com/features/copilot

ChatGPT ➡ https://chat.openai.com

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.

[00:00:14] Greetings, everybody, and welcome to the Sidecar Sync, your home for content at the intersection of AI and all things associations. My name is Amith Nagarajan.

[00:00:24] Mallory: And my name is Mallory Mejias,

[00:00:26] Amith: and we are your hosts. And as always, we've prepared some really great content that I think association folks and nonprofit folks will love.

[00:00:34] Really excited about this episode, Mallory. And, uh, of course, as usual, we have a lot going on in our world of Sidecar and Blue Cypress. I just got back from a week at a hackathon up in Utah, where all we did was, you know, code for 15, 16 hours a day and, uh, do a whole bunch of AI stuff. So that was super fun.

[00:00:51] But, uh. So I'm a little bit tired from that, but, uh, how are you doing?

[00:00:56] Mallory: I'm doing very well. I can tell you I didn't spend, uh, 16 hours [00:01:00] a day coding every day last week. So we had a slightly different, uh, roadmap. How was the hackathon? Or did anything exciting come out of it?

[00:01:08] Amith: Yeah, we, so we called it an Agent of Fun, which is a hackathon focused on building AI agents.

[00:01:14] And as, uh, you know, longtime listeners of this podcast know, agents are AI systems that can take action in the real world beyond just what models can do. Models, of course, can interact with you through chat and can generate images and so forth, but with agents, you essentially give them tools and you let them go off into the world and do things on your behalf.

[00:01:34] And so, um, we have a platform for building agents on top of the MemberJunction AI data platform that was built out. And, uh, it uses all cutting-edge, latest contemporary, you know, AI tools underneath the hood, but it basically makes it easy for a business user to define what they want in an agent, um, which of course you can do in consumer tools like ChatGPT and so forth with custom GPTs.

[00:01:57] And increasingly so with MCP [00:02:00] servers, you can connect them to other things. But for enterprise scale, where you're dealing with your business data and you're dealing with lots of content, um, you need to have an environment that you control. And so we've built out a platform that makes it possible for a business user to simply, in English, define what they want an agent to do, define the tools that the agent has access to, and, uh, then it just works.

[00:02:23] So it's pretty magical actually. It's, it's something we're very excited about. So last week we spent, you know, we had nine, 10 people, uh, in a house up in the mountains, and all we did was work on agents. We have several new agents coming out, uh, and we really worked to refine that framework to make it as easy as possible.

[00:02:38] Our goal is for the least technical association CEO or leader on the planet to be able to go into this tool and to be able to talk to it and have a useful agent that is securely, that's a key word, securely working with their business data. And that's the key difference, is that this runs in an open source platform that you can control.

[00:02:58] You run it in your infrastructure [00:03:00] or in a secure cloud environment. Uh, and you can have it do just about anything you can imagine. Your creativity is your only limitation, and of course you can talk to the AI to get creative ideas, so very, very excited about it.

[00:03:11] Mallory: This sounds nearly magical. When will it be ready to go?

[00:03:16] Amith: Well, so it's already released. Uh, MemberJunction is an open source project. We live in a glass house, and we release new versions of MJ, uh, multiple times per week at this point. So the latest version of MemberJunction does include the agent framework. Anyone who wants to use it can download it, install it, and play with it.

[00:03:34] Uh, it's at memberjunction.org, or docs.memberjunction.org is the best place to go for those interested. At the moment, it's a little bit technically involved. Uh, you have to know how to install it. We are working on a website where you'll be able to just go click a few buttons and set up an environment, uh, as a non-technical user.

[00:03:50] That's where we're headed over the next couple months, but very excited about the core infrastructure and the capabilities. Um, we've been building agents for quite a long time across Blue [00:04:00] Cypress. We have products in the market like Betty and Skip and Rex, and a new one called Izzy coming out, which is a member services agent.

[00:04:07] And all these agents shared certain common details. And so what we decided to do was make a framework that is very accessible to the average non-technical business user to build their own agents. And our own agents, of course, are now built on top of this new framework because it makes it easier to build agents.

[00:04:24] Uh, but the idea is to empower every single nonprofit and association leader, uh, to do this. And again, MemberJunction's totally free, so you can go and use this software at no cost, which is a big part of the goal.

[00:04:38] Mallory: Can you tell us what hackathons looked like maybe five years ago and, and what they look like now?

[00:04:43] Maybe how similar or how different they look with AI.

[00:04:47] Amith: Well, you get a lot more done with ai. Yeah. I mean, they look very similar. You get a bunch of people in a house like our, our flavor of hackathons. So, um, our CTO, Robert, Kim and I, we've been doing these hackathons together for, geez, almost [00:05:00] 15, 18 years.

[00:05:01] Uh, at the company we were at previously that I founded, uh, called Aptify, we had hackathons every quarter, which were week-long events where we'd invite a number of people from across the company to, you know, basically fly somewhere. We'd usually pick pretty nice places. So we'd go to the beach in Florida, or we'd go to like the mountains or do something kind of fun.

[00:05:19] And, uh, you know, for that week you would be really intensely working. So we would, uh, typically have, and this is still the same now, we'd have, you know, eight to 10 people in a house. And, you know, we'd usually have some kind of a theme where we'd say, hey, everyone's gonna work on projects related to this general theme, but then have some degree of autonomy to allow people to pick projects within that theme that were interesting to them.

[00:05:40] Um, and depending on what you're doing product-release-wise, you know, you may have more or less of that autonomy. We didn't have a ton of that this time; this time it was more directional, where we were saying at the front end, hey, this team of two people, please work on this; this team of three people, work on this other thing.

[00:05:54] So we had that type of thing going on. But the, the idea behind it is it's super fun because you have a bunch [00:06:00] of people working in close proximity for, you know, really long days, but you pull in some fun activities too. So up in Utah, uh, we've got great activities like, you know, mountain biking and hiking, and there's a lake there.

[00:06:12] Uh, I've got some eFoils out there. So, uh, I think I've talked about that on this pod before. But eFoils are basically electric surfboards that actually pop outta the water once you get above about 10 miles an hour. And they're really, really fun. And so we, we had an early morning excursion out to one of the lakes nearby and, uh, got a few of these, uh, programmers, uh, flying around, which was super fun.

[00:06:32] But, uh, the coding part is really, really intense, and it just goes by really quick. If you're into this and if you're having fun, it goes by quick. So that's all very similar to what it's been for a while.

[00:06:42] Mallory: Mm-hmm.

[00:06:43] Amith: Uh, what's different is that the amount of actual code being written by human hand versus the amount of code being written by AI and guided by humans is radically different.

[00:06:54] So, I, I can speak for myself, but I know this is generally true for the rest of the, the nine folks that were up [00:07:00] there: we didn't write a lot of code ourselves. We used Claude Code, we used Copilot, we used other tools to pretty much do almost all the actual coding, and we supervised it. And there were areas where we wrote some code, uh, by hand.

[00:07:12] And there's just certain cases where it makes sense to do that. Um, but I would tell you, as a team of, you know, 10 people, we probably got as much done as a team of a hundred people would've gotten done even 12 months ago. Probably a thousand times as much as we would've gotten done five years ago. It's insane.

[00:07:30] It's just completely crazy. And part of what I keep trying to emphasize is, yes, we're a bunch of career professionals, software engineers, software architects, you know, we've got people on the team who have, you know, deep, deep expertise in ai. Um, and that's awesome. Most associations, most nonprofits do not have that.

[00:07:47] You have a bunch of business folks and maybe some, maybe some people that are technical or somewhat technical. Um, the really key message here is actually you can do these kinds of hackathons yourself, and you can produce a tremendous amount of output. [00:08:00] Even if it's not something you're gonna say, Hey, like I'm gonna take this app live on my website right now.

[00:08:05] Maybe you have some professional help to guide you if it's, if it's going to be a production quality app. But you can do a lot with these tools that are out there. And remember also that these tools are the worst versions of AI we're ever gonna have. And so you're going to see improvement. So I would really encourage people, you don't need to do like the whole week long thing with 16 hour days if you don't want to.

[00:08:22] But you know, start off with like two days and see what happens. The real key to it is you get people out of their normal routine and you get people together in person, and magic happens. You come up with new ideas, you break down problems that were previously really, really hard to solve. And uh, it's just, it's just a great team building activity too.

[00:08:41] Mallory: Mm-hmm. What could be interesting for Sidecar maybe, and I think maybe you and I have talked about this, Amith, is perhaps to have some sort of association hackathon, maybe with like one technical person from an association and one non-technical person, and see what could be created. Everyone listening: if that sounds of [00:09:00] interest to you, let us know.

[00:09:01] Amith: That'd be super fun.

[00:09:04] Mallory: All right. Well, today we are exploring a framework from AI researcher Ethan Mollick, who we talk about quite often on the pod, about why individual workers are seeing huge AI productivity gains but companies aren't capturing those benefits organizationally. We'll break down his three-part approach: leadership, creating the right vision and incentives;

[00:09:26] the lab, building and testing AI solutions; and the crowd, harnessing employee innovation, all through the lens, of course, of what this means for associations. So first and foremost, leadership. Ethan Mollick has identified a fascinating disconnect in how organizations are approaching AI transformation: while workers are reporting massive productivity gains,

[00:09:47] some saying AI cuts their task time in half and others reporting 3x productivity increases, companies are only seeing small to moderate organizational gains overall. Mollick [00:10:00] argues this isn't a technology problem; it's a leadership and organizational innovation problem. For decades, companies have outsourced organizational innovation to consultants who develop generalized approaches, but that won't work with AI, because nobody has special information about how to best use AI at your specific organization.

[00:10:19] Even the major AI companies release models without knowing how they can be best used in your industry or organization. The core leadership challenge is that AI starts as a leadership problem where leaders must recognize both the urgent challenges and opportunities. Urgency alone isn't enough. Workers aren't motivated by leadership statements about performance gains or bottom lines.

[00:10:42] They want clear and vivid images of what the future actually looks like. Leaders need to start anticipating how work will change in the world of AI. While AI isn't currently replacing most human jobs, it does replace specific tasks within those jobs. This requires decisions about where leadership and the [00:11:00] lab, which we'll talk about later, should work together on how work gets divided between

[00:11:05] AI and humans. So Amith, you've worked with association executives for a long time. What do you see as the biggest leadership challenge around AI adoption?

[00:11:18] Amith: I would say there's two things. The first is the hesitance to embrace new technology quickly. Associations have, you know, as a sector, generally viewed technology as not being their friend, you know, not being their closest ally, perhaps occasionally being a nemesis.

[00:11:35] You know, certainly technologies of the past that they've had to implement have been challenging. So I think that's like a, a mental block for a lot of people to say, well, yeah, I know this is advanced and, you know, fast moving technology, but we're not a technology organization. We're, we're slower moving.

[00:11:51] We're an association. We're not experts in technology. And so that mindset is a major problem, and I am constantly trying to [00:12:00] encourage association leaders at every level, whether they're a leader that's been around for three decades or more, or a brand new entrant into the workforce. Everyone's a leader in, in this sense.

[00:12:10] Um, to embrace the idea that we can all be really good at this. You don't have to be a career software engineer, and you don't have to be someone who's historically been a big innovator with any kind of technology. So that mindset shift is really important from the top down. It's also important to, uh, nurture a bottom-up mindset shift that, hey, we too, as a traditionally non-technical association, we too can be leaders in using this technology; we don't have to be, uh, coders.

[00:12:41] We don't have to be experts in the traditional sense. We can take advantage of this just as well as anyone else. And that mindset shift might sound almost silly. But it really is impactful because when people start off with these assumptions that they can't or won't or shouldn't, those things are [00:13:00] blockers for our minds, much more so than the technology stopping you from doing anything.

[00:13:03] And I've seen this firsthand in so many organizations for so many years with other technologies, and I'm seeing it now with AI. So I'd say that's probably the first thing: it's the "yes we can" kind of mindset, right? Uh, and I know that sounds a little bit silly to some people, but it is so incredibly powerful that I'd encourage you to reflect on this and think about it and share it with each other.

[00:13:23] And perhaps one of the best ways to do that is to start off with, um, giving your team resources that you yourself are consuming. So again, whether you're lower in the organization's org chart or at the very top of it, you can do this. You can listen to podcasts. Obviously you're doing that now.

[00:13:42] You can watch videos, you can take online courses. You can do things to experiment with AI tools yourself, and then you can share that with your colleagues. When you do that, slowly but surely you are moving the needle, you are changing the tide, even if you're working against [00:14:00] organizational inertia that's been around for decades, culture that's very difficult to change.

[00:14:04] You change the culture by changing the behaviors, and so you can all make a difference by sharing your wins and also sharing what's not working too. It's not just all, you know, roses and sunshine; it's real. And so I think those are the types of things that are really key to get organization-wide adoption.

[00:14:21] My other just general comment on the first bit of what you've shared, Mallory, is that I think the kind of disconnect between organizational value creation and individual productivity gain is, um, about the consistency with which organizations implement this stuff. Um, there aren't enough organizations that have put enough, uh, real firepower

[00:14:40] behind their training programs to really educate their team on AI. And absent an organization-wide education program that includes everyone, you are not saying to your team that you really are embracing this. You're saying, hey, go use it, but you're not providing 'em the tools, you're not providing the direction.

[00:14:57] Obviously, we believe very strongly in the [00:15:00] need for that at Sidecar, but the point isn't about our stuff. It's just about doing something that's team-wide. You have to do it for everyone. Another related mistake I see people making is saying, okay, well, we have a hundred employees, or we have 20 employees, or whatever the case may be.

[00:15:14] Let's find the three to five people who are the most, you know, uh, typical early-adopter-type folks and we'll have them play with it. And maybe that was okay in 2022, or 2023 even. It's not okay now; you gotta get everyone going with this stuff. That's how you change the culture.

[00:15:33] Mallory: I wanna talk about what you said with the "yes we can" mindset, which I think we often reiterate on the podcast, and people who listen often might say, oh yeah, yeah, yeah, I got that down.

[00:15:42] But I wanna double down on the power of sharing learning resources. Like you said, I know going into Sidecar planning sessions, sometimes there will be a book that you recommend, Amith, like, please read this book before we get there, or you'll share an interesting episode of a podcast or a YouTube video that you've [00:16:00] seen, and it's just incredibly powerful if you as a leader, and like Amith said, leaders are at every level,

[00:16:06] share a specific YouTube video, not just, oh, I was watching YouTube videos on prompt engineering, whatever, but a specific one, and say, hey, watch this, let me know what you think. That's incredibly powerful, and I would challenge you all listening to this episode to do that. It doesn't have to be this episode that you share with someone, but share some resource that you've enjoyed with some of your colleagues and see the power of that.

[00:16:26] I also wanna talk about the disconnect that you mentioned between individual gains and organizational gains. Even coming from Sidecar, from the Blue Cypress family of companies, where all of us are pretty much every day actively involved with generative AI in some capacity, I can still understand the sentiment that while I've seen huge individual gains, it seems to take longer to have those organizational gains.

[00:16:55] Like if we're talking about the, the learning content agent that we just posted an episode [00:17:00] on recently, to me that's organizational change. Like at Sidecar, that's going to transform our business and how we offer educational content. But it took a minute to get to that point. So I'm curious if you can talk about kind of that leap, once you have a good training program in place, from individual gains to that key leadership component of setting the vision of not just what you're doing in your daily work, but where the organization's going.

[00:17:26] Amith: Well, I think the learning content agent is a great, you know, case in point, uh, where we, you know, we at Sidecar and Blue Cypress broadly have been experimenting with AI for a long, long time. And throughout the process of building our content for educating association folks in ai, we've always been looking for incremental improvements to help us not only do the work faster, which is of course helpful, but to do a better job of it and to produce more of it, and to do it more timely.

[00:17:54] And Mallory, you and I did a lot of that over the last couple years in producing various versions of Ascend and producing [00:18:00] versions of the learning content that's on our, on our website. If we hadn't been kind of tinkering and working with ChatGPT and then Claude, and working with a variety of tools to try to incrementally improve our individual productivity, I don't think the light bulb would necessarily have gone off, um, late last year, where we said, you know what?

[00:18:18] We can automate this whole thing. We can take the entire process and completely flip it on its head and take it from this extremely labor-intensive thing of recording content to automating it, because the AI has gotten good enough and we've experimented with it enough to understand how the bits and pieces fit together.

[00:18:34] So, if you wanna create, you know, a system-wide fundamental transform like we've done with the learning content agent... And by the way, those of you that haven't yet checked out the episode with Mallory and Jason, uh, from a week or two ago, definitely go check it out and, and watch the YouTube version if you can.

[00:18:49] 'cause that's where we actually have some, some live demos of the software. But what we've essentially done is we've built a software tool that uses a whole bunch of different AI components within it [00:19:00] to automate the process of generating all of the videos that are on the Sidecar AI Learning Hub. So it's still our thinking that's gone into this, but we've utilized AI to generate the audio and video, and there's a whole bunch of steps that go into making that happen at scale, happen with quality.

[00:19:19] Make it possible to make lots of changes, et cetera. And, and that episode talks about that. But to get to that kind of automation, that's a fundamental transform of the entire way to think about how to produce content and serve our community. Um, we had to have been experimenting along the way. You don't just come up with that kind of a groundbreaking, transformational idea just out of thin air.

[00:19:38] I mean, not usually anyway. At least my experience is usually that. There's lots and lots of incremental gains that then lead to a light bulb moment. Um, you know, Jim Collins, uh, he's got a lot of books that I, I'm a big fan of, but he has this particular book called Great By Choice, and in it he talks about the concept of, uh, organizations making investments.

[00:19:56] And the, the metaphor he uses is this idea called [00:20:00] Bullets and Cannonballs. So what he says essentially is imagine yourself out to sea and in a ship. It's dark, uh, it's foggy. You cannot see, but you know, there's an enemy ship somewhere on the horizon that's trying to hunt you down and kill you, and you don't have a lot of gunpowder left.

[00:20:14] But you do have ammo; you have bullets, and you have cannonballs. So let's say that you don't have enough gunpowder to really fire more than one cannonball, but you have enough gunpowder to fire a bunch of bullets, a.k.a. small bets. Well, what you probably wanna do is calibrate the direction you're shooting before using that precious resource of gunpowder.

[00:20:36] But how do you calibrate? You fire a bunch of bullets, and initially in this metaphor, of course, you hear nothing, nothing, nothing. Then all of a sudden you hear a ping ping like you actually are hitting your target. Of course you're not doing any damage with just little bitty bullets. Then you know, okay, wait a second.

[00:20:50] Now we're heading in the right direction. Let's go for it. Let's load up all the remaining gunpowder we have. We got one shot at this. Let's load the cannonball up and shoot it. [00:21:00] Do you do that right away? Probably not the best idea. Or if you have calibrated your sites and you're like, Hey, I know I'm shooting in the right direction, and you don't load the can ball.

[00:21:11] That's just as bad. So going beyond this, this idea, what he's shown in that book is a bunch of data. So he, the thing I, one of the things I really like about Jim Collins writing is it's backed by tons and tons of rigorous research. Um, he has this matched pair concept where he compares different companies and how they did over time in different industries.

[00:21:28] And he showed empirically that the companies that shot too quickly in terms of the cannonball, of course, didn't do so well, because they often were off the mark, right, as you'd expect them to be. But actually what he found was that, more often, the companies that would do these little experiments but then fail to load the cannonball up and go for it at the right time, once they'd calibrated it,

[00:21:49] would fail much more often. So the point is, it wasn't so much that there was a complete lack of experimentation; there was a lack of going for it once you found an experiment that works. And I believe [00:22:00] that's one of the reasons Mollick's writing is so on point right now, in that people are running a lot of experiments.

[00:22:06] They have a lot of different individuals doing stuff, but they're afraid to pull the trigger on the bigger shot, right? When it's time to go for it. And so I think that's, uh, an interesting thing. Anybody who hasn't read Great By Choice, by Jim Collins, I would highly recommend it. It's an awesome book. Um, and this is just one chapter of, of the book.

[00:22:22] There's a lot of other great content in there, but, uh, I really think it's relevant at this point in time, because if you're not willing to take the big shot once you know it's the right thing to do, or you have a reasonably strong belief that it is, um, you're not gonna create a learning content agent like we did, right?

[00:22:36] Like we had the suspicion through lots of experiments that it was good and we invested a lot of money and a lot of time in building that system out, knowing full well that 12 months from now, it might be totally obsolete. I mean it definitionally will be obsolete. We'll keep investing in it. Mm-hmm. But, uh, that's the cannonball, right?

[00:22:51] For Sidecar to put that kind of effort behind a single project, for a company that's not super big, um, sounds like a big [00:23:00] risk, but in fact the bigger risk is to not do that. To not load the cannonball.

[00:23:05] Mallory: That was a really good analogy. I mean, I know Jim Collins came up with it. So it sounds like the bullets in that scenario are the training, the education, the everyday experimentation, but you still need the captain on the ship who's helping the team calibrate and saying, all right, let's go for it.

[00:23:21] Does that sound correct?

[00:23:22] Amith: Yeah. And the captain better be paying attention to where the, you know, where the bullets are flying and where they're hitting a target that's meaningful.

[00:23:29] Mallory: Next up we are talking about the crowd. So, your employees who are figuring out how to use AI for their daily work. This is where both innovation and performance improvements actually happen, because there's no instruction manual for AI, and learning to use it well is a process of discovery that benefits experienced workers.

[00:23:47] But here's where things get kind of interesting. According to Mollick, there's a massive disconnect between official AI adoption and actual use. Studies show that only about [00:24:00] 20% of workers use official AI chatbots at work, yet over 40% admit to using AI privately and report huge productivity gains. This reveals what Mollick calls the secret cyborgs problem: workers hiding AI use,

[00:24:16] maybe for good reason. They may have received scary talks about improper AI use being punished, or they suspect productivity gains might lead to cost cutting and layoffs, or they know that revealing AI use won't be rewarded but will just become an expectation of more work. The solution involves leadership creating proper incentives and vision.

[00:24:37] Instead of vague AI ethics talks or blanket policies, provide clear areas where experimentation is permitted. Build incentives, Mollick argues, like promotions, vacations, or even cash rewards for employees who discover transformational AI opportunities. He argues training should be less about prompting techniques and more about hands-on AI experience [00:25:00] and practice communicating needs to AI.

[00:25:02] The goal is to turn hidden innovation into organizational capability. For associations, though, there's also an additional opportunity here. Your members are part of this broader crowd experimenting with AI in their own industries and professions. They're discovering valuable use cases that could benefit the entire membership, but most associations aren't tapping into this distributed innovation happening across your member base.

[00:25:27] Uh, lots to unpack here, Amith. Again, you've worked with probably the most, I would argue, forward-thinking associations out there. How do you see those associations most successfully tapping into their crowd, whether that's employees or members?

[00:25:45] Amith: You know, my sense of it is that the common theme is people who are willing to ask more questions.

[00:25:52] I think that's an important attribute of, of leaders, again, at every level, but in every sector, and asking good questions and probing for [00:26:00] information, uh, is, is a skill, right? It's a really hard-earned skill, but it's something that people need to really think about in this world. Um, I think that working with your team

[00:26:10] and finding out that there are applications of AI that maybe were outside the boundaries of your policies, well, maybe you should be updating your policies. Or maybe, maybe there's things you can learn from this crowd, in the way that Mollick describes, that, um, are super relevant to the association world, not just for staff but, as you put it, for members as well.

[00:26:30] But I think you have to look at a way of creating an environment that is not only safe, but encourages people to share what they're doing. Um, so these hidden cyborgs, as he likes to call it, or secret cyborgs, I don't know exactly, uh, it's, it's an interesting term. I think the point of it is that there are people who really are using a lot more AI than they let on, um, and perhaps they're doing it because they wanna reduce their work hours a little bit.

[00:26:55] I don't know if that's the case, or if perhaps they just are worried how their boss will feel, um, [00:27:00] about their use of AI. You know, did they do a good job? Did they not do a good job? How is the boss gonna feel based on their use of, of AI? So I think it comes back to the same topic in a lot of ways, Mallory.

[00:27:11] I mean, in many ways it's a different outcome, but it's the same root cause, which is a culture that kind of deflects innovation. It's a culture that is almost like, you know, the body rejecting a new concept. It's treating it as some kind of invading species, and your immune system is trying to kill off the, the invading species, which in this example is the innovation.

[00:27:33] So I think that the root issue is the same one, which is a culture that isn't one that's encouraging people to experiment. I think what we talked about in the last segment around people sharing ideas, you know, people from the top sharing videos or ideas and experimenting openly, is good; hackathons are good. Uh, you can also run contests where you can say, hey,

[00:27:54] over the course of the month of July, we are going to run a contest and we're gonna have prizes. And it doesn't [00:28:00] need to be like massive amounts of money. It could be that the grand prize is, you know, something nice, but it doesn't need to be, again, lots and lots of money. It's just something fun, and you can kind of gamify it and make it a contest, make it fun.

[00:28:11] Associations tend to be really good at that kind of thing, so why don't we do that internally with our staff, and perhaps with our members as well? So I just think that it's, it's ongoing. There's no quick fix. You can't come from having a culture that's been largely the same for decades and all of a sudden turn it into a super fast-moving, innovation-driven culture just 'cause AI is here.

[00:28:32] You know, if there's one thing that's probably more powerful right now than AI, it's culture. And you know, if you don't get that right, if you don't start working on that, it doesn't matter if it's AI or some other technology, you're not gonna be able to adopt it.

[00:28:44] Mallory: Mm-hmm.

[00:28:47] Speaking of gamifying things, I have gotten really into Pilates, which I don't think I've ever talked about on the podcast, but the studio I go to is doing a June bingo. We're recording this in late June right now. And so you [00:29:00] like check off squares based on which instructors you go to and which types of classes you go to.

[00:29:04] And normally I'm not into the gamified thing, but I am trying to complete my whole June bingo card, so I have been there almost every single day this month. So it's good for me, it's great for them. I'm really impressed by, uh, honestly, their whole, like, marketing around this thing. So maybe you do a, a bingo card with your staff.

[00:29:21] That could be an interesting game.

[00:29:24] Amith: That'd be fun. Is that the, uh, is that the Pilates where you're on the machine that looks like a medieval torture device?

[00:29:30] Mallory: It's, yeah, it's Reformer Pilates. They also have Tower Pilates. Have you ever done that, Amith?

[00:29:34] Amith: Yeah, I did the thing one time, maybe 15 years ago, with the machine where, like, your body gets contorted in all these different positions.

[00:29:41] Mallory: It's really fun, and it's good for your core strength and mobility. I don't know, that's a whole separate topic; we'll have a podcast episode on that. Um, but I wanted to talk to you about the idea, because we've talked about this on the pod, though I feel like it's been a minute, of, uh, dangling the carrot, and what's the other version?

[00:29:58] Using a stick. And so [00:30:00] what do you think about, uh, having contests and cash prizes and things like that, versus saying, going back to the leadership component, you need to do this?

[00:30:10] Amith: I think you need to do both. I think that's another component of association leadership archetypes, I'd say, that are common: this idea of committee-driven decision making and consensus and having to get everyone on board. At some points in time, especially in times of great change,

[00:30:26] the leader that has the most control has to just stand up and say, this is what we're doing, guys. We're gonna go do this, and you all have to do this. And that's what it is. And you know, it can even go so far as to say, listen, you get this training done by X date or you'll be working somewhere else. That might be the most extreme version of the stick.

[00:30:43] But I do think that it's appropriate to push your team. And it's not about who's interested. It's not about who would like to do it or who has time to do it. It's about getting everyone to do it. This is a critically important thing. And so if you're not a leader that's traditionally embraced, uh, demanding [00:31:00] things from people, or requiring people to do things along the lines of what I'm describing, um, you might need to work on that.

[00:31:06] You might need to work on improving your skills in that area, to think about how you can perhaps stay true to yourself and how you are as a leader, what your leadership style is, uh, but explain to your team, like, this is not a time where we can afford to wait for consensus and buy-in. The world is changing really rapidly, and our team is not knowledgeable enough about

[00:31:25] this critically important technology. So we're gonna do this, and yes, we're gonna try to make it fun. We're gonna make it something that everyone, uh, is encouraged to participate in. But make no mistake, it's mandatory and you will do this. So at some point in time you have to stand up and do that, especially if you have a culture that's a little bit of a traditional, hesitant culture.

[00:31:45] And what happens is in those types of environments, yes, you're gonna cause a ripple in your organization. Yes, some people will be upset about it, but a lot of people actually will be like, it's about time. It's about time that the leader stood up and demanded this kind of thing, because there's so much [00:32:00] happening.

[00:32:00] And you know what? Those people who become advocates for the more aggressive tone that's necessary at these points in time, those are actually probably your strongest players on a go forward basis. And by not being firm and not being, you know, very clear about what your expectations are, you're actually letting those people down.

[00:32:19] Mallory: I wanna talk about secret cyborgs. The term is funny, but at first when I read how Ethan Mollick defined it, I thought, well, that's odd. Why would someone not be talking about how they use AI, but secretly using it? And then I thought about it more and was trying to be a bit more empathetic, and realized, okay, that does make sense.

[00:32:37] Because if you have someone on your team that's secretly using AI and perhaps cutting down their work time by 50%, they might fear that by bringing that up, you as a leader might say, well, do we really need someone full-time in this position? Could we combine this position with another role? Amith, can you talk about, from a leader's perspective, how you can kind [00:33:00] of combat that, or just maybe take it on with honesty and encourage those kinds of conversations?

[00:33:07] Because I can imagine why people might be fearful of saying that.

[00:33:10] Amith: It totally makes sense. And I'll first speak to the individual that might be on the other end of it, the person who doesn't have power in that relationship, the employee who is fearful of their boss finding out that they're using a lot of AI.

[00:33:24] And I would say that that's a very time-boxed position, meaning that, um, there's not much life left, you know, available in that path, right? People are figuring out what AI can do. Even the, the least technical people are realizing it. Uh, so there's not much of an opportunity for you to kind of stay hidden in that corner for long.

[00:33:41] So if you are using ai, take this opportunity to go and promote it to your leadership and say, listen, look at all the great stuff I've done. What that's gonna do for you is it's gonna put you on a path where they're gonna say, wait a second. This individual is not only doing something innovative. Yeah.

[00:33:56] The, the work that they've historically done, they can automate it now, and it [00:34:00] takes three hours instead of 40. But, um, look at what they're figuring out how to do. This is transformational. Now, if you're in an organization that doesn't value that and they, you know, let you go or something else like that, you're in the wrong place, I'd

[00:34:12] say that. And some people do fear losing their position, especially in this environment. Um, but people are gonna figure out anyway that it's possible to automate a lot of the tasks that you perhaps have figured out how to automate and haven't told anyone about. It's gonna get figured out. And if, instead of waiting six to 12 months or whatever the timeframe is for people to figure it out, you go and promote that,

[00:34:33] it puts you on a track where people look at you as the innovator, and perhaps they'll come to you for more and more help. Now, I will say that most of the work that we do, uh, certainly as white collar laborers, but I think this is true for all positions across the economy, most of the work we do right now is gonna get automated by the end of the decade.

[00:34:51] Most of the individual tasks of the work, that is. It doesn't mean most of the jobs will be eliminated. It just means most of the individual tasks we do are automatable, even with today's [00:35:00] AI, and that's gonna be increasingly true as the AI models and the systems on top of them become smarter and more capable and they're more connected, right?

[00:35:06] Like the agent stuff I was talking about earlier. So I don't think that there is a question of like, Hey, how long can you hide? I think it's a question of like, what do you do to take advantage of this stuff and show that you are an AI powered employee? So that's all my commentary to the, at the employee level.

[00:35:23] For the leader, I think what you need to do is come out and say, listen, we recognize that a lot of the work we've done traditionally isn't going to be done by human hand anymore, and our goal is to automate it, make no mistake, because we have to automate it in order to be effective, competitive, serve our members, and meet our mission.

[00:35:40] But that doesn't mean that we're gonna let everyone here go, or let a large number of people here go. Now, in some associations, and this is a rarity, um, where the financial constraints are so strong, where there's been financial problems for some time, if that's actually what you're doing, you need to tell people that.

[00:35:55] Uh, but most of the associations that I work with tend to be in a position where they're not so much [00:36:00] concerned about cutting costs immediately. Um, sure, long term, they don't wanna spend money where they don't need to, but it isn't so much that they say, hey, I've got 18 people in member services, I only need three to do the job that I've traditionally done, so I'm gonna let 15 people go.

[00:36:13] Of course some people will make that decision. I think the bigger opportunity is to think, okay, well, I have 15 intelligent, trained people who know my membership really well; what can I do with them to go create dramatically higher value, right, with that human skill? And to be thinking that way, but to be intellectually honest about it.

[00:36:31] Because if you're in an organization that has, you know, legions of people that do redundant tasks, redundant in the age of AI, I think you need to think about that and figure out what you're gonna do. Uh, 'cause those jobs, the jobs that aren't adaptive, the jobs that don't take advantage of AI, most certainly will not exist.

[00:36:46] I mean, there's nothing we can do about that other than acknowledge the reality, and the faster we get that reality pill down and agree that it's there, the sooner we'll be able to come up with the right plan as an organization and as a leader. So I [00:37:00] think leaders have to be brutally honest with themselves and with their teams, and that's one of the reasons they have to get educated.

[00:37:05] 'cause for most leaders, you know, who listen to this, who haven't taken the time to educate themselves on AI at all, this sounds interesting, but, like, I have no idea what that means. Well, that's because you haven't done the work.

[00:37:19] Mallory: I wanna move to the third piece of Mollick's framework, which is what he calls the lab. So we've got leadership, we've got the crowd, which we covered, and now the lab. This isn't a typical R&D department. It's designed to be ambidextrous, engaging in both exploration for the future and exploitation, releasing a steady stream of new products and methods.

[00:37:39] Unlike traditional research organizations, the lab should consist of subject matter experts and a mix of technologists and non-technologists. The crowd provides many of the researchers, those enthusiasts who figure out how to use AI and share it with the company, and they're perfect lab members. But here's the key.

[00:37:58] Their work is about [00:38:00] building, not analysis or abstract strategy, so they take prompts and distribute them very quickly. They build minimum viable products with cross-functional teams centered around simple prompts and agents. They iterate, test, and release those into the organization. One fascinating aspect that Mollick talks about is building AI benchmarks for your organization.

[00:38:24] So almost all official benchmarks are flawed or focus on trivia or math or coding. They don't tell you which AI does the best writing, for example, for your context, or can best analyze your specific type of data. So Mollick argues you need to develop your own benchmarks for the tasks you care about. The lab also builds systems that don't work yet, prototyping what it would look like if AI agents handled entire business processes,

[00:38:50] and then testing them when new models come out. They create provocations or demos that jolt people into understanding AI's transformational [00:39:00] potential. Amith, I really like this idea of the lab that Mollick talks about, but the fact is, I mean, most associations wouldn't have staff to dedicate to constantly building products and iterating on those.

[00:39:14] So I'm curious if you can help contextualize the lab in a realistic way for our association listeners.

[00:39:20] Amith: Well, I mean, first of all, the last thing we just talked about is how many jobs are going to have a lot less demand on them in terms of the traditional tasks. So maybe actually associations will have a lot more, uh, labor available to work on lab style activities.

[00:39:35] Um, so I think that's an opportunity. Um. My thought process is, is that traditionally labs have been very scientific, right? I mean, even the name, it's deeply a scientific term. Uh, and so associations that have been non-technical would look at that and say, well, we just don't have the capability. We don't have the aptitude, we don't have the time to do these things.

[00:39:54] So I'd say that's a really weak point for a lot of organizations. Um, but remember what [00:40:00] I was saying earlier about hackathons? It actually ties perfectly into this topic. It's not just about the engineers and the scientists and the people who are writing code. Let's say it's about everyone who understands the value equation.

[00:40:13] And what I mean by that is: how do your processes, whatever they are, produce a valuable outcome for your customer or your member, right? What does that equation look like, and what should that value equation be on a go-forward basis? Maybe historically it's meant, you know, certain outputs, certain types of value you've created, uh, but maybe on a go-forward basis those types of value are commodities, because of AI and because of other things that are changing.

[00:40:38] So what kind of value should you be creating? That's where you need the lab. The lab needs to allow you to zoom out and say: if we can do everything we do now basically for free with automation, what would we go do with the resources we have? How would we create an opportunity so compelling that five years from now people are still coming to us for education, for content, for community, um, for [00:41:00] these types of opportunities?

[00:41:01] What do we need to do? And there's no easy answers to that, uh, which is both a challenge, but it's also where the opportunity lies. So my main point here is you have to start some movement. You can't just think about this and say, hey, we'd love to do that,

[00:41:22] we'll come back to this in 2026. So you have to start off small now. Put together a hackathon-type event with a handful of your people. And if you've got nobody in the organization that's a coder, that's okay. Remember, this isn't about building something for production; it's about prototyping things.

[00:41:39] It's about creating concepts that really help you visualize what these new value equations might look like and how you might shift the way the organization does work. You know, talking in these abstract terms like value equations and strategy, a lot of times people's eyes, you know, glaze over quickly, and that's understandable to a large extent, [00:42:00] because these concepts, it's like, how do you actually materialize them?

[00:42:02] But now, with AI, even with fairly limited skills in AI, you can go and prototype this stuff really fast and see what it would look like to deliver value in these different ways. So I think that's a really important thing people have to go do. And I'll, I'll quickly come back to the immediate pressing needs of the organization, um, all that stuff.

[00:42:22] So. I would argue that in most cases what I see is organizations that are on this perpetual loop, right? You can call it the hamster wheel if you like, where they're replacing some piece of technology with another piece of technology. They're implementing some new business process or building some new product, and it's just the next thing they're gonna do because it's the next thing they said they were gonna do.

[00:42:43] And it's the same thing that they've done, with an incremental, slight improvement, right? My favorite thing to pick on, as many of our listeners know, is the AMS upgrade, right? Because the new AMS is largely going to be an incremental improvement over the last one. It's gonna take you a year, two years, three years.

[00:42:58] It's gonna cost you a bunch of [00:43:00] money and it's gonna take a lot of staff energy. And what improvement level will it actually give you? In most cases, it's going to be an incremental improvement at best, you know, a 10, 20, 30% improvement in your productivity, not radically transformative. Yet many organizations right now are choosing to commit to such projects knowing full well they're gonna need 6, 12, 18 months

[00:43:21] just a hundred percent dedicated to that. Um, I don't think that's a smart prioritization of resources, personally. I think if you deferred those kinds of projects and said, hey, we might need to replace this piece of infrastructure, whether it's the AMS, the LMS, the financial system, let's defer it six, 12 months, let's put our resources into

[00:43:43] AI, let's go really deep on AI. That would change the game, right? Then you could form the lab. Then you wouldn't be saying, hey, I can't do this now because of X, whatever X is. I would argue it likely is less important than the transformative changes that are coming your way, like [00:44:00] it or not, with AI. So that's my pitch: consider what you're doing right now that you can put on hold, not cancel, but put on hold.

[00:44:07] Because some people will say to me, yeah, I know, but my AMS or my LMS, it's so antiquated, it's so bad, it's really, really slowing us down. Totally get it. And I'm not saying you shouldn't do those things. But I'm saying, if you look at them in a totally dispassionate way as financial investments, and you say, this financial investment has this potential ROI and this other financial investment has this other ROI, the ROI from a probably much smaller investment

[00:44:35] in AI right now is dramatically different than the profile of a classical investment in one of these systems or process changes or, you know, new products that you might be building. So I would just encourage people to rethink their basket of priorities and say, can we put some of this stuff on hold?

[00:44:52] It might require a board conversation. It might require going back to the board and saying, listen, things are changing so much. We have these projects in flight. We really [00:45:00] think we should put projects A, B, and C on hold, and here's why. I know you just approved these at a board meeting last winter, but this is why we believe it makes sense to hit pause on these other things. Because quite frankly, again, going back to the AMS: what you're choosing to implement in an AMS that won't go live for 12 months, very likely the functionality that you're asking your vendor to put in place

[00:45:21] isn't the functionality you're gonna need in an AI-enabled future, and you don't know yet what that is because you haven't done the homework on AI. And at least that's true for the vast majority of organizations still to this day. So that's my point of view on it. I think the lab issue, and the inability for people to do this lab-type work that Mollick proposes, is largely because they are a hundred percent busy with activities that are still very much on another rotation of the hamster wheel.

[00:45:45] Mallory: Mm-hmm. And I like what you're saying too, Amith, that there are options here. You can be creative with how you approach the lab. You don't have to have the, the ongoing council, though I have talked to some associations that do have like an [00:46:00] AI council that certain staff are a part of that meets regularly.

[00:46:03] I think the key with this, though, is making that council, that group, less about the abstract concept of AI, though that is important when you're developing guidelines and whatnot, and more about actually focusing on that building and ideating piece. And I also wanna mention something that we did. At this point I think it was early 2023: we hosted an AI ideathon.

[00:46:26] And what's funny is that was getting groups of association leaders together, and then they were supposed to pick one association challenge and come up with an idea, a prototype that they did not build, just kind of thought about, and create some sort of solution with AI, theoretically, that would solve that association challenge.

[00:46:46] It was an incredible event. We did it all virtually. So that's another example of something that you could do within your organization, except at that time, I don't think we had the capability to actually build any of the prototypes, because it was mostly [00:47:00] non-technical people. Now you could take your non-technical ideas from the ideathon and actually build them with

[00:47:05] interactive web apps in Claude or code, so you've got options here.

[00:47:11] Amith: Very much so. I think the opportunities in front of us are so enormous. I wake up every day with excitement about the future, because I know that what we are seeing happen with AI, as disruptive as it is, as transformational as it is, basically levels the playing field.

[00:47:27] It means that the nonprofit sector at large can in fact do everything the largest corporations and the largest governments in the world would've struggled to do, even with the immense resources they have compared to the smaller resources that associations and nonprofits typically have. So that's exciting.

[00:47:44] Um, it's an opportunity to wake up every day and think of new ways to do things, but you have to be willing to change your mindset. I'd go back to that first part of this discussion: you have to start with a flexible mind, where you look at it not so much in terms of how it relates to what you're currently doing, [00:48:00] but rather how it serves the end customer, whoever that is.

[00:48:03] And the last thing I'll say on this whole topic is the challenge that I would give you: form this group and give them an opportunity to say, listen, um, imagine yourselves building a competitor to this association. Here's what your job is for the next three days at this hackathon or ideathon or whatever you call it.

[00:48:23] This group of 5, 6, 7 people, whoever it is, is going to build a competing organization. This is what this competitor would do. This is how it would create value in areas where our organization is challenged to create value. This is how this organization would deliver services and products to the community. This is how this new competitor would basically crush the existing business model.

[00:48:44] Then go build that, right? So why wait for someone out there that's entrepreneurial, maybe it's an association, maybe it's not, to come and eat your lunch? Why don't you go do that yourself? Don't think about the constraints of the past. Think about the way you can best serve the market. And if you [00:49:00] put a group of people together like that, with that kind of a charge, I think you come up with really interesting ideas.

[00:49:05] Go after your own business model. And with AI, you have essentially limitless intellectual resources available to you, and they're on demand. And there's nothing stopping you from doing this, so I'm encouraged by this. I think this type of stuff is exciting, and I think every association and every nonprofit on the planet should take advantage of these opportunities and go do this.

[00:49:26] Mallory: Everyone, hopefully you feel more empowered to tap into your leadership, your crowd, and your lab within your association. Love the idea, Amith, of going out and building your own competitor before someone else does it. Thanks for tuning in, everyone, and we will see you all next week.

[00:49:43] Amith: Thanks for tuning into the Sidecar Sync Podcast.

[00:49:46] If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to [00:50:00] sidecar.ai.

Post by Mallory Mejias
July 7, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.