Summary:
In this episode of Sidecar Sync, co-hosts Amith Nagarajan and Mallory Mejias dive into the explosive rise of OpenClaw—an open-source AI agent that’s taken the tech world by storm. From being dubbed “the next ChatGPT” at NVIDIA GTC to triggering widespread security concerns, OpenClaw represents both the promise and peril of the agent era. Amith breaks down what OpenClaw actually does, why its viral adoption matters more than its underlying tech, and how its vulnerabilities highlight the risks of giving AI broad system access. The conversation expands into the bigger picture: the shift from models to “harnesses,” why AI agents are already reshaping workflows, and what association leaders should do right now to stay ahead—without putting their organizations at risk.
Timestamps:
0:00 – Introduction
🙋♀️ Chat with Grace on the Sidecar website
👥Provide comprehensive AI education for your team
https://learn.sidecar.ai/teams
📅 Register for digitalNow 2026:
https://digitalnow.sidecar.ai/digitalnow
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification
📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
OpenClaw ➔ https://openclaw.ai
Claude ➔ https://www.anthropic.com
ChatGPT ➔ https://chat.openai.com
Gemini ➔ https://gemini.google.com
MemberJunction ➔ https://www.memberjunction.org
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00:14 - 00:00:09:17]
Amith
Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.
[00:00:09:17 - 00:00:24:04]
Amith
My name is Amith Nagarajan.
[00:00:24:04 - 00:00:25:24]
Mallory
And my name is Mallory Mejias.
[00:00:25:24 - 00:00:39:05]
Amith
And we are your hosts. And today we have a whole bunch of interesting news and I don't even know if I'd call it news, Mallory, but just stuff that matters to associations, which I think is potentially newsworthy.
[00:00:39:05 - 00:00:43:11]
Mallory
How do you define news, Amith, if it's not stuff that matters?
[00:00:43:11 - 00:01:00:10]
Amith
Well, I think if you actually watch the news these days, you might find a lot of stuff that may not matter or may not matter to you. But hopefully this stuff matters to our association listeners who we care deeply about. And our goal is to help you on your artificial intelligence and transformative journey.
[00:01:01:13 - 00:01:08:21]
Amith
So a lot to cover, as always, in the world of associations and the world of AI coming together here in this pod. How are you doing today, Mallory?
[00:01:08:21 - 00:01:39:21]
Mallory
I'm doing well, Amith. Not to sound like a broken record, but all I think about and all I talk about is my house. So updates are we have a microwave now. That's big. That's really big. Went to Trader Joe's, stocked up on some freezer meals. Still no countertops, still no oven or stove, but that's okay. Oh, and we got a dining table. So I know everybody was really worried about us having a dining table, news stuff that matters, and we do. So basically, that's all I've been thinking about, not as much AI this past week, Amith, I will say.
[00:01:39:21 - 00:01:46:22]
Amith
That stuff matters a lot. But have you been using OpenClaw to connect to Uber Eats to automatically order your dinner and stuff like that?
[00:01:46:22 - 00:01:59:09]
Mallory
You know, I haven't. And I'm excited to talk to you today about OpenClaw, especially after reading about all the security concerns. There are some things I want to try out, and then I think, "Oh, maybe I should learn a little bit more before I do that."
[00:01:59:09 - 00:02:18:13]
Amith
Yeah, there's lots to cover there. I think that's going to be interesting. Some folks in our community may not have been as captivated by the breathtaking reporting of OpenClaw in the broader AI community. And that may be good, actually, because my short takeaway on this, which we'll talk about more, is it's not that big of a deal.
[00:02:18:13 - 00:02:21:17]
Mallory
Ooh. Well, I didn't know that, Amith.
[00:02:21:17 - 00:02:23:20]
Amith
Okay. Yeah, it's counter to the consensus, which is why I wanted to cover it.
[00:02:24:21 - 00:02:45:15]
Amith
But it's counter to what everyone else is hyperventilating about with respect to OpenClaw. So I'll share more on that in a minute. So if I've gotten you riled up a little bit because you love OpenClaw, good. And let's get to that conversation when we get to it. But yeah, it's good to hear that you guys are making progress on the home front. If you can't get that figured out, then it's really hard to do much of anything else in life, right?
[00:02:45:15 - 00:02:57:13]
Mallory
Yeah, you really start to realize how important just basic things are. It has made me very appreciative. Having a home, having a functional kitchen, just things you don't really think about until you don't have them, and then you realize, pretty important.
[00:02:57:13 - 00:03:22:15]
Amith
Yeah, totally. And office moves are like that too. We just moved into a new office here in New Orleans about three months ago, and we were kind of upside down and inside out for probably a month, month and a half afterwards. And our good friends at ASAE in Washington just moved into a beautiful new office. I haven't had a chance to visit it yet, but looking forward to it. But I know they were busy dealing with that. And those of you who've ever done office moves, this is where I think AI might help you with your next office move.
[00:03:23:16 - 00:03:33:14]
Amith
Perhaps with planning, perhaps with some aspects of execution, but until physical AI gets here, it's going to be a lot of stuff we still have to do when we're changing our locations, whether on the home front or at work.
[00:03:34:24 - 00:03:52:10]
Mallory
Exactly. Amith, I wanted to ask you, because we are on the brink of a really exciting new launch for Sidecar, with a new era of AI learning and our AI Learning Hub, not out at the release of this episode, but it will be out, I believe, next week. So can you tease anything for us? Can you tell us what's different about the new AI Learning Hub?
[00:03:52:10 - 00:04:33:16]
Amith
Yeah, we're right on track with our goal of updating the entire Learning Hub by the end of this quarter. So starting April 1st, you're going to have a completely new experience. We are reorganizing the content into eight different functional tracks. And these are essentially tracks organized by your business function. So if you're a membership professional or a finance person, or if you're in marketing or events, we will have functional tracks for you. Now, the courses do overlap in some cases. You might have a course like, for example, the prompting course that we talked about on this pod a few times in the past, kind of in the way back machine. But that is important for everyone. And it's still as important today as it was two years ago.
[00:04:34:18 - 00:06:22:08]
Amith
And so these tracks, these functional tracks, are not mutually exclusive of one another, but they're organized in such a way where if you're getting started with AI and you're not sure where to go, you have a very clear path where you can say, OK, I'm going to do this piece first, then this next piece. Whereas our current content, first of all, has less in the way of function-specific content. We have a whole bunch of new courses that are going live at the same time that are really diving deep into these use cases around different association specific requirements and needs. But then we've organized them in such a way where it's very easy for people to go down these functional paths. So you really understand what you need to do if you're coming in from a membership viewpoint, or if you're the CEO, you want to look at it from the executive leadership viewpoint. And of course, you can jump around when you access the Sidecar AI Learning Hub, you get everything. And so you can kind of go through it and very easily pick and choose. Now, there are some goals over time where we're going to have different plans where you'll pick different tracks depending on the plan that you're on in order to make it more affordable for more organizations. But if you're on the current plan where you have everything, you continue to have everything. So I think the key to this is we want people to have choice, but we also want people to have the easy button where they can just go in and get started and not be overwhelmed by the volume of content. That's the biggest criticism we've gotten over the last roughly six to 12 months. You know, Mallory, when you and I first recorded the actual human recorded videos for the first version of the Learning Hub, which I think we did in late 2024, right? Because we launched it at the beginning of 2025 in its current kind of incarnation. I believe that's around the right timing. 
You know, you and I both, we just kind of, you know, sat at our computers or stood at our standing desks and just talked and we recorded over these slides and we had a little bit of interactivity. That's what it was. And I think we updated it once or twice, maybe if I recall correctly.
[00:06:22:08 - 00:06:24:20]
Mallory
Yes, we did before the avatars, maybe twice.
[00:06:24:20 - 00:06:38:17]
Amith
It was kind of a giant pain. It was. And we were updating it pretty regularly. People were like, "Oh, you guys are updating it so frequently." But you and I were saying, "Man, you know, we've got updates and stuff way more frequently." And so then we switched to the AI generated content. That was about a year ago when we made that switch.
[00:06:39:20 - 00:06:42:17]
Amith
So it was kind of like, maybe it was more like May of 2025.
[00:06:43:22 - 00:08:46:23]
Amith
And then from that point forward, we started regenerating content anytime little bits of content changed. So if you're not familiar with how that works, the basic idea is that we have this really rich AI agent pipeline that we built that uses a lot of off the shelf tools like HeyGen and ElevenLabs and tools like that, as well as a bunch of Claude and a bunch of Gemini. And it strings it together in such a way where whenever we change the source content, it automatically has the downstream effect of generating fully rendered videos that have an AI avatar speaking with the latest content, showing the latest tools. It's really powerful. We've had that for about a year and live for about a year. Now what we've done is really, really enhanced that. Our team has tripled in size in the last year. We have thousands and thousands of people in the Learning Hub. We have a lot of activity. And so due to all that growth, we've invested really heavily in the next generation of technology, which is allowing us to really have this Cambrian explosion, if you will, of content availability. So there are, I think, something on the order of 40 different courses available. And they're now organized into these eight functional tracks. It's really quite cool. I'm very, very excited about that. And we have more in store for later this year. Our roadmap continues to evolve where we're going to be adding a lot of personalization and AI assistive features in the actual learning experience. So when you first log in at the beginning of April, it'll be reorganized, but the same basic experience. You go through a course player, you interact with certain course assets. There will be more interactivity. There are new types of modalities, not just videos, that we think will really improve the learning experience. But probably around the middle of the year, we will have some significant enhancements to the learning experience itself with deep personalization. 
As you go through your learning path, you, Mallory, will be able to say, "Hey, I'm more or less interested in certain topics." And it will continually recommend different kinds of assets to you, whether it's from the Sidecar blog. Of course, this podcast would be high on the recommendation list, I think, a lot of times. Certainly if, I don't know if it would recommend it to you if you logged in, that'd be kind of funny to find out.
[00:08:47:23 - 00:08:48:09]
Mallory
Oh, listen.
[00:08:48:09 - 00:09:39:04]
Amith
I mean, I listen to our podcast sometimes. It's pretty fun to do that. But anyway, so the personalization is something we've been, you know, we collectively as a group of companies have deep, deep personalization expertise going back 15 years. Our first AI company is a personalization company called Rasa, which many associations use. And so we're using some of the same technology for the next generation of our LMS, essentially. And we're going to be putting Grace, our new audio assistant, into Learning Hub. So you'll be able to talk to Grace at any point in time, use Grace as a tutor, talk to Grace about your learning trajectory, and really get help from, you know, much more of a one-to-one live instructional assistant kind of feel. So we're really excited about that too. But coming right now, right around the corner, that's kind of the preview for the summer movie. But what's happening at your theater next week is functional tracks, and they're pretty cool.
[00:09:39:04 - 00:10:10:12]
Mallory
Well, that was a great explanation to me. I feel like when you and I initially worked on that first Learning Hub, and let me say it was mostly you, I did like some of the courses, but you recorded most of it. I feel like we were really trying to get the association audience to understand why AI mattered and how important it was and how impactful it would be. And I feel like, correct me if I'm wrong, now, especially in our sidecar audience, people are like, "Look, we know that this matters. We just aren't 100% sure where to start." And so it sounds like this version of the Learning Hub can kind of help them with that.
[00:10:10:12 - 00:10:18:14]
Amith
It helps people more deeply engage and continue their learning journey. It's not a one and done. It's never been that. I mean, AI learning has always been a continuous journey.
[00:10:19:19 - 00:10:51:03]
Amith
But being able to engage people that are further along in their awareness, their appreciation, and their ability relative to AI as a whole definitely is something that this is geared towards. So if you're an avid Sidecar Sync listener and you've been to our live events and you've gone through the AAiP certification, there's a lot of great stuff coming your way in the Learning Hub that's going to help you just keep going. We didn't even mention the Use Case Library, but this is a library of, I think, about 100 videos now of very specific... Yeah, it's grown a lot. You started that too, Mallory, I think.
[00:10:51:03 - 00:10:53:17]
Mallory
Yeah, I did. And there were only like 10 videos back then. Now, 100?
[00:10:53:17 - 00:12:21:23]
Amith
That's crazy. Well, we've used a lot of AI to help. And we have full-time people focused on this now, which is also helpful. But those Use Case Library videos are proving to be really popular. We're sending them out. We're going to be doing recommended videos. So giving you a feed of really relevant things. So for people who are further along in the journey, coming back to your question, 100%. This is what we're targeting for making this a really valuable long-term asset. However, I would just throw a little asterisk on that comment and say, even though, yes, our community of people who've been with us for a while are further along and need deeper and sometimes more technical education, which we're focused on, it's really, really, really important that we're super open and welcoming and bring people in who have no concept whatsoever about AI, which is still actually the vast majority of this space. I spend a lot of time talking with association leaders. Many of the people in top leadership positions themselves have yet to really dig deep personally and understand what AI actually means. They might have said, "Hey, I have staff people that are experimenting. We have a policy. We have ChatGPT or we have Claude." And they kind of feel like the checkbox has been checked a little bit. They're not saying that, but that's kind of the impression I get. And that's unfortunate because some of those leaders themselves don't understand the strategic implication of AI because they look at it as just a productivity tool, which of course it is. And it's an unbelievable productivity tool. It's like, Mallory, if I were to say, "Hey, you know what? Try doing your job for the next two weeks without Claude."
[00:12:21:23 - 00:12:28:01]
Mallory
Well, I could physically do it, right? The laws of physics wouldn't prevent that, but that would hurt, Amith, that would really hurt.
[00:12:28:01 - 00:12:29:22]
Amith
That would suck really bad.
[00:12:30:22 - 00:12:44:00]
Amith
It would not feel good. So I would hate to do that myself. And so the point is productivity gains, of course, are incredibly important, but it's more about the shift in what we can do. Just a little side note on that.
[00:12:45:05 - 00:13:19:12]
Amith
So we have this group of people that we hired in January and February down here in New Orleans called the Technology Fellows. And it's a program we've been doing for a while, actually across many companies for decades, but in Blue Cypress for about two years. And it's a group of fairly early career, most of them are literally right out of undergrad, folks. And we had a couple of people in from out of town visiting and we had the people that are here. So I took them all to dinner one night. There were a bunch of great conversations, and we talked a lot about the trajectory of AI, etc. But one of the fellows asked me, "Why did you hire all of us?"
[00:13:21:00 - 00:15:15:17]
Amith
And I'm like, "So what do you mean?" He's like, "Well, we're talking about how the relentless progression of AI is going to double in capability at least every six months, maybe faster. Programming is essentially a solved problem. So what does that mean? Why are you hiring a bunch of programmers, basically?" And I said, "Well, you can look at it in two ways. You can look at it as I've automated and made everything I've ever done in the past really efficient. That's awesome. So we've taken 90% of the labor, 95% of the labor, 99% of the labor, even 100% of the labor out of that which we used to do." Okay, cool. And that's how people are seeing this for the most part in the C-suite. What they're not seeing is that you can actually do so much more. And that was my answer to these folks. I said, "Listen, if all I wanted to do was support the eight companies that we already have in Blue Cypress and build the same products and just incrementally improve them, I don't need any of you. We have Claude for that. We can automate 100% of that. But number one, that's not serving our sector. We believe this sector needs way, way, way more in the way of solutions and services to actually solve the transformative challenge, not just of AI, but just what's happening in the world, to serve their members and to accelerate their mission impact. So there's more of us, in our opinion, that's needed in the world. So we plan to launch dozens of different software products in the coming year or two. We're going to accelerate. We've been going pretty fast. People tell me that all the time. But you ain't seen nothing yet, is kind of the point. So we're leaning into it. We're adding brilliant, hardworking humans, and we're adding a lot more AI." So coming back to what your question was and that rabbit hole I kind of went down, ultimately, I do think there's a lot of people who need to start from the very, very basics. And we still have tons of that content. 
So if you or someone you know, a friend, in quotes, needs the very basics, that's totally fine. There's nothing wrong with that. And we can get you on-ramp from literally zero knowledge. And even if you don't like AI, if you're afraid of it or if you actively dislike it,
[00:15:16:17 - 00:15:24:09]
Amith
probably you should still learn about it because it's pretty important. So we'll help you with that too. But yeah, with the new Learning Hub, there's something in there for everyone.
[00:15:25:14 - 00:15:39:10]
Mallory
I really love what you said, Amith, even though it was a little rabbit hole, thinking about what you can do instead of just automating what you've done in the past. And I also think that dinner conversation would have been an excellent podcast episode. So next time you've got to get everyone's permission, record them, and then we'll just post it.
[00:15:39:10 - 00:15:46:17]
Amith
You know, some of those tech fellows, as they get a little more experience, maybe we'll bring them on the pod and have them talk about their experience in the first few months.
[00:15:46:17 - 00:18:21:18]
Mallory
I would love to interview them. Well, we've got to get to the topic at hand, Amith, which I'm now learning is actually not that big of a deal, but we're going to spend the whole episode talking about it. If you've been anywhere near AI news in the last few weeks, you've probably heard the name OpenClaw. Last week at GTC, that is NVIDIA's annual developer conference, one of the biggest events in tech, Jensen Huang, NVIDIA CEO, and arguably the most influential person in the AI hardware world right now, called OpenClaw "definitely the next ChatGPT" and compared it to the arrival of Linux and HTML. It is the fastest growing open source project in history. It's also been at the center of a massive security crisis, and it may be signaling a shift in where value in AI actually lives. We're going to break all that down today: what OpenClaw is, why it matters, what went wrong, and how the industry is responding. And of course, what that means for you as associations. So what is OpenClaw? What are we talking about here? It is an open source tool created by Austrian developer Peter Steinberger as a side project that takes an AI model (you could use Claude, ChatGPT, or a free local model you pick) and gives it the ability to actually do things on your computer. So read and edit files, run programs, control your browser, send messages through Slack or WhatsApp, manage your calendar, the list goes on. OpenClaw turns AI from something that you're just talking to into something working for you. And it's not a product from one of the big AI companies. It started as a one-person open source project that went by a few different names. Steinberger built it independently and then it exploded. It became the most starred project in GitHub history, GitHub being where developers share and collaborate on code, and stars are essentially likes that signal how popular a project is. OpenClaw crossed 250,000 stars in a matter of weeks. 
And for context, React, one of the most widely used programming tools in the world, took over a decade to get there. Jensen Huang said at GTC that OpenClaw exceeded what Linux did in 30 years in mere weeks. And the cultural phenomenon has been pretty staggering as well. In China, people have been lining up outside tech company headquarters with laptops to have engineers install OpenClaw for them. There are online courses already teaching people how to set it up, TikTok videos about it, and developers buying dedicated Mac minis just to run OpenClaw around the clock as a personal assistant. So for this part of the pod, I just want our audience to understand what OpenClaw is, and kind of in a short way, I'm not saying you have to keep it short, but for our listeners: OpenClaw is this and this is what it does. So what would you say?
[00:18:21:18 - 00:19:13:03]
Amith
So if you're familiar with Claude Code, Claude Cowork, or Codex, these tools essentially give an AI model access to tools. These are agents. And what they allow you to do is to let the AI kind of roam free in a way. And so you give it an objective and it does stuff. And OpenClaw is very much built with the same general idea, in that with OpenClaw you basically pick your model or models and then you give it objectives. You tell it what you want to do. But OpenClaw has access to a lot broader toolset than what something like Claude Code or Claude Cowork is enabled to do out of the box. That's one key difference: it basically has access to your entire computer. It can do a whole bunch, it has access to everything on the internet, and you can close things down, but by default, it has access to everything, which is both good and bad. We'll come back to that, I'm sure.
[00:19:14:05 - 00:19:49:00]
Amith
The other part of OpenClaw that was novel is the ability to communicate with it from everywhere, so through Telegram, through WhatsApp, being able to directly connect with OpenClaw. That is actually an excellent innovation. And to be clear, by the way, I'm not anti-OpenClaw. I think it's really cool. But the reason I say it's not that big of a deal, I'll come back to later in more depth. It has to do with what's the utility from this tool and are there other ways to do it. Those are the basic features. It's essentially an open-ended agent that can run in a continuous loop. It's sitting there available, waiting to talk to you, waiting to take your commands through Telegram, through WhatsApp,
[00:19:50:07 - 00:19:58:08]
Amith
and then it does stuff. The stuff that it does is only bounded by the intelligence of the model that you're running and the tools that are available to it on your computer.
[00:20:00:06 - 00:20:13:00]
Amith
Therefore, it has been used for things that more bounded agents like Claude Code couldn't easily be used for, unless you're a developer and you know how to tune Claude Code in a certain way to make it do whatever you want.
[00:20:14:10 - 00:20:32:23]
Amith
It's definitely taking the world by storm because of how clean and simple it is, and how you can connect to it from anywhere. A lot of really cool innovations in it, for sure. I'm a big fan of progress, and I think that what was done there is awesome. I'm really, really positive on the concepts. I have more to say on this, but that's the basic intro.
[00:20:32:23 - 00:20:47:04]
Mallory
That was going to be my next question, is it sounds like Claude Code or Claude Cowork? We've talked about this on the podcast before, but you were saying it's different because of the amount of tools it can connect to and fewer walls around it, or fewer parameters in terms of what it can do?
[00:20:47:04 - 00:22:11:01]
Amith
Yeah. I'll give you another parallel. Within MemberJunction, our free open source AI data platform, there's an agent architecture. The agent architecture has this concept where an agent can literally take instructions from you and it can run over and over and over again. Basically this idea of what's called an agentic loop. Think about a prompt that basically runs again and again and again. Each time it runs, you feed it the context of what it did the last time, and you also allow it to tell you what to do. The prompt says, "Hey, the user wants to create a blog post about topic A, B, and C." The AI decides, "Oh, okay. Well, maybe I should do some web research." One of the tools it has access to is web search, so it requests the tool called web search for certain keywords. Somebody runs the web search tool, and that somebody is what we call an agent harness or the agent system, which would be what OpenClaw or Claude Code or MJ agents do. Once the results come back from the tool, like search the file system, search the web, et cetera, then you run the prompt again. You say, "Okay, here's the result from the web search," and then the prompt runs again. Now the AI has the context it needs to think of what to do next. "Okay, now I got my web search done. The next thing I'm going to do is think for a while about what some of the best topics might be for the blog. Oh, let me check our site and make sure that I understand what the most recent 10 blogs were."
[00:22:12:02 - 00:22:24:07]
Amith
That's another tool call or another thing that the agent can do. Essentially that's what these agents are doing. Now there are other kinds of agents. There's workflow style agents that are much more like A then B then C type agents, and that's really valuable too.
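The agentic loop described above can be sketched in a few lines of Python. This is purely illustrative: the function names, tool registry, and message shapes are made up for the example, not the actual API of OpenClaw, Claude Code, or MemberJunction, and the model call is stubbed out with scripted decisions.

```python
# A minimal sketch of an "agentic loop": a prompt runs repeatedly, each
# time seeing the results of the tool calls it asked for on the previous
# turn. All names here are hypothetical, not any real framework's API.

def web_search(query):
    # Stand-in for a real web search tool.
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def run_model(context):
    # Stand-in for an LLM call. A real harness would send `context` to a
    # model and parse its reply. Here we script two turns: first request
    # a tool, then finish once a tool result is in the context.
    if not any(step["role"] == "tool" for step in context):
        return {"action": "tool", "tool": "web_search",
                "args": {"query": "association AI trends"}}
    return {"action": "finish", "answer": "draft blog post outline"}

def agent_loop(objective, max_steps=10):
    context = [{"role": "user", "content": objective}]
    for _ in range(max_steps):
        decision = run_model(context)       # the prompt runs again...
        if decision["action"] == "finish":
            return decision["answer"]
        # ...and the harness (the "somebody" in the transcript) executes
        # the requested tool, feeding the result into the next iteration.
        result = TOOLS[decision["tool"]](**decision["args"])
        context.append({"role": "tool", "content": result})
    return None

print(agent_loop("write a blog post about topics A, B, and C"))
```

The key design point is that the model never executes anything itself; the harness runs the tools and decides what the model is allowed to touch, which is exactly where OpenClaw's wide-open defaults differ from more bounded agents.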
[00:22:25:17 - 00:23:09:23]
Amith
The idea of an open-ended agent, or what we call a loop agent, is basically what OpenClaw is. There's a lot of people who have done this. There's LangGraph, there's CrewAI, there's AutoGen from Microsoft. There was even a thing called BabyAGI about two and a half, three years ago that came out that was very, very similar to this. What OpenClaw has done is it made it way easier to set up, and also the timing is great. It's kind of like the Apple Newton relative to the Palm Pilot or relative to the iPhone. Similar concepts at different points in time in terms of the trajectory of tech. When these other technologies first made the scene, the LLMs were so weak that they couldn't do that much and they would make errors. Now even an average LLM harnessed by OpenClaw can do really extraordinary stuff.
[00:23:09:23 - 00:23:14:07]
Mallory
Okay, so it's the timing and it's the ease, it sounds like.
[00:23:14:07 - 00:23:44:00]
Amith
It's the simplicity. I also give a lot of props to the developer for baking in a cleaner way to communicate with the agent through Telegram, through WhatsApp. That's really important. You want to meet your users where they're already at. Actually with MJ, we just added a feature to MJ's agent systems that you don't even have to go into MJ, you can do it through Slack or Teams, which is kind of like the more corporate way of doing the same thing as what you do through Telegram or WhatsApp. But apparently actually our military uses Telegram to share secrets, so perhaps OpenClaw is getting access to those too.
[00:23:45:01 - 00:23:46:06]
Mallory
News, stuff that matters.
[00:23:47:16 - 00:23:53:06]
Mallory
Amith, I want to get your take on Jensen Huang's statement around this definitely being the next ChatGPT.
[00:23:54:18 - 00:24:46:19]
Amith
I mean he's not wrong, in that what I think he's referring to isn't so much the magnitude of the technology itself. Linux is an enormously complex and powerful thing, and ChatGPT is a much "bigger" thing if you think about it in terms of the technology. I think what he's referring to more is the moments in time that kind of define the broader public's awareness of things changing, right? And so I've been saying this year there's going to be a ChatGPT moment for audio AI, I think that's coming. I think this is a ChatGPT type moment for broader awareness of what agents can do. Because we in our little world of AI adopters and association AI people, we're pretty aware of Claude Code, Cowork, etc. But the rest of the world, a lot of people just don't realize you can do these things, and if something goes that viral like OpenClaw, it really builds public awareness and then it paves the way for a lot more. So I think he's right that it's a big deal in terms of public awareness.
[00:24:47:20 - 00:24:53:18]
Amith
My main point would be there are other ways to do the same thing that are far more secure and safe, which we'll talk about.
[00:24:53:18 - 00:26:00:12]
Mallory
Well, that's a great segue because I want to talk about the security crisis around OpenClaw that we've seen in the past few weeks. Within a few weeks of it going viral, of course, security researchers started pulling OpenClaw apart and what they found was alarming. The most critical vulnerability worked like this. If you visited a malicious website while OpenClaw was running on your machine, that site could silently steal your authentication credentials and take full control of your agent. Because the agent has permission to run commands and access your files, that meant total control of your machine from potentially one web page. A broader security audit found about 500 vulnerabilities in total with eight of them being critical. Over 30,000 instances were found exposed on the open internet with zero authentication. The project's plug-in marketplace, Claw Hub, had over 800 malicious plug-ins out of roughly 10,700. So nearly 8% of everything in the store was designed to steal credentials or install malware. Attackers even set up fake OpenClaw installer websites that became the top result in Bing's AI search, tricking people into downloading malware instead of the real tool.
[00:26:01:16 - 00:26:36:01]
Mallory
China's National Computer Network Emergency Response Team issued a formal warning about OpenClaw security risks, specifically flagging threats to critical sectors like finance and energy, and a study from Token Security found that 22% of organizations already have employees running OpenClaw without IT approval. Shadow AI with deep system access that security teams don't even know about. So this is the core tension of the agent era, the same capabilities that make OpenClaw incredibly useful. Access to your files and emails and apps and command line are exactly what make a security flaw catastrophic.
[00:26:37:03 - 00:26:37:13]
Mallory
Amith,
[00:26:38:17 - 00:27:00:15]
Mallory
I'm guessing you're not surprised by this. I know we often talk about open source projects and something that you said on the pod before is they're open. So if there is something malicious talking about like an open source Chinese model, people could go in and see it. Is that what we're talking about here or is this just like a function of the fact that this can control your machine that we're seeing all of it?
[00:27:00:15 - 00:27:34:09]
Amith
Yeah, I mean, look, so a couple different things. Open source is actually really good from a security perspective because it's discoverable. You can review it. There's a lot of, especially for widely used projects like that, you very quickly get people digging in and finding these holes. And then hopefully they get patched and people are a little bit more discerning about where they download stuff from and all that. But the general idea, independent of like a malware version of OpenClaw, which people unintentionally download, which is, that's a problem for a lot of software. You say, "Oh, I want to download whatever, some utility," and you go to some random website that seems to have the thing that's been going on for a long time, spoofing sites and getting people to download garbage.
[00:27:35:13 - 00:28:57:04]
Amith
But with this, because of the volume of interest and the amount of content out there about OpenClaw, I think there's been more of it. Now what it does, what OpenClaw itself does, independent of these plugins and malware attacks and so forth, it by itself is dangerous in the sense that essentially the philosophy behind it is, let it do anything. And then you can clamp down on what it shouldn't be able to do, which you can do with other tools as well. But it starts off with the assumption of, "I can do anything I want." So I would make one recommendation to begin with is if you do want to experiment with this, yes, do the thing that a lot of people have been doing. Go out and get a Mac Mini. It's a great device for doing exactly this. You don't need a brand new one. Go on eBay and buy like a junky old one because you're probably not going to run a local model. That's one thing like a lot of people have a kind of mistaken perception of is like the things that people are doing with OpenClaw that are really impressive generally are using higher end frontier models. Local LLM inference on a Mac Mini, you're going to maybe be able to run like a $4 billion or $8 billion parameter model, anything bigger than that. Unless you have a ton of RAM or something like that, which Mac Minis generally don't have, you're going to have a hard time with that. You're not going to be able to do any reasonably intelligent model. Even with the model compression we've talked about on this pod a bunch of times and how smaller models are getting smarter, the more impressive things that you want to be able to do with OpenClaw, which draws you in, probably means you're going to be using a Clod or a chat GPT or something like that behind the scenes.
[00:28:59:02 - 00:29:33:19]
Amith
That's actually important because the probability of getting spoofed and used in a way is way higher with a dumb model. A dumb model that has access to lots of tools versus a really smart model that has access to a lot of tools, I'd rather have the really smart model have access to those. They're less likely to wipe out your computer. Coming back to how to do this, if you want to do it, make sure it's a contained environment. Don't run OpenClaw directly on your own computer. If you have a big enough machine and you're familiar with a thing called Docker, you can run a Docker image and run OpenClaw in it where it can't get out. It can only access the resources in that environment.
[00:29:35:10 - 00:29:43:06]
Amith
There's actually ways to break that too, but ultimately the idea behind the Docker or the Mac Mini is it's a contained environment. It's a sandbox.
[00:29:44:16 - 00:29:51:09]
Amith
You want to experiment with it, by all means do so, but protect yourself and don't do it on your corporate network. Don't do it authenticated to any corporate resources.
[00:29:52:13 - 00:30:09:07]
Amith
Just play around with it that way. I guess the big issue is that when you have something that blows up this fast, usually it's not expected. No one can predict this kind of growth. What was a kind of a side project just to automate some interesting home projects, I think was the original inspiration,
[00:30:10:13 - 00:30:22:15]
Amith
very quickly turned into something massive. Of course, there's security vulnerabilities. This thing hasn't been hardened. It hasn't been tested. It's out there in the wild and you've got at this point probably millions of people that are using it in some way.
[00:30:24:17 - 00:31:08:10]
Amith
Again, I'd go back to why. Why are you interested in using OpenCLAW? It's likely to be able to create more degrees of freedom for the AI to get more stuff done or to do tasks that it can't do for you right now through the interfaces you're used to. My question is, are there other ways to do that? The answer is yes. I'm happy to talk about that more, but OpenCLAW, the reason I say it's not that big of a deal is literally within weeks, everyone else is like, "Oh, okay. We can add some of those same features into Cloud Co-Work. Into Cloud Code, into Codex, into Gemini, and all these other tools." And Anthropic's definitely ahead of the pack in terms of the major labs and adopting features that are OpenCLAW-esque. So the innovation's great, but I think you probably will be fine with your current tooling, is my guess.
[00:31:08:10 - 00:31:30:06]
Mallory
Okay. Yeah. You kind of jumped a little bit to topic three, Amith, but I do want to talk about that a bit. So NVIDIA had a response, which was NemoClaw, and that's an enterprise-grade wrapper, W-R-A-P-P-E-R, that installs on top of OpenCLAW in a single command and adds a sandbox, runtime, policy-based access controls, and a privacy router that keeps sensitive data local.
[00:31:31:15 - 00:31:51:20]
Mallory
I know OpenAI hired Peter Steinberger, the creator of OpenCLAW, which I thought was interesting, and Anthropic released CLAW dispatch, which one commentator called OpenCLAW for grown-ups. So we are seeing kind of this broad industry response, like you said. So what... And you've kind of teased multiple times, there are other ways to do this. So what are those other ways, Amith?
[00:31:51:20 - 00:33:17:23]
Amith
Yeah, I mean, look, I'm a big fan of Anthropic and the CLAW products, and so I would tell you a couple of things that you can do in that ecosystem. One is, you mentioned it already, Mallory, dispatch. So dispatch is a way of having a continuous thread with co-work. It's a way for you from any device, and it can be your phone, it can be another computer, to be able to interact with the CLAW co-work environment on your computer. So the idea is you have co-work running on your computer, you decide which tools it has access to. You can give it access to your entire file system, or you can limit it to just a handful of things. You can give it access to the web, or you can, once again, limit it to just a handful of websites. You can give it access to computer use, where CLAW co-work can actually drive your web browser and log into your bank account and do other things like that, which maybe should cause you to be a little bit concerned, but at the same time, is the capability worth noting? You choose what to give it, but the point is, is you can actually interact with CLAW co-work through dispatch from a remote location. They're adding the wiring in to be able to do it through WhatsApp and Telegram, that's same playbook, but also corporate channels, like I mentioned, like Slack and Teams and Google equivalents of that. So that is one example. CLAW code has another thing called remote control, which is basically the same idea. It's a little bit different. It's more technical. It's for directly controlling different instances versus the dispatch ideas, more of like you just control your CLAW co-work instance across all of the different conversations.
[00:33:19:08 - 00:34:45:06]
Amith
And I'm actually not up to speed on whether Gemini has an equivalent thing happening. I don't know that they've announced anything, but I can't imagine that they'll be too far behind. Right. And Kodak's has very similar things. I think they released something similar to remote control. I forget what they call it. And they are in the process of rolling out essentially all the same features. And so these are all AI agents. And AI agent, again, is a smart AI model. And that model runs in a loop. And that model is given a task and given a set of tools, just like you would with a human that you hire, right? And you say, "Hey, I'm going to hire this person. Their job is to clean out my garage. And to do that, I'm going to give them access to the garage." That's a tool essentially. And I might give them a bunch of trash bags to throw away all the garbage I haven't looked at in 20 years. And that might be the task. And I don't tell them, "Oh, start over here on the left side, then work your way to the back." I could do that. And they may or may not follow it because they're human. And the AI is the same way, right? I give it a task. I can just give it just the outcome I want. I want 3D blogs all about the meaning of life. And it will go and create them. Or I can give it a lot more specific instructions and hopefully it will follow them. But these are kind of open-ended what we call loop agents. And there's tons of ways to create them. OpenClauds by no means the only way to do it. They just made it really easy. And they made it easy for the average person to install it and give it access to everything. So instead of the "annoying" security messages from Cloud Code where you have to approve certain things, which by the way you can turn those off.
[00:34:46:11 - 00:34:56:13]
Amith
But in fact they call it "dangerously skipped permissions" is the flag that you have to set when you invoke Cloud Code if you want to not have any of those checks. And there's a reason they called it that.
[00:34:58:01 - 00:34:58:17]
Mallory
I love that topic.
[00:34:58:17 - 00:35:05:07]
Amith
There are ways. Yeah, and it's appropriately named. I do use that, but I use that in sandboxes.
[00:35:06:18 - 00:35:35:01]
Amith
But anyway, really the core innovation has to do with the simplicity, the ability to call tools, the ability to run in a loop. That's been going on for a long time. And there's a lot of ways to do that. There's ways to do that very safely and securely. You can go to one of the major labs. They all have agent-type tools. You can use open-source software like Langgraph, Langchain. You can use Member Junction, which has an agent framework. There's a bunch of ways to do agents that you can control a lot more tightly than just running OpenCLAW on all your computers.
[00:35:35:01 - 00:35:45:22]
Mallory
And you just touched on this, but I wanted to ask because you mentioned open-source Member Junction AI data platforms. So the agent framework on top of that is pretty much the same thing as OpenCLAW?
[00:35:47:11 - 00:37:19:14]
Amith
Well, I like to think of it as much more enterprise-grade. So it's kind of like, you mentioned Nemo Cloud with a lot of policies and security and logging and auditing. One of things OpenCLAW didn't do initially, I think it's been updated to do this, is to basically track the actions it took. But when you're running an agent, it's important to have guardrails or boundaries for the agent. And that has to do with tooling, but it also has to do with stuff like, am I going to limit it in terms of the number of times it can run? Am I going to limit it in terms of how long it can run? Am I going to limit it in terms of how much money it can spend? Am I going to monitor it as well in terms of tracking every single prop that it runs, every single time that it does anything? And do I potentially have other either heuristic-based or agent-based things looking at the logs to say, hey, are there holes here? Are there issues here? And so more enterprise-grade agent platforms will do these types of things. Of course, MJ does all that stuff and a whole bunch of other things to make it possible to have a whole lot of safety around these types of systems. What you're talking about essentially is the scaffolding or what people in the industry call a harness, a way of essentially holding the power of the model. The model is this amazingly powerful thing. Everything else we're doing around it is to build the tooling, the infrastructure to harness the power of that AI and to let it do more. And as opposed to just talking to chat, GPT, or cloud, it only can interact with you through text. We're opening that up a lot. The question is, how do you do that in a way that's responsible, safe, secure, et cetera? And I think that's a really critically important topic, not just for associations internal to their own operations, but for their teams.
[00:37:20:17 - 00:37:38:11]
Amith
It doesn't take anything more than one employee who has access to a lot of your key information to decide on their personal computer to use OpenClaw. And that machine, let's say, has access to log into your organization's SharePoint. Lots of things can happen. So educating quickly on this is really important.
[00:37:38:11 - 00:37:52:10]
Mallory
Especially with that token security, which I'm not super familiar with that organization, so I don't know how accurate that their survey was, but 22% of organizations having employees that are running this without approval, that's really scary.
[00:37:52:10 - 00:37:53:13]
Amith
It is.
[00:37:54:15 - 00:38:15:13]
Mallory
I mean, you kind of just hinted at this, and this is where I wanted to finish our conversation today, that maybe, and correct me if I'm wrong, but maybe it's not so much about the model that you're picking. Maybe models are becoming kind of commodities, but it's more about the systems that you build around it, this layer on top of it that determines what you can do with it. Do you agree with that statement?
[00:38:15:13 - 00:38:56:06]
Amith
Totally. And that's the trend line we've been following for a couple of years, literally. There was differentiation for a while, if you think about the GPT-4 days. For a long time, people were saying, "Hey, we need somebody else to have a GPT-4 Calibre model." The open source community was way behind that for a long time. We kept saying, "Hey, is there a GPT-4 Calibre open source model?" And even with like, Claude was not anywhere close to GPT-4 for some time, nor was Gemini was, didn't even exist actually, but the original Google models, there was this thing called Bard, if you remember, from the Waveback machine, and it was terrible. And so there's been a lot of progress, so much so that the model now, just a handful of years after the chat GPT moment,
[00:38:57:07 - 00:39:38:16]
Amith
the model is essentially commoditized already. And all the labs know this. There are very smart people over there, obviously, and they realize there's no moat around the model itself. That's a plug and play commodity. But if you build really great user experience around it, if you make it easy for people to do things, if you make it possible for them to collaborate, if you start to gain some insight into their organizations and their personal lives as well, can you provide memory? Can you provide safe tool use? Can you provide a way to collaborate so that you really harness the power of these models in different ways? Now, if you were to take Claude code, for example, and jailbreak it and plug in GPT-5.4, you'd get very similar results, I believe. Now, Claude is particularly good at coding.
[00:39:39:17 - 00:40:04:13]
Amith
The Claude, I should say, Opus 4.6 model is particularly good at coding, but I think you'd get very similar results from similar models. You wouldn't get great results if you plugged in last year's models, but the harness is very, very important. And we're finding this to be true with our own stuff at MJ and all of our agent companies is that we're building infrastructure that's model agnostic and inference provider agnostic, which gives our customers as much flexibility as possible, portability.
[00:40:05:16 - 00:40:21:03]
Amith
The big thing about it is if the harness is open source, you then have a little bit more control than, or really a lot more control, I should say, than if the harness is proprietary. That being said, I think that the best product is going to be differentiated by the harness, not so much by the model.
[00:40:22:04 - 00:40:38:12]
Mallory
So as the value moves from model to harness, as you said, I'm sure we have an association leader listening that's saying, "Amith, we just got our team approved for the team's chat GPT account. And it seems like now we're kind of moving to the next phase, which is this agenetic layer.
[00:40:39:13 - 00:40:40:11]
Mallory
What do you say to them?
[00:40:41:16 - 00:40:51:03]
Amith
Well, I think picking a primary tool that you use across your business is smart. And it could be chat GPT, it could be Gemini, it could be Claude.
[00:40:52:09 - 00:41:13:24]
Amith
We've talked about this a bunch. We're a big Claude fan, both because of the quality of the models, but also the way they approach model safety. And safety is a big part of their organization. You pick a primary tool. Now, it doesn't mean that you preclude people from using other tools, but picking a primary tool is a good policy move because it gives people kind of a foundational layer that they know they can work with safely.
[00:41:15:02 - 00:41:51:12]
Amith
And that's important. But at the same time, I think having at least a handful of people that have access to other tools that they can experiment continuously and say like, "Okay, should we switch?" And I don't think you should switch every three months, but I do think that you should sign one-year contracts with these harness providers, like whether it's Claude or chat GPT, because you may want at the end of a 12-month term with chat GPT to switch over. Now, most of these things actually the size associations are, they're not enterprise agreements. They're just purely online things that you can cancel at any time. It's actually fairly easy to move. And these companies, of course, very smart. They are making it easy.
[00:41:53:16 - 00:42:27:15]
Amith
They're making it super easy for people to switch, right? So saying, "Oh, okay, you have chat GPT memories. Come over to Claude. We'll bring them with you. And we'll export your conversations out of chat GPT and import them in here." Now, over time, what's going to happen, in my estimation, is you're going to have companies try to put up more and more wild gardens in order to prevent people from pulling their data out. And then the larger companies, ultimately, you're going to end up with, yes, certainly if there's an emerging player that has a majority share in the space, you're going to end up with antitrust and all this other stuff that happens. People saying, "Well, you can't unfairly block a user from extracting their data."
[00:42:28:16 - 00:43:12:17]
Amith
So data will become more important, just like the Microsoft Office format that we all use now, the Docx and XLSX formats. These are the new open office standards. And it was not out of the goodness of Microsoft's heart that they became open source standards. It was because they were essentially forced into it. It does help them actually in a number of ways because it broadens the standard of those document formats. But Google certainly would be, I'd say that there are Google Docs, for example, not being able to create a Microsoft Word document would be a problematic thing if it was proprietary. So open standards will become more and more important to create lower switching costs for customers, which I think is really important. So it is an important decision. I would tell you if you've chosen Copilot,
[00:43:13:18 - 00:43:45:24]
Amith
this is not an anti-Microsoft thing. I'm a big fan of Microsoft. I love what they're doing directionally, but they're moving slower. That's I think a fairly uncontroversial statement that the speed at which Copilot is evolving is way, way, way slower than others. I'm really excited that they partnered with Anthropic to bring co-work into Copilot. That'll help some, but it's still a little bit slower moving platform. And so if you're super comfortable with Microsoft and you're okay with that, that's fine. But again, let some of your people have access to more advanced tools or different tools that you can keep experimenting.
[00:43:47:05 - 00:43:55:07]
Mallory
My last question for you, Amith, have you, do you have any plans to utilize OpenClaw yourself? Have you tried it or are you sticking to what you know?
[00:43:55:07 - 00:44:35:16]
Amith
I've gotten close. It's not so much sticking to what I know. I have tried, I've not tried it, but I've gotten close a couple times to spinning up a project in a Docker and just testing it out. And every time I've done it, I'm like, I just keep asking myself why I'm bothering because I have other AI agents, some are cloud code, some are other things that just work and they are accessible everywhere. And I can communicate with them through whatever channel I want remotely or directly. They have access to whatever resources I want them to have. So for my use cases, it's not that interesting. And so I've just kind of studied it from one degree removed and I think I understand it well enough on that basis. So I haven't had, there hasn't been like a use case where I'm like, yeah, I'm willing to give this a shot.
[00:44:35:16 - 00:44:48:16]
Mallory
Right. And I feel like if you have access to tools that can do that in a safe and secure way, that's probably the best route to go. I don't know. After having this conversation, I would be a little bit too fearful for now to try anything like this out myself personally.
[00:44:49:23 - 00:45:50:09]
Amith
Well, normally, you know, it is not my posture that I want people to be fearful of trying out AI. But in this particular instance, I think it's important to have some awareness of what you're doing and certainly to try things. But if you've never done anything with AI agents rather than jumping into OpenClaw, try cloud code. Even if you're not a developer, download cloud code, install it, play with it, or try cloud co-work. Those are both agents that can do a tremendous amount for you and they kind of guide you through the process of elevating their level of permissions. So rather than starting off with the mindset that the agent can do whatever it wants, it starts off with the mindset that the agent can do nothing other than talk to you. And as the agent chooses to try to do things, it asks you for permissions, which you can choose to grant or deny. You can grant single-time use or perpetual use of a particular tool. You can give it a scope of use to that tool that's as wide or as narrow as you want. So I would recommend people start with that stuff. And if you find that there's some use case where OpenClaw can really solve a problem for you, by all means, go experiment with it. Stick it in a sealed container, basically.
[00:45:51:12 - 00:46:03:14]
Mallory
Don't let it out. Well, everybody, the takeaway for you all is that we've been saying this for years. The agent era isn't coming. It's already here. And the organizations that are going to be best positioned aren't the ones that picked the right model,
[00:46:03:14 - 00:46:09:03]
(Music Playing)
[00:46:19:21 - 00:46:36:20]
Mallory
Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.
[00:46:36:20 - 00:46:40:01]
(Music Playing)