Sidecar Blog

Claude's Design Coup & The Curse of Work Slop | [Sidecar Sync Episode 132]

Written by Mallory Mejias | May 4, 2026 3:19:44 PM

Summary:

In this episode of the Sidecar Sync, Amith Nagarajan and Mallory Mejias dive into Anthropic’s latest moves with Claude Opus 4.7 and the new Claude Design, a conversational visual creation tool that can generate decks, prototypes, one-pagers, landing pages, and even animations from simple prompts. Mallory shares her first hands-on experiment with Claude Design, while Amith breaks down why stronger visual intelligence matters for computer use, design workflows, and real-time AI applications. Then, the conversation turns to Ethan Mollick’s idea of “de-weirding” AI: the tendency for organizations to treat AI like ordinary enterprise software, measure adoption instead of value, and unintentionally create “work slop.” For associations, the message is clear: AI transformation cannot live solely in IT, and leaders need to move faster, focus on business outcomes, empower experimentation, and preserve the weirdness that makes AI so powerful.

Timestamps:

00:00 - North Georgia Wineries & Chicago Innovation Hub
04:22 - Claude Opus 4.7 and Claude Design Arrive
07:48 - What Claude Design Can Create
10:50 - Amith’s Take on Claude Opus 4.7
13:17 - Claude’s Visual Intelligence Gets an Upgrade
19:31 - Grace Demo at Innovation Hub
22:41 - Ethan Mollick and the Danger of “De-Weirding” AI
34:48 - Should AI Live in IT?
42:17 - From Adoption Metrics to Real Business Value

👥Provide comprehensive AI education for your team

https://learn.sidecar.ai/teams

📅 Register for digitalNow 2026:

https://digitalnow.sidecar.ai/digitalnow

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification

https://learn.sidecar.ai/

📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

Ethan Mollick's article in The Economist ➔ https://shorturl.at/XIrjS

Claude Opus 4.7 ➔ https://www.anthropic.com/claude

Claude Design ➔ https://www.anthropic.com/claude

Claude Code ➔ https://www.anthropic.com/claude-code

👍Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00:14 - 00:00:09:17]
Amith
 Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.

[00:00:09:17 - 00:00:25:22]
Amith
 My name is Amith Nagarajan.

[00:00:25:22 - 00:00:27:17]
Mallory
 And my name is Mallory Mejias.

[00:00:27:17 - 00:00:29:10]
Amith
 And we are your hosts.

[00:00:30:11 - 00:00:39:01]
Amith
 And we've got a lot going on here in the world of AI, and at the intersection of AI and associations this week in particular. Mallory, I just got back from a trip to Chicago.

[00:00:40:02 - 00:00:51:02]
Amith
 There were over a hundred people at the Sidecar Innovation Hub up there, and everyone was buzzing, talking about the latest in AI. So this will be a great continuation of that. How's your week?

[00:00:52:02 - 00:01:18:00]
Mallory
 My week has been good. I took a little bit of time away from the office and went to some wineries in North Georgia. I don't know if you know, but there are wineries up there in the mountains. It was really beautiful and honestly felt like I was in Tuscany for a moment. They have this whole Italian-themed vineyard. So I had a lot of fun doing that, but it sounds like you had some fun as well in Chicago. Different experience, but similar in the sense that it was exciting, I'm sure.

[00:01:18:00 - 00:01:23:00]
Amith
 Yeah. There was some wine involved one evening at an Italian restaurant. So very similar.

[00:01:23:00 - 00:01:23:13]
Mallory
 Well, there you go.

[00:01:24:18 - 00:01:29:11]
Mallory
 What did you find was different about this year's Innovation Hub than previous years?

[00:01:30:18 - 00:02:56:16]
Amith
 Well, for those who've been with us before at Innovation Hub, we started that event as a Blue Cypress event, actually about, I think, three or four years ago. And it was designed to be just a small regional get-together to allow people to share their best practices, their ideas, their challenges, really as a community with each other, obviously at this intersection of AI and associations and transformation. And it started off, you know, very modest in size, and we did it both in Chicago and in Washington, DC for several years. More recently, we decided to switch it to Sidecar. And Sidecar is a pretty decent sized event organizer at this point with digitalNow, so it just made sense to do that. And then also to run it only in one location per year and to pick the opposite city from where we are hosting digitalNow in that given year. So for digitalNow, we are back in DC this year, October 25th through 28th, in case you haven't heard. And we decided to do Innovation Hub in Chicago this year for that reason. So they're kind of, you know, at the opposite ends of the calendar in a way, in the spring and the fall. So that's the background. And what was different was the size of the event. It still had a very community type feel, even with 100 people in the room. There were a lot of exciting stories shared. I really learned a ton just talking with folks who came from a variety of places, actually. About half the room was local, but a lot of people flew in from various locations. So it was really fun.

[00:02:57:19 - 00:02:59:23]
Mallory
 Awesome. Did you run into any podcast listeners?

[00:02:59:23 - 00:03:31:12]
Amith
 I did. I ran into a few, and it was awesome hearing feedback from folks and learning that they listen to the Sidecar Sync in various ways. One person told me that he listens to the Sidecar Sync on the Stairmaster at the gym. And I'm like, well, you can associate our podcast with extreme pain from being on the Stairmaster, I guess. But there were a couple of other stories in different locations, dog walking, things like that. So it was great that we can provide a form of entertainment and education while you're on the move.

[00:03:31:12 - 00:03:44:13]
Mallory
 I love that. Also, I feel like when you're on the Stairmaster, you need some motivation and excitement in your ears. I typically would listen to music, but the fact that someone listens to the Sidecar Sync on it means a lot to you and me. So I'm like, oh, I'm going to keep up the energy this episode.

[00:03:44:13 - 00:03:56:14]
Amith
 You know, Mallory, I thought the Stairmaster was like the worst invention ever. But then, actually, this was still a while back, probably in the last 15 years, I met another machine called the VersaClimber. Have you ever seen that?

[00:03:56:14 - 00:04:02:17]
Mallory
 Yes. I was actually in my mind just thinking what's worse than the Stairmaster. I was like, it might be that one. Yes, I agree with you.

[00:04:02:17 - 00:04:07:20]
Amith
 I think it's even worse. I think it's a little more interesting, perhaps, but it's just extraordinarily painful.

[00:04:07:20 - 00:04:13:09]
Mallory
 Yeah. If we have any listeners on a VersaClimber right now, let us know. We'll keep you inspired.

[00:04:13:09 - 00:04:21:00]
Amith
 And we'll be very impressed that you can do anything while on the VersaClimber. And if you haven't heard of a VersaClimber, we'll throw a picture of it in the show notes as well.

[00:04:22:01 - 00:06:28:24]
Mallory
 Absolutely. Well, today we are looking at what's new from Anthropic. Claude Opus 4.7 dropped April 16th, and alongside it, Anthropic launched a consumer product, Claude Design. And then we're coming back to our friend Ethan Mollick. If you don't know him, he's a Wharton professor and author of Co-Intelligence, and we've cited him probably over a dozen times on the show at this point. He just published something in The Economist that reframes why so many organizations, associations included, are failing to get real value out of AI, even when their own employees are seeing huge productivity gains. And his word for this, he has a great ability with words, is de-weirding, and it's a concept worth sitting with. So stay tuned for that. But first, we want to start with what is new from Anthropic. So as I mentioned, on April 16th, Anthropic launched Claude Opus 4.7 alongside a consumer product called Claude Design. This was one of the first times Anthropic has shipped an application-layer product together with a model release, a notable move that pushes them up the stack and into direct competition with Figma, Canva and Adobe. So I want to talk first about Claude Opus 4.7. It is obviously a direct successor to Opus 4.6, and it's more of a focused improvement than a generational leap. As a reminder to all of you, Opus is the most powerful model in the Claude family, followed by Sonnet and then the smallest, fastest model, Haiku. It is said to feature stronger agentic coding and long-horizon autonomous task performance, and it can verify its own outputs before reporting back. Vision capability in Opus 4.7 roughly tripled, up to about 3.75 megapixels. And this matters for diagrams, technical drawings and any detail-heavy visual work, as well as for computer use, which we've covered on the pod, and its ability to interpret detailed screenshots and small text.
Anthropic is calling Opus 4.7 more tasteful and creative when completing professional tasks, producing higher quality interfaces, slides and docs, a direct bridge to Claude Design.

[00:06:30:00 - 00:07:02:05]
Mallory
 So what is Claude Design? I was very excited to see this pop up. I hadn't actually seen the news article. I was just in Claude, said, what is this? And then watched the little announcement. But it is a conversational visual creation tool from Anthropic, available in research preview to Pro, Max, Team and Enterprise subscribers. So you describe what you want. That could be a pitch deck, prototype, one-pager or landing page, and Claude generates a working draft on a canvas. You can then refine that draft through chat, inline comments, direct edits or custom sliders.

[00:07:03:06 - 00:07:53:09]
Mallory
 It can ingest your code base or design files and automatically applies your team's design system, so your brand colors, typography, et cetera. And then you can export it as PDF, PowerPoint or standalone HTML, or you can push it directly to Canva, where the files stay fully editable. As a note, Figma stock dropped five percent on the announcement, and Anthropic CPO Mike Krieger resigned from Figma's board two days before launch. Now, why does this matter for you all? Well, we know you produce huge amounts of visual content: newsletters, conference slides, sponsor one-pagers, board decks. And it is built for that audience, people who know what they want and what they need, but maybe don't have the design background or the time to open Figma. I'm going to share my screen really quickly. I did a very brief test

[00:07:53:09 - 00:08:29:24]
Mallory
 and hopefully this is showing through, but there's no audio, and this is an animation. This is something I did not know how to create at all prior to Claude Design. I dropped in a prompt that said something like, show me an animation where someone is learning about AI. That's all I said. Claude Design then prompted me with multiple questions that I had to click through and answer. It said, who is your protagonist? And then even had the options. Is it an association leader? Is it someone on the Sidecar team? What are they trying to achieve? Are they trying to earn their AAiP, their Association AI Professional certification, or not? Um, it asked me

[00:08:29:24 - 00:09:16:07]
Mallory
 about the story arc and what I wanted to happen in there. Did I want there to be little punch phrases throughout, or was it just an animation? And I clicked through my answers, and within probably five minutes I had the animation that you just saw, which I was pretty impressed with. It even asked me, do you want the animation to look like we're inside the Sidecar AI Learning Hub, to which I clicked yes. And I feel like we got some pretty good visuals, some accurate names of courses. You'll see the Sidecar logo was preloaded in there. Our CMO, Erica, had preloaded the Sidecar brand guidelines into it. So that's how the colors are accurate, and we have the correct logo and the AAiP image. But overall I was pretty impressed with Claude Design. Amith, what do you think about that little animation?

[00:09:16:07 - 00:09:30:18]
Amith
 I thought it was really cool. And, uh, the only question I had is, how does it know what the inside of the Sidecar AI Learning Hub looks like? And I wonder if under the hood they're using Claude mythos to hack into our site. That's the real reason they released mythos, to hack into Sidecar.

[00:09:30:18 - 00:09:45:07]
Mallory
 I know. I think on the back end, you're right, Amith. I think the number one goal with mythos was, um, to get into Sidecar and to see all the great content that we have. That's right. It's a good question though, Amith. It does look like the inside of the AI Learning Hub. Not exactly, but.

[00:09:45:07 - 00:10:49:15]
Amith
 It probably made an educated guess that, you know, well, Sidecar is an AI learning platform for association folks. It knows what associations are, it knows what learning platforms are, and it knows a lot about AI. So I don't think it was too hard to guess, uh, something that's approximately right. So it's really, really cool, and I was excited by it. I think, you know, you've been able to do bits and pieces of this with the model directly through a number of other tools, the Claude desktop app, Claude on the web. You can do this kind of thing with ChatGPT and Gemini as well, by the way. But what Claude Design did, it's the same playbook that they used for co-work and before that with Claude Code: put a really clean, simple user experience on top to make it less, um, you know, really less intimidating, I guess is the word I was grasping for, so that people will start to use this for a wide variety of things. I mean, design is a pretty broad field, and I can see why Figma and Canva and a number of other players that are more typical design tools, even though they have, you know, AI-enabled many aspects of their products, I can see why those folks are getting hammered a bit in terms of their valuations.

[00:10:49:15 - 00:10:53:03]
Mallory
 Have you tried out Claude Opus 4.7?

[00:10:53:03 - 00:13:16:11]
Amith
 Yes, I've been using it since it came out. And, uh, I have two viewpoints on it. One is from a consumer perspective. Um, in the desktop app and on the web, you don't have as much control over the model's effort level. It's called adaptive. And this is the thing, actually: back in August of last year, GPT-5 initially had its kind of, uh, effort-level router, which there was a lot of pushback on, because people said, well, it's really leading to the dumber version, right, of GPT-5, and OpenAI very quickly responded to that and gave you controls back. Um, but that's basically what Claude has done with 4.7. There's a thing called adaptive reasoning, where the model kind of self-selects the effort level. Um, I've seen it work fine for the kinds of things I do through the consumer UI, but I don't use it that much. I use Claude Code a lot more, even for things that aren't programming related. I use Claude Code because I just prefer the interface. There's more controls, there's more power there. There's certain things you can't do in there that you can do in the desktop app, but in any event, in that environment you can still control the thinking level. So that has more to do with the user experience. Um, coming back to Opus 4.7 more broadly, it's a really significant jump. You know, what happens in a dot release, you know, from 4.6 to 4.7, 4.5 to 4.6, et cetera? Or in the world of OpenAI, you know, yesterday OpenAI released GPT-5.5, which we're not covering today in a lot of detail, but it's worth mentioning simply that it is out there, and compared to GPT-5.4, it's a big jump as well. Similarly, Kimi K2.6 is a pretty substantial jump over K2.5. So I think that the dot releases, in a lot of our minds, from being conditioned by software releases of the past outside of AI, tend to be very minor things.
You know, something like a 0.5 to 0.6 or something like that would mean not a lot new, but it is actually a pretty considerable upgrade in terms of power. You did mention something I wanted to point out and make sure our listeners understand: the visual capabilities that make Claude Design possible, powered on top of this new model, are also really powerful for computer use. You already said that, Mallory, but I just want to reinforce that computer use is really only as good as visual understanding. And so if the model is increasingly intelligent from a visual perspective, it'll be able to do a much, much better job with all things computer use, which is really exciting as well.

[00:13:17:22 - 00:14:00:17]
Mallory
 I was going to ask you, Amith, because, as everyone knows, I love Claude and I use it for all things personally and professionally. And I have been using it a lot for design elements in the house that we just bought. I'll drop in photos from Pinterest that I like, and pictures of my space, and see how we can make those two intersect as best as possible. But I do find, as an end user, that in the past Claude has really struggled sometimes when I give it images, and it will say things that are totally incorrect, like, that wall is green, and blah, blah, blah. And I'm like, no, the wall is white. I'm not sure what you mean by that. So I'm excited about this tripling in visual capabilities. Did you notice that Claude Code, for example, struggled with images on your end? Or I don't know if that was just a me thing.

[00:14:00:17 - 00:16:08:04]
Amith
 No. Well, for the things I use it for, you know, you copy and paste a screenshot of something going on in an app or whatever. That's pretty simple. Claude's been able to handle that for probably a year, maybe six months, it kind of compresses in my mind when these things started becoming capable of certain things. But it's been able to do it for a while. And those types of screenshots are more pointed, where we're saying, hey, this particular feature doesn't work, take a screenshot of that region of the screen. But yeah, it's definitely something noteworthy relative to their competition with OpenAI as well. There's two areas where Claude has really narrowed the gap or maybe taken the lead relative to ChatGPT. One is what you're describing right now, its visual intelligence, and the other has to do with audio. Claude kind of quietly updated their mobile app, I think about a month ago, maybe a few weeks back. And the new mobile app now has an audio mode that's just as good as ChatGPT's mobile app, in my opinion, which for some time was really the best one out. But what the Claude app did in the past was more of a walkie-talkie style user interface, where you had to press a button every time you stopped talking. And now they have a more natural detection of when you stop speaking, which the ChatGPT app has had for a while. So I actually switched over to that for my use too. You know, by the way, I want to say for the record, for all of our listeners, that Mallory and I were big Claude fans even back before people knew what Claude was. So we've been on it. And I do think Claude is really powerful, but what I like about the company as well is they seem to be deeply mission driven. Everything I've read about them, everything I've heard them say, is quite consistent. It's pretty indicative of a culture that is aligned on the importance of the mission, which is around safety. And that means a lot to me as a user as well.
I care a lot about that, obviously; I think most people do. And I just think, out of the bigger labs, they have the strongest commitment to that. Either that or they're really good at marketing it, because I've bought into it. But I do think it's legit. So I'm a big fan of the improvements they've made in their models. I think it's a really interesting neck-and-neck race between them and OpenAI and Google Gemini as well. But Claude keeps coming out on top for me because of the usability and the user experiences they provide.

[00:16:09:07 - 00:16:13:02]
Mallory
 I agree with that. But then, you're also a big fan of Gemini Pro as well, right?

[00:16:13:02 - 00:18:52:20]
Amith
 Yeah. So what I don't like about Claude is it's incredibly expensive at the API level. And so for the apps that we build at Blue Cypress, like Skip and Betty and others, we think that Gemini is a better choice. It's just as intelligent. But I should say that was true as of Opus 4.6; Gemini 3.1 Pro had parity. I'm sure Gemini is going to be cooking up a 3.2 or 3.5 or something in the very near future that will leapfrog or catch up or whatever. But I mean, at a certain point, the intelligence of these models is so high that it's good enough. It's close enough for most application workloads. And Gemini 3.1 Pro is considerably less expensive and also considerably faster than Claude. So as a result of that, in applications like Skip, which are extremely token hungry, you know, a typical request to Skip might chew up half a million, a million tokens. That really adds up. So for our customers using these products, we want to have something that's a little bit more cost effective, just as intelligent, a little bit faster. Also, Gemini has a really powerful model that I don't think anybody has something comparable to, which is the Flash-Lite edition of the latest Gemini. That's essentially their smallest, fastest, least expensive model. It is really, really fast, it's quite intelligent, and its cost is, I mean, it's almost free from my perspective. So with Gemini 3.1 Flash-Lite, you can do all sorts of really cool workloads. For example, classifying documents. If you had a million documents you wanted to go reclassify, it's a reasonable model to use for that. It's also so good in terms of latency that you can use it for real-time apps. So, for example, back in Chicago earlier this week, we demoed Grace, who has been on the pod with us. And if you haven't caught that episode, I'd recommend you go check it out. We did a demo with Grace.
That demo we did actually just a few weeks ago, Mallory, is nothing compared to the intelligence of Grace today, because we upgraded it. We took Grace into the shop and we replaced the brain. We took out Claude Haiku 4.5, which is a pretty smart model, relatively quick, and, you know, I would say reasonably priced. It's from Anthropic, so everything Anthropic, it's like going to, like, you know, the Ferrari dealership or something, it's all high end, but it was reasonably priced. But Gemini 3.1 Flash-Lite is like an order of magnitude less expensive. It's also considerably faster than Claude Haiku 4.5. And it's also way smarter. So Grace is now way more capable, faster. If you checked out Grace a few weeks ago, I'd recommend you go talk to her again, because she's even smarter and even faster at responding now. So I think there are reasons to use different things for different projects.

[00:18:54:05 - 00:19:09:09]
Amith
 When I'm doing stuff with Claude Code, I don't really care that much about any of those things. It doesn't matter to me if it's a little bit faster, a little bit slower. I just like the Claude Code user experience, and that's kind of what has me in there. Gemini has a really great CLI, or, you know, command line interface, as well.

[00:19:10:11 - 00:19:30:12]
Amith
 But I just haven't gotten used to it. So there's a lot in terms of the human side of this. As opposed to, you know, are these things really commodities? Basically, yes. But we all develop preferences for different things and habits. Like, do you prefer the Uber Eats app or the DoorDash app? They're functionally basically the same thing, but you get used to one or the other. And I use, you know, one of those all too often.

[00:19:31:12 - 00:19:44:08]
Mallory
 Yep. And something about the brand, too, creating trust. We were just talking about Anthropic. Sometimes when a brand builds that, cultivates that trust with you, you're loyal to it. What was the reception on the Grace demo at the Innovation Hub? What did people think?

[00:19:44:08 - 00:21:15:22]
Amith
 I heard a lot of positive feedback. You know, it was cool. These are people who generally are, you know, kind of leading the charge in terms of AI adoption relative to the market at large. And many of these folks had not experienced, not just Grace, but real-time audio conversation with that level of depth and richness to it. We put Grace kind of through her paces. You know, like for the pod, we introduced Grace and said, "Hey, you're on the pod." And I did the same thing on stage in front of the attendees at Innovation Hub. I said, "Hey, Grace, you're here with us at Innovation Hub." And she knew all about Innovation Hub, because it's one of the content assets she's been trained on. In fact, she threw something on the screen about Innovation Hub, and she welcomed the attendees, and it was pretty cool. And then we, you know, had a great conversation. I also switched into demo mode and talked about various Sidecar things, and she was able to navigate through that beautifully and offered me the opportunity to join the Sidecar Learning Hub. And when I politely declined, she offered to send me an email. And so it was really cool. It was a good demo. And I think people really got a clear sense of what's available today, what's possible right now. And the talk that I gave was really about the trajectory. That's what I tend to speak about: the exponentials and the trajectory of this stuff. Because what we're trying to help people do with planning is to look ahead a little bit. You know, you probably can't really look ahead three years, but you can look ahead for the next 12 months, maybe 18 months. And the point is that it's continuing to get better, faster and cheaper, which, normally, those combinations are not available. So it's pretty cool.

[00:21:15:22 - 00:21:56:23]
Mallory
 Well, speaking of trajectory, I want to move to our next topic of today: how we can try to get more value out of artificial intelligence at the organizational level. All the way back in episode 89, if you were with us at that point, we covered Ethan Mollick's Leadership, Crowd and Lab framework. So a quick refresher for all of you, if you're not familiar: for AI to transform an organization, you need a few things. One being direction from the top. That's the Leadership component. You need employees experimenting with real permission and incentives. That's the Crowd component. And then you need a dedicated team pushing boundaries and feeding discoveries back in. That is the Lab component.

[00:21:57:23 - 00:23:13:21]
Mallory
 In a new piece for The Economist, Mollick asked a different question: Why isn't it working? Why are companies seeing far smaller organizational gains than the individual productivity gains their own employees report? And his answer: companies are de-weirding AI. They're actively sanding down what makes it strange, treating it like normal enterprise software and squandering exactly what makes it transformative. He has this great line: treating this technology as another software deployment is like receiving a mysterious alien artifact and immediately using it as a paperweight. He also has a few ideas for where de-weirding AI is showing up. One is the automation reflex. If you have 30 percent productivity gains, that becomes 30 percent workforce cuts, because if AI is just normal software, cost reduction is the only visible lens. There's also, and I find this one really interesting, the KPI trap. Leaders set compliance metrics, like 90 percent of employees must use the tool weekly, which produces what Mollick calls "work slop": endless extra memos, extra PowerPoints, extra summaries nobody asked for. Usage dashboards look great, but the value is close to zero.

[00:23:15:00 - 00:23:48:16]
Mallory
 He talks about secret cyborgs, which we covered in episode 89. When incentives reward visible usage over real value, employees who actually reduce their workload with AI have every reason to hide it. And then the concept of IT as a graveyard. Mollick's line, there is a natural place where de-weirded AI goes to die. The IT department. IT's core mandate is to minimize risk. AI demands the opposite. Handing sole ownership to a department built around risk elimination is, in his words, a category error.

[00:23:49:18 - 00:24:02:07]
Mallory
 Why do we think this is relevant for you all as associations? Well, small ops teams mean AI often lands with the IT or operations person by default, usually the person most oriented toward stability and least positioned to experiment.

[00:24:03:07 - 00:24:46:05]
Mallory
 Association leaders reaching for adoption metrics will get usage, but maybe not value. And the strategic question Mollick wants leaders to actually ask is: what does it mean to rebuild an organization around the fact that one person can now produce 100 times more output? So, I mean, I thought this was such a great read. You shared this with me. You all know we love Ethan Mollick. When you think about the association leaders that you work with, or even maybe those that you saw at the Innovation Hub, as you said, they are leading the charge, so maybe they're not the best example. Do you feel like leaders are treating AI like Microsoft Office 2.0? As in, are they attempting to de-weird AI by treating it like just another software rollout?

[00:24:46:05 - 00:25:15:10]
Amith
 I don't think they're attempting to de-weird it, and I love that term, by the way. Mollick is such an entertaining writer, and it's one of the reasons I love his content. He has great, great insights, but the way he packages up his content is awesome. But I don't think they're attempting to do that. I think what they're attempting to do is figure out how to think about AI, and the only lens through which they understand technology is the way they've deployed software for decades. And so, yes, a lot of them are treating it like the next AMS implementation or the next LMS implementation.

[00:25:16:16 - 00:25:26:00]
Amith
 And that is a big challenge. And I think that ultimately people do need to approach this considerably differently in order to get the most out of it.

[00:25:26:00 - 00:25:30:19]
Mallory
 I think the KPI trap, that one got me because on the surface,

[00:25:31:23 - 00:25:53:17]
Mallory
 I mean, I feel like we at the Blue Cypress family of companies, we try our best to get people actually out there, actually experimenting. But I could see something like this playing out. You know, we want all of our staff to go through the Learning Hub, or all of our staff to use X amount of new tools per quarter, or something like that, and then in the end, not reaping that value. So can you talk a little bit about that kind of paradox?

[00:25:53:17 - 00:26:22:24]
Amith
 Well, that's, of course, what it's going to do. And that's been the game forever, where you have an objective and people will often find a way to gamify it, and not in a positive way. Right. Gamification can be good in terms of competitiveness and fun and all that. But what I'm referring to, gaming the system, is where you aren't necessarily aligned with the spirit or the intention of the objective, but you're meeting the surface-level numbers that are being measured. And so, you know, there's a parallel to this in the world of software development too,

[00:26:24:01 - 00:27:28:01]
Amith
 where there's just another silly Silicon Valley term, the thing called token maxing, which sounds like some ridiculous bad habit people have or something. But it's basically measuring developer productivity based on how many tokens they've burned, which is so incredibly stupid, because what that simply means is that I can just say, oh, well, Claude, just eval my whole repo and reason over every single line of code every single day. And if I'm getting rewarded because I've spent 100 million tokens a day or something dumb like that, it's just ridiculous. So there's a lot of that kind of inputs-based measurement, as I call it. And that's been going on before, but now it's just easy to miss the mark on that. Now, I don't think it's a bad idea to have certain headline goals, like, hey, everyone's trained or everyone has the app installed, as a broad participation goal where we're saying, hey, listen, if the association achieves 100% AAiP certification, we're all going to have a great party, and we have to achieve 100% by the end of the quarter, and we're not going to throw the party if even one person doesn't participate.

[00:27:29:17 - 00:28:36:05]
Amith
 That scenario can be very powerful because it creates a team commitment to do it together and bring everyone along. It also sends the right message that, hey, we don't want to leave anyone behind. But that's not a productivity goal. That's more of a team morale building thing, an idea to emphasize the importance of something. And it's also intentionally kind of episodic. But I would recommend a little bit different approach. What I would say is focus on the business objectives as you always should: what are the annual and quarterly objectives, and what are the key results you're going to measure to determine whether you actually achieve the business goals? So the business goal might be decrease member churn, or increase member retention, to put it positively. And we want to increase member retention by 25% or by a certain number of people. So that's an objective, and the key result would be achieving a particular measurable outcome. And then we align with that. Or in the case of software product development, a lot of times it's, hey, there's a certain amount of velocity we expect from each developer, and that would lead to certain features and products being shipped.

[00:28:37:10 - 00:29:25:02]
Amith
 And what we can do there to measure it is determine, you know, just basically raise the bar on the output so we can say, well, we know that people are 100 times more productive. So we're going to shorten deliverables by, you know, 5x, 10x, 20x. We might take what previously would have been sized as a three month effort for somebody and give them three days. And that's what we've been doing. We've been compressing timelines. We've been demanding a lot more. And people have risen to the challenge and a lot of times exceeded it because we're trying to be unreasonable. But the goal is to achieve more as a business, to serve our customers better, to deliver better quality products and services. A good example of that is the Sidecar Learning Hub Revamp we did at the very end of Q1. So for those of you that are not familiar, at the very end of March, we completely revamped the entire Sidecar Learning Hub curriculum.

[00:29:26:04 - 00:29:37:23]
Amith
 There are now 70 courses. When Mallory and I were personally recording them, pre-AI use, we had, I think, seven courses, if I recall correctly. So that's literally an order of magnitude increase in the number of courses.

[00:29:39:02 - 00:29:43:14]
Amith
 And actually, up until recently, I think we had 12 or 15 courses.

[00:29:44:17 - 00:30:18:16]
Amith
 And so we really leaned heavily into more and more AI tooling, more AI for review. And the team has grown a little bit. We've been making investments in people, but ultimately, you know, it's an outsized gain relative to that. And we're just getting started. We expect to have way more content than that, and we're organizing it differently into these departmental tracks, which is a massive effort to do it that way. But we did all that because AI enabled us to do it. And if we hadn't set a really aggressive objective, I mean, a lot of people would say 70 courses, eight departmental tracks, that's like a team of six people for two years, right?

[00:30:19:17 - 00:31:02:24]
Amith
 But we did it with a team of two people focused on this. Others helped. We had, you know, several other people working on it, but two people were heavily focused on this for about three months. And we got it done. So I think that's what you do. You put pressure on timelines to achieve really good business outcomes so that it's not slop, but it's actually what you want to be outputting, and just give people the mandate: hey, figure out how to do this five times faster. Pre-AI, that would have just been, you know, totally unreasonable, obviously. It still kind of worked, by the way, even before AI, to just shorten timelines. Maybe not by 5x, but by 25 percent or something like that. People tend to rise to meet the occasion. But with AI, we can do some pretty crazy stuff. So that's kind of my reaction to that part of the Mollick piece.

[00:31:04:04 - 00:31:58:22]
Mallory
 I feel like you make a really good point, Amith, because going back to what you said about the business objective, let's say we're talking about increasing member retention by 25 percent. Looking at that from Mollick's work slop angle, without clear direction or leadership, a staffer at an association might say, OK, increase member retention. I'm going to write a memo about that. I'm going to publish a report on that, and then I'll create the slide deck about our previous strategies. I could see how, using AI, you could create a lot of, I don't want to call it work slop, but maybe it is that, without necessarily making a meaningful change or a step toward achieving the business objective. But I like the idea of shrinking the timeline a bit, because I feel like that element is like, OK, how can we use AI to get to this next step, as opposed to, how can I use AI to make myself look like I'm very busy and utilizing Claude at every chance I get? Does that make sense?

[00:31:58:22 - 00:32:14:10]
Amith
 Totally. And, you know, just going back to this week's activity in Chicago, there's one particular CEO I was talking to who runs a mid-sized association. I think they're eight to ten million dollars in revenue, twenty to thirty employees, something like that. A very typical kind of mid-sized group.

[00:32:15:11 - 00:32:23:18]
Amith
 And he was telling me about some of the stuff that they were doing now, going out there and getting things done in literally days. Things that might have taken, well, first of all, they might not have even been done,

[00:32:25:08 - 00:33:22:18]
Amith
 but things that could have been done maybe in a year or two are being done in days. And the question of would it have been done? Is it that extra summary no one asked for, or is it actually adding business value? You kind of know, you know, if it's useful. And if it's just an extra PowerPoint, I don't think the world needs another PowerPoint slide deck. We need more value for members. We need to create more optimizations and better outcomes. There's all sorts of things we can do. We tend to have a pretty good idea of that. So there's some clarity around this: is that useful? And ask the question in a very cutting way, almost, not because you're criticizing the person, but you're criticizing the work, and you use that as a learning experience. If someone created, you know, 100 new slides in a slide deck that did not move the needle at all, there's no novel information in there, it's just presented differently, we've just wasted a bunch of time. It's important to actually be honest with yourselves about that and say, no, we don't want more of those slide decks. We want to do more of the things that actually move the ball forward.

[00:33:22:18 - 00:33:43:18]
Mallory
 I want to speak for a moment to our IT folks who I know listen to the podcast, because of the whole concept of IT as a graveyard. I was kind of scared to talk about it. It sounds controversial. It gets a reaction out of people. But Amith, what is your take on IT not being the spot for AI?

[00:33:43:18 - 00:33:45:21]
Amith
 First of all, we love our IT colleagues.

[00:33:45:21 - 00:33:48:07]
Mallory
 We do. We love you. I know there's many of you.

[00:33:48:07 - 00:38:46:22]
Amith
 And IT performs such a crucial role, particularly in the age of AI, and the focus on risk mitigation and cybersecurity and making sure that the lights stay blinking in the server room so that the systems don't go down is abundantly important in the age of AI. So we can't lose focus on that. They have the responsibility to keep systems up and running. If you say, hey, we're going to criticize IT for not moving fast enough, and then they start moving faster, and then your email system doesn't work or your AMS breaks, you're going to start screaming at them. So, of course, their mandate is to keep operational stability and keep safety and security top of mind. I totally empathize with that. But I don't think Mollick is wrong either. Two things can be true at the same time that seem like they are directionally incompatible. But these are not directionally incompatible statements. The key here is that IT isn't the owner of AI, whereas IT historically has been the owner of other technology rollouts. The key message here is IT is a participant in this. They're not the owner. And actually, great technology rollouts in the past, even things like our favorite topic of AMS implementations, the best AMS implementations were never owned by IT. IT again had a critical role to play, but they never owned the project. The project was always owned by a team of people, typically led by one person who was not in IT, because they're the business owner for the project. And the same thing is true now. Only the superpowers that have been bestowed upon all non-IT people are truly remarkable. What you showed earlier, Mallory, what we see routinely out of people who are not technical, building apps, building websites, building all sorts of amazing stuff. It means that actually IT can focus on what they're naturally focused on, which is system stability, safety, security, and risk mitigation.
But the rest of the business needs to balance that mindset, which is important, with achieving their business objectives, which is to move the ball forward and to do so aggressively. How do we add more member value? So I actually think the onus is on the membership department and the marketing department and the events department, and most importantly, on the CEO and executive director, to push hard, while respecting IT's mandate to keep things secure and stable, but also to push ahead in other areas. I think business owners who are AI-forward in their mindset, who are not in IT, actually are really well positioned to do this. I'll give you an example of how we're approaching this. Going back to the Sidecar Learning Hub, Jason, who leads educational content for Sidecar, is not a developer. He's a really smart guy, but he's not a developer, never written a line of code in his life, as far as I know. But he actually prototyped and built a solution that was used for a lot of the work I mentioned earlier in rebuilding a lot of our content and building a whole bunch of new content. Now, his solution is not scalable. It is not production grade. It is Swiss cheese when it comes to cybersecurity. But we didn't care, because we were able to sandbox it in a way that gave him lots of degrees of freedom to be creative, to work efficiently, and to move his priority forward. We are now, this quarter in Q2, taking his ideas, his concepts, things he's proven out through a very successful experiment in Q1, and production-grading them. Meaning we have development team folks who are equivalent to IT, right? And they are taking his ideas and putting them into a production system so that we can repeat it a thousand more times and get reliable outputs. And that's a great way to balance it out: you experiment, you move fast, you test things, and you actually create business value from these experiments. Then you pull them back into central systems.
Then you pull them back into enterprise frameworks. For example, we're obviously users of Member Junction, which is our data platform and our agent platform for associations. We use it ourselves. And Member Junction is the place that you go if you want to deploy a bulletproof, production-grade enterprise agent. Or if you want to have data flowing in that you're able to analyze at scale, it's great. But it's non-trivial to put stuff in there. And that's intentional. Even though the technology barrier is very low, we want to make it so that there's governance around a system like that. But at the same time, someone like Jason could go out there and do his agent work, which is what he built for all the stuff I mentioned, and then promote that into a production-grade, enterprise-class agent a little bit later. So it's kind of a teamwork thing, and I think we need to lean into people's strengths. I don't think IT people are going to, and I don't think they should, throw caution to the wind and say, well, whatever, it's the age of AI, let's just go see what happens. You don't want people doing that who are responsible for mission critical systems. So I actually agree completely with Mollick, but I also think that the criticism shouldn't be applied to IT. It's more about the organization's mindset around where this should live. And it needs to live everywhere, which Mollick says as well, not just in technology. If you lean on IT and say, hey, I'm not going to bother learning AI, I'm the CEO, I'm not technical, I'm just going to lean on my CIO, because the CIO or the director of IT is the person I've always leaned on for tech stuff, you're making a giant mistake. If that's your mindset.

[00:38:46:22 - 00:39:27:10]
Mallory
 Yep. I think you're so right, Amith. I mean, the whole "I'm not technical" thing. But I also can empathize, because if your IT person or IT team is naturally the most tech-savvy individual or group of people on your team, it does feel like it makes sense, whether you're a leader or even someone entry level who maybe doesn't fully understand what you're talking about, to defer to someone who does. And so I think it's more about empowering your team, like Jason, making them feel confident about, okay, maybe the prototype is Swiss cheese, as you said, which is pretty funny to me. But encouraging that and realizing there's a lot of value in even that Swiss cheese version at the beginning.

[00:39:27:10 - 00:39:30:16]
Amith
 I enjoy Swiss cheese. I think it's a great product.

[00:39:32:07 - 00:40:51:07]
Amith
 And I think ultimately, in the kind of figurative version of Swiss cheese, it plays a role, because if you try to fill in all the gaps right away, you'll never get anything done. And for IT, again, the mandate is stability, safety, security, and that's a good mandate for IT. You do want IT to be accepting of change and to drive forward, and the best CIOs and directors of IT I know do a good job balancing this. But they first and foremost need to remember that they're there to serve the business, and while they might be the strongest technical people in the organization, they most certainly aren't the most knowledgeable about AI, almost always, is what I see, especially in larger organizations. You have people out there who are really, really fluent in what AI can do, like Claude Design. Lots of people in IT don't even know that exists, yet there's probably people out there in marketing departments and membership departments at associations that are playing with it right now. And it might be causing nightmares for some traditionally minded IT folks who look at that as a massive cybersecurity hole. And it is, if you just say, hey, membership person, here's the keys to our firewall, go ahead and plug in your app directly to our public-facing surface area. Don't do that. There are still roles to be played that are important around governance, but the two can coexist. And that's exactly what we're trying to do at Blue Cypress across all of our organizations. And I do see associations successfully doing this. It's a very solvable problem.

[00:40:52:11 - 00:41:47:11]
Mallory
 I feel like if you're listening to this episode and you are struggling to see association-level, organizational-level value from AI, maybe, and I know we talk about mandates and whether we should do that, I know some associations are very anti-mandate and I get that, but maybe something you could highly encourage or mandate: instead of 90% of staff using this tool by the end of the quarter or the end of the year, whatever that may be, say department-wide, each individual staffer or each team takes a department goal, whether that's membership, marketing, or events, and sees if they can build or prototype some sort of solution with AI to solve it. And if it doesn't work, whatever. If it's Swiss cheese, whatever. But maybe, just maybe, a couple of those prototypes could work or could spark an idea, maybe with IT involved, that could actually solve a major association issue or problem. And I feel like that's a good place to start.

[00:41:47:11 - 00:42:08:00]
Amith
 Totally agree. And you know, I want to wrap up my commentary on all of this by sharing a thought on leadership. Mallory, have you ever been with a group of friends that are all kind of indecisive? They're all being, you know, super agreeable, but you're trying to pick a movie to go to or a restaurant, and everyone's like, oh, what do you want to go do? What do you want to eat? Yeah, everyone has their own opinion.

[00:42:08:00 - 00:42:10:06]
Mallory
 But I've been a part of that. I've been a part of it, too.

[00:42:10:06 - 00:44:47:24]
Amith
 Totally, totally. And believe it or not, I have been as well, where I try to defer to everyone else. Really? It occurs every once in a while. Yeah, it occurs every once in a while. Mainly with my wife, but with friends as well. But in any event, if that goes on for too long, you know how annoying that is for everyone? That's how your team feels when you're not making a decision as a leader. It's exactly how it feels. Leaders need to lead. And the more that there is change in the air, the more the rate of change increases, the more you have to be willing to act. That does not mean you as a leader are always right. You just need to know that it's time to make a choice, make the choice, pay attention to it, and change course when needed. But many association leaders are so incredibly committee-based in their mindset, so consensus-driven, that it drives their teams bonkers and it actually dramatically impairs your ability to move quickly. So along with all of Mollick's other great advice, I would simply add my own, which is: start making decisions a little bit faster. The OKR you might set for yourself as an individual, if you are in a position of any authority, is to try to cut your cycle time for decision making in half. So if you typically say, well, let me think about that for the next week, give yourself two business days. If it's something you think you'd give yourself a month for, give yourself a week or two. Push yourself harder to get to a decision. Your decisions, like my own, are often very wrong, directionally wrong or off by a little bit or whatever. But you've got to pay attention and then change. And oftentimes people are afraid to do something that not everyone's bought into. And the simple answer to that is everyone on the team needs to have the expectation that everyone will be heard, everyone will be listened to.
But once the decision is made, everyone, whether you agreed with it initially or not, has to buy into it and move forward. Whether they agreed with it or liked it or not, that's the decision, and we're moving forward. And that, unfortunately, isn't the culture in a lot of teams in association land that I've seen, and this is true in a lot of companies, too. What that is is very simple: it's called a toxic culture, and you have to fix that. If you have that, that's a different problem, and it's actually far more challenging than anything else in my opinion, but it's important to work on. Because if you have that kind of environment, or if you have leadership that's not making decisions or making decisions too slowly, then the stuff we're talking about week over week, Mallory, in terms of model releases and cool new widgets and Claude doesn't really matter. You're not going to get very far. So that's always been true in organizations. It's more true now than ever. AI is like a giant magnifying glass, and it will find all of these issues, all of the weak points.

[00:44:49:00 - 00:45:12:14]
Mallory
 Great point, Amith. Anthropic is handing associations tools that make it dramatically easier to produce real visual work, decks, one-pagers, prototypes, without a design team. The Mollick piece is the warning label. If associations receive these tools the way most organizations do, by handing them to IT or an ops person and measuring success by adoption rates, you'll probably get work slop and automation math instead of actual

[00:45:12:14 - 00:45:18:03]
 (Music Playing)

[00:45:28:21 - 00:45:45:20]
Amith
 Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.

[00:45:45:20 - 00:45:49:01]
 (Music Playing)
