Sidecar Blog

Snooze to Win: Why Digital Naptime Is AI’s Most Productive Hour | [Sidecar Sync Episode 83]

Written by Mallory Mejias | May 23, 2025 3:16:58 PM

Summary:

This week on the Sidecar Sync, Amith Nagarajan and Mallory Mejias dive deep into two cutting-edge developments in the AI world. First up is the mind-bending concept of "sleep time compute"—how LLMs might learn and improve during their downtime, transforming into smarter, faster assistants overnight. Then, the duo breaks down OpenAI's $3 billion acquisition of Windsurf, the booming arena of AI coding tools, and what it means for developers and associations alike. From persistent memory to prototype-ready AI partners, this episode is packed with insights for both techies and the tech-curious.

Timestamps:

00:00 - Introduction
02:11 - Updates to the Sidecar AI Learning Hub
06:00 - Mastermind Insights on AI-Powered Learning
08:05 - What Is Sleep Time Compute?
10:31 - How Test Time Compute Evolved
15:19 - How Sleep Time Compute Mirrors Human Memory
19:27 - Real-World Use Case: Skip’s Learning Cycles
32:30 - Windsurf & AI-Powered Coding Tools
38:32 - Application Layer Is the Future of AI Value
44:07 - AI-Driven Prototyping for Associations
53:06 - Final Thoughts

 

🎉 More from Today’s Sponsors:

CDS Global https://www.cds-global.com/

VideoRequest https://videorequest.io/

🤖 Join the AI Mastermind 

https://sidecar.ai/association-ai-mastermind

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🛠 AI Tools and Resources Mentioned in This Episode:

Claude Code ➡ https://docs.anthropic.com/en/docs/claude-code/overview

Claude Desktop ➡ https://claude.ai/download

Windsurf ➡ https://windsurf.ai

GitHub Copilot ➡ https://github.com/features/copilot

Cursor ➡ https://www.cursor.so

OpenAI Codex ➡ https://openai.com/blog/openai-codex

ChatGPT ➡ https://chat.openai.com

Gemini by Google ➡ https://deepmind.google/technologies/gemini

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Amith: You know, we don't have technical strength. We don't have a thousand developers. We're not Amazon, we're not Netflix. But the field's leveling, and you now have the ability to do this if you take the time just to go and experiment with this stuff. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place.

[00:00:25] We cut through the noise to bring you the most relevant updates with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings everybody, and welcome to the Sidecar Sync, your home for content at the intersection of artificial intelligence and all things associations.

[00:00:50] My name is Amith Nagarajan.

[00:00:52] Mallory: My name is Mallory Mejias.

[00:00:54] Amith: And we are your hosts. And today, hopefully you will not be put to sleep, but we're gonna be talking about some really interesting topics in the world of AI, and, uh, you'll be hearing about it shortly. So, very, very excited about this particular set of topics.

[00:01:07] It's gonna be, I think, quite impactful and, uh, quite exciting for everyone. How you doing, Mallory?

[00:01:13] Mallory: I'm doing well. Amith, I feel like AI is, is getting more and more like humans than we know. I mean, even AI needs to sleep. We're gonna talk about that in just a bit, but I, I thought that was interesting to, to preface the episode with.

[00:01:26] Amith: Yeah. You know, I think AI is learning how to sleep. AI doesn't necessarily need to sleep, but, uh, when AI sleeps, maybe it wants to. Yeah. You know, and, and when AI sleeps, uh, interesting things start to happen, which is something we're gonna be talking about in detail. I think that it's just yet another branch of opportunity and research in the world of AI.

[00:01:46] So, uh, can't wait to talk about that in more detail. And, uh, meanwhile, at Sidecar, I know we've had some very exciting activities, um, in terms of our AI learning content for association folks evolving, brewing, developing, not quite sleeping, um, over the last several months. And, uh, we're about to roll out, and by the time all of you are listening to this or watching us on YouTube,

[00:02:11] by that point in time, you'll have all new content on the Sidecar AI Learning Hub. It is actually all AI-generated content, and I'll take a second to explain that. The content itself, the actual material, is not AI-generated. It's generated with a little bit of AI assistance, but primarily by us on, uh, Team Sidecar.

[00:02:32] But what we do from there is we utilize an AI-driven, uh, software system that we've built that essentially generates audio and video for the content, which allows us to much more rapidly change it. So we're super excited about it. The last time Mallory and I, along with our other colleagues, uh, made big updates on the Learning Hub

[00:02:50] was in the fall. And, uh, as you all know, AI is changing so fast that the content from the fall is, you know, in many ways still good, but in many ways out of date, unfortunately. So, uh, that's the reason we're shifting to this model, so that we can make incremental updates constantly and push those updates every couple of weeks to the Sidecar Learning Hub.

[00:03:10] So we're extremely excited about this.

[00:03:13] Mallory: It's an incredible feat, Amith, and I know you mentioned when you and I, uh, worked on the content in the fall, we used AI pretty heavily to help us generate slide decks and whatnot. But even so, it was quite a tedious process. We had to sit down at our computers and record everything slide by slide, which we were happy to do.

[00:03:31] But, uh, at that time I thought, wow, we are moving as fast as we possibly could. And then I think, Amith, you had the idea. When would you say it was the first time you thought, wait a minute, we could probably AI this? Or, or did you always think that? Is that the question?

[00:03:46] Amith: Well, I mean, the idea came to mind at various points in time to me, as AI video and AI audio generation got better and better, that at some point in that curve the quality would be high enough, or even better than the average human.

[00:03:58] Um, and, uh, it would be an opportunity. And so I started thinking about it probably a year ago, but, uh, I'm fundamentally a very lazy person. I don't like doing things more than once. Um, I don't know,

[00:04:09] Mallory: Amith, if I call you lazy…

[00:04:11] Amith: I'm extremely lazy, and I have a very specific type of laziness. Mm-hmm.

[00:04:15] I don't like repeating the same thing more than once, uh, unless it's skiing. I do like skiing a lot of the same runs repeatedly. But, uh, at work I like doing new things all the time. So you can call me a spoiled brat, but I've, uh, had the good fortune over quite a few years of primarily being focused on, like, the what's-new-and-what's-interesting kind of work.

[00:04:32] Not all the time; we all have to do things we don't like. But the point is that when I run into a task that I do not like, uh, which is any task that's repetitive, I try to find a way to automate it. I've always been like that. I've been doing that since the beginning of my career. And now with AI, it's like being a kid in a candy store, because we can automate things that previously were totally outside of the realm of anything other than science fiction.

[00:04:54] So we are living in interesting times, and, uh, this new content has been reviewed by a bunch of people internally and externally and gotten really positive feedback. Uh, so I'm, I'm really excited about it. We're gonna always have a sliver of human content recording and generation in various aspects of the Sidecar Learning Hub.

[00:05:14] We think that's an important element and addition. We're gonna focus there on things that don't go out of date quite as quickly, uh, to add personality and to add humanity to the Learning Hub. And I, I think that's probably a good blend. We're gonna experiment and learn, and, you know, for our association friends, it's an interesting thing to be talking about.

[00:05:32] Because of course you're interested in AI learning content, but also because you yourself are probably a prolific learning, uh, delivery organization. Most associations have their hands in learning. Some generate the majority or the substantial majority of their revenue from learning. And so when you have the opportunity to consider new ways of accelerating the delivery of learning from idea to reality, it's, it's an interesting thing to be thinking about.

[00:06:00] So we'll be sharing more and more about this, uh, with our mastermind group, uh, which is a, a small, intimate group of very dedicated practitioners of association management who are on a learning journey, uh, together, uh, with us and with each other. Uh, this group meets once a month. Uh, we've talked about it in the past.

[00:06:17] It's an awesome group that's been together for, uh, about 18 months now, a little bit longer than that. And, uh, just the last meeting that we had, there was a detailed, uh, you know, discussion about how to actually do AI at this scale with your educational content. So we'll be sharing bits and pieces of that with Sidecar's listeners, and possibly building a course on the Sidecar AI Learning Hub all about how to build an automated AI, uh, education pipeline.

[00:06:44] So really excited about it.

[00:06:46] Mallory: Yep. Quite fun, too, for us to be the guinea pigs, and exactly like you said, Amith, see what works, see what doesn't, and then share all those insights with all of you, so hopefully you can take, like, the next best step in that direction. Amith, I also have one more question for you.

[00:07:00] You said you don't like doing repetitive things. I would argue this podcast is quite repetitive. We're now on episode 83. Are you planning to automate the Sidecar Sync podcast?

[00:07:11] Amith: Not at all. This is super fun to me. And so if we, if we recorded the same topic over and over, I'd find that quite boring, as would, I believe, our listeners and our viewers.

[00:07:20] But, uh, I think that it's super fun. It's actually a great touch point each week for me, where I know we're gonna be talking about this stuff. Uh, it helps me, you know, reflect and put together thoughts on how I might want to frame certain topics, uh, with the association market to make them most helpful.

[00:07:38] Uh, and there's always new ideas that come out of this too. Mm-hmm. So it's actually, it's, it's a routine, but it's not repetitive.

[00:07:44] Mallory: Yes. Well, you heard it here, then: Amith and I are here to stay. Uh, today we have two exciting topics lined up for all of you. We're talking about sleep time compute. Hopefully that doesn't make you too sleepy.

[00:07:56] It's actually quite interesting. And then we'll be talking about the potential OpenAI acquisition of Windsurf, and some other coding tools that are out there. So first, sleep time compute. Over the last year or so, maybe a little more than that, we've seen language models pushed to think harder by giving them extra test time compute seconds while users waited; giving models more time to think allowed them to craft better responses.

[00:08:23] But every extra second increased latency and inference cost, AKA the cost to actually run the model, and the model still forgot things between chats. So a new research paper from Letta, which is a UC Berkeley spinout best known for its MemGPT work, tackles that bottleneck with sleep time compute. The idea is pretty simple.

[00:08:45] Keep the agent busy during downtime. So you have a heavyweight sleep agent that runs after hours, reorganizing knowledge and writing distilled insights into a persistent memory. Because that memory survives across sessions, the live primary agent can answer almost instantly the next morning without burning fresh GPU time.

[00:09:06] This persistent-state architecture shrinks real-time compute by about fivefold and still boosts accuracy by up to 18% on tough reasoning tasks, according to Letta's benchmarks. The breakthrough matters because it turns an idle chatbot into a night-shift analyst that keeps learning instead of starting every conversation from scratch.
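(For the technically curious, here is a rough sketch in Python of the split Mallory just described. It is purely illustrative, not Letta's actual code: `call_llm` is a hypothetical stand-in for whatever model API you use, and a plain text file stands in for a real persistent memory store.)

```python
from pathlib import Path

MEMORY_FILE = Path("agent_memory.txt")  # persistent memory that survives across sessions

def call_llm(prompt: str, model: str) -> str:
    """Hypothetical stand-in for a real call to your model provider's API."""
    return f"[{model} response based on {len(prompt)} chars of prompt]"

def sleep_agent(raw_context: str) -> None:
    # Heavyweight offline pass: runs after hours on a strong, slow model,
    # distilling raw material (FAQ logs, documents) into reusable insights.
    insights = call_llm(
        "Distill the key facts and likely follow-up answers from:\n" + raw_context,
        model="large-slow-model",  # placeholder name
    )
    MEMORY_FILE.write_text(insights)

def primary_agent(question: str) -> str:
    # Lightweight online pass: answers instantly from pre-computed insights
    # instead of re-deriving everything while the user waits.
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return call_llm(
        "Relevant insights:\n" + memory + "\n\nQuestion: " + question,
        model="small-fast-model",  # placeholder name
    )

# Overnight: sleep_agent("...a year's worth of renewal FAQs...")
# Next morning: primary_agent("How do I renew my membership?")
```

(The key property is that the expensive reasoning happens before anyone asks anything; the live agent only pays for a short prompt that already contains the distilled context.)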

[00:09:25] So I worked with ChatGPT a bit to see how this might apply to associations, and it came up with two interesting examples. One might be a member service agent that digests a year's renewal FAQs overnight and can greet your members with a confident one-shot answer at 9:00 AM the next morning. Or perhaps a regulatory watch agent that scans new rules overnight, stores key points in its memory, and then delivers a curated briefing with your morning coffee.

[00:09:54] Sleep time compute shows that memory plus off-peak reasoning unlocks lower costs, faster replies, and continuously improving service: exactly the mix that would benefit associations, of course, or frankly any business for that matter. So Amith, there's really a lot to unpack here with sleep time compute.

[00:10:14] When you sent this to me, I thought, oh man, that's a great topic for the pod. Uh, I wanna talk about test time compute a little bit, 'cause I feel like it does relate to sleep time compute. Um, can you talk about both of those and how they relate to each other? Perhaps one solves something that the other can't.

[00:10:31] Amith: Sure. Let's zoom out a little bit and talk about, uh, some of the history behind this: the process of scaling AI. So some of you may have heard the term scaling laws, and a few years ago there were a lot of conversations about how scaling laws seemed to continue to hold, meaning that as you increased the amount of computation that you threw at the training process for AI, the models became smarter.

[00:10:54] So that's basically what the so-called AI scaling laws were meant to show, uh, and in fact they did hold true for quite some time. They started to not hold true to the original benchmarks after a period of time, but there's still truth to the fact that if you throw more compute at training, you typically get a better model.

[00:11:13] Of course, being smarter about how you train and being more efficient at how you train is definitely an opportunity; algorithmic improvements are an opportunity. Uh, but that was kind of the first dimension of compute scaling: training time, making the model smarter through better and more training.

[00:11:29] Now, test time compute was this concept that, uh, is actually kind of an awkward term, which is of course very much the domain of AI folks. Uh, and you know, we're very good at that generally as an industry, coming up with weird acronyms and strange words that might mean something to the, to the nerds, but not a whole lot to everyone else.

[00:11:46] But test time compute: test time essentially is when you use the model. So training time is when there are these massive computers creating the model from scratch, essentially. And then test time is when you use the model. So Mallory, when you type into Claude or ChatGPT and you, you know, hit enter, um, as soon as you hit enter, that message is transmitted across the network and eventually gets to a computer where the model is running.

[00:12:09] We call that inference. Um, test time is another term that basically means the same thing. So models have historically, for the history of neural networks, basically always been trained to respond as quickly as they can, meaning they infer from the input what the output should be. They're probabilistic machines, meaning that they'll say, Hey, for this sequence of inputs, what should be the outputs?

[00:12:33] That's basically what they've been doing. Now, um, what's interesting is, uh, late last year there was the first release of a so-called reasoning model, and we covered it in detail. The first one was called Strawberry, uh, from OpenAI, and then later, when it was released, it was called o1. And since then, lots has happened in reasoning models.

[00:12:51] We've talked about that a lot. And by the way, there's actually a new lesson on reasoning models in the Sidecar Learning Hub update that we were just talking about. Um, but reasoning models, essentially what they do is they invoke a new modality of thinking, uh, when you're querying them, when you're asking the model a question.

[00:13:07] So previously the models would just essentially react as quickly as they possibly could. They would not edit their response as they went, even if they potentially found a mistake. They didn't look back at all, and they didn't really stop to think and say, Hey, what is the nature of this problem?

[00:13:24] How can I best solve it? Let me break it down into pieces, uh, what's called chain of thought a lot of the time today. So models were not able to do that. You could do those things in agents that sat on top of models, but models themselves didn't have the ability to do anything other than the instantaneous type of response, responding as quickly as possible.

[00:13:41] So with test time compute, what we were saying is, Hey, if we give the model the opportunity to think longer, then the model might be smarter. It kinda makes sense, right? The model has certain fundamental capabilities through training, but if we say, Hey, model, take 10 seconds to think about this, or take a minute to think about this, or take as long as you want to think about this.

[00:14:03] It's like us: probably if I asked you a question and gave you zero time to respond, you'd have a harder time coming up with a great response. Versus some things, you know, you'd probably wanna step back and think about and say, Hey, what's the best way for me to solve this problem? You'd start working on it.

[00:14:18] Then you might go edit your response. You kind of, like, keep going back and forth, right? So that's what models are doing with test time compute. So this second scaling law was through test time compute, which is to say, hey, give the model more time to think when you ask a question, which is separate from training. And the models became smarter and smarter; we've seen that with o1, o3, and now o4 from OpenAI.

[00:14:41] That's their series of models. We see that extremely clearly with Google's Gemini 2.5 model. Both Flash and Pro are reasoning models, which means that they take advantage of this idea of test time compute, uh, as does the Claude 3.7 Sonnet model when you put it into extended thinking mode. And we've seen dramatic improvements in really complex reasoning across domains like math and physics and biology and a number of other domains as well.

[00:15:06] So that's what test time compute is about. And, um, you know, really, those are the two dimensions of scaling that we've had, mm-hmm, thus far. And what sleep time compute is about is this new third dimension of potential scaling, where we can say, Hey, what if we threw compute resources, um, not during training and not when Mallory asks a question of ChatGPT, but when perhaps Mallory's not asking a question of ChatGPT?

[00:15:30] Mm-hmm. And what can the model learn from that? And how can the model improve? That's essentially what this is about.
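(To make the test time compute lever concrete: Anthropic's Python SDK exposes extended thinking as an explicit token budget. A minimal sketch follows; treat the model name and numbers as illustrative and check the current docs, since these details change quickly.)

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Same question, but with an explicit "thinking" budget: the model spends
# test time compute reasoning before it commits to a final answer.
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative; use a current model name
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},  # the test time compute knob
    messages=[{"role": "user", "content": "How many primes are there below 1000?"}],
)

# The response interleaves "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)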

[00:15:36] Mallory: And what does that look like in practice, Amith? So would you provide access to some large data source initially that it can kind of study while it sleeps, quote-unquote, and then when I ask it questions, it's providing these quick, uh, one-shot responses?

[00:15:52] Amith: One other analogy that might be helpful in explaining this is thinking about, like, university graduates. I often say that, you know, the best models are like, maybe originally they were high school graduates, and then they were university graduates, then they were elite university graduates. Now they're PhD graduates at the 80th percentile, right?

[00:16:07] So these models are really, really smart and really well versed in a wide array of domains, which is cool. But models today are fixed in that moment in time, meaning when OpenAI releases o3, which they just did, the full o3, that model, that particular version of the model, will never get smarter.

[00:16:25] So it's like having this amazing, you know, multi-PhD individual that knows all this great stuff, but every time you interact with that model, it doesn't remember anything about what it got right or what it got wrong. And so that model will never be better than the day it was born, so to speak, which is fortunately not true for us, because we are continually rewiring our brains based on our continuous experience loop.

[00:16:51] And so model architecture right now is still that way. Models are essentially fixed in time as of the end of their training processes. Now, you can do other things. You can do what's called fine-tuning. You can do additional training through something called reinforcement learning. There's a lot of cool stuff you can do at the model architecture level, but they all require, um, significant development processes, and they're not things that you do on a continuous basis.

[00:17:15] So models are kind of frozen in time, like that university grad who's brilliant but is incapable of remembering what you told them the day before when the next day comes around. Yeah, you'd find that quite frustrating, um, at work if you had a team member like that. Right? So then the question is, okay, well, how do we deal with that?

[00:17:33] How do we improve on it? And so, um, a lot of different things have been done. Um, you know, people have been doing things like building scratch pads and trying to give models forms of memory. MemGPT, which you referred to, which was from the same group of folks, attempted to do that, where it was trying to basically create, like, a scratch pad, uh, for memory, uh, to make it possible for models to quote-unquote have persistent memory, in an earlier version of this, uh, concept.

[00:17:57] And the idea, though, is that the model actually has no memory. It's just essentially, uh, like a separate component that the model has access to that has memory. That's what this is about.
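(As a rough illustration of that separation: the model stays frozen and stateless, and all the "memory" lives in an external component that the application re-injects into every prompt. This toy sketch is far simpler than what MemGPT actually does, and `call_llm` is again a hypothetical stand-in.)

```python
class ScratchPad:
    """External memory component; the model itself never changes."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def remember(self, fact: str) -> None:
        self.notes.append(fact)

    def as_context(self) -> str:
        return "Things to remember about this user:\n" + "\n".join(self.notes)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frozen, stateless model."""
    return "[model response to prompt ending in: " + prompt.splitlines()[-1] + "]"

pad = ScratchPad()
pad.remember("Mallory prefers retention reports grouped by member tenure.")

def chat(user_message: str) -> str:
    # The model forgets everything between calls, so the scratch pad is
    # prepended to every single prompt to simulate persistent memory.
    return call_llm(pad.as_context() + "\n\nUser: " + user_message)

print(chat("Show me this year's retention numbers."))
```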

[00:18:16] So now, sleep time compute, coming back to that. So the key here is, um, what are you trying to do when you sleep? Um, well, first of all, just not thinking about the science behind it: you wanna get rest?

[00:18:19] Mallory: Mm-hmm.

[00:18:20] Amith: Why do you wanna get rest?

[00:18:21] Mallory: So I can be better the next day, you know? Yeah.

[00:18:24] Amith: So you'll, you'll feel better, you'll feel refreshed, right? Um, you'll get started again anew. And perhaps, perhaps in that process, too, something else is happening, you know, that's kind of quiet and not really something we think about a lot, but our long-term memories are being formed.

[00:18:38] Some things are being pruned. We're kind of having, like, a cleaning process, both for emotions and for thoughts. And there's this distillation that's occurring where the brain is essentially saying, oh, um, this thing really was important today. Mallory learned this really important thing, uh, or had this experience that was really emotionally positive or negative, and then it kinda lodges those into your memory, and, mm-hmm,

[00:19:02] actually in some cases rewires the way, uh, your, your brain is actually working functionally. So it's quite interesting how that works. Now, our AI architecture is ridiculously simplistic compared to the way biological neural networks work, but we try to learn from this process, right?

[00:19:18] So the idea behind sleep time compute primarily is to emulate what happens in a biological neural network, AKA the brain, when we're sleeping. So I'll give you an example. We're actually implementing this concept, and have been for about a year, in one of our AI tools called Skip. So Skip, if you haven't heard me talk about it before, essentially is a data analyst agent.

[00:19:39] So what does Skip do? Skip is a conversational AI like ChatGPT or Claude or, uh, Gemini. You talk to Skip. Skip has private access to your data, so data that you consolidate into an AI data platform from your AMS, LMS, whatever your systems are. And then Skip is able to, um, talk to you about your business and also write reports.

[00:20:02] Basically, the primary function is to create analytics and reports, um, and Skip needs to understand quite a bit about you as a user, your organization, uh, overall, and of course your data in order to be effective. So what have we historically done when implementing Skip for clients? We've tried to learn a lot about the organization and the data and put a bunch of information into Skip's brain,

[00:20:25] um, and make it possible for Skip to be quite effective, and that works pretty well. You know, that gets us 80, 90% of the way there, sometimes 95-plus percent of the way there. But users are constantly coming up with new ideas and having new questions, right? And so Skip may not have seen a particular request, or some users might use slightly different types of terms than others.

[00:20:45] And so Skip might fail at solving the user's problem. So I might say, Hey, I wanna run an analysis that shows me member retention, but I want to correlate that member retention with how long the member's been with us and also, um, what their level of education is. So run a report, generate, you know, an analytical kind of view of that.

[00:21:07] Um, so that might be pretty straightforward sounding to us, but, um, Skip might interpret that in different ways depending on how much prior experience he has had in solving problems like that for you. Right? So what if Skip gets it wrong? Well, Skip gets it wrong, and I say, well, no, that's not quite right.

[00:21:23] You pulled the data from the wrong place. Uh, it's really not what I was looking for. So I have this conversation with Skip, where I'm giving feedback, and Skip's like, oh, okay, cool. And then Skip will be able to fix the problem and give you a revision, and eventually, you know, you get what you want, right?

[00:21:37] It might take two, three, four turns. And we asked the question, well, how can we make Skip just continually learn, and also have transference of knowledge from conversations with one user to another across an organization? And so sleep time compute: we don't call it that. We call it a learning cycle, which is not nearly as cool as sleep time compute.

[00:21:55] We should have called it that. I was talking to Thomas Altman, who, uh, quite a few of our listeners know. Um, and he and I were chatting about that. We're like, yeah, we, we totally should have called it that, but, um, dropped

[00:22:04] Mallory: the ball. Done. Yeah.

[00:22:05] Amith: We, you know, we, we tried to call it something a little bit more generic.

[00:22:08] Learning cycles. Essentially what happens is this. It's very much what you described at the beginning of this segment, where essentially, outside of when a user's asking Skip for anything, Skip on his own, every so often (and this is typically actually done every hour or so, not necessarily overnight),

[00:22:25] Skip will say, Hey, um, I had this long conversation with Mallory, and I also talked to these 20 other users, and in these conversations, what did I learn? Let's see. Um, Mallory really liked it when I did this. She really didn't like it when I did this. Um, and so it's kinda like if you ever have, uh, done journaling, where maybe at the end of the day you are in the practice of saying, Hey, I'm gonna write down some of my experiences, some of my thoughts, some of my feelings from the day.

[00:22:50] That can be both therapeutic, and it can also be a very helpful way of, of learning. Um, that's kind of what Skip's doing. Skip has this journaling process where Skip's saying, Hmm, that's interesting. What did I learn, and how does this compare to everything I've ever learned before? Because Skip's quote-unquote journal is everything Skip's ever learned in these prior learning cycles.

[00:23:08] So Skip's saying, well, here are all the notes I've ever taken before; I've learned these things. And then in some cases it's like, oh, what Mallory really means when she says ABC is what Amith means by something else, because they each have a different terminology set. Um, and so then in the future, Skip becomes smarter, not only dealing with Mallory, but dealing with everyone.

[00:23:29] Um, so that's the way these learning cycles work. This happens actually quite slowly, offline. In fact, uh, you can utilize what are called batch APIs through all the major AI providers to get much cheaper rates; you just get much slower response times. Um, and in this process, you get back your feedback.

[00:23:45] But if you get it back half an hour later, or even a couple hours later, it doesn't really matter that much. And so then that feedback essentially gets stored in this quote-unquote journal, right? What I'm calling, uh, a journal is basically the scratch pad, but it's a distillation of knowledge using really high-horsepower, high compute.

[00:24:04] Uh, so we're using, like, o4 and we're using Claude 3.7, and we're gonna keep pushing the boundary, using really the most expensive, slowest models to do the distillation of knowledge, to say, Hey, what are the key elements of insight that I need to glean from these 5,000 conversations I've had in the last day?

[00:24:21] And then how do I consolidate that with everything I've ever learned before? And then what happens is, in the future, every time future users come and ask questions, that distillation of knowledge, that journal, is immediately and instantly available for Skip to learn from. And so Skip will be able to utilize that to improve the quality of his responses.
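(A learning cycle along the lines Amith describes might look roughly like this. This is a hedged sketch, not Skip's actual implementation: `distill_with_batch_api` is a hypothetical helper standing in for a provider's batch API, which trades response time for cost, a fine trade when no user is waiting.)

```python
from pathlib import Path

def distill_with_batch_api(prompt: str, model: str) -> str:
    """Hypothetical helper: submit the prompt through a provider's batch API
    and wait for the result. Batch endpoints are cheaper but much slower."""
    return "[consolidated journal produced by " + model + "]"

def learning_cycle(recent_conversations: list[str], journal_path: Path) -> None:
    # Runs on a schedule (hourly, nightly), never while a user is waiting.
    journal = journal_path.read_text() if journal_path.exists() else ""
    prompt = (
        "Here is everything learned in prior cycles:\n" + journal +
        "\n\nHere are recent conversations, including user corrections:\n" +
        "\n---\n".join(recent_conversations) +
        "\n\nConsolidate: what should be remembered, revised, or pruned?"
    )
    # Use the strongest (slowest, priciest) model available for distillation.
    updated_journal = distill_with_batch_api(prompt, model="strongest-available-model")
    journal_path.write_text(updated_journal)
```

(At inference time, the live agent simply prepends the current journal to its prompt, which is why responses get faster and better without retraining the underlying model.)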

[00:24:38] Um, with Skip specifically, we're very early in testing and rolling this out. We actually have not rolled out this capability to any, um, users yet, but we will soon, like in the next 30 days. Uh, but our testing so far shows very positive results. It's, it's a really exciting, you know, additional dimension of scaling.

[00:24:55] Mallory: Mm-hmm. Sleep time compute is a good name, but I, I will say you're onto something, Amith, with the AI journaling. I think that could, that could definitely get people interested. What I wanna ask you is, it seems like it's less about giving the model access to some repository of data to study beforehand, and more about the ability to learn from previous experience and previous interactions.

[00:25:18] Is that correct?

[00:25:19] Amith: I think you could use it in both ways. Okay? But in our use case for Skip, for Betty, for other products we develop, learning from the, from the interactions with users is really, really important. Uh, and until this innovation came along, uh, it was really something that required, uh, you know, a seriously technical sysadmin or developer even, to go in and provide that additional knowledge to these tools.

[00:25:43] And now we're in this continuous learning loop, essentially, where, as you interact with these systems, uh, they'll just feel smarter every day that you use them. They'll feel smarter, they'll be faster, and they'll be better. So it's pretty exciting. Now, the underlying neural network, right, the underlying models we use, have not gotten any better in terms of, mm-hmm,

[00:26:01] their knowledge. But what we've essentially done is built an engineering solution on top of that basic layer to make, uh, the, the system smarter. At some point, if the neural networks become, uh, more liquid or more, uh, elastic in their nature, uh, that will be great. But, um, ultimately that also has some risk to it, because, you know, models tend to be shared across organizations.

[00:26:22] So, um, do you really want other organizations', uh, behavioral changes to affect the way your version works? There's a layering concept that people are working on, or what I just described won't be happening, but, mm-hmm, um, there's all these different things happening at the same time. I think my main takeaway that I'd share, particularly with the non-technical leaders of associations, is: uh, models, um, themselves getting smarter,

[00:26:48] it's great, it's exciting, but this innovation means things are gonna happen faster. The capabilities of the system that you're using, whether it's an AI for, you know, conversational intelligence or if it's a coding tool or whatever it is, these tools are getting smarter and smarter and smarter. And if you think that we're about to slow down because we've had so much progress, I think it's quite the opposite.

[00:27:10] It's, it's gonna continue to compound and drive progress forward at an even crazier pace. So pretty nuts.

[00:27:17] Mallory: Like, can it get faster? I know it can. I know it can in theory, but it, it's crazy. I mean,

[00:27:22] Amith: yeah, the speed of progress, I mean, the, the numbers are just math, right? Yeah. For our brains, we're already blowing up, but, uh, you know, we'll, we'll see.

[00:27:29] But, uh, sleep time compute definitely is something I think people should at least be aware of at a minimum, because it's something that, if you're thinking about building one of the systems that you described earlier, Mallory, you know, member services agents or something else, um, the capability to get smarter in this way is a fairly new concept.

[00:27:47] Hmm. And so being able to, to understand that that is possible. And so if you're working with your team, whether it's an in-house development team, a third party, uh, or, or a product company, um, many of them might not even know that this is a possibility. So you, as the non-technical individual in the room, can come in and say, Hey, have you heard of sleep time compute?

[00:28:05] Mm-hmm. You can start to weave that in. It might solve some of these problems that you're saying are unsolvable. So, mm-hmm, start that conversation.

[00:28:12] Mallory: I think one day this podcast is gonna turn me into a technical person. I don't know exactly how yet, but I'll, I'll just start saying, you know, I'm pretty technical. And yeah,

[00:28:20] uh, Amith, my last question on this topic is, uh, kind of my gut reaction when you shared this information with me, which is: this sounds like it could be actually expensive, running extra cycles while the model's quote-unquote sleeping. Maybe it could have a, a negative impact on the environment as well.

[00:28:38] Um, you said that wasn't exactly the case, so can you address that?

[00:28:43] Amith: I actually think this is gonna help in all of those areas. So first of all, uh, most of these sleep cycles, like yours and mine, you have the opportunity to run them offline at night. And so the power grid is less busy in the evenings. Um, you know, that's driven by a lot of factors.

[00:28:58] Obviously, people aren't doing as much stuff at night. Mm-hmm. Uh, also, you don't need as much power to, to cool things with air conditioning. All that kind of stuff affects power consumption. So you tend to have both less expensive power, and sometimes you have surplus power available in the evenings. Um, that makes it, uh, more environmentally efficient, uh, in, in many cases, to do what I'm describing.

[00:29:18] Uh, the other thing is that you don't really care so much about latency, so you can send your workloads basically anywhere. Um, so you might have data centers that are too far away, uh, to be, you know, effective in terms of latency for a realtime application. So that gives you another opportunity. And the other thing that's important to point back to from the research

[00:29:37] is that they found that, because of the learning from the sleep time compute, or what we call learning cycles, it actually decreases the use, mm-hmm, at inference time. So it makes the models faster, because they can refer to this distillation of knowledge and solve a lot of problems that previously might have required multiple turns of a conversation, which is of course very expensive in terms of, uh, GPU or LPU time, and also with respect

[00:30:03] to, uh, environmental impact. So the net effect of this is: use offline resources that are less expensive and less environmentally impactful, uh, to improve the efficiency of your online resources. So I think it's actually a really positive story, um, in, in all those ways.

[00:30:18] Mallory: Mm-hmm. Perfect. It sounds like AI deserves a nap time just as much as we do, and it's beneficial for all of us, AI and humans alike.

[00:30:28] Next up, we are talking about the OpenAI acquisition of Windsurf, and we'll cover some other coding tools as well. So OpenAI has reached an agreement to acquire Windsurf, an AI-powered coding tool formerly known as Codeium, for approximately (just some small change) $3 billion, making it its largest acquisition to date.

[00:30:48] Windsurf is an advanced AI integrated development environment, or IDE, that leverages large language models and agentic AI to automate and enhance the coding process. It's recognized for features like Cascade agentic AI, which enables autonomous code generation and refactoring; a local codebase indexing engine, which allows efficient, context-aware code suggestions; and Supercomplete,

[00:31:12] which predicts developer intent and offers inline code completions. The acquisition is widely seen as a defensive and strategic move for OpenAI, which faces rising competition from Google, Anthropic, Microsoft, and fast-growing startups like Cursor. Speaking of which, beyond Windsurf, there are several powerful AI coding tools worth considering.

[00:31:34] GitHub Copilot is a widely used assistant that integrates directly into popular IDEs, offering real-time code completions, chat-based help, and multi-file editing capabilities. Cursor, which I just mentioned, provides a full-featured, AI-powered IDE experience with advanced multi-line autocomplete, chat-driven code edits, and deep context awareness, perfect for power users who want granular control over their coding workflow.

[00:32:01] Meanwhile, Claude Code from Anthropic shines as a terminal-based AI assistant designed for complex multi-step coding tasks, bug fixes, and codebase exploration, catering especially to developers comfortable with command-line interface, or CLI, environments. Each of these tools brings unique strengths that complement different coding styles and project needs.

[00:32:24] And Amith, I know you have some experience with maybe, for sure, probably all of them, if I had to guess. But I feel like my first question here is more of a, a declaration. We can always kind of follow the dollars, right? If we wanna look at trend lines, if we wanna look at where we're going in the next few years, follow the money.

[00:32:42] Obviously, OpenAI making this $3 billion acquisition of Windsurf makes you probably realize that this is a direction we need to focus in. Uh, is that shocking to you, Amith, that we're gonna be putting more dollars into AI-assisted code?

[00:32:57] Amith: Not at all. And I think there's going to be, you know, continued investment in this area.

[00:33:02] And, you know, coding has been seen for some time now, for several years, as this killer app for the current generation of language models, and it continues to be the case. Um, you know, Windsurf is, um, it's one of the tools in this space, as you mentioned. And I think, by the way, what you just shared shows that you are pretty technical,

[00:33:22] going back to the earlier segment. Uh, you know, with, with, uh, all of these AI coding assistants, uh, they all do something in common, which is they, they build software for you, right? That's what you're trying to do. You're trying to say, Hey, what can I do to build software without being a coder, or to be a more powerful coder?

[00:33:39] And what I would point to in terms of the trend line is the ability for a so-called non-technical person to build software, to build applications, to add to existing applications, and do it in non-trivial ways. So for a long time we've had the ability, in a variety of different ways, to build very simple things.

[00:33:57] Uh, for example, there's a product called Airtable that came out years ago that made it possible for business users to create databases in the cloud. It was not like SQL Server or Postgres or these other, you know, developer-oriented databases. It was very, very simple for people to create apps. Uh, I mean, even if you rewind in time, prior to that, we had Microsoft Access for the last, you know, 20-plus years, which allowed fairly non-technical people to build meaningful business applications.

[00:34:23] Uh, but beyond, like, declaring what type of information you wanted to store, you kind of needed a coder to come in and, like, build things for you. And what's changing now is the ability to talk to an AI and say, Hey, I want the app to do this and this and this and this. I want my membership application to work this way when the user comes into the website.

[00:34:41] I want my pricing to work this way. I want my, um, you know, I want my functionality for abstract submission to work in these other ways. I'm using association examples intentionally, because these custom, you know, code-based things are way, way more accessible to everyone now. Uh, but coming back to your, your broader point, Mallory, about following the money: that's generally an interesting path to, uh, consider.

[00:35:04] Mm-hmm. I think it's true that it's oftentimes, um, a line that gives you insight into where things are going. Uh, at the same time, sometimes those insights might be directionally correct, but the timing and kind of the magnitude of the investments may be wrong. I actually think in this case, um, the amount of money they're spending is trivial to them and, and, you know, it's, it's kind of an irrelevant amount.

[00:35:29] It's more about them getting into the coding space. Mm-hmm. And that's how big these dollars have gotten in the world of, of AI: that represents less than 1% of OpenAI's market, you know, not market cap is what I was gonna say, but their latest valuation; they're, they're still a private company. Um, so I would point out that, um, you know, ultimately, um,

[00:35:49] the model business, meaning the, the business of, uh, building the underlying AI models like GPT-4, mm-hmm, and Claude 3.7, that is a race to the bottom. It's gonna be very, very difficult for companies to make significant money in building and selling models. There's free, open source options available that are nearly as good as the commercial counterparts.

[00:36:15] Some argue that the open source market will at some point overtake the commercial market. We talked about that with DeepSeek R1 back at the beginning part of the year, uh, where that model was as good as o1 from OpenAI, uh, you know, which is obviously a proprietary, uh, piece of software. So, um, if, if models are becoming cheaper and cheaper and cheaper, and eventually close to free, how do you make money?

[00:36:37] You can't scale your way out of something that's approaching zero in terms of, of revenue and profit. So all these companies are heading to the application layer. So the application layer includes coding, it includes agents, it includes things that integrate into business applications. It includes research.

[00:36:56] It's all the utility that you get as a business user on top of the model. So you think about people that are starting to form opinions and, uh, loyalties even to certain tools, mm-hmm, whether it's Claude or it's OpenAI or anything else. It's not really because of the model. The model is very, very similar between Claude's, mm-hmm,

[00:37:15] and ChatGPT's latest underlying models. Um, but it's about user experience. It's about simplicity. It's about low friction. It's also about connectivity. So one of the things that has been going really well for the Claude team is, I mean, they were the people who proposed the standard called MCP, which we've covered recently on the pod.

[00:37:32] And the Model Context Protocol, or MCP, opens up AI systems to all sorts of connectivity with other tools. Um, as we both covered, and Mallory demonstrated, in the podcast, it's really, really exciting. And Claude was the earliest adopter, uh, of this standard. Um, and now everyone else is following. So, you know, that makes Claude more functionally valuable to me, uh, than ChatGPT, because ChatGPT is likely to very soon support Model Context Protocol, maybe even by the time you're listening to this.

[00:38:02] Uh, but as of this moment in time, it does not. So it's those kinds of things that make the ecosystem better, to lower friction and improve, like, the business value. Uh, and coding is just one of those applications. In fact, just to kind of put an exclamation point on this, um, OpenAI, in addition to the Windsurf thing, they recently announced that they're hiring, um, the former Instacart CEO, uh, Fidji Simo, who

[00:38:26] was and is on the OpenAI board, uh, to be the CEO of, not OpenAI, but OpenAI's applications business, uh, which I believe will include this, as well as, you know, ChatGPT as a consumer product and a number of other things. So the applications business is clearly going to be where the money is at. Mm-hmm. Um, and, you know, clearly our thesis is that if you focus even more specifically within apps on particular verticals or particular highly, you know, specialized use cases, you can build something deeply meaningful for people and also have a path to, you know, a sustainable business at the same time.

[00:39:01] Mallory: Hmm. For our technical folks, Amith, I know you have experience with most of these tools. If there's someone technical, I don't know if this would be the case, but someone technical listening to this podcast who has not experimented with any of the tools that I mentioned, or with AI-assisted code generation, what would you say about, one, the experience using these tools, if you have any favorites, and then two, kind of what that experience is like, developing software with AI versus without?

[00:39:30] Amith: So we have a lot of people using Claude Code. That's, mm-hmm, hands down our favorite coding tool. Uh, it's far more powerful than anything else we've tried, including Windsurf, Cursor, Replit, um, you know, Microsoft Visual Studio Code. That's not to say that it's used instead of those things; you still need what's called an IDE, which is this, you know, overall, like, visual development environment where you can see your code and edit it and do things.

[00:39:55] But having a command line interface, um, for developers is super powerful, because, um, it allows the tool to interact with your computer in ways that, uh, these other software tools really generally can't. Um, it's also just much, much smarter. Where Claude shines is being able to deal with super complicated, long-running processes where you wanna, you know, go through an entire code base and make certain types of changes or check for problems, uh, look for performance optimization opportunities, or, or in some cases, build entirely new apps completely from scratch.

[00:40:28] So if you consider yourself, you know, kind of an intermediate to advanced developer, I would say get Claude Code. Um, it runs on Mac, and it will run on Windows using something called WSL. So it's pretty easy to install. Um, OpenAI has a competitor product they announced, uh, fairly recently, called Codex. Um, I don't find it to be nearly as good as Claude Code.

[00:40:48] Um, so Claude Code is, in my opinion, at the moment, the king of the hill. Um, as far as IDEs are concerned, I know a lot of people who love Cursor. We have some team members that use Windsurf. Most of our team still uses Visual Studio Code, because, um, they actually have within Visual Studio Code something that's equivalent to Cursor and Windsurf, which is called the, uh, Copilot

[00:41:07] agent mode, where, uh, Copilot, which is, you know, kinda the first AI that most people had experience with in the developer world, um, that tool now has gotten a lot more powerful, kind of quietly in the background. It doesn't have as much buzz around it as Cursor. Um, my point of view is that, absent an acquisition from a major technology player like OpenAI, um, products like Cursor and Windsurf are gonna have a really hard time,

[00:41:29] because these products are going to, uh, I mean, it's really a commoditization, um, unless you have enough, uh, scale. Uh, and the cool thing about Claude Code is, because it's produced by a model developer, they're closer to the metal, meaning that they are able to take advantage of the model in ways that I don't think these other vendors are able to do.

[00:41:49] Again, I'm not an expert in Cursor or Windsurf; I've used one of the two of those tools. Um, I just think that Claude Code is, is worth checking out, and a lot of people still haven't even tried it out. It's, it's maybe a little bit intimidating looking initially, 'cause it's a command line thing. Um, but if you're a developer, check it out.

[00:42:05] I think you'll find it to be quite interesting. Uh, one thing you can do with Claude Code that's super easy after you install it is just go in there, open up Claude Code inside one of your projects, and say, explain this project to me, and you'll see an interesting piece of feedback. And ask Claude Code, uh, to perhaps solve an issue that you have in your backlog: give it the issue, a URL to your GitHub repository if you use GitHub, or, or a description of the issue.

[00:42:28] And Claude Code will go through, um, every area of your code base necessary to solve, uh, a bug report or a feature request, uh, and give you back a complete, you know, uh, set of changes that you can review easily. So that's for the developers. For the non-developers, and I'm, I'm talking to you, the CEOs who declare yourselves decidedly non-technical and delegate all of your technology stuff to your IT folks:

[00:42:52] I'm talking to you guys, as well as everyone else who's in the non-technical camp. Um, get yourself access to Claude. Install the desktop version of Claude, which works both on Mac and on PC. What I'm about to describe does work in the web version as well; it's just not nearly as good. Um, so in the desktop version of Claude, go in there and have a conversation about some kind of an app

[00:43:14] that you want to build, or maybe one you've already got on your website. Like, a common thing associations do is they put their prospective members through a whole lot of pain to sign up. So it's very typical that you'll have an e-commerce process where it's like, oh, I have to go through this step and this step and this step.

[00:43:30] Or maybe you have a membership application that you paid tens or even hundreds of thousands of dollars to some developer to build, and the thing has been sitting out there for years, and it's, you know, really crusty old software, right? And it maybe wasn't even great initially; it's really not great now. And the problem is that it's not the highest thing on your priority list, because it kind of works.

[00:43:50] But your members and prospective members do not like this thing. And so, um, what I would recommend you do is take a few screenshots of the current application on your website. You paste them into Claude, the desktop app. Um, and then what you do is you say, Hey, Claude, this is my current membership application.

[00:44:08] It kind of sucks. I want your help improving it. Give me a prototype of what a new membership application would look like. Hit enter. Um, what you'll see very quickly is Claude will start. And by the way, put Claude into extended thinking mode, which is this thing where Claude thinks more deeply, as we were talking about earlier. Um, and

[00:44:25] Claude will very quickly come back to you with a fully interactive prototype. It won't be functional yet, but it'll show you, Hey, this is what I imagine you could do, um, for your membership app. And you can go back and forth and say, well, you know, right now it's like 18 steps in order to become a member; I wanna really reduce that as much as possible.

[00:44:42] Can you take a look at the flow and gimme some suggestions on how to improve it? So Claude puts on his UX/UI hat, um, and, you know, is deeply empathetic with your business. And by the way, if you give Claude a URL to your website and say, read up about our association, he'll do that as well and come back to you with, with, uh, better insights.

[00:44:59] But through this process, over the course of 15, 20 minutes, you could build a completely functional (not functional in terms of connected to your data, but, like, visually functional) prototype of a new membership application. Or if that's not your problem, maybe, uh, people wanting to sign up to be speakers, or people that are searching for volunteer opportunities.

[00:45:18] Prototype these member-facing things that are giant pain points for you, and ask Claude Code to help you build a better way. Now, um, you might say, well, that's really cool, this is really exciting, but now don't I still have to hire a developer to take that prototype and make it real? That's where it gets really, really exciting, because you can say, Hey Claude, thanks so much for this beautiful prototype.

[00:45:36] It's exactly what I want. I love this thing. Um, now I really want to go implement it. And let's say you're a little bit technical. At this point, you could say, Hey, Claude, talk to your buddy Claude Code. Claude Code has an MCP server, so your Claude Desktop app can talk to Claude Code and say, Hey, I want you to create this as a React project or an Angular project or a Vue project.

[00:45:58] These are just different, uh, software development frameworks. And go create it here and build it until it runs. By the way, here, uh, use a local database for now; just prototype the database, and then we'll have someone else later on securely connect it to the data. And you could do that. And now you actually have, like, a functioning, true app, which you can then use Claude Code directly to rewire, to connect to the real data source via API or whatever.
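(If you want to try that handoff yourself: Claude Code can run as an MCP server that Claude Desktop connects to. Assuming your installed version supports the `claude mcp serve` command, the Claude Desktop config file would get an entry along these lines; this is a sketch rather than a guaranteed recipe, so check the current Claude Code docs for the exact setup.)

```json
{
  "mcpServers": {
    "claude-code": {
      "command": "claude",
      "args": ["mcp", "serve"]
    }
  }
}
```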

[00:46:22] So you can go through this crazy fast iteration, uh, and build stuff. And, and you could also say, Hey, Claude, I want three different versions of a member application, right? Which might take a human, like, three weeks to do, you know, a week each or something, and cost a lot of money. You're never gonna do that.

[00:46:37] Well, now you can do that. And don't just use Claude; go to Google Gemini or OpenAI at the same time, ask for the same thing, and come up with the best answer for you. So these AI tools can be used to solve business problems at this intersection with technology that's normally your Achilles heel.

[00:46:54] As an association, you say all the time, you know, we don't have technical strength, we don't have a thousand developers, we're not Amazon, we're not Netflix. But the field's leveling, and you now have the ability to do this if you take the time just to go and experiment with this stuff. So for the non-techies, go check out Claude.

[00:47:10] I point to this one not because it's necessarily better at coding than the others. I mean, it's one of the best, but it's just the easiest one to use. It's just so simple to use Claude's desktop app to see what they call an interactive artifact, and it's just pretty damn cool too.
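One way to picture the "local database now, real data later" step mentioned above: have the prototype talk to a small interface rather than a specific backend, so that rewiring later means swapping one implementation for another. Here is a minimal TypeScript sketch of that idea, with entirely hypothetical names and routes; your AMS vendor's actual API will look different.

```ts
// Hypothetical data layer illustrating "prototype now, wire up later."
// The interface stays the same; only the implementation behind it
// changes when a real AMS API becomes available.

interface Member {
  id: string;
  name: string;
  email: string;
}

interface MemberStore {
  listMembers(): Promise<Member[]>;
  addMember(member: Omit<Member, "id">): Promise<Member>;
}

// Phase 1: in-memory "database" so the prototype runs with no backend.
class LocalMemberStore implements MemberStore {
  private members: Member[] = [];
  private nextId = 1;

  async listMembers(): Promise<Member[]> {
    return [...this.members];
  }

  async addMember(member: Omit<Member, "id">): Promise<Member> {
    const created = { id: String(this.nextId++), ...member };
    this.members.push(created);
    return created;
  }
}

// Phase 2: the same interface backed by a real API. The base URL and
// routes are placeholders, not any real vendor's endpoints.
class ApiMemberStore implements MemberStore {
  constructor(private baseUrl: string) {}

  async listMembers(): Promise<Member[]> {
    const res = await fetch(`${this.baseUrl}/members`);
    return res.json();
  }

  async addMember(member: Omit<Member, "id">): Promise<Member> {
    const res = await fetch(`${this.baseUrl}/members`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(member),
    });
    return res.json();
  }
}

// Swapping one line is all the "rewiring" the rest of the app sees:
const store: MemberStore = new LocalMemberStore();
// const store: MemberStore = new ApiMemberStore("https://example.org/api");
```

Because the rest of the app only sees `MemberStore`, connecting to real data later is a contained change rather than a rewrite, which is what makes the fast iteration described above sustainable.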

[00:47:24] Mallory: That's incredibly exciting, and I feel like it could also be a bit overwhelming if we have some association leaders here who have traditionally, like you said, outsourced dev work.

[00:47:34] They just don't have that strength within. This almost makes me think that you would need to reevaluate that whole part of your business, because is it worth maybe hiring a few technical-ish people and having a really small team that can just iterate on projects like this all the time? What do you think, Amith? Or is there still value in having some of that work outsourced?

[00:47:56] Amith: If you have a dev team,

[00:47:58] you need to make it their mission in life to become experts at using these tools. If they tell you they don't like AI, or they use AI but they're still slow in developing responses for you or building things, they're not using AI the way I'm talking about using AI. There are ways to help your dev team

[00:48:13] come up to speed on this stuff if they're willing. You know, I do know some folks who are deeply technical but are frankly giant skeptics of AI and say, no, no, no, it's not gonna be perfect, it's not gonna be as good as my code. And the reality is that may have been true three years ago, and it's not true now.

[00:48:27] So if you have a dev team, you have to push them and pull them. If you need to, demand from them that they become truly AI-native developers. It's critical, 'cause your velocity is gonna go up by probably a factor of 5x, maybe even 50x. That's what we've seen on our dev teams. It's just ridiculous how much productivity you have.

[00:48:44] So if you have a dev team, get them up to speed on this stuff. By the way, we're thinking of building an entire series of software development courses, specifically tuned for the association world, in the Sidecar Learning Hub. If you think that's an interesting idea, we'd love to hear from you, so please drop us a line.

[00:49:00] We have a feedback loop through the pod; Mallory can explain it a bit, 'cause I forgot how that works. But we also obviously have email, you can do hello@sidecar.ai, and you can also hit us on LinkedIn. We'd love to hear from you if you think that's a useful idea. We'll probably put some public-facing videos on YouTube as well that give little snippets of things,

[00:49:16] like what I just described with using Claude Desktop. Now, if you don't have a dev team and you outsource this stuff, which is very typical, just make sure the team that you outsource to is up to speed. Because if you're spending tens or hundreds of thousands of dollars to get these little micro-features out of people, and it takes weeks, months, or forever to get them, and they aren't that great when you get them (which is unfortunately the common experience with custom software developers or people who do configuration or customization of packaged software), go demand more.

[00:49:44] You know, now you can get more, and you should expect more. And finally, if you don't fall into either of those buckets, where you don't have your own dev team and you don't traditionally outsource software development work, and you just kind of use out-of-the-box software and, you know, kind of make your members pay the price in terms of high friction,

[00:50:02] just go try this and see what happens. You can see that it is actually quite possible for you to do almost all of it yourself. Maybe you go hire a freelancer on Upwork, or you hire a team that knows associations really well. We obviously know people who do that kind of stuff, but the point is there are ways to do this that are dramatically different from what you think

[00:50:22] is the way to do this. It's not only lower cost, but faster, higher quality, more reliable. It's just an exciting time. We can do things for our members, for our audiences, that there's no way we would've been able to do before. Even the largest associations, with the largest budgets and the largest technical teams, could not do what now

[00:50:38] the very smallest association with the smallest budget can do, literally in days.

[00:50:42] Mallory: Mm-hmm. For all my non-techies out there, the fact that we can go to Claude and create an interactive prototype of an idea that we have, I mean, that just wasn't even possible quite literally a year ago, right? So to think that any idea we have, as it pertains to our work, even if we're non-technical, we can potentially build, or start to build.

[00:51:02] That's very exciting. And what Amith just mentioned is in the show notes. If you're listening audio-only, right at the top of the show notes there's a send-a-text button, and you can text Amith and me and the Sidecar Sync podcast and let us know if you think that software development course would be interesting.

[00:51:21] I think personally, Amith, maybe we could do a route for technical folks and one for non-technical folks. I think that would be a really cool way to get everybody involved in that conversation. And yes, let us know if that's something you'd be interested in.

[00:51:34] Amith: We've covered a lot of ground here, and if sleep time compute didn't put you to sleep, maybe the technical conversation did. But I hope neither did, and it's a great time to be alive.

[00:51:44] It's a great time to be an association leader. It's a great time to explore and experiment.

[00:51:50] Mallory: Absolutely, everybody. We will see you all next week, after some good sleep, hopefully.

[00:52:06] Amith: Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world.

[00:52:29] We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.