Summary:
This week on Sidecar Sync, Amith Nagarajan and Mallory Mejias dive deep into OpenAI’s biggest moves of the year—starting with how ChatGPT has evolved into a no-code platform capable of building full applications in minutes. They explore the implications of AgentKit, voice-first workflows, and whether associations should adopt or avoid these locked-in ecosystems. Mallory reveals how ChatGPT Pulse, a $200/month personal AI assistant, is reshaping expectations around newsletters and member engagement. Finally, they unpack OpenAI’s trillion-dollar infrastructure pacts with Nvidia, AMD, and Oracle—and what it means for cost, control, and the future of AI accessibility. Associations, take note: the AI platform war is accelerating, and your strategy matters more than ever.
Timestamps:
00:00 - Welcome and Countdown to DigitalNow 2025
05:05 - What Amith Will Cover in His DigitalNow Keynote
08:21 - ChatGPT Becomes a No-Code App Platform
12:29 - Lock-In Risks with AgentKit and Platform Control
16:41 - Private Cloud Agents vs. All-In on OpenAI
21:51 - What Is ChatGPT Pulse and Should You Care?
25:37 - Why Associations Must Embrace Personalization
34:01 - The Trillion-Dollar Infrastructure Power Play
38:31 - Will AI Compute Get Cheaper?
43:16 - Final Thoughts on Industry Shakeups
👥Provide comprehensive AI education for your team
https://learn.sidecar.ai/teams
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
AgentKit ➔ https://openai.com/index/introducing-agentkit/
ChatGPT Pulse ➔ https://openai.com/index/introducing-chatgpt-pulse/
rasa.io ➔ https://rasa.io
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
- Sidecar Blog
- Sidecar Community
- digitalNow Conference
- Upcoming Webinars and Events
- Association AI Mastermind Group
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
Read the Transcript
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.
[00:00:15] Greetings, and welcome to the Sidecar Sync, your home for content at the intersection of artificial intelligence and the world of associations. My name is Amith Nagarajan,
[00:00:25] Mallory: and my name is Mallory Mejias.
[00:00:28] Amith: We are your hosts, and as always, we have prepared for you, for your listening pleasure, all sorts of craziness that has to do with the world of AI, because things don't ever seem to slow down either.
[00:00:39] Mallory.
[00:00:40] Mallory: They never do, but you know what else never slows down? Time in general. And Amith, we're about a month out from digitalNow 2025. How are you feeling about that?
[00:00:50] Amith: I'm pumped. Uh, we're ahead of target in terms of registrations, and we're looking forward to the biggest digitalNow ever, up in Chicago, November 2nd through 5th.
[00:00:59] We'll [00:01:00] keep hammering that home: November 2nd through 5th. You do not wanna miss that. We will have a record crowd, uh, probably over 300 people, um, is the target, and we expect to have incredible content. Uh, we've got amazing keynotes lined up, amazing sessions lined up, and, for the very first time, we have an awards program.
[00:01:21] So we'll be recognizing innovators in the community for the outstanding work they've done to contribute to the advancement of their individual association and to the broader association community, uh, by sharing their learnings and sharing their good works. So, really excited about that. The community is, I think, really firing on all cylinders.
[00:01:40] Uh, we just had that amazing Sidecar MVP Summit, uh, which was up in Utah for, um, a day and a half back in late September. And, uh, this was, uh, perfectly timed for the broader Sidecar community to come together. So really excited about that. We've got amazing growth in the AAiP certification program. [00:02:00] All of it's come together, so I am, I am really pumped.
[00:02:03] Mallory: I'm pumped as well. I will be there. So everybody listening, if you're attending digitalNow 2025, come up to Amith and me and tell us hi and all the things you know about our lives from this podcast. It's always a pleasant surprise hearing stuff like that. But Amith, working on Ascend 3rd Edition, which will also be officially released at digitalNow 2025,
[00:02:23] I've come to really admire and appreciate the innovative work that the association community is doing, and so I think it's awesome that we have a dedicated awards program to recognize those innovators.
[00:02:36] Amith: For sure. Yeah. There's so many things happening in the world of associations right now, and we want to recognize not just the sizzle, and the sizzle's always exciting.
[00:02:44] People wanna pay attention to the sizzle, the headlines, the things that really catch your attention, and we've got plenty of that. But I would say that some of the really basic day-to-day things people are doing to improve their efficiency by 5%, by 10%, by 20% are really [00:03:00] where a lot of the unsung heroes are playing right now.
[00:03:03] Where we have a lot of people who've mastered the basics of ai. They've taken their time, they've gone through, they've put in the reps, right, and they've learned a lot. And they're automating things. They're semi automating things, and they're saving themselves time. They're saving their team members time.
[00:03:17] That in turn is translating to two things. Number one, of course, better throughput, better member responsiveness. Uh, but it also is seeding the idea machine, which is this flywheel of creativity that can only come from your own direct experience working with these tools. We always talk about how do you start with AI. Well, you just start, you have to get going.
[00:03:36] You put one foot in front of the other, so to speak, and, and you move down that path. And so we're starting to see people who are in fact doing that, in some cases now, for a couple years. And that's resulting in people saying, wait a second, I did this really basic thing with Claude, or ChatGPT, or Gemini.
[00:03:53] I bet I could do this other thing. And that leads to, oh wait a second, but we could also do this. So creativity, I [00:04:00] think, is very much this thing where you have to have some momentum, and you keep building that, you know, over time. And so I find it actually really important to recognize people where they're at.
[00:04:10] To look for people who are doing really interesting day-to-day things in their own workflow or at the team level. Certainly large scale enterprise transformational things where people are deploying, you know, AI at scale. Of course, we wanna talk about that. I think it's really important to recognize people and leaders.
[00:04:25] Those of you that are listening in to the Sidecar Sync, I'd recommend that you give people props. You know, when you're talking about big things happening in AI, don't just focus on the big, you know, moonshot goals. Those are very important, but also think about the people who are making that incremental progress and starting to add mass to your flywheel, spinning it just a little bit faster each day.
[00:04:44] Mallory: Mm-hmm. John Spence has a great phrase. He was a keynote speaker at last year's digitalNow, and he's also been on the podcast before. But it's catching people doing things right, and I love that, right? Because so often it's, oh, I catch someone doing something wrong. But catch someone doing something right today, [00:05:00] how they're using AI, something innovative they're doing with their daily work.
[00:05:03] It makes an impact, for sure. Amith, I've gotta ask, because we're a month out. Do you have any idea what you're talking about in your keynote session?
[00:05:12] Amith: I have a general idea. Okay. You know, 30 days is, is an eternity in the world of AI and in kind of my crazy brain. So I, I think that, um, I'll settle on the topics in the next couple weeks.
[00:05:22] Uh, mm-hmm. Just narrowing it down. I like to open the conference with kind of a macro view. I don't like to talk about one or two specific topics too much. You'll definitely hear me talking about agentic AI, but I want to get past the hype and really get down to the meat of it, and talk about how people can do things and how people are doing things with agentic AI.
[00:05:41] Uh, we'll have a lot of sessions on agentic AI, by the way. Uh, but ultimately I think that, uh, I'll be trying to frame the conversation around the journey that we've been on, where we're at in the journey at the moment, and where we're heading. So that's what I tend to talk about each year at
[00:05:56] digitalNow, with, um, some sub-themes that are [00:06:00] zoomed into what's happening at the moment. Last year, for example, we talked about, uh, voice, and at the time audio models were just starting to become good enough for close to realtime interaction. ChatGPT Advanced Voice Mode, or what they called the Realtime API, had made its debut maybe two months before digitalNow
[00:06:16] last year. Uh, since then, so much has happened in the world of audio, and we've talked a lot about audio, which is appropriate for a podcast, of course. Uh, we like audio a lot. But, uh, you know, it's, uh, definitely going to be featured, and I think the intersection of audio and agentic AI is super interesting. So getting to your members, and to anyone else for that matter, when they're on the move or they just prefer to speak.
[00:06:40] A lot of us aren't the best at typing, and even if you're pretty good at typing, you can probably speak quite a bit faster. But also there's more richness to what you're saying. We've talked about the information density of different modalities at times, where we talk about video having very high information density,
[00:06:56] audio having a little bit less, right, and then text having the least [00:07:00] of the three. And so when you just type something in, the AI can only infer from that literally the characters that you typed. If you and I type the exact same thing, mm-hmm, the AI just gets that.
[00:07:10] Whereas nowadays the audio models aren't simply doing, you know, speech-to-text then text-to-speech. They're actual native audio models, and they're able to actually hear emotion and tone and speed and all of these things. So it's pretty amazing stuff. So audio, agents, and the macro view, those will be kinda the high-level topics, but that is subject to change.
[00:07:30] So you're just gonna have to come to digitalNow, for those of you that are listening, and find out. Uh, I know I've talked to a couple of our keynotes that are, uh, planning cool things for the event, and they're doing the same thing. They're kind of narrowing down just generally what they're gonna focus on.
[00:07:43] But so much happens every day in the world of AI that uh, it's a little bit hard to be a hundred percent specific.
[00:07:49] Mallory: We are looking forward to it. Amith, I wanna introduce today's topics. We're covering three big OpenAI announcements. So first we're gonna talk about Dev Day 2025, [00:08:00] where ChatGPT has transformed into a platform that potentially lets anyone build apps without code.
[00:08:06] Then we'll be talking about ChatGPT Pulse, the $200-per-month AI assistant that researches for you while you sleep. And finally, OpenAI's trillion-dollar infrastructure deals with AMD, Nvidia, and Oracle that reveal where AI is really headed. So let's jump into Dev Day 2025. One of the major announcements really is that ChatGPT is now a platform where you can build and run entire applications directly inside the chat interface.
[00:08:35] They demonstrated this with live examples of data visualization tools and interactive games running entirely within chat conversations. AgentKit, relevant for what we just discussed, Amith, is their new toolkit that eliminates coding entirely. It's not a low-code solution, it's no-code. So users can describe what they want in natural language, and AgentKit builds functioning AI workflows. During demos,
[00:08:59] someone [00:09:00] built a complete customer service automation system using only voice commands. The system could intake requests, route them, generate responses, and escalate to humans when needed, all created by speaking to the AI in under five minutes. Now, I'll let you all know, I tested out the Agent Builder myself.
[00:09:18] I didn't find it so intuitive. Uh, I have some experience with platforms like n8n and Zapier. I find those much more functional and user-friendly at this moment, and you're also not locked into the OpenAI ecosystem when you use those tools. Just a note from me. They unveiled GPT-5 Pro in the API with aggressive pricing to undercut Anthropic and Google. Sora 2,
[00:09:40] the AI video model that we covered recently, is now available via API with synchronized audio generation. The new Codex upgrades include direct Slack and GitHub integration, so now AI can read your team's conversations, understand context, and push code directly to repositories. They also [00:10:00] launched GPT Image 1 Mini, a smaller but highly capable image generation model at 80% less cost than flagship models.
[00:10:07] They're also testing a model selector that lets users choose between different image models based on budget and quality needs. Plus, we're seeing new voice APIs that are 60% cheaper than before. The real wake-up call came from OpenAI CEO Sam Altman, when he said individual creators can now build what previously required entire companies.
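The customer service demo Mallory describes, intake, route, respond, escalate, can be sketched in a few lines of plain Python. To be clear, this is a hypothetical illustration of the workflow's shape, not AgentKit code: the keyword-based `classify` function stands in for what would really be an LLM-powered classifier, and the canned replies stand in for model-generated responses.

```python
# Hypothetical sketch of an intake -> route -> respond -> escalate
# customer service flow. NOT AgentKit code: classify() is a keyword
# stand-in for a real LLM classifier, and the canned responses stand
# in for model-generated replies.

def classify(request: str) -> str:
    """Route an incoming request to a queue, or flag it for a human."""
    text = request.lower()
    if "refund" in text:
        return "billing"
    if "angry" in text or "lawyer" in text:
        return "escalate"
    return "general"

def handle_request(request: str) -> str:
    """Run one request through the workflow and return the outcome."""
    route = classify(request)
    if route == "escalate":
        return "Escalated to a human agent."
    responses = {
        "billing": "Routed to billing; drafted a refund reply.",
        "general": "Drafted a general support reply.",
    }
    return responses[route]
```

In a visual agent builder, each function above would be a node in the workflow canvas, with the routing decision made by a model call rather than keyword matching.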
[00:10:30] A lot to unpack there. Amith, what is the most exciting takeaway for you?
[00:10:35] Amith: Well, I'll start with what you wrapped on, which is Altman's quote, and I do think he's right that an individual has unbelievable power and potential to create now. That is true across a lot of different platforms. You pointed this out earlier.
[00:10:48] It's not just OpenAI, but they're clearly the leader right now in terms of consumer mind share. Uh, they have over a billion monthly active users on ChatGPT, which I think is greater than all the other competitors [00:11:00] combined. So they're clearly the leader in terms of consumer adoption, uh, and so they have an opportunity, and there's also some responsibility, to pave the way in terms of what the next thing should be for the platform.
[00:11:10] Uh, I think ChatGPT, along with Claude and along with all the other major players in conversational AI, of course wants to be the platform, because when you're the platform, the universe kind of revolves around you. Uh, and when you're a participant in someone else's platform, you don't have that level of control.
[00:11:28] And clearly the profit follows the control. So there is an opportunity here of significant proportions, and that is what everyone's chasing. I think your earlier point is actually worth really hammering home for our listeners: um, when you go to a full-stack solution like OpenAI's Agent Builder, or the same thing from a number of other companies, you are fully integrated, which means, um, it's kinda like the Microsoft days of old.
[00:11:55] You are a hundred percent bought into that layering of technology. [00:12:00] You use OpenAI models, you inference, or run the model, on their cloud infrastructure, with their security environment, whatever that is, whatever you sign up for. Uh, the next layer up, you know, in terms of APIs and then the Agent Builder, you're in their environment.
[00:12:15] And then presumably from there, you would either use their UI directly through ChatGPT, or you could build apps on top of that. They did also have some tooling around, uh, conversational UI that you can embed in your own app, which is quite convenient. So let's talk about the pros and cons of this. The pros are, it's easy.
[00:12:32] So you have this fully integrated working stack, if you wanna build a solution and you're comfortable being a hundred percent based on OpenAI. It's not quite turnkey, as you encountered, mm-hmm, when you're trying to use Agent Builder, but it is much more integrated than anything else you're gonna see out there, at least at the moment, to my knowledge.
[00:12:51] However, the downside is what you pointed out. You are fully at Sam Altman's, you know, beck and call. Essentially, you belong to him. [00:13:00] And obviously, based on the way I said that, I'm not particularly comfortable with that. And it's not about Sam, it's about anybody. I like the idea of optionality. I like the idea of choice, and it's not because I don't like OpenAI.
[00:13:10] I'm actually a big fan of theirs. I think they've done amazing work. I've got concerns about some aspects of the company, but the same could be true for a lot of other businesses. Um, so this is not anti-OpenAI at all, to be clear. This is more about optionality. The world of AI is changing so incredibly fast.
[00:13:28] I think having the ability to switch models, to switch model providers, to use other inference providers like Groq with a Q, or Cerebras, or a number of other providers that have faster chips, being able to move to model providers, perhaps like Google, who have innovative architectures, is really important.
[00:13:46] And so if you are, uh, on a singular provider, you are limiting your choice. Now, that may be a trade-off that's reasonable to you, and that's fine. In my mind, the goal is just to be aware of it. If that trade-off is okay to you, by all means, [00:14:00] go forward and build on the OpenAI stack. Just know that you are completely in that environment.
[00:14:04] Probably for a while, because models are easy to swap in and out, right? You can very easily change models if you build your software even reasonably well. There's lots of ways to very quickly swap models in and out, inference providers in and out. But once you go to a higher level of abstraction, a more complex layering, and you put your business logic into something like Agent Builder,
[00:14:23] now you're really gonna have a hard time porting that over to another platform. To move from their Agent Builder to Zapier, to, uh, you know, Google's agent builder, or to anything else, that takes effort. It takes time. So I do think that there's trade-offs. I think that, uh, prototyping in an integrated stack like that could make sense, uh, because it's quick and easy.
[00:14:44] But I also think you have to be thoughtful about: is that the architectural decision you want forever? Uh, to me, security is paramount, and so I am really concerned about any of the major labs, OpenAI, and, uh, Claude as well. Um, [00:15:00] it's the ability for them to have your full stack of compute in their cloud.
[00:15:04] The frequency with which these major players are changing their terms and conditions about privacy is concerning. I'll make a comment about Anthropic, who normally, these guys are normally very privacy forward and very focused on alignment and safety. But very surprisingly to the community,
[00:15:21] about three weeks ago, they announced that they were just gonna start training future models on data that you share. Um, which shocked a lot of people. Like, wait, hold on a second. That was the opposite of the default setting before. And so when that happened, of course you can address that, but
[00:15:36] it's really problematic when these companies, which are already very large companies, they're already extremely powerful, and their power isn't so much because of their revenue. Um, they have decent revenues now and they're growing fast, but it's more about where they live in the stack and how dependent you are on them.
[00:15:53] The power of the vendor is determined much more by that than by anything else. So my point about all this is that, [00:16:00] because of the fluidity with which these vendors seem to think they can just change their terms and conditions, and they have the power to do so, um, I would prefer to have optionality. I would prefer to be an association that says, listen, I can put a protective bubble around my content.
[00:16:14] I can absolutely leverage AI, but do it in a safe way where I have optionality, and if a vendor decides to make those kinds of changes, I'm not subject to their whims, because otherwise it would take me six months to port my agent architecture over to something else. So that's my thought process. Um, I don't think that it's a bad idea to use an integrated stack.
[00:16:31] I just think you have to go in with your eyes extremely, you know, pried open with a crowbar. Mm-hmm. Because if you ignore this, you're gonna regret later on that you didn't think through it.
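Amith's optionality point, that models are easy to swap only if you build your software reasonably well, comes down to keeping business logic behind a thin, provider-neutral interface. The sketch below is illustrative only: the provider names are real vendors, but the `call` lambdas are placeholders, not actual SDK calls.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch: business logic talks to a neutral interface, so
# switching vendors is a config change, not a rewrite. The lambdas below
# are placeholders standing in for real SDK clients.

@dataclass
class ModelProvider:
    name: str
    call: Callable[[str], str]  # prompt in, completion out

def make_stub_provider(name: str) -> ModelProvider:
    # A real implementation would wrap the vendor's SDK here.
    return ModelProvider(name=name, call=lambda prompt: f"[{name}] {prompt}")

PROVIDERS = {
    "openai": make_stub_provider("openai"),
    "anthropic": make_stub_provider("anthropic"),
    "gemini": make_stub_provider("gemini"),
}

def answer_member_question(question: str, provider_key: str = "openai") -> str:
    # The business logic never mentions a vendor SDK directly.
    return PROVIDERS[provider_key].call(question)
```

The contrast with a full-stack agent builder is that here the vendor choice lives in one lookup; once prompts, routing, and data flows are configured inside a proprietary builder, there is no equivalent single point to swap.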
[00:16:42] Mallory: So your recommendation would be to employ agents in your own private cloud infrastructure.
[00:16:48] Amith: Maybe. Okay. I mean, I think it depends.
[00:16:50] If you're a small association with limited IT resources and you look at it and say, you know, the risks to us are pretty minimal, um, and we don't have the resources to do anything outside of leveraging [00:17:00] a single stack, then OpenAI or you know, I guarantee you Claude is gonna have a similar architecture very soon.
[00:17:06] Gemini already does, right, in the Google ecosystem. So picking an integrated stack is okay. Um, if you're a little bit larger, more sophisticated technology organization, and you say, listen, I'm not sure that makes sense for us. We have a wide array of disparate data sources. Um, we're really concerned about our proprietary data, both structured and unstructured.
[00:17:25] Um, you might wanna think about it, you might want to put some reps into selecting the right architecture for you. Um, and certainly if you're a larger association, you really need to. You know, associations sometimes spend months or even quarters, sometimes over a year, selecting pieces of infrastructure like an AMS or an LMS.
[00:17:44] Um, I'm not suggesting that associations adopt that cycle or that timing for selecting an agent architecture, but I do think they should spend a little bit of time diligencing. And it's not just diligencing the vendor or the technology capability, but it's what I'm talking about and what you mentioned earlier.
[00:17:59] It's how [00:18:00] much lock-in do you have? How much control are you to seeding to the vendor? Again, they're not bad or good, it's just they're different approaches. Each has, it's like everything else in life. There's pros and cons to it. I just wanted to highlight it because when the market leader has an announcement like this, there will be a lot of people who semi, somewhat blindly just, you know, rush off into that, you know, future without thinking through the consequences.
[00:18:23] And of course, part of our job on the sidecar sync is to help people navigate that journey. But I would recommend. Taking a little bit of time on this, uh, we at Sidecar are going to be producing an agent selection, uh, toolkit, which is basically the idea of a downloadable set of resources that'll help you think through this exact decision making process, and that's gonna be coming in the next couple of weeks.
[00:18:44] Um, so I'm, I'm excited about that. I think that's gonna help the community quite a bit. It's basically a set of decisions that you have to think through, really gathering some data in an objective way first. And this is also, by the way, not something where you just say, hey, IT person that you have, go figure this out.
[00:19:00] They probably are going to be overwhelmed by it. Um, but even if they're not overwhelmed by it, this is a business decision. This is not a technology decision. It's more about what is the surface area of your competitive opportunity, and what are you comfortable with in terms of the data privacy side.
[00:19:17] Mallory: I would also add to that, Amith, that even if you don't wanna be locked in to OpenAI through the Agent Builder, for whatever pros and cons that you just talked about, it's still a really good idea, especially if you're non-technical, to just play around with it to kind of understand the logic, how things connect.
[00:19:33] Start asking questions of, hmm, like, how is data flowing from one piece to the next? Because I've been dabbling with that more recently, and I find I can see my skills growing, where I'm like, ah, okay, I understand how these workflows work, uh, what inputs and outputs I need to get the result that I'm looking for.
[00:19:49] So I would encourage you all to check it out, even if you have no plans to deploy an agent.
[00:19:54] Amith: I totally agree. I think that the experimentation you're describing is super important. I think the thing you just need [00:20:00] to do is set expectations ahead of time that, you know, the tool that you use to experiment isn't necessarily the tool that you'll go live with in production.
[00:20:07] Or maybe you do use it for a bit, but you have plans to migrate to something more enterprise-grade at some stage. Um, the other thing I'd point out is the Agent Builder from, uh, from OpenAI is kind of like some of the other agent builder tools that are out there. It's got a slick-looking UI, which makes it look like it's just a pretty picture, and it's got, you know, a flowchart on it, and anyone can do it.
[00:20:27] And that's true. But if you kind of click one level down, you'll see that it's not necessarily code in the traditional sense, but it's a bunch of fairly complex configuration settings that you have to put into each of these nodes. Now, you can ask AI to help you set it up, which is helpful. Um, but at the same time, just remember, this is still not a super polished technology.
[00:20:49] That's true for all of the agent builder tools. There's lots of agent tools. There's architectural software out there like, uh, LangChain and LangGraph. There's software like, uh, AutoGen. [00:21:00] There's obviously the MemberJunction agent platform that we've built and open sourced for this community. All these tools are still early, so you gotta remember they have some rough edges.
[00:21:09] And so just remember, don't think of this as like going into Microsoft Excel or Word or something like that, that's been around for 20-plus years and is perfectly polished for you. Uh, again, I'm, I am a hundred percent in agreement with Mallory that you should go out there and test this stuff out and play with it.
[00:21:25] But, uh, just be aware that there's, there's some things you're gonna have to kind of tolerate as you're, as you're moving at the speed of ai.
[00:21:32] Mallory: I saw a post too that I think you liked on LinkedIn, Amith, from Christopher Penn, who was one of our keynote speakers from digitalNow 2023, I believe. And he was saying the same thing.
[00:21:42] So that made me feel a little bit better. Like, yes, it's a no-code platform, but you still gotta know, like, schemas and all the things I don't even know how to talk about. So anyway. Moving to ChatGPT Pulse, which I feel like I kind of missed in the news with Dev Day and all the other things that were going on.
[00:21:58] Sam Altman, CEO—
[00:22:00] Amith: Yeah, you had your finger off the pulse slightly there.
[00:22:01] Mallory: Ah, they're clever. Sam Altman is calling ChatGPT Pulse his favorite feature they've launched in a long time. It was launched September 25th of this year, 2025, for Pro users at $200 per month, on mobile. So while you sleep, ChatGPT Pulse researches topics relevant to you based on past conversations, saved memories, and connected apps.
[00:22:24] Then it delivers five to ten personalized briefing cards to you when you wake up. From actual demos, we saw users get soccer match analysis, Halloween costume suggestions, and toddler-friendly travel itineraries. If you connect Gmail and Calendar, which is optional, Pulse drafts meeting agendas, recommends restaurants for your trips, and surfaces
[00:22:45] task reminders. Users can curate what they want researched, and after your morning brief, it says "That's it for today," so you have no endless scrolling like you might on social media. OpenAI is framing this as democratizing personal assistance, [00:23:00] uh, at $200 a month. Yeah, maybe we can question that. Why should you as associations care?
[00:23:05] Well, to me, this sounds a lot like an email newsletter, first of all, and members will expect this proactive, personalized service from you too when they're waking up to AI-curated industry insights and conference prep tailored to their interests. So, Amith, like I said, I was off the pulse with ChatGPT Pulse.
[00:23:24] Do you think associations should be racing to transform their newsletters into something like this?
[00:23:30] Amith: Well, first of all, I should say that any joke I attempt to make has to pass the "will it make my teenagers gag" test. I think I did it. I nailed it there.
[00:23:37] Mallory: Oh, will it? Yeah. I was like, I don't think they would laugh at that one, but I did.
[00:23:40] No, exactly. Which, which
[00:23:42] Amith: makes it funny to me. So,
[00:23:44] Mallory: dad joke, qualifier.
[00:23:46] Amith: Yes, exactly. So, um, right on the bullseye for that. So my thought process is: this is a great feature. This is a great addition to the ability for people to consume content that's relevant to them. As far as associations are concerned, [00:24:00] you guys are at the center of the universe for your particular field.
[00:24:03] You have, uh, in many cases the most respected brand as a content authority and community in your field. And you've been publishing content, most of which you probably have behind a paywall or behind some kind of member access only type of, um, you know, limiter. So it's not in the public domain is the point.
[00:24:20] Uh, and yes, you should be doing this too. You should absolutely be doing this. You know, we've had this, uh, now 10 year. A long running experiment going called rasa.io. Uh, the website is just that. It's r aa.io and it's one of our companies at Blue Cypress. And. The Sidecar newsletter for the tens of thousands of you that receive it multiple times per week.
[00:24:40] It is powered by rasa.io. Um, as are newsletters from hundreds of other, uh, associations and many, uh, uh, companies use rasa as well. The thesis behind RAA was very simple. So, uh, over 10 years ago when we started the project, we said, could we do a better job than the generic? One size fits all way of communicating.
[00:24:59] [00:25:00] Could we personalize content at scale using artificial intelligence to really understand people and serve them better? Ultimately, the goal for rasa, as we stated in our core purpose statement, is to better inform the world. So algorithmically, that particular product has always been about how you get people informed and then get them on their way.
[00:25:21] So rather than sucking them into one of these spiral-loop type things, which is problematic for lots of reasons, you inform them and you move them along. And that's what we've done for over 10 years now with that company. I think there's lots of ways to do this. By the way, this isn't a pitch for rasa.
[00:25:35] It's more a pitch for the idea that personalization is the critical, critical sauce that people expect from you everywhere. So the email newsletter was the first thing that we focused on, because it was the easiest thing to do, and also super high impact. It's kind of, going back to the unsung heroes theme from earlier in the pod,
[00:25:53] um, something people basically have no respect for, right? The email newsletter is this thing that everyone does, or most people [00:26:00] do, but no one really thinks of it as important. No one thinks of it as this strategic asset, yet it probably has the most frequent touch with your audience out of anything you do.
[00:26:11] Your email is in their inbox, in some cases daily, in some cases more than daily, and a lot of times you hear from your members, oh, you sent me too many emails. That actually only happens when you send them emails they do not want. When you send them emails that are really helpful, they ask for more of them, if you would believe that.
[00:26:27] You know, we have lots of customers who say, hey, can you increase the frequency of that email for me? So the point I would make, going back to what you said, Mallory, is when you have a major company like this, with a billion-plus monthly active users, offering this kind of service, at the moment for $200 a month but soon probably for free,
[00:26:46] you better pay attention to this if you're an association that's in the business of providing information in your industry. Because if they can get it more easily through ChatGPT, they will. So what will the association have to do? You'll have to lower your [00:27:00] barriers, make things frictionless, increase the quality. And you have a fighting shot here, because you have content that no one else has and a brand that's the envy of all in your field.
[00:27:10] And so if you then apply just a little bit of advanced technology to this, and personalize your content and deliver it in an engaging way, you can solve the problem. You can get incredible engagement, and we have empirical results, again over a decade, for hundreds of associations seeing exactly that. The nice thing too is that the cost and the complexity of doing exactly what you described earlier,
[00:27:29] Mallory, have kept dropping year after year after year. It's easier and cheaper to do it. So there are lots of options in terms of how to go about doing it, but I would highly encourage people to put this very high on their list of things to do. And the newsletter, by the way, is just one surface area. It should also apply to all of your other emails, right?
[00:27:45] So if I'm gonna send Mallory an email that says, hey Mallory, you need to come to digitalNow, should I send Mallory a generic email that I send to 10,000 other people, or should I send her an email that points out the sessions at digitalNow and the speakers at [00:28:00] digitalNow that she might be most interested in?
[00:28:02] Right. Should I create a subject line just for Mallory that's perhaps in the style that she's most likely to engage with? Perhaps it's humorous, or perhaps it's very factual, based on what we know about Mallory. Sure, it'd be great to write an email to Mallory that is totally tailored for her, with the content just for her.
[00:28:20] And that's just one more example. That's an email marketing thing for a conference. What about your educational opportunities? How about your volunteer opportunities? All these things are problems of constraints, meaning you want to be able to deliver a high quality of service. You want people to all feel that they're at a luxury spa being checked in for some highly anticipated, enjoyable service, but they really kind of feel like they're going through the McDonald's drive-thru most of the time.
[00:28:46] So that's the reality of it. The good news is the technology's here. I mean, you see it in front of your face. You can use this stuff in your association to solve this problem too.
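The personalization flow Amith describes can be sketched in just a few lines: rank an association's content items for each member by overlap with their interest tags. This is a hypothetical illustration with invented data and function names, not rasa.io's actual engine or any specific product's method:

```python
# Sketch of interest-based content ranking. All names and data here are
# illustrative assumptions, not a real product API.

def rank_for_member(member_interests, items):
    """Return items sorted by how many of the member's interests they match."""
    def score(item):
        return len(set(item["tags"]) & set(member_interests))
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "AI Policy Update", "tags": ["ai", "policy"]},
    {"title": "Event Marketing 101", "tags": ["marketing", "events"]},
    {"title": "LLMs for Associations", "tags": ["ai", "education"]},
]

top = rank_for_member(["ai", "education"], items)
print([i["title"] for i in top])  # most relevant item first
```

In practice the "interest tags" would come from engagement data rather than being hand-labeled, but the ranking idea is the same.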
[00:28:55] Mallory: As I was thinking of my personalized digitalNow email, Amith, I was thinking of, you know, what's on [00:29:00] the snack bar? What meals are we serving?
[00:29:02] I like to really get a full picture, you know.
[00:29:04] Amith: Totally. Yeah. People come for the content at least some of the time, and they stay for the food, I guess. Or maybe it's the other way around. Maybe they come for the food and then they just absorb some of the content along the way to the food. A little bit of
[00:29:15] Mallory: both.
[00:29:16] A little bit of both. It's gotta be pretty good. Everybody likes a little sweet. I always, always tell the
[00:29:19] Amith: conference organizers for digitalNow, and just other stuff I'm involved in, just say, listen, people do want some healthy options. Have an apple or two, but they always want some junk food.
[00:29:29] They want those cookies in the middle of the afternoon. It may not be the best thing for 'em, but you know, conferences are kind of fun. It's like a mini vacation of sorts. Uh, it's also a great place to upload new content into your brain.
[00:29:41] Mallory: I remember you telling me exactly that and I was like, Ooh, score.
[00:29:44] So we did like a donut bar and some cookies and brownies, and people loved it. But I mean, I wanted to ask, um, obviously we're familiar with rasa.io because it's in the Blue Cypress family, as you mentioned. That seems like an easier entry point into personalization. Um, but it also seems like [00:30:00] a narrower solution, in that it's only focusing on the newsletter, even though it has high impact.
[00:30:05] So for associations that are thinking bigger with personalization, what do they need to be thinking about right now? Like, aside from the newsletter, what have you got to get in order if you wanna take personalization to the next step?
[00:30:18] Amith: I mean, I'd just start with the first step, right? Which is like, pick the easiest possible thing you can personalize.
[00:30:23] Mm-hmm. And it probably will be in your newsletter, it could be something else. Personalizing your website tends to be a lot more complex because there's just lots of moving parts to that and there's a lot of technology and there's usually at least one or two vendors involved, so that tends to be a little bit harder to do.
[00:30:36] Um, just as a quick note, rasa actually now has a product, which they've had for about six months, called Campaigns, which personalizes any email. So you can use rasa to personalize not only your newsletter, but also the example I gave you, like a conference invite or, you know, educational marketing.
[00:30:53] Any kind of email can be personalized with that rasa engine. Uh, and there's other ways to use it; that particular product is a general-purpose [00:31:00] engine to personalize just about anything. But what I would say is the first thing you should do is do an example of personalization and test it with a small number of people, just to see what the reaction is.
[00:31:11] If you're, if you're trying to calibrate whether or not there's value in personalization, if you already kind of passed that and you're like, yes, I know this is really important and really valuable, then I'd go ahead and run like a full scale, you know, execution of like one piece of it. Um, so the other thing that you can, you can look at is, you know, how do you personalize, uh, recommendations of people.
[00:31:31] So a big part of what we do in associations is we associate. And so when we associate, that means we're connecting with other members of our community. They might be like-minded, they might have different interests than us, but how do I connect each person coming to an event with people that are really great fits for them?
[00:31:48] And the same style of technology can be used at scale to make incredibly good recommendations for people in your community. And that's a totally untapped area. When people are connected by their [00:32:00] association, they obviously gain immense value from that. And there's an emotional connection back to the association, where you're like, oh, I met these great people at this association.
[00:32:10] I will always go back to their conference now, right? Because you're part of that community more so than ever, and you feel just immense value. You can't even really put a particular number on that, right? Like you can say, oh, from that particular session I learned this much and got this out of it. But when you meet great people, all of us, I think, are wired to have a different emotional reaction to that than to just learning something.
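The member-matching idea Amith describes, connecting each attendee with people who are great fits, can be sketched as a simple pairwise similarity over interest sets. Everything below is an illustrative assumption (invented names, Jaccard similarity as the scoring choice), not a description of any particular association product:

```python
# Minimal sketch of member matchmaking: score pairs of attendees by Jaccard
# similarity of their interest sets. Data and scoring choice are assumptions.

def jaccard(a, b):
    """Overlap of two tag sets divided by their union (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

members = {
    "Ana": ["ai", "events", "policy"],
    "Ben": ["ai", "education"],
    "Cy":  ["finance", "policy"],
}

def best_match(name):
    """Return the other member with the highest interest similarity."""
    others = [(other, jaccard(members[name], tags))
              for other, tags in members.items() if other != name]
    return max(others, key=lambda pair: pair[1])[0]

print(best_match("Ben"))  # prints "Ana" (they share the "ai" interest)
```

At real scale you would use richer profiles and embeddings rather than raw tags, but the shape of the computation, scoring every candidate pair and surfacing the best fits, is the same.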
[00:32:31] Mallory: Uh, my husband and I are scheduled to go to an association conference in May of next year in New Orleans, so I'm really excited about that. Amme, I won't, you could probably figure out which association it is. I won't say it by name, but I'm really excited to go kind of undercover and just see how things go there.
[00:32:47] Amith: Yeah, check it out. Report back.
[00:32:50] Mallory: Alright, topic three. OpenAI just orchestrated the largest AI infrastructure agreements in history, committing over a trillion dollars through interconnected deals with [00:33:00] AMD, Nvidia, and Oracle. Here's the breakdown: AMD will supply six gigawatts of GPUs starting late 2026, with OpenAI potentially taking 10% ownership of AMD through warrants.
[00:33:14] Nvidia signed on to provide 10 gigawatts via a hundred-billion-dollar investment, and Oracle secured a $300 billion, five-year contract to provide the cloud infrastructure, funded partly by Nvidia's backing. The scale is staggering: 16 gigawatts total is enough to power multiple major cities. And though it might seem like these are independent deals, they're a coordinated strategy.
[00:33:38] OpenAI is hedging against a chip monopoly by splitting between AMD and Nvidia, while Oracle provides the data centers. The financial flows are circular: Nvidia invests in OpenAI, which buys Nvidia chips and leases Oracle cloud capacity, which Oracle builds using Nvidia's funding. It's the same companies acting as investors, suppliers, and customers [00:34:00] simultaneously.
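The circular flow described here can be put into rough numbers: when part of an investment returns to the investor as product orders, headline revenue overstates the net new cash in the system. The fraction used below is an assumption for illustration, not an actual deal term:

```python
# Illustrative arithmetic on circular investment/revenue flows.
# The 60% figure is an assumed fraction, not a reported number.

investment = 100e9           # e.g., the reported $100B Nvidia investment
share_spent_on_chips = 0.6   # assumed fraction that returns as chip orders

chip_revenue_from_deal = investment * share_spent_on_chips
net_cash_out = investment - chip_revenue_from_deal

print(f"Revenue booked by the supplier: ${chip_revenue_from_deal/1e9:.0f}B")
print(f"Net cash actually deployed:     ${net_cash_out/1e9:.0f}B")
```

The point of the toy calculation: the bigger the round-trip fraction, the more the "investment" resembles vendor financing of the supplier's own future revenue.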
[00:34:01] Why do we think this matters for associations? Well, these infrastructure investments determine AI's accessibility and pricing for the foreseeable future. The AMD partnership should theoretically break Nvidia's monopoly and lower prices, but the massive demand might keep costs high regardless. The late-2026 deployment timeline means OpenAI is betting billions on capabilities that don't exist yet.
[00:34:23] They're building for superintelligence, not today's ChatGPT. This isn't just about having more servers; it's about who controls the compute that powers all AI advancement. So, Amith, a trillion dollars in infrastructure commitments. Is that what you see as kind of sustainable growth, or do you think we're watching a bubble in real time?
[00:34:44] Amith: I think it can be multiple things at once. I guess trillion is the new billion. Mm-hmm. And it's staggering, you know, how much money we're talking about being invested, but it's also a fraction of the opportunity. So I think you're gonna see a lot more than this over time. And if you aggregate the entire [00:35:00] industry's commitment to infrastructure build-out, and then you add in public sector funds that are
[00:35:04] being added on top of that in a variety of different parts of the world, um, the numbers are enormous. And the reason is the prize is so big: the opportunity to get truly intelligent machines out there, which is an advanced version of what we're dealing with right now. What we have right now is quite useful, but it's not what people are shooting for.
[00:35:23] But if you have that at scale, you're talking about a transformative situation for the economy. You know, we have a hundred-plus trillion dollar global GDP right now. A lot of people are saying this could add 10 to 20 percentage points of growth per year if we bring this online, and that may be an underestimate.
[00:35:39] Um, but you know, no individual economy tends to see that on a sustainable basis, and certainly global GDP has never experienced that rate of growth for a sustained period of time. So it could be transformative in terms of the wealth creation, the productivity output, lots of things. It could also be incredibly problematic in terms of what it [00:36:00] does to labor markets, as we've talked about on this pod before. Getting back to the topic at hand, specifically around these parties of OpenAI, Nvidia, AMD, and Oracle.
[00:36:11] Uh, and Microsoft's in the mix as well, in different ways, kind of behind the scenes. Um, but what's happening here is you do have money flowing in kind of a circle at times, and it doesn't mean that it's not legitimate. It just means that you have to pay really close attention to the numbers. Because if Nvidia invests a hundred billion dollars in OpenAI and knows that roughly 50 to 75% of that money is gonna come back to it in the form of orders, or maybe an even higher percentage (I don't know how the math works out),
[00:36:40] is that really an investment, or is that something else? Right? What exactly is that, and what does that mean in terms of their margins? They know that some of that's coming back to them. They know what their gross margin is. Uh, now, would they get most of those orders anyway? Perhaps. Would it be as schedulable,
[00:36:58] as plannable, as [00:37:00] the way they're doing it? Probably not. Um, and is there competition there? There certainly is. You know, Nvidia is by far the leader, just like OpenAI is on the software side. But AMD has some really compelling offerings. They're not sitting still, they're moving fast, and there's a number of other companies out there going after it.
[00:37:19] You know, the inference game is where the action is gonna be. Training models will always be important, but the amount of compute you need for training compared to the amount of compute you need for inference, it's not even going to be a comparison. The inference market is going to be thousands of times the size of the training market.
[00:37:37] And so when you think about inference-specific chip architectures, like Groq with a Q, Cerebras, uh, Google's advanced TPU design that they've talked about, there's some interesting stuff happening out there, and I don't think people are paying enough attention to that. So this is definitely an interesting deal.
[00:37:55] To me, it simply underscores the point: there's a lot of opportunity here [00:38:00] for associations. To me it means three things. First, obviously, AI is something you should pay attention to. Not that people who listen to this podcast haven't been doing that for a bit, but it's a big deal.
[00:38:12] Number two, it's gonna get cheaper, because all of this spending means that supply increases. You know, the demand is increasing at this torrid pace, which is why people are investing so much to fulfill that demand. Uh, but as competition heats up and more and more players are providing inference services, it's going to drop the cost.
[00:38:31] The other thing that I think is not being paid attention to is the speed at which models are getting smaller, faster, and therefore cheaper to run. So if you think about the progression, we've talked a lot on this podcast about how today's small models, or so-called small models, are about as intelligent as this time last year's frontier models, right?
[00:38:52] And that's probably actually a conservative statement. It's probably more like six to nine months behind, in terms of [00:39:00] small to mid-sized open source models relative to the best-of-breed frontier proprietary models. So if you, for example, compare GPT-5 Pro right now to, let's say, the best open source models, what's the gap?
[00:39:12] I think the best open source models right now are somewhere in the GPT-4o era, maybe a little bit better than that in some areas; in fairness, they are a fair bit better than GPT-4o. But the point is that GPT-4o was absolutely state of the art 12 months ago. So, you know, we are less than 12 months behind with models that you can run on your computer.
[00:39:34] What that means is that a lot of workloads, many of the things associations need to do, do not require GPT-5 Pro or GPT-7 Pro or whatever will come after that. Um, you can get a lot of amazing things done with technology like GPT-OSS 20B, which is a small model from OpenAI, or the Qwen series of models from Alibaba, that you can inference on fast chips from the providers I've mentioned already in this pod.
[00:39:59] So [00:40:00] there's so much opportunity, and there's so much supply coming into the market. It's going to make inference effectively free.
[00:40:08] Mallory: Mm.
[00:40:09] Amith: That's the key message to me: you do not need to think about cost, because the cost will essentially erode down to zero over the course of the coming quarters and years.
[00:40:19] Mallory: Is this circular financial flow that you talked about, Amith, something we have traditionally seen with technology in the past?
[00:40:26] Amith: If tradition is, you know, kind of all prior bubbles? Yes. Okay. I mean, most recently with the telecom buildup leading to the dot-com boom and then bust, we had the famous Cisco-Nortel kind of circular dealings that a lot of people have been talking about more recently.
[00:40:40] I lived through that, so I'm familiar with it. But you know, it's essentially when suppliers and customers kind of invert the relationship back and forth and the money's flowing back and forth. Um, the problem with that is, well, it's not that it's improper. Again, these are not improper deals.
[00:40:55] These are done in the open. These are public companies. OpenAI's not yet, but probably [00:41:00] will be soon; obviously Nvidia and Oracle are, and so the disclosures are out there of what they're doing. There's obviously a lot of people looking at this stuff. So it's not that what they're doing at face value is improper, and it kind of makes sense at a lot of levels, but it's hard to trace the money.
[00:41:14] It's hard to know how much revenue is really growing when the revenue is an investment, which becomes revenue, which becomes an investment, and then there's multiple parties involved. Another example: The Information reported yesterday that Oracle's projected gross margin on this massive cloud deal is around 15%.
[00:41:32] One-five, not five-zero. A 50% gross margin would be a pretty nice business, but 15%? If the top line is big enough, then there's enough dollars flowing to the bottom in theory, but a gross margin that thin, you know, it's better than a grocery store, but it's certainly not what you'd find to be an attractive technology-enabled service.
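The margin math here is easy to check against the figures cited in the conversation (a $300B contract, a 15% gross margin, and a five-year term):

```python
# Gross-margin arithmetic using the figures cited in the conversation.
contract_value = 300e9   # $300B total contract
gross_margin = 0.15      # 15%, "one-five, not five-zero"
years = 5

gross_profit_total = contract_value * gross_margin   # dollars over the full term
gross_profit_per_year = gross_profit_total / years   # dollars per year

print(f"Total gross profit:  ${gross_profit_total/1e9:.0f}B over {years} years")
print(f"Gross profit / year: ${gross_profit_per_year/1e9:.0f}B")
```

So even on a headline-grabbing $300B deal, a 15% gross margin leaves roughly $9B a year before any operating costs, which is the "thin margin" point being made.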
[00:41:49] So, um, if there's no profit in the future, when you talk about, you know, the compression of these deals, that should yield a very different mindset in terms of where [00:42:00] to invest, for example, and also what that means in terms of valuations of companies. So I do think that people are going to have, at some point, a rude awakening from some kind of bubble collapse.
[00:42:11] But I don't think that means the technology's bad. I don't think it means there aren't going to be extraordinary gains to humanity, to everyone who's involved in this at some point. But there will be a lot of people who lose a lot of money. A lot of investors will lose their shirt. A lot of companies will go out of business.
[00:42:26] There will be consolidation. Uh, but that's somewhat natural whenever there's a remarkable new technology. You know, older technologies end up getting wiped out, and even people who are in the race for the new technology tend to collide. And eventually the best thing wins, or not always the best, but usually that's how it sorts out.
[00:42:43] Think about, going back to the electrification wars, alternating current versus direct current. We wrote an article about that with AC/DC as the theme, right? And so when alternating current became the big thing, the people who had all this infrastructure for direct current, um, they had a hard time, let's say.
[00:42:59] And so that's gonna [00:43:00] happen with, you know, different flavors of AI, different flavors of chips, and it's just moving much, much faster.
[00:43:05] Mallory: Everyone, thank you for tuning into today's episode. We've seen three announcements, but really one strategy, and it's that OpenAI is exerting its power over everything from chips to chat to newsletters and everything in between.
[00:43:19] Thank you all for tuning in, and we will see you all next week.
[00:43:24] Amith: Thanks for tuning into the Sidecar Sync Podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in depth AI education for you, your entire team, or your members, head to sidecar.ai.

October 9, 2025