
Summary:

In this episode of Sidecar Sync, Mallory Mejias and Amith Nagarajan cover a trio of urgent topics shaping the AI landscape. They kick off with Anthropic’s surprising rise to enterprise dominance—fueled not despite, but because of its safety-first approach. From there, the conversation heats up around accelerating predictions for artificial general intelligence (AGI), with some forecasting human-level AI by 2027. Finally, the hosts zoom out to tackle the hidden but massive infrastructure demands powering this AI surge—and why nuclear energy might be the unsung hero. Plus, Mallory introduces us to her rambunctious new puppy, Chai, who’s stealing hearts (but not the mic) behind the scenes. Along the way, you’ll hear updates on new tools like Google Opal and NotebookLM, thoughts on open source vs. closed models, and why humility and daily learning are the keys to thriving in the AI age.

Timestamps:

00:00 - Meet Chai the Puppy & Summer Catch-Up
03:06 - MemberJunction & AI Agents
08:09 - Why Enterprise Loves Anthropic
13:26 - Trust, Mission, and Long-Term Thinking
16:06 - Anthropic vs. Everyone Else
24:04 - How Close Are We to AGI?
28:21 - Superintelligence and Societal Disruption
31:53 - Will AGI Arrive All at Once?
40:17 - AI’s Power Problem: Infrastructure at Scale
44:31 - Nuclear Energy’s Comeback?
47:46 - Final Thoughts

🎉 Thank you to our sponsor

https://meetbetty.ai/

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

Google Opal ➡ https://opal.withgoogle.com/landing/

NotebookLM ➡ https://notebooklm.google

MemberJunction ➡ https://memberjunction.org

Claude by Anthropic ➡ https://claude.ai

Qwen-3 ➡ https://huggingface.co/Qwen

Moonshot AI ➡ https://moonshot.ai

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync



More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/in/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
https://linkedin.com/in/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.

[00:00:14] Greetings, and welcome to the Sidecar Sync, your home for content at the intersection of all things AI and the world of associations. My name is Amith Nagarajan,

[00:00:25] Mallory: and my name is Mallory Mejias.

[00:00:27] Amith: We are your hosts and, uh, really excited to be here today, as always, of course. But, uh, it's been a couple weeks since we've recorded a live episode due to all the things that happen in the summertime.

[00:00:38] Uh, how are you doing, Mallory?

[00:00:40] Mallory: I'm doing really well, Amith. Right before we started recording, I let you know, and I guess I should let our audience know, that we just got a new puppy in our household. Her name is Chai and she is about two and a half months old. And, uh, we have another dog who's five years old and she, I was telling Amith, was very easy, very [00:01:00] mellow, very chill.

[00:01:02] Chai is not that, uh, so she is rambunctious. She's right here next to me in her crate. If you all hear any doggy cries, that's probably what it is. But I gave her a treat in there, so I think she'll be busy. But anyway, I've basically been, uh, in dog mom mode for the last few weeks. What about you, Amith? What are you, what are you up to?

[00:01:19] Amith: Well, that sounds like fun. I mean, uh, puppies can definitely take the sleep out of the evenings. Oh yeah. And um, they, they're, they've got some challenges, but they sure are cute. That's, uh, that's how they, that's how they stay in business. Right. So the, the, the barks and other dog windy noises we hear in this episode will not be AI-enhanced sound effects from your end.

[00:01:36] They will be.

[00:01:37] Mallory: Real, the real deal, real world.

[00:01:39] Amith: Yeah. Well, my wife and I just took our dogs who are a lot older. Um, my older dog just turned 12 and the younger one is 11 and a half, so they're a little bit, a little bit further along in their life journey, but we just took them on a fun little road trip. Uh, we're in the Utah Mountains in the summertime as a lot of our listeners know.

[00:01:54] And, uh, we just decided to go to a different mountain town from the Park City area out to Jackson Hole, Wyoming. And we took [00:02:00] the dogs with us and got to see a couple of national parks in the area. And, uh, really just have a great time. It was pretty mellow because, uh, the dogs aren't really up for lots of long distance walks or anything at this point, but, uh, they loved it.

[00:02:11] We loved it. It was great. We just got back from that and, uh, now I'm just waiting for my kids to get home from camp. One just landed after three weeks in the Alaska wilderness, and the other one's coming home from three weeks at art camp in, uh, Michigan in a few more days. So, uh, it's, it's, it's a good summer out here in the mountains.

[00:02:28] Mallory: Wow. Busy, busy few weeks for the both of us. Have you been thinking about AI in your past busy few weeks, uh, aside from, you know, in your, your daily long walks with the dogs, or have you been focused on nature?

[00:02:42] Amith: You know, um, I was gonna say, not at all. You know, I'm just kind of chilling right now, but anyone who knows me for half a second knows that that's not something I'm good at.

[00:02:50] So I have been enjoying myself out here very much. It is, it is really magical to be out here in nature and get to enjoy that, but, uh, the world of AI never stops, does it? Mm-hmm. And, uh, it's an exciting time. [00:03:00] Uh, there's a lot of cool stuff happening. Some of the topics we're gonna be talking about today I think are fascinating.

[00:03:04] And, uh, in my world, I spend a lot of my time deep in the weeds of the technology, both because I really love it. I always have been a super nerd when it comes to programming and tech and all that stuff. So this is, uh, kind of a renaissance time period to be in that field. So having fun, uh, working with our team at Blue Cypress on our agent architecture, which is powering a lot of our products, and we're putting it in the world as an open source system, uh, as part of the MemberJunction platform.

[00:03:28] So lots and lots of work has gone into that. Uh, super exciting, it's just, it's kinda like the new Google Opal product, but much more enterprise scale. Uh, I dunno, have you played with Opal yet, by the way?

[00:03:39] Mallory: I haven't played with Opal, but I actually, I sent you a post about it on LinkedIn because Oh, cool. I watched a quick demo video and it looks incredible.

[00:03:46] Have you played around with it?

[00:03:47] Amith: I haven't actually worked with it myself, but I've watched some other videos and I, I read the content, and it's, it's very much what we're going for with the MemberJunction environment, in a much more enterprise scaled, secure way, in your own environment. But the idea for those that haven't looked [00:04:00] at it, and we'll include a link in the show notes for everyone.

[00:04:02] Uh, Google Opal is, uh, kind of a consumer grade agent builder. So it's designed so that any user can just go into a visual interface and you're kind of designing something along the lines of a flowchart. So if you can think of your process, it's like, Hey, step one, I wanna prompt the AI to come up with good ideas for topics for a LinkedIn post.

[00:04:20] Step two is I wanna research each of those topics and figure out which of those topics might be trending in my community. Step three, I want to pick one topic and then develop an outline for the post. Step four might be develop an image for that post. Step five might be go ahead and post it, or it might be, let me review it and then post it.

[00:04:36] So I would call that a very kind of, uh, basic agentic flow. For some people they go, oh, that sounds really awesome. Let's go automate that. Well, the good news, by the way, is you can do that with Zapier, you can do that with LinkedIn. You can do that with a whole bunch of tools. Box.com has agentic AI tools that can do stuff like this.

[00:04:52] And now Google has entered the fold with Opal, which is their consumer facing, very simple, very clean, and very [00:05:00] cool looking, uh, UI, where you can just literally drag and drop these widgets on the screen and connect them with lines and, you know, specify your prompts and all that. So we've been working on something somewhat similar.

[00:05:10] The difference is that in an AI data platform designed for the enterprise, you have all of your business data in your environment, and then you can run your agentic flows safely within that, without your data ever leaving the data platform. So from a consumer perspective, what Google is doing, and a number of other companies are working along similar lines.

[00:05:27] I'm so excited about it. It's gonna empower everyone to do so many cool things, and we're doing something similar kind of for associations and anyone else, for that matter, who wants to, who wants to have a free platform to do that in their own data ecosystem. So I've been like super busy with that for weeks and weeks.

[00:05:42] It's been a lot of fun.
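For the curious, here is a minimal sketch in Python of the kind of linear agentic flow described above. It is purely illustrative: the call_llm helper and the step prompts are hypothetical placeholders, not Opal's or MemberJunction's actual API.

```python
# Minimal sketch of a linear agentic flow: each step's output feeds the
# next step's prompt. call_llm() is a hypothetical stand-in for whatever
# model API or agent platform you actually use.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a model API call."""
    raise NotImplementedError("Wire this up to your model provider.")

def linkedin_post_flow(community: str) -> dict:
    # Step 1: prompt the AI for topic ideas.
    ideas = call_llm(f"Suggest five LinkedIn post topics for {community}.")

    # Step 2: research which of those topics are trending.
    research = call_llm(
        f"For each topic below, note whether it is trending in {community}:\n{ideas}"
    )

    # Step 3: pick one topic and develop an outline.
    outline = call_llm(
        f"Pick the strongest topic from this research and outline a post:\n{research}"
    )

    # Step 4: develop an image prompt for the post.
    image_prompt = call_llm(
        f"Write an image-generation prompt to accompany this outline:\n{outline}"
    )

    # Step 5: draft the post; a human can review before publishing.
    draft = call_llm(f"Write the LinkedIn post from this outline:\n{outline}")

    return {"outline": outline, "image_prompt": image_prompt, "draft": draft}
```

Tools like Opal, Zapier, or the MemberJunction agent framework essentially let you draw this same chain visually instead of writing it as code.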

[00:05:43] Mallory: Mm-hmm. And we all know too, when Google releases something, they tend to do a really good job with it. NotebookLM was fantastic, so I am sure Opal will be the same. Amith, is the agentic platform open and available now or about to be? Yeah. Okay.

[00:05:57] Amith: Yeah, it's in the most recent release of [00:06:00] MemberJunction. If you go to docs.memberjunction.org, you can download it, you can install it.

[00:06:03] It takes a little bit of technical skill to get that going. But by the end of this quarter, we're gonna be introducing a new product called MemberJunction Central, and this is gonna be a hosted platform where any user can go to a website, um, and create a MemberJunction instance, uh, just with a few clicks of the mouse, and it'll do all of the cloud provisioning, the database setup, the installation, and we'll maintain it, upgrade it for you, all that kind of stuff.

[00:06:25] We'll really take all the work out of it. And then there'll be connectors where you can say, oh, I wanna bring in my HubSpot data. I wanna bring in my AMS data. I wanna bring in my LMS data. Click, click, click. And a few hours later, all of your data's sitting in a secure data platform that's totally open source and totally controlled by you, which you can then run all your agentic flows on top of.

[00:06:42] So we're pretty excited about that. Right now you can, in fact, use it. You just have to download the software and know a little bit about how to set it up, which is, you know, I'd say kind of a low to medium grade IT skill. It's not yet like business user centric, but it will be very soon. Uh, so very excited about that.

[00:06:57] By the way, you mentioned NotebookLM. Uh, we're [00:07:00] talking about so many cool AI things before we actually get into this episode. Right, right. I know. This is pretty sweet. Um, but NotebookLM, I just saw one of our keynote speakers from last year's digitalNow, Neil Hoyne, who's the chief strategist at Google, just did a post I think yesterday talking about NotebookLM's new video feature.

[00:07:17] So just like they've, you know, really done a lot to revolutionize the way people create short form audio, you know, short podcasts, things like that with NotebookLM, they're doing the same thing with video now, which is pretty astounding. I haven't played with that one either yet, but I can't wait to get into it.

[00:07:30] Mallory: Maybe we'll have to dedicate a full topic or a full episode to kind of the latest Google advancements in the near future since we know when they do something they do it well.

[00:07:39] Amith: Sounds great. Let's, let's do that.

[00:07:42] Mallory: Alright. Today we are exploring an AI company that we've discussed many, many times on the pod, and that is Anthropic.

[00:07:49] We will look at how this safety focused company is actually dominating enterprise adoption. Then we will examine AI development timeline predictions, particularly around artificial general [00:08:00] intelligence, or AGI. And finally, we'll discuss the massive infrastructure challenges that will affect AI availability and pricing for

[00:08:08] all of us. So first and foremost: why enterprise AI safety actually drives adoption. Anthropic, the AI company behind Claude, one of our beloved favorite tools here at the Sidecar Sync, has achieved something remarkable: 10x revenue growth to $1 billion last year, now exceeding $4 billion annually, with their CEO saying they're on pace for another 10x in 2025.

[00:08:33] What makes this especially interesting for associations is that 80% of their revenue is B2B, and they're now leading in enterprise API usage, even surpassing OpenAI in some areas. The fascinating backstory is that Anthropic was founded by former OpenAI employees who left over safety concerns. And you might be thinking the safety focused AI company would slow them down, but it's actually become their competitive advantage.[00:09:00]

[00:09:00] Businesses want trustworthy and reliable AI, and respect Anthropic's focus on understanding why things go wrong, what is typically called interpretability. Claude 4 is becoming incredibly popular with programmers and developers, and Anthropic sees these technical users as early adopters who open doors. Funny enough, when I was doing research on this topic, I read something that stood out to me.

[00:09:24] When Anthropic's founders gather for dinner, they discuss how, quote unquote, weird the company's growth is. They are admittedly science nerds focused on safety, not sales, yet their safety first approach has become synergistic with business needs. For associations, we think this raises important questions about how prioritizing safety and trust can actually accelerate AI adoption rather than hinder it.

[00:09:50] So Amith, this is quite an interesting read. It seems almost counterintuitive. Uh, what's your initial take on this?

[00:09:58] Amith: Um, a few different thoughts come to mind. First of [00:10:00] all, I think this will resonate deeply for people in the not-for-profit, association, and broader social sectors, because they're so deeply mission-focused.

[00:10:07] Anthropic wants to bring safe AI to the world, and they wanna do it in a way that is respectful and, uh, inclusive. And again, above all else, they want it to be safe, because they realize the immense power of AI. Now, they're one of many leading labs that have remarkable, uh, capabilities. So they alone perhaps, uh, cannot

[00:10:28] change the game, but they can influence it. And so, uh, what you're describing, though, in terms of mission centricity is something that I deeply believe in. There's a whole movement in the world of for-profit companies called Conscious Capitalism, another one called B Corps, another one called Evergreens, that are companies that believe in having deep mission first prioritization.

[00:10:49] And if you think about it, um, really what it boils down to is priorities in terms of timescales. So if you think in very short term, uh, you can optimize [00:11:00] for a different outcome than if you are willing to think a little bit longer term. So Anthropic's game, uh, I think they may be a little bit, uh, too modest, in that they're very smart business people too, and that they are looking ahead and saying, hey, if we really stick to our values and align on our purpose statement, and we

[00:11:18] invest the way we said we're gonna invest, people are gonna stick with us. People are gonna find that our products are really good, but most importantly, they can trust us. And that's so incredibly critical. So it doesn't surprise me at all. Uh, I've been a fan of theirs for, uh, quite a number of years. Um, and you know, we use a lot of their stuff, not exclusively, but we use a lot of Anthropic stuff, both at the API level for some of our products.

[00:11:39] Uh, a lot of our developers use Claude Code. I personally use the Claude Desktop app as my primary AI day-to-day. I think you do as well, Mallory. Um, and so, um, you know, it's a company and, and part of the reason that I feel that way is I just feel more safe and secure. Uh, I don't know if that's great marketing or if there's really some substance to it. I, I believe there's a lot of substance to it. I'm somewhat kidding when I say that. [00:12:00]

[00:11:57] I, I believe there's a lot of substance to it. I'm somewhat saying that Kidding. [00:12:00] But, um, part of it is just, you know, walking, you know, walking down the path that you said you were gonna walk down and, and staying committed. Um, so I actually don't think it should be surprising because, uh, people want, especially in a world that things are changing really rapidly, um, the people that you work with.

[00:12:15] are critically important. Uh, in every business that I've been involved with, I've always been in B2B, and I've always focused on how do you show that you're the right team and the right people and have the right culture that people wanna align with? Because the product you happen to have at the moment, uh, is gonna change.

[00:12:32] And so the way it's gonna change is dependent upon the people that are behind it and the culture and the value system. So to me, it's exciting to see a company that has this mindset being so successful, and they're not alone. Um, there are a number of other companies that have a similar commitment. Mistral, based in, in France, uh, has some outstanding models.

[00:12:50] They're not quite at the frontier level in terms of absolute power levels, but they're very close. And they too are also deeply focused on this interpretability [00:13:00] transparency. They do a lot in the open source community, not only to share their models, but to also, um, really clearly articulate to the world how they train them and their approach to it and, and, and how they test 'em.

[00:13:10] So, um, my bottom line is I was excited by this news, both because I'm a fan of the products and just, uh, because I'm a fan of, of doing good business and finding a way to align your, uh, goals with your customer and with society more broadly.

[00:13:27] Mallory: I feel like when we think about innovation, we think about speed, moving quickly, not getting too lost in the details. But Anthropic is really a case study in a company that's doing incredibly innovative work,

[00:13:39] some of the most innovative work in the AI space, but also not necessarily moving slowly, taking that time to make sure safety parameters are in place. So for you, Amith, who I would call an innovator through and through, what lessons do you feel associations can learn from this in terms of moving quickly on [00:14:00] AI adoption, but doing so in a way that's true to their mission and true to their own culture?

[00:14:05] Amith: Yeah, and I think, you know, with associations that's such a natural thing that people are aligned with mission. And I think sometimes mission is unclear though. You know, part of what I wrote about in my first book back in 2018, the Open Garden Organization, I wrote in that book about the importance of having clarity around purpose.

[00:14:21] Mission statements combine the who, the what, the where, the why. They combine kind of everything. And the term mission actually fundamentally has a, a start, a middle, and an end, right? So missions begin and missions end. In comparison, purpose statements, um, don't really change. At least well-written ones, uh, can stand the test of time indefinitely.

[00:14:43] And so, uh, I think that, uh, rooting in a very clear purpose statement, uh, can really be quite helpful. Uh, if you want to dig deeper into this, uh, Jim Collins, in my mind, is the authority on, on this type of fundamental, uh, form of, of culture work. Uh, his book Good to Great, as well as a number of his [00:15:00] other books, talk about, uh, this idea of core ideology, which is the mixture of core purpose and core values combined.

[00:15:06] Those are two elements that essentially form the bedrock of the organization. Uh, and then on top of that, you have this idea of an envisioned future, which is a combination of BHAG and other things that you do in terms of where you're going. Uh, but in any event, the, the thing that I'm pointing out here is, to your question of associations and how they interact with this, the more deeply rooted that sense of purpose is, and the clearer the values are, then the better that is to guide your AI approach.

[00:15:31] So in some cases, if your culture is murky, if your understanding of your culture is murky, both for you as well as everyone on your team and, and your close-in volunteers, perhaps there's some work to be done there to say, like, well, why are we here? Like, what's the point of our organization? Why do we exist? What is the, the statement that we can make as to our purpose, our reason for existence, that will stand the test of time, something that will mean as much in 20 years as it means today?

[00:15:56] And then let's focus on aligning our AI initiatives to support [00:16:00] that purpose statement. So I do think it's a navigational beacon, a north star of sorts.

[00:16:04] Mallory: Mm-hmm. Do you feel like it's too early in the AI exponential curve, the converging curves we're experiencing, to draw a line in the sand and say, we're only going to use AI companies, or models from AI companies, that prioritize safety?

[00:16:21] Do you feel like you shouldn't take that stance just yet? What, what are your thoughts on that?

[00:16:26] Amith: I think there's some, some pretty obvious black and white areas, and there's also a lot of gray areas. Um, the black and white part is, if you see a company that's just kind of haphazardly, you know, doing whatever the hell they want, without any regard whatsoever to alignment and safety, maybe you should think twice about using the stuff they have, either as a consumer or

[00:16:43] from a business perspective. Uh, do you need to go to the ultra gold standard, whoever touts that they are the safest, most aligned, which Anthropic clearly is, is one of the leaders in that space? Maybe. Um, but is that the right choice for you? I'll give you an example of where I think Anthropic may be the wrong choice.

[00:17:00] Anthropic has really powerful high-end models, so their Claude 4 series has Opus and Sonnet. I believe they have a version of Haiku coming, which is their, their very small model, which is faster and cheaper. But they're expensive. I mean, Claude 4 is a very expensive series of models. Um, they made their older models a little bit cheaper, but their, their most cutting edge models are amazing, but they're also extremely expensive.

[00:17:23] I think it's something like, uh, $70 per million tokens of output from the Opus model and, and $10 for input. And to give you a point of comparison, if you inference on, uh, Groq with a Q, to be clear, the fast inference hardware folks who have their own, uh, very secure cloud that we use a lot, you can get models that are comparable to Claude 4 Sonnet in terms of performance

[00:17:46] for literally one twentieth the cost, and that are also about 10 to 20 times as fast when inferenced on that platform. Uh, that's also true for Cerebras, which is another fast inference provider. Uh, and there's others that are coming down that path with open [00:18:00] source models. So open source models cannot say the same thing that Anthropic says.

[00:18:04] Anthropic, part of their thing is, hey, the way we develop our models, the way we contain our models, in terms of how we deploy them, are all aligned and done with this safety focused mindset. But it's at the ultra premium range, and you don't need ultra premium for everything you do. It's incredibly powerful for some things, but there's lots of AI workloads where you can do just fine with something like a Qwen 3 model, uh, or this new model that came out about a week and a half ago called Kimi K2. Those are both from Chinese companies.

[00:18:32] Um, one from a company called Alibaba, which is known for e-comm in China, and another one is from, uh, an upstart company called Moonshot AI. And to be clear, these are not models that you inference in China, but you can inference them, uh, on US cloud providers, or cloud providers wherever you happen to live, that can provide those models since they're open source.
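To make the pricing gap concrete, here's a quick back-of-the-envelope comparison using the rough figures quoted above ($70 per million output tokens and $10 per million input tokens at the premium tier, versus roughly one twentieth of that on the fast open source inference providers). The workload numbers are made up for illustration; treat the dollar figures as the episode's estimates, not a current rate card.

```python
# Back-of-the-envelope cost comparison using the episode's rough figures.
premium_in, premium_out = 10.00, 70.00                 # $ per million tokens (quoted)
open_in, open_out = premium_in / 20, premium_out / 20  # the "one twentieth" tier

# Hypothetical monthly workload: 2,000 requests, ~3k tokens in / ~1k out each.
requests = 2_000
m_in = requests * 3_000 / 1_000_000   # millions of input tokens  -> 6.0
m_out = requests * 1_000 / 1_000_000  # millions of output tokens -> 2.0

premium_cost = m_in * premium_in + m_out * premium_out  # 60 + 140 = $200
open_cost = m_in * open_in + m_out * open_out           # $10

print(f"premium: ${premium_cost:.2f}/mo, open source: ${open_cost:.2f}/mo")
```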

[00:18:51] So I guess the point I would make is, ultimately, it's probably a hybrid strategy. You may choose one or two frontier model companies like an [00:19:00] OpenAI or a Google. We were talking about them earlier. They have an amazing set of models, uh, with the Gemini, uh, the Gemini series, that often don't get mentioned in the same breath as Anthropic and OpenAI, and, and actually unfairly

[00:19:12] so, because their models are really just as good in terms of overall performance. So there's choice, and choice is good fundamentally. So I think that you should make your own choices based on your priorities, but ultimately, I do think this topic is really key, and I wanna highlight one thing, the non-obvious part of what you said earlier, and just really drive the point home.

[00:19:33] They at Anthropic, as well as everyone else, myself included, believed that Anthropic would probably be a little bit behind the true frontier. That is the belief they had when they started, and that is the belief that they had actually up until recently. Within Anthropic, Dario Amodei, the CEO, along with the rest of his co-founders,

[00:19:50] have publicly said this, that they may not be the absolute latest or most frontier capability because of their focus on safety and alignment. They might always be [00:20:00] slightly behind, but the mindset was that's actually okay, because these models are getting so good, so fast, that being even a small percentage

[00:20:07] behind on timeline or capability is fine. And that was true, and still is true, in terms of it being a reasonable, uh, uh, a reasonable offset, right, to having the safest model available or one of the safest models available. But the point I'd make is that now, in this recent news release, um, they're saying, hey, actually the interpretability and the alignment focus is making our models better.

[00:20:30] Because they, they're so, these guys are a bunch of really smart, technical people. They don't do most of their alignment manually. It's this process they have, which is a recursive self-improvement loop where the model is improving itself using a constitutional AI approach. That's a concept and a term that the Anthropic folks,

[00:20:48] I think they coined it, and they're, they're really focused on it, where essentially they define the value system for the AI. And values are, uh, things that are somewhat hard to define. So the best way to reinforce values is through [00:21:00] examples. You know, you kind of know it when you see it type of thing.

[00:21:02] Like, oh, you know, is this a good or a bad example of value A, B, or C? Uh, and so what they've done is they've trained their models to learn their value system, and they're constantly having models look at the outputs of other models to help them become better and more aligned. And lo and behold, that process actually makes their models a lot more capable as well.

[00:21:21] So it's really interesting. Um, I think that their alignment research is paying off in unexpected ways. So it's a, it's a nice bonus. I think they still have a good chance of winning, even if they were slightly behind, but right now they are not.
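Conceptually, the critique-and-revise loop described above looks something like the sketch below. This is a simplification of the published constitutional AI idea, not Anthropic's actual training pipeline; the call_llm helper and the sample principle are placeholders. In real constitutional AI training, the revised outputs then become preference data for further fine-tuning.

```python
# Simplified sketch of a constitutional-AI-style loop: a model's draft is
# critiqued against written principles by another model pass, then revised.
# call_llm() is a hypothetical stand-in for a model API call.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    # ...a real constitution lists many such written principles.
]

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a model API call."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str, rounds: int = 1) -> str:
    draft = call_llm(user_prompt)
    for _ in range(rounds):
        for principle in PRINCIPLES:
            # Critique the current draft against one principle...
            critique = call_llm(
                f"Principle: {principle}\nResponse: {draft}\n"
                "Critique how well the response follows the principle."
            )
            # ...then revise the draft in light of that critique.
            draft = call_llm(
                f"Response: {draft}\nCritique: {critique}\n"
                "Rewrite the response to better follow the principle."
            )
    return draft
```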

[00:21:32] Mallory: It's a great point, Amith. And I wanted to share too, while prepping for this episode, you all know, if you've listened to the pod before, that I pretty much exclusively use Claude, and I was having it help me synthesize some of this information for the next thing I'm going to talk about.

[00:21:46] And it actually booted me out of the chat and said this violates our, uh, terms of service, because I was talking about AGI and kind of like misuse of AI. So I might be on some watch list, unfortunately, right now. But, uh, I thought, hey, it's doing a [00:22:00] good job. It's doing what it's supposed to do. So, moving to the next part of this episode, I wanna talk about the AI arms race and timeline predictions for artificial general intelligence.

[00:22:09] So leading AI experts are making increasingly aggressive timeline predictions for AGI. Geoffrey Hinton, an AI pioneer, believes that there's a 10 to 20% chance that AI development ends in human extinction. So that's kind of a, a hard pill to swallow. There are some people that are a lot more hopeful in terms of our AI future, but despite these concerns, both western tech firms and Chinese counterparts are accelerating their pursuit of

[00:22:37] AGI, artificial general intelligence that could replace most desk jobs. The predictions are getting a bit more urgent. So Anthropic's co-founder says, when I look at the data, I see many trend lines up to 2027. Demis Hassabis of Google DeepMind thinks AI will match human capabilities within a decade. And Mark Zuckerberg of Meta has said superintelligence is in sight.[00:23:00]

[00:23:00] What's driving this race is the belief that benefits will accrue mainly to whoever achieves the breakthrough first. So it's kind of like the prisoner's dilemma. Everyone knows that they should slow down for safety, but they're convinced others won't, so they push ahead. This is the part that I got flagged on.

[00:23:16] Uh, industry experts outlined four ways that AI could go wrong as we approach AGI: one of those being misuse, so bad actors using AI for harm; misalignment, the AI not wanting what we want it to do; mistakes, so truly unintended consequences; and structural risks, cumulative societal harms. For associations, this timeline means preparing yourselves, your members, for

[00:23:41] potentially massive disruption, uh, within, I don't know, the next decade, the next few years. Amith, I'm, I'm curious on your take about AGI. Hearing that we could get there by 2027, that's just two years away at the time of the recording of this pod, what, what are your thoughts on this part [00:24:00] of the episode?

[00:24:01] Amith: Well, uh, one thing I'd say, just to open up my commentary on this topic is, uh, I'm guessing Zuck might have seen that when he was in his metaverse or something in terms of having super intelligence in sight, because I don't think we're quite there yet. But, uh, maybe he sees something no one else does.

[00:24:16] Uhhuh. Um, in any event, um, I, I think that the ranges of timescales are highly dependent upon what you define as AGI or superintelligence. But I think the most useful definition of AGI, ASI, et cetera, isn't so much like the different technical terms. They all mean technically different things to different people, but the idea that AI can do most human labor, right?

[00:24:36] So if AI can do most of the work that we do, then that is generally useful. It's already, I think, quite useful, but, uh, AI right now doesn't have agency, generally. It's kind of trapped in a box. That's very rapidly changing. As we opened the episode up, there are consumer grade tools where you can have AI kind of go into the world and act on your behalf, increasingly in an autonomous way.

[00:24:56] And to me, that's actually one of the main barriers to realizing the [00:25:00] functional utility of what people call AGI, where an AI can go out there and do most of the work that most of us do. Well, actually, I think today's AI, like Claude Opus 4, GPT-4o, uh, o3 Pro, uh, quite frankly, is quite a bit better than a lot of the people that are out there at a lot of different tasks.

[00:25:16] It's better than I am at a lot of the things that it does for me. And so, um, therefore I would say actually I'm quite comfortable with the idea of AI going out there and doing these things. Um, I think it's gonna be a boon in terms of productivity. I think it has massive numbers of side effects, uh, that could be problematic because, again, of how quickly things are happening.

[00:25:35] We've talked in the past on this pod about employment in general. Um, Dario, uh, from Anthropic has said that a very large percentage of white collar jobs will be automated, as you've talked about before. Um, and I think he's probably right. What I do think will happen in parallel with that, though, is so much economic growth that you will create new categories of opportunity for the people who are able and willing [00:26:00] to go after those new opportunities.

[00:26:01] Now, able and willing are both important terms. Able is about the preparation, which is education, which is, you know, skills, which is

[00:26:19] Right? And is there a societal safety net that provides for folks who decide they don't want to. Not because they can't, but because they feel like they don't want to. And there's all these different layers of questions that come with that. So, um, you know, coming back to this issue of an extension, extension class event, like, will AI come to kill us?

[00:26:35] Um, I think that, you know, the probabilities are just people's hallucinations of what these things are. There's no math. Behind any of those numbers. Um, I think it's a non-zero percentage chance, but that's true for electricity. That's true, certainly for nuclear power and weapons. It's true for the bicycle, right?

[00:26:51] Like there's lots of ways that you can kill lots of people with seemingly basic inventions. Um, so I guess my point would be that, um, we have no idea. [00:27:00] But we know there's a risk. We also know that the genie's outta the bottle, and things are gonna happen at the pace that they're gonna happen. At this point,

[00:27:07] there's no force that I know of, governmental or natural or anything else, that can stop AI. So the conversation isn't about, in my opinion, the conversation isn't about stopping it. It's about being prepared for what's coming, which is about an everyday incremental approach to learning and adapting. And I think at the highest levels of government, and people who are in very large companies as well, need to be very thoughtful about working together, to collaborate, to think through: what does it look like if half or more of the current tasks that we do are automated?

[00:27:39] Does that mean that half or more of our people are gone? Or does it mean that we have an opportunity to create something else? And going back to our earlier conversation about purpose-driven business, and longer term horizon thinking versus shorter term horizon thinking, there are a lot of companies that absolutely will just have reductions in force.

[00:27:55] They'll say, hey, you know what? We have 10,000 employees, we can get by with 3,000. [00:28:00] 7,000 people, see you later. And some of these people are even celebrating this, which I personally find appalling. But like, they're saying, hey, you know, like, AI has helped us get rid of a whole bunch of people. Um, and you know, there's obviously some short term financial benefits, but I also think that's incredibly shortsighted.

[00:28:15] I think organizations that invest in their people and say, hey, like, you're here for a reason. You're doing stuff that's really important for our members, for our customers, for our, perhaps for our society. We're gonna invest in you and we're gonna help you skill up so you become 10x of what you used to be.

[00:28:30] Those people, I think, are going to become amazingly capable of doing things we cannot imagine, right? We're just starting to see little previews of this now. Um, so that makes me very excited about all the great things that we're gonna be able to do. At the same time, and, you know, I'm, I'm obviously an AI optimist,

[00:28:44] I wouldn't be dedicating, you know, so much of my time to it if I wasn't. But that doesn't mean that I'm also, like, putting blinders on and saying, well, you know, I think we'll be fine. You know, I think there's a non-zero chance of an extinction level event too. I just don't know if it's 20%. It might be 2%, it might be [00:29:00] 0.2%, but regardless of what it is, it's non-zero, so we should be actively thinking about it,

[00:29:05] along with what happens even prior to something at that scale. If you say, hey, 10 million jobs are gonna be lost globally in the next 12 months, which, that number may be very low, um, what does that mean? That typically means a lot of bad things happen when you have those kinds of numbers of unemployed people.

[00:29:21] Mallory: Amith, do you think, I know right now we're in the thick of working toward AGI kind of piece by piece, but do you think it's gonna be the event where we wake up one day and we have AGI? Do you think it'll be a bit more slow, or will it be sudden, all at once we'll have it and panic, perhaps?

[00:29:40] Amith: In, in my mind, some people will certainly experience AGI that way, because they've been in their bubble, and that's actually probably a large number of people.

[00:29:47] You know, a lot of people I talk to about AI, they're like, yeah, yeah, you know, I've been paying attention to it. I remember, you know, I know ChatGPT is out there. I also know it hallucinates quite a bit, and, you know, they have kind of knowledge circa early 2023. And I'm like, oh, when's the [00:30:00] last time you logged in?

[00:30:00] Like, oh, I haven't logged in. I've never done that. Or, I, I logged in once, I had it make a cocktail recipe for, for me, or, you know, some of the, like, kind of gen-one use cases of silliness from ChatGPT circa late '22, early '23, right? Um, and that's better than having no idea what it is. But nonetheless, actually, some of those people will probably actually be more shocked than the people who admittedly have no idea what AI is, because at least those people are open-minded to the fact that they have no idea.

[00:30:25] Whereas some people who are like, oh yeah, you know, my, my, my kids use it, or, yeah, some of my employees are doing all sorts of great stuff with AI. Like, actually, I hear that from a number of senior executives. They'll go, oh, we're, yeah, we're big in AI. We're doing lots of cool stuff. I'm like, oh, that's awesome.

[00:30:38] That's so exciting. Tell me, tell me more. And they're like, oh yeah, I can't tell you too much.

[00:30:43] Mallory: executive. I like how you do that. That's good. I'll

[00:30:46] Amith: I'll keep that general, uh, you know. So, but there are, so, there's a lot of people out there in our, in our world, and I, I try to be encouraging. I'm

[00:30:52] also pretty aggressive, as you guys know, um, in that if you don't get on this, you're gonna face an extinction event in your career. So I think it's best to [00:31:00] learn and to be a student of AI every day. But my bottom line on it is, I have no idea. I think that those of us that are deep in this field will not necess, will not really feel like some step change necessarily, but will probably feel more like,

[00:31:12] hey, like, oh yeah, we're here, and we've been on this journey for years now, and so this is amazing. And so it's a little bit harder to realize how much power you have in your hand. Like, think about if you were to go talk to Mallory of early 2023 and say, hey, Mallory, two years and six months from now, I'm gonna have this, and demo Claude Desktop with

[00:31:34] MCPs, with Opus 4, with all the stuff you do every day, right? Or the image tools or Google Veo or whatever, to Mallory of early 2023. That Mallory would've been completely blown away by it. Like, you wouldn't even be able to comprehend it, because you wouldn't have been on that two and a half year journey.

[00:31:48] And some people will experience AGI that way. Right. Some people would say that what you have today is AGI. In fact, by the definition of AGI from 10 years ago, absolutely this would be AGI, and people don't really, really call it that. 10 years ago we were dealing with highly specialized, single purpose

[00:32:04] machine learning models that were trained on private data sets. They were not foundation models. They were built for very specific predictive use cases. And, uh, the capability we have today, 10 years later, is absolutely AGI in the minds of probably 90% of the people that you would've talked to 10 years ago.

[00:32:20] So it is a bit of a moving goalpost. Mm-hmm. So, um, I, I think ultimately it does matter, um, because the experience and how you perceive the world is your reality. Um, but to kind of soften the blow and to make it more of an opportunity than a risk, you gotta do the same thing we always talk about. You know, this is the Sidecar Sync broken record episode, which is where we say to you every single time: you're doing the right thing by listening to us, and hopefully others as well, on AI.

[00:32:45] And, you know, just investing a little bit of time every day. You know, I go on stage and talk about AI and people say, oh, what should I do? I say, listen, block off a 15 minute appointment with yourself every single day and knock out, knock out 15 minutes of dedicated AI learning. And that [00:33:00] can be experimenting with a tool.

[00:33:01] It can be watching a video, it can be listening to a podcast. It can be reading a book. It could be anything you want, but allocate that time. If you do that day after day after day, within a very short period of time, you'll be one of the AI fluent people that are out there, people who are not only AI literate, but you'll be an AI badass.

[00:33:18] So go do that.

[00:33:20] Mallory: I like it. Easy, digestible thing that you can go and do. And listen to the Sidecar Sync broken record episode, I like that. Amith, I wanna talk briefly too about the, the open source closed source debate, which I think is one of the episodes we did right when we launched the Sidecar Sync podcast.

[00:33:37] Maybe like those first 10 episodes or so. In doing my research for this episode, I was reading about kind of the safety parameters that these AI companies use, and how they often use a second AI or AI layer to make sure that however you're prompting the first AI model is in alignment with terms of service, and making sure you're not misusing it or, uh, using the model for harm. [00:34:00]

[00:34:01] And the research was saying that while open source models definitely progress the field, they're also just really easy ways for bad actors to get access to these models, to remove that secondary AI layer, and then, you know, potentially use them for harm. So I just wanted to get your take really quickly on kind of the balance there, 'cause I, I believe you think open source in general is a good thing, but kind of how it might lend itself to the bad actors as well.

[00:34:29] Amith: I, I, it, it is a good thing in my mind for a lot of reasons. Um, the, the bad actors absolutely are going to take advantage of open source models and use them for tremendous harm. Yet open source is still a very good thing in my mind, because the good will outweigh the bad. There's far more good guys than bad guys in the world.

[00:34:44] And if we can get enough people doing good things with good AI, that will outweigh the bad. And the reason I still think that that's better than closed source, you know, as a general statement, is that the closed source thing requires trust and central authority. You have to say, hey, Sam Altman, Dario Amodei, and [00:35:00] Demis Hassabis, the OpenAI, Anthropic, and Google chiefs in those areas respectively,

[00:35:04] uh, we will place our trust in you to not only, uh, have the right intentions, but to be competent at that level, which is extraordinarily difficult, even for the most competent people in the world, to get right. And so I'd rather have choice. I'd rather have multiple different players. I'd rather have the possibility of misuse along with the possibility of people being able to inspect these models, understand the, the depths of their inner workings, create versions of them that are tailored for specific purposes.

[00:35:33] Um, and the bottom line is, is we have no say in the matter. We collectively, as a species, do not control this. It is, it is out of our hands at this point. You can claim that regulation could somehow, you know, put a cork back in this and a lid on open source. I don't believe that. There will still be some countries somewhere that won't comply, and all the open source people will work there.

[00:35:52] And there's plenty of countries that are hugely, uh, pro open source, including the United States as of, uh, last week where the, the statement that [00:36:00] came outta the White House very, very clearly backed, uh, the development of open source models very strongly so, which I think is a really good thing. There are downside risks, but there's downside risks to everything, right?

[00:36:11] So in my mind, open source, what it does is it creates this enormous level of choice. Uh, it does create better transparency. Um, you know, one thing we that might be helpful is to look back at the history of computing and what's happened in the open source movement broadly in the last couple decades. When you think about people who look at like, the most secure systems and software components for things like network security or operating systems.

[00:36:35] Um, a very large percentage of that now is all open source stack, you know, all stuff that is freely available open source software that runs the internet, that runs the network stack, that runs a tremendous amount of the stuff we rely on for our most mission critical requirements. And the reason for that, I mean, the economic argument is very strong, but in addition to that, it's

[00:36:56] very secure, because, um, the process for understanding [00:37:00] what's in the software is open for everyone's inspection. And a lot of people do deeply interrogate these open source products. Now, AI models work a little bit differently. Um, they're open source, but the source code is actually very limited.

[00:37:11] It's more about open weights and open documentation of the training processes. So I think that lends itself to the earlier conversation about transparency, um, and being able to create alignment, because you can also take these models and do post-training and RL, you know, types of approaches, to basically drive better behavioral alignment with your own use case.

[00:37:30] So I'm, I'm a big open source proponent. I am not an anti closed source person. I think there's places where that model economically can make a lot of sense, and we use a lot of that stuff. But, uh, I think that you should have as much choice as possible, particularly early in a game like this.

[00:37:45] Mallory: The last topic we wanna discuss today is AI infrastructure, which is really the undercurrent of, of all of our conversation

[00:37:52] thus far. The infrastructure needs for AI development are staggering. We know this. Projections show frontier AI companies [00:38:00] will need two gigawatt data centers in 2027 and five gigawatts in 2028, just for single model training. Total US frontier AI demand could reach 20 to 25 gigawatts by 2028. That's twice New York City's peak electricity demand, just for training, with at least as much needed for everyday usage.
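For a sense of scale, here's the unit conversion behind numbers like these: running a site at a constant multi-gigawatt draw for a year adds up quickly. The gigawatt figures are the projections cited above; only the arithmetic is shown here.

```python
# Convert a constant power draw in gigawatts into annual energy in TWh.
HOURS_PER_YEAR = 8_760  # 24 hours * 365 days

def annual_twh(gigawatts: float) -> float:
    """Energy used in one year at a constant draw, in terawatt-hours."""
    return gigawatts * HOURS_PER_YEAR / 1_000  # GWh -> TWh

print(annual_twh(2))   # one 2 GW training site: ~17.5 TWh/year
print(annual_twh(25))  # 25 GW of frontier demand: ~219 TWh/year
```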

[00:38:21] The infrastructure race is global. China added over 400 gigawatts of power capacity last year, compared to just several dozen gigawatts in the US. This is driving calls for dramatic policy changes, including making federal lands available for AI infrastructure, accelerating environmental reviews, and creating strategic reserves

[00:38:39] of critical components. Anthropic has proposed some solutions, including building large scale AI training infrastructure, like we mentioned, the federal lands, expedited permitting and power line buildouts, and also broad based infrastructure for nationwide AI deployment, like accelerated geothermal and nuclear permitting, transmission [00:39:00] corridors, and workforce

[00:39:01] development. For associations, really for everybody, right, this infrastructure challenge will affect AI availability, pricing, accessibility, and of course competition. Amith, these infrastructure needs are, are massive. Uh, what do you think that means? Kind of, as I said, it's the undercurrent of today's whole episode, but what do you think that means for AI availability and pricing and competition, globally,

[00:39:27] domestically? What are your thoughts there?

[00:39:30] Amith: I mean, first of all, I think those numbers are probably missing a zero or two. Um, and maybe not in that timescale, but within a few years thereafter. Because, uh, while training is gonna be a massive workload, inference is the demand. When you use a model, you're doing inference on it, and that's going to absolutely blow up

[00:39:46] way beyond any of the estimates that we can possibly understand, because of the value it's creating. Uh, so the demand side of it is going to drive us to have to do something different. Um, so, you know, my thought process on it is that there are a lot of ways [00:40:00] that science can help, uh, and this particular type of energy consumption is

[00:40:04] different than others. Uh, this type of energy consumption can be constrained to geographies where we can put data centers, perhaps less desirable places to live, or places where we can both generate and consume the power in the same location, thereby eliminating some of the transmission problems or narrowing them.

[00:40:21] Uh, it also opens up the possibility of using sources of power that might be otherwise less desirable. For example, intermittent sources of power that are complementary, uh, with other forms, uh, could be mixed in. So you say, for example, well, uh, I might be able to generate the power in the, in the Arizona desert, but I need the power somewhere else.

[00:40:40] And therefore you have this lossiness of transmission, as well as, like, you know, just a really old grid that makes it hard to transmit power even short distances, much less long distances, uh, in the current infrastructure we have. But you can, you can solve for that, because most of these data centers are gonna be new builds.

[00:40:54] So if you build the data center and you co-locate or very closely locate a new nuclear [00:41:00] reactor or, you know, some other source, uh, you can dramatically mitigate some of those issues. Uh, but we have to move, we've gotta go do this. Um, there are new nuclear technologies like SMRs, small modular reactors, which I think we've talked about a tiny bit in prior episodes.

[00:41:14] That'd be a, a fun thing to really drill into and get an expert on here, which I'm certainly not, um, in energy or, or any of these technologies. But, um, I read a lot about them because I find it fascinating. And what I would tell you is there's a lot more science than there are active solutions, because things have historically moved so slowly, uh, to an extent for good reason, because these are things that potentially can have, uh, environmental impact.

[00:41:35] They can have safety, uh, concerns. Um, we have to maintain those, those thoughts and concerns, but, uh, at the same time move a lot faster if we're gonna keep up with the demand. 'Cause we're just, we're gonna run out of power to be a player, much less the leader. You know, we want to continue to be the so-called leader.

[00:41:50] I don't know that we are, but to be one of the leaders, we're not gonna maintain that ability at all if we run out of power.

[00:41:57] Mallory: It's a, it's a good point. For our [00:42:00] listeners who are rightfully concerned about environmental impact, and the ways to do this in the, the smartest, most effective way, where hopefully as a nation, as a society, we can kind of keep up with the demands of AI, but doing so in a responsible way,

[00:42:14] What, what are, what do you say to them?

[00:42:17] Amith: I think people who are pro-environment and concerned about carbon footprint and, in general, all of the other side effects of growing energy consumption should become the biggest advocates for the newer forms of nuclear technology. It's the technology we have, and I'm talking about fission, not fusion.

[00:42:31] Fusion will be great when we get it one day. Maybe that's 10 years from now, maybe it's never. But for now we have new forms of fission. And these new forms of fission are portable. They're smaller, they're replicable, they're scalable, and, uh, they've been proven, uh, actually quite a few different ways.

[00:42:47] There's different flavors to this that can be deployed. And again, China is way ahead of us. Uh, we have some experiments going, but it's, you know, you can count them on, on two hands. And, uh, we need to do a lot more, a lot faster. Uh, I think [00:43:00] alternative, uh, energy generation through wind, through geothermal, through solar,

[00:43:05] all wonderful things, but intermittent sources of power and smaller sources of power don't solve the problem the way nuclear could. I think you could solve a very large chunk of this issue with nuclear. Um, there's a lot we can do, and we have the skills, we have certainly the scientific skills, the engineering and construction labor needed to go and do this at scale.

[00:43:24] Um, this is like an interstate highway type project. It's gonna require massive investment from every level of government, particularly the federal government. It's gonna require us to, to just go for it. Uh, we're not going to be able to keep up if we don't push this way. And, and this isn't like a question of, is this gonna blow up our energy needs?

[00:43:41] It's a question of how, how many zeros are you adding to those estimates over the course of the next 10 to 20 years?

[00:43:47] Mallory: So nuclear fission is something we should be keeping an eye on?

[00:43:52] Amith: I think so. I mean, I think even classical fission style technologies have become dramatically safer over the course of the decades as we refine the [00:44:00] technology. Yet we haven't deployed net new nuclear capacity in this country in a very long time.

[00:44:04] That's starting to change. Uh, you look at other countries that are very pro-environment, and you think about Europe, for example, which tends to be more climate forward than the United States, and particularly a country like France, which is known for being pretty environmentally centric. Uh, they generate the majority, actually, I think the, the substantial majority, of their total energy consumption through nuclear

[00:44:24] power, and they have for years. Uh, so there's something to be thought about there. If you're anti-nuclear for some reason, because of safety concerns or you think there's environmental issues there that can't be solved, um, go take a look at what France is doing. Right? So that's, that's not a small country.

[00:44:37] Uh, it's not the scale of the United States in population or GDP or anything like that, but it's a big country and, and they generate a lot of power from nuclear. So I think there's a lot to be learned there. And, uh, you know, I think what China is doing, at the scale they're doing it, also should be studied. They figured out a lot of stuff over

[00:44:51] there. There's a lot of smart people over there. They're working extremely hard. There's a lot we can learn from, you know, not looking at them with envy, not looking at them with anger, [00:45:00] but to say, hey, they're doing some stuff really well. Like, let's, let's go look at that. Not saying everything they're doing is perfect.

[00:45:04] There's lots of issues, but there's things we can learn from almost anyone. And China's doing a lot of stuff the right way in order to move, to move fast and, and win.

[00:45:12] Mallory: Hmm. Well, Amith, in this episode, I hear, I hear my puppy starting to, to cry a little bit. I guess that's my cue to, to wrap things up. I dunno if the mic can hear it.

[00:45:23] Um, I wanted to say we've, we've covered a lot of ground. We've talked about Anthropic, we've talked about safety, we've talked about AGI, some potential AI doomsday scenarios, and we've talked about infrastructure. In terms of today's episode, all of that considered, what do you think is a key takeaway, or the key takeaways, for our association and nonprofit listeners?

[00:45:43] Amith: I think we just need to all stay pretty humble. Um, even if you think you know a lot about this stuff, it's hard to really know a lot about this stuff. And so it's important to keep investing your energy in it. I also think it's important to share this with others. Um, even people who are not necessarily super,

[00:45:59] uh, [00:46:00] receptive of AI. Uh, we all know them. They're people who are kind of anti AI in some way. Maybe not, like, deeply philosophically, but they just have concerns. I would still encourage those people to at least try it, and to try different things, uh, because I think it's our responsibility to try to bring as many people along with us as we can.

[00:46:16] Sidecar's mission is to educate a million people by the end of the decade on AI in, in the association sector specifically. Um, we think that's,

[00:46:33] And I'd love for everyone to have a goal like that where, you know, your goal as an association leader is to get everyone in your community or a large number of people in your community. AI fluent AI native, AI capable, so that they have not just a fighting chance, but a reason to be optimistic about their future.

[00:46:49] So to me, that's the big takeaway, kind of always. But the humility that needs to come with that is, is really hard, especially if you spend time teaching others about this. You know, it's hard to be a student at the same [00:47:00] time. You know, it's a different gear you have to put yourself into with intentionality.

[00:47:03] Mallory: Mm-hmm. Building the plane as we fly it is what they say. Everybody. You heard it here. Stay humble. We'll continue keeping you up to date with the latest AI news on the Sidecar Sync Podcast. We'll see you all next week.

[00:47:19] Amith: Thanks for tuning into the Sidecar Sync Podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more

[00:47:29] in-depth AI education for you, your entire team, or your members, head to sidecar.ai.

Post by Mallory Mejias
July 31, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.