
Summary:

This week, Amith Nagarajan and Mallory Mejias dive deep into Google's groundbreaking AI breakthrough—AlphaEvolve. In their first-ever single-topic episode, the duo unpacks why this agent’s ability to autonomously invent and refine algorithms marks a pivotal moment in AI history. From shattering a 56-year-old math record to its potential to reshape how associations tackle "unsolvable" problems, Amith and Mallory explore what AlphaEvolve means for science, business, and the association world. Plus, they discuss how associations can remain relevant in the face of rapid AI advancement—even when the tech seems impossibly complex.

Timestamps:


00:00 - Introduction
06:43 - What Is AlphaEvolve?
09:28 - Breaking a 56-Year-Old Math Record
15:53 - Applying AlphaEvolve to Association Use Cases
24:47 - Three Key Features: Human Readability, Purpose, Evolution
32:06 - The Power of Digital Twins & Experimental Campaigning
35:33 - The Association's Role in Educating Technical Members
41:41 - Reframing Assumptions & Encouraging New Thinking
45:23 - Final Thoughts 

 

🎉 More from Today’s Sponsors:

CDS Global https://www.cds-global.com/

VideoRequest https://videorequest.io/

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

 Find out more about Sidecar’s CESSE Partnership - https://shorturl.at/LpEYb

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🛠 AI Tools and Resources Mentioned in This Episode:

AlphaEvolve ➡ https://shorturl.at/3vzXm

Gemini ➡ https://deepmind.google/models/gemini/

Claude by Anthropic ➡ https://www.anthropic.com/index/claude

Perplexity AI ➡ https://www.perplexity.ai/

Betty ➡ https://www.meetbetty.ai/

Isomorphic Labs ➡ https://www.isomorphiclabs.com/

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Amith: This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster, and that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently.

Welcome to Sidecar Sync, your Weekly Dose of Innovation.

[00:00:19] If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates with a keen focus on how AI and other emerging technologies are shaping the future.

[00:00:36] No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress. And I'm your host.

Greetings, Sidecar Sync listeners. We're excited to be back with you for an amazing episode, all about artificial intelligence, and we will be talking about how this particular topic actually is super relevant to the world of associations, and you'll find out very soon why I introduced today's episode that particular way.

[00:01:01] My name is Amith Nagarajan. 

[00:01:03] Mallory: My name is Mallory Mejias. 

[00:01:05] Amith: and we're your hosts. And it is an exciting time in the world, isn't it, Mallory? And with AI? Oh yeah, AI in particular. It's just kind of nuts. So, um, I've been excited about this topic. I remember reading the news and we're gonna get into that in a minute. But, um, a lot of people are gonna be wondering why we're covering something that seems like a deep science topic, which we tend to do from time to time.

[00:01:26] We'll digress from the world of, like, hey, here's how you use this particular tool, to big picture stuff, but uh, this one definitely is notable. I think in fact, so much so that I think it's our only topic for today, right?

[00:01:37] Mallory: It is our only topic, and I was just reflecting with Amith before we started recording.

[00:01:41] I don't think we've ever done an episode on a single topic. We've done some evergreen episodes, um, you know, about getting your board on board with AI, uh, or data or foundations of AI, but this is our first AI news item episode that's just one topic. That's how important we believe this is.

[00:01:58] Amith: Yep.

[00:01:59] Mallory: Amith, how have you been doing this week?

[00:02:01] Amith: I have been really good. I've been looking forward to this. I've been diving into a whole bunch of project work with our teams in different areas, which actually some of our project work is kind of related to this, so that'll be fun to talk about. But, uh, just all around good here in New Orleans. This time of the year, as you know from living here, and it's probably similar in Atlanta.

[00:02:17] It's just starting to get really, really nasty outside. It's like 95 degrees or pretty close and a hundred percent humidity, so I'm not enjoying that. But otherwise, I'm doing great. How about yourself?

[00:02:28] Mallory: Honestly, I hate to tell you, Amith, but the weather's really nice here in Atlanta. It has not been that hot.

[00:02:33] I mean, we've been getting like some peak, maybe 83 Fahrenheit days. Oh man. But overall, the weather's been really nice. I've been getting outdoors a ton and we actually, both my husband and I just got new bikes. Nice. So we are really excited to get out and about and ride those.

[00:02:48] Amith: Did you get, uh, e-bikes or, uh, no, no, no motor.

[00:02:51] Mallory: It's funny, it's funny that you asked that, because my husband really, really wanted to get e-bikes, which are just kind of, in my opinion, outrageously expensive. So we didn't. We couldn't have one of us have an e-bike and one of us not, which is what he proposed initially. I was like, I'll be behind you.

[00:03:06] Like wait out. So we ended up getting, I think they're hybrid bikes. So road bikes slash gravel bikes. Yeah. We had mountain bikes before, but we, we never went mountain bike, so we decided we needed just a more standard bike, I guess.

[00:03:19] Amith: That'll be fun. I love, I love cycling, uh, up in Utah, not so much here in New Orleans, although you can do a little bit of cycling primarily here up on the river levee and, uh, oh yeah.

[00:03:27] Also the lake. Uh, you can do that because that's kind of like, there's nobody driving there and uh, it's actually pretty smooth. So, but Atlanta, I imagine you have some pretty good trails and pathways to go ride on.

[00:03:38] Mallory: Some pretty good hills and that's why we realized we went to this one trail probably last summer with mountain bikes and we couldn't get up the hills.

[00:03:46] We'd have to get off the bikes, walk them up, everybody on their road bikes is passing us. So, uh, we decided to make that change, but very excited to start using those and get outside.

[00:03:56] Amith: E-bikes are pretty impressive that I'd say like, um, my wife has one up in Utah and when we go mountain biking, um, it's just kind of amazing.

[00:04:03] I, I actually don't particularly like riding it most of the time. Okay. Because it's heavy and with mountain biking you wanna be able to kind of move around a lot and, you know, have a lot of maneuverability, but, uh, but it sure is fun going up hills.

[00:04:14] Mallory: Yeah. Oh yeah. It's much, much easier. Uh, Amith, I wanted to talk about on the pod this new CESSE-Sidecar partnership that's been announced.

[00:04:24] If you can share a little bit about that.

[00:04:25] Amith: Sure. So, um, CESSE is all the STEM associations. So STEM societies, uh, they have a wonderful community of people, particularly in that sub-niche. And it's funny, because for people that aren't familiar with the association market, just having, you know, a handful of associations for associations, like the national body with ASAE, and then regional bodies like Chicago Forum or any of the state SAEs, people are surprised that there's associations for associations even at that level.

[00:04:53] But there's, of course, even more associations for associations that are specialty based. So there's CESSE, and there's NABE for the bar executives, there's AAMSE for the medical society executives. Uh, so there's a lot of these wonderful organizations. What's cool about them is they hyperfocus on content and ideas in their particular communities.

[00:05:11] So in the STEM societies, in the world of CESSE, uh, the needs are similar to other associations. Of course, most associations share certain commonalities, but STEM societies oftentimes have much deeper needs in the area of scientific and technical publishing, uh, and content, and, and amongst other areas as well.

[00:05:29] They tend to have lots of meetings that are, uh, you know, formal in nature, scientific proceedings. Um, there's those kinds of things that are going on. So their requirements and their focus there, both in terms of business and technology, uh, are concentrated. And that's why these types of groups form, is because they want to

[00:05:44] talk about the issues most relevant to them. So we're super excited to partner with CESSE. They are the premier body that, that, uh, exists in this space for STEM societies. The partnership with them is awesome. It's a, uh, member benefit to get a discount on the Sidecar AI Learning Hub. And, uh, we couldn't be more pleased to partner with those guys.

[00:06:03] So, very excited to roll that out.

[00:06:06] Mallory: Awesome. And so just to be clear here, the Sidecar AI Learning Hub comes to their association members at a discounted rate, the full AI Learning hub, right?

[00:06:16] Amith: Yeah. The full AI Learning Hub is available to them, as well as the, the prompting course is also available. Anything in the Sidecar AI Learning Hub, which includes those two options, or actually three options, 'cause there's the learning hub

[00:06:27] without the certification and there's the learning hub with certifications. So, um, CESSE members, through their membership, they have a new benefit, which is a discount on all the Sidecar products.

[00:06:37] Mallory: Very cool, and I feel like that was a really good segue into our topic for today, which is very heavy on the science.

[00:06:43] Today we're talking about Google's AlphaEvolve. Amith, I'm really glad you flagged this for me, because you sent me the LinkedIn post, and honestly, you probably feel the same way often, but when I see such an inundation of information around AI all the time, pretty much every day, it can be hard to decipher what's really of an impressive magnitude, like what I need to pay attention to and what I don't.

[00:07:07] So I feel like I might have just glazed over this post until you sent it to me and said, all right, I think this is full-episode quality.

[00:07:15] Amith: I think, uh, I think I first learned about this through YouTube maybe. Okay. I don't think it was LinkedIn. I think it was YouTube and, uh, YouTube's algorithm for recommendations is it, it knows me quite well.

[00:07:24] It sends me a lot of really nerdy stuff like this. And, um, you know, this, this particular topic, as we get into it, I think might initially sound, uh, both intimidating perhaps, but also not particularly relevant to a lot of association leaders, so can't wait to get into that. But, uh, it just caught my attention because I think it represents a significant, uh, new capability that has thus far, as far as I know, only been hypothesized, but hasn't yet been proven by anyone else.

[00:07:50] So, um, let's get into it.

[00:07:52] Mallory: Yep. AlphaEvolve is a cutting-edge AI agent developed by Google DeepMind, designed to autonomously invent, test, and refine computer algorithms for a broad range of complex real-world and scientific challenges. AlphaEvolve leverages the power of Google's Gemini large language models, combined with an evolutionary framework, to go beyond simply generating code.

[00:08:15] It actively discovers new algorithms, optimizes existing systems, and solves mathematical problems that have long eluded human experts. So to get into a little bit of how it works, it uses two versions of Gemini: Gemini Flash, which rapidly explores a vast space of possible solutions, and then Gemini Pro, which focuses on deeper, more nuanced analysis.

[00:08:38] And here's an overview of how the process works. So first, a user provides a task and an evaluation function, so basically the metric for success for that task. Then Gemini Flash rapidly generates multiple candidate solutions. Next, Gemini Pro analyzes and improves the most promising candidates, and then automated evaluators rigorously test each solution against the defined metric that you provided at the beginning.

[00:09:05] Finally, the best-performing solutions are selected, mutated, and recombined in successive generations, evolving toward optimal or novel algorithms. This method allows AlphaEvolve to autonomously refine not just short scripts, but entire code bases and system-level solutions, producing readable and auditable code that engineers can easily review and implement.
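
To make that generate, evaluate, and evolve loop a little more concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function names, the placeholder scoring, and the population sizes are invented stand-ins, not AlphaEvolve's actual code or API.

```python
import random

# Hypothetical stand-ins for the two Gemini roles and the user-supplied
# evaluation function; names and behavior are illustrative only.
def flash_generate(task: str, n: int) -> list[str]:
    """Fast model: propose n rough candidate programs for the task."""
    return [f"candidate {i} for: {task}" for i in range(n)]

def pro_refine(candidate: str) -> str:
    """Stronger model: rework the most promising candidates in more depth."""
    return candidate + " (refined)"

def evaluate(candidate: str) -> float:
    """User-defined metric for success; higher is better. Placeholder logic."""
    return random.random()

def mutate(candidate: str) -> str:
    """Apply a small, targeted change to a surviving candidate."""
    return candidate + " (mutated)"

def evolve(task: str, generations: int = 5, population: int = 20, keep: int = 4) -> str:
    pool = flash_generate(task, population)
    for _ in range(generations):
        ranked = sorted(pool, key=evaluate, reverse=True)    # automated evaluators score everything
        survivors = [pro_refine(c) for c in ranked[:keep]]   # deeper pass on the best candidates
        # Next generation: the survivors plus mutations of them
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - keep)]
    return max(pool, key=evaluate)

best = evolve("multiply two 4x4 matrices using fewer scalar multiplications")
```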

[00:09:28] I wanna talk a little bit about, uh, the discovery that has been getting a lot of press with AlphaEvolve. So it broke a 56-year-old mathematical record by discovering a more efficient algorithm for multiplying four-by-four matrices, reducing the required number of scalar multiplications from 49, which was the previous best, to 48.

[00:09:48] This surpassed the record set by German mathematician Volker Strassen in 1969, a milestone in computational mathematics. So at a glance, that might not sound the most impressive. We'll, we'll break that down just a little bit. But Amith shared with me this great YouTube clip, and I wanna, uh, insert just a piece of that in the podcast, because it gives a quick overview of kind of how AlphaEvolve did this and why it is so impressive.

[00:10:11] So I'm gonna play that now.

[00:10:13] Youtube: Most of the time, entries of your matrix are going to be real numbers, but AlphaEvolve realized if we use complex numbers, which have real numbers as a subset, we can get some magical cancellations and reduce the number of multiplications to 48. A lot of researchers would probably assume using complex numbers would make the problem more difficult.

[00:10:32] But AlphaEvolve somehow realized that's a good approach. Four by four is very small, but just as a reminder, we can do this recursively for larger matrices. In fact, the larger the matrix, the bigger the effect, because now instead of 49 times 49, you have 48 times 48 for eight-by-eight matrices, and the gap keeps growing the bigger the matrix.
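
A quick back-of-the-envelope on why that gap compounds, under the assumption the clip describes: the 48-multiplication scheme gets applied recursively, block by block, the same way Strassen's construction is.

```latex
% Treat a 4^k x 4^k matrix as a 4x4 grid of blocks at every level of recursion.
% The scalar multiplication counts then compound:
\[
  M_{\text{Strassen}}(4^k) = 49^k, \qquad M_{\text{AlphaEvolve}}(4^k) = 48^k
\]
% so the relative saving grows with matrix size:
\[
  1 - \left(\tfrac{48}{49}\right)^{k} \approx 2\% \text{ at } k = 1,\;
  4\% \text{ at } k = 2,\ \ldots
\]
```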

[00:10:53] Mallory: Beyond breaking this 56-year-old mathematical record, on a set of over 50 challenging mathematical problems, AlphaEvolve matched state-of-the-art solutions in about 75% of cases and improved upon state-of-the-art solutions in roughly 20% of cases. So Amith, I often do this on the podcast. Uh, I like to quote you when you share something with me, because I feel like I can pull a lot of insight from that. You shared a

[00:11:19] link, I think it was to the LinkedIn post, and you said, absolutely stunning and a hundred percent predictable. I've gotta ask, so explain to our listeners, what did you mean by that?

[00:11:28] Amith: Well, you know, people that are deep in this stuff have been expecting systems to have this capability, which I would broadly categorize as an early, um, exploration of this concept of recursive self-improvement, which is where a system is able to improve upon itself and improve upon itself, which is effectively what's happening here.

[00:11:49] And so that's, it's predictable because there's a lot of people working towards this. The folks at DeepMind tend to focus on these types of unsolvable problems. I find a lot of their work to be just incredibly inspirational. Uh, and so, uh, a hundred percent predictable is because we know that we have a lot of the core elements to do this.

[00:12:08] Uh, still, even though you expect it to happen, it's stunning to see. So that's, that's where I was coming from. I was just really excited about this.

[00:12:16] Mallory: So you talked about recursive self-improvement, that's, that was the phrase you said, right? And the ability for AI to now discover novel algorithms, which is something that, as far as we know, has not been shown before.

[00:12:30] Can you kind of provide some more tangible examples of, um (Sure), like situations where algorithm discoveries have changed the world?

[00:12:39] Amith: Well, I mean, algorithmic improvements have helped us do everything over time from, you know, from the earliest stages to where we're figuring out how to do very complex things now.

[00:12:49] So algorithms are essentially, it's, it's a complex, fancy word for saying it's a step-by-step way of solving a type of problem. So, you know, if we know how to solve more problems and more and more complex problems, and then if we come up with smarter, better, faster, more efficient ways of solving the same problem.

[00:13:06] There's value there too. So we might know how to do matrix multiplication. We've known how to do that for a long time now. Uh, but we know how to do it in a way that's pretty compute intensive, right? So if we can improve that by some degree of efficiency in this case, you know, one out of 49 doesn't sound like a massive increase.

[00:13:23] Uh, but as these matrices become larger and larger, the percentage of, uh, efficiency increase from this new algorithm actually goes up quite a bit. Uh, but the point is that even that level, whatever that percentage is, even if it's very small, is actually a stunning impact if you think about global energy consumption by AI systems, which heavily, heavily rely on matrix multiplication. That's the core of what inference is doing.

[00:13:47] That's the core of a lot of what happens in training. Uh, if you can make that slightly more efficient, that's a pretty big deal. It's both good for performance, but it's also good for energy and cost. Um, but to me, the examples are actually, literally, anything you can imagine that's in this category of unsolvable.

[00:14:05] So Mallory, uh, I'd remind some of our listeners to go back in time in the Sidecar Sync to our, uh, episode, uh, on, uh, materials science. And I'm trying to remember what it was called specifically, but there was a materials science episode. AlphaFold was the bio, uh, bio-related one, but, uh, but very similar.

[00:14:23] Um, in fact, it was the, we talked about the materials genome in that episode. We'll have to go back and look it up 'cause my memory's failing me on this. But in that conversation, I actually think it was also Google DeepMind that had a materials, um, AI, it was discovering novel materials. And this was

[00:14:38] incredibly interesting, because they were actually able to physically fabricate many of these materials and test their properties that were hypothesized by the AI, uh, and prove that the AI was correct about the vast majority of them. Um, so essentially we, um, we had there one example. You mentioned AlphaFold.

[00:14:58] AlphaFold has gone through several of its own evolutions, uh, in this case, in AlphaFold's case, by humans evolving it. But the most recent AlphaFold 3, for example, that's being, uh, commercialized by a lot of different people. But the people over at Isomorphic Labs, which is another branch of Google that Demis Hassabis also leads, they're doing novel drug discovery with AlphaFold 3, and they've

[00:15:20] built a layer of software on top of that. I'm sure the concepts in AlphaEvolve are bleeding over there and back and forth. But, um, so, so let me zoom out for a minute and try to explain why I think this is a big deal. If a computer system can be given an arbitrarily complex problem and told, improve upon this, make this better, solve this problem for me that I don't know how to solve, that's very different than what we've been doing.

[00:15:42] Even with the state-of-the-art AI systems we have in our hands, which in many respects are still effectively required to have, somewhere in their training set, something that essentially contains the answer. So thus far, and this is not a hundred percent true what I'm saying, by the way, but effectively, thus far all of our AI systems are capable of doing anything that's in their training set, but not really generalizing in a broad sense and being able to create new solutions that didn't exist.

[00:16:11] So with AlphaEvolve, the, the thing you had mentioned earlier is matching state-of-the-art solutions. So there's the 50 challenging problems that they ran through it, right? And in 75% of cases, they matched the current best-known algorithm, invented by humans, right? Um, but those known, those, like, known algorithms were not included in the training sets.

[00:16:31] So they were able to essentially create new algorithms rather than use prior information. So that's by itself very impressive. And then in these 20% of the scenarios where they created new algorithms that were better, that's quite fascinating. Now, they did this through a system. This is not a new model.

[00:16:47] This is the smart use of engineering on top of the underlying models. They actually used older versions of Gemini, Gemini Flash and Pro 2.0, which are both excellent models, but they're not even the latest 2.5. So when they do this again with Gemini 2.5, which is a reasoning model that has its own level of increased intellect,

[00:17:05] it'll be quite fascinating to see what happens. Um, so to me it's all about the stuff that we don't know how to solve, right? Like, um, for those of you on YouTube who have heard me talk elsewhere, I have this flywheel in my background all the time, and it's there to remind myself as much as to share it with anyone else.

[00:17:18] But the very first thing is, is that we wanna seek and destroy unsolvable problems in the association market. Um, that to us is what drives us to move forward as a family of companies: to find these so-called unsolvable problems, and then we wanna go figure out how to actually make them solvable.

[00:17:33] Right. Um, so it's, uh, it's, that's the place that I love to spend time thinking about, because that's where you can really unlock new business models, new sustainable sources of revenue for the association community. Um, so we'll get more into that shortly, I'm sure. But, um, there will be ways to apply this idea for lots of organizations, not just people that are deep in science and math.

[00:17:53] Mallory: Mm-hmm. I love the terminology, seek and destroy, Amith. I'm guessing, was that a, did you come up with that phrase?

[00:18:00] Amith: Yeah, that's got my fingerprints on it. We, we had some, we had some lively debates in our, uh, planning meeting when we were coming up with that, but eventually we got everybody on board with it.

[00:18:07] But yeah, I just, I like, you know, visually captivating imagery where we're like, you know what, that's what we're gonna go do. We're gonna seek 'em out and we're gonna just crush these unsolvable problems.

[00:18:17] Mallory: Yeah, it definitely makes you feel something. So I like that as well. Amith, you talked about the idea of creating novel algorithms.

[00:18:24] I think for some of our listeners, even me included, when we use a large language model, it can seem like AI's coming up with novel, uh, quote-unquote solutions. For example, if I give it some ideas we have about digitalNow, it might come up with a theme that I had never thought of, but what you're saying is, in its training somewhere, that information exists.

[00:18:44] It wasn't creating something truly novel.

[00:18:47] Amith: Yeah. I mean, the, the, the models we have now, especially the reasoning models, are able to synthesize new ideas in a sense, in that they're able to combine ideas, mm-hmm, from other ideas. Right. So it's not that there has to be, like, a copy of a piece of text that literally answers your question.

[00:19:02] That's, and that in fact, even going back to the earliest language models, that's not what they did. Um, they were, they were creating new answers, but if they hadn't been trained on something, it would be extraordinarily unlikely for them to come up with something truly unique. Now, these models have gotten better, partly also because they have access to tools where they can write code and run it.

[00:19:21] They can search the web. So that was, of course, originally the domain of just Perplexity and a couple of other early innovators. But now every major AI tool has built-in web search. So they can, they can go discover new information that's outside of their, their true training set. So that statement is evolving as we speak, but, uh, ultimately the way I think about it is that, um, so far, these models, they don't work the way our brains work in terms of this creative space.

[00:19:47] They have very limited ability to, you know, really ponder the problem and kind of go through the creative process the way a lot of, a lot of people do in order to solve novel problems, right? You think about, like, what's the journey of a scientist that's trying to find a cure for something? Um, it's not really a linear path, right?

[00:20:05] Mm-hmm. It's always, always all over the place. You hear all these stories about people waking up in the middle of the night and in their dream state they thought of this, or, you know, they're walking their dog and they saw something in nature that inspired them to think of a new solution, and on and on and on.

[00:20:16] You know, the apple falling on Newton's head, right? That kind of stuff. So, you know, AI doesn't really have that, uh, experiential type of component, and therefore it's, it's not as creative yet. That doesn't mean it won't be, but at the moment, you know, what we use day-to-day in Claude and ChatGPT is not that level of creativity, and creativity is the ingredient that drives any kind of new discovery, whether it's in poetry or in science.

[00:20:38] Mm-hmm.

[00:20:40] Mallory: If you all have been listening to the podcast for a while, Amith, you've definitely said multiple times: even if the AI we had today doesn't get better at all, we will continue to see discoveries or, or new applications over the next, probably, I don't know, five to 10 years. I feel like this is a prime example of that, right?

[00:20:56] Because it's not a new model, it's simply engineering that was put together with the models we already had, right?

[00:21:02] Amith: Totally. Yeah. And you know, I'll give you a quick example of that in, uh, one of our AI agents. It's a tool called Skip, which does, you know, data science and, and analytics and stuff like that, uh, through a conversational interface. Up until the most recent version of Skip, which we're about to release, um, we would run a request through our agentic pipeline once.

[00:21:20] So essentially what would happen is, is you'd go to Skip and say, hey, I need to analyze my member data to figure out retention by geography and see if there's a correlation between member retention by geography and age, or, you know, just, I'm making stuff up, right? Just any arbitrary question. Skip gets to work, looks at your data, figures out what to go pull in, starts writing code, puts together a report, sends it back to you. So that happened once.

[00:21:43] Now, with AI costs dropping so much and compute being more abundant, we're actually running some of these components of that agent pipeline dozens of times in parallel, uh, and then picking the best answer. So the final step that Skip goes through in creating a report is to actually generate fairly complex code.

[00:22:00] It can be, in many cases, thousands or even tens of thousands of lines of code. And rather than doing that once, we say, hey, do it 3, 5, 10, 50 times in parallel. And then we have another AI process that's capable of evaluating the output and picking the best answer, which in some ways is similar to what AlphaEvolve is doing.

[00:22:18] Now, our stuff is not nearly as advanced as what they're doing in terms of testing different, different algorithms, because what we're doing doesn't require novel algorithmic discovery. But in a way it's a similar process, because we're basically running a lot of these processes over and over, and then iteratively improving within the agent.

[00:22:34] It's not in the model layer, but it's in the agent layer. To me, that's why in past episodes I've said many times, the terminology is interesting but not necessarily that important, because what happens in the model versus what happens in the agent layer or the application layer, to the user, does it work or not?

[00:22:49] 'Cause that's the question. Um, you know, so that, those are some things I think I'd point people back to.
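
For listeners who want to picture that best-of-N pattern, here is a minimal sketch. The function names and scoring logic below are hypothetical stand-ins, not Skip's actual internals; the point is just the shape of the idea: generate many candidates in parallel, then let a separate evaluator pick one.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the agent's code generator and its judging step.
def generate_report_code(request: str, attempt: int) -> str:
    """One independent attempt at writing the analysis code for the request."""
    return f"# attempt {attempt}\n# code that would answer: {request}"

def score_candidate(code: str) -> float:
    """A second AI pass (or test harness) rates how well a candidate answers the request."""
    return float(len(code) % 10)  # placeholder scoring logic

def best_of_n(request: str, n: int = 10) -> str:
    # Run the expensive generation step n times in parallel...
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda i: generate_report_code(request, i), range(n)))
    # ...then keep only the highest-scoring candidate.
    return max(candidates, key=score_candidate)

report_code = best_of_n("retention by geography, correlated with member age")
```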

[00:22:54] Mallory: Yep. I wanna start zooming out just a bit and really talk about what makes AlphaEvolve unique, at least what I've found in my research. So the first, which we've started to touch on, is that general-purpose capability. Unlike previous domain-specific AI systems, we just talked about AlphaFold.

[00:23:11] That was specifically for protein folding. AlphaEvolve is a general-purpose algorithm discovery agent, so it's capable of tackling a wide variety of problems: computing, mathematics, engineering. I'm sure we can even extrapolate further than that. The second piece of what's unique about it is, uh, its evolutionary search.

[00:23:29] So the evolutionary approach, combined with automated evaluators, enables it to autonomously explore and optimize the solution space far beyond what's possible right now with traditional AI code generation tools. And then, human-readable output. So the system produces clear, structured, and auditable code, making it practical for real-world deployment and human collaboration.

[00:23:53] Amith, of these three things that I mentioned, so human-readable output, general-purpose capability, and evolutionary search, uh, can you kind of talk about each of these unique components, and maybe we'll get into how these might apply to associations?

[00:24:07] Amith: Yeah, totally. I, I think each of those is really important.

[00:24:09] I'll actually start with the last one first because it's probably the easiest one to describe. I think it's really important that AI systems communicate in natural language, uh, AKA natural language to us humans as opposed to some kind of funky computer communication mechanism. Uh, because that makes it not only interpretable and discoverable, but it also, it's something we can, you know, keep up with, right?

[00:24:29] As opposed to, computers could probably find a much more efficient way to communicate, because our natural languages are designed for our brains, and computers can do things differently. But, uh, that's important. Uh, not all systems of this type necessarily will have to work this way, but I think it's really important to have human-readable output and human-readable steps along the way.

[00:24:47] One of the leaders in this, and what I'm referring to kind of broadly is this category called interpretability, uh, which is a really big, important dimension of AI research, is actually Anthropic, who we've talked about a, a number of times in this pod. They're the makers of the Claude, uh, AI system. Um, those guys are

[00:25:04] really, really good in, uh, in, in their research efforts, and their emphasis on interpretability and human-readable output is a big thing that those guys and many others emphasize, because, uh, that's like the, the side of it we can see. Those guys go into the model itself to understand what's happening in the actual neural network.

[00:25:19] Uh, and that's super interesting, but a little bit off topic. So in terms of general-purpose capability, we've touched on that in the first part of our discussion, Mallory, where this is not just about mathematics, certainly not just about matrix multiplication, but it can be about anything. So imagine you had access to a system like this in your association.

[00:25:35] You said, hey, I need to improve member retention. Go figure it out. And so what are some of the common playbooks that people would pull up and say, well, we should run an engagement campaign. Let's try to figure out how to get people to more of our events, because we know there's a correlation between people who attend events and better retention. Now, is that

[00:25:56] a correlation? Or is it causation? Meaning, do people who come to the events renew because it's a byproduct of being at the event? Or is it some other effect? Right. And we don't necessarily know that, but we might think, well, that's one playbook, or playbook theme, we know:

[00:26:13] drive event attendance to try to drive retention. What about sending out better content? Um, that's another common one. What about just communicating the value of membership that they've received? A lot of people forget about all the things they do with the association. These are, like, in our brains, common association, uh, ideas.

[00:26:28] But what about just coming up with other ideas, right? So what if we had a system that could explore the space around member retention, automatically hypothesize a bunch of possible solutions, and then, um, come up with, hey, here are 10 different things you could go test, and then help you actually go test these, these novel ideas.

[00:26:45] Some of them might not be very good. Some of them might be at, you know, at the level of your current tactics, and some might be better. Um, so that's kind of applying the concept from the world of, let's say, math or engineering to a domain in, in business. Um, the evolutionary search part is a really important piece to, to come back to and make sure that's clear.

[00:27:04] So, so how does evolution work over a very long period of time for, you know, biological species? You know, stuff happens in our environment. Uh, largely it's various forms of radiation that cause, you know, us to effectively have mutations, right, in our DNA, and over time that causes slight variations to occur from one generation to the next, to the next, to the next.

[00:27:26] And some of those adaptations are helpful and some are not. So, um, when those adaptations are helpful, that particular branch of those generations tends to thrive and the other branches tend to not thrive, right? And so, over a period of time, um, that happens over thousands and thousands or millions of generations, and you have a lot of evolution, a lot of change that occurs.

[00:27:49] Um, so in this whole evolutionary computing category, which is very closely aligned with AI but is its own branch of computing and, and computer science research, um, people have been exploring this space for a long time. And pre-AI, or pre-modern-AI, it was, you know, a very slow process to do this. But now, with AI coming up with thousands of candidate algorithms, the evolutionary piece is tied to it to say, okay, what if we mutate these algorithms a little bit?

[00:28:12] What if we change the approach to this part of the algorithm or that part of the algorithm? These small tweaks, rather than being caused by environmental factors, are caused by intentional modification or intentional mutation. Um, and then we test the offspring, the next generation of the algorithm.

[00:28:27] So you take an algorithm and you tweak it a little bit, and then you test it, and tweak it a little bit, and you test it. Now, in the context of a business domain problem, we're a little ways away from being able to actually do this, because to actually test these different ideas, you don't wanna really test all this stuff on your members, right?

[00:28:41] And say, hey, what would happen if we send them all this? Now, you could A/B test things, and you can definitely get close to empirical results, or actual empirical results, with subsets of your audience, and that's very helpful for things you think are likely to work. But to just, like, have the craziest possible ideas and test them on your live members?

[00:28:57] You're probably not gonna do that anytime soon. So if you think about some of our conversations over time in this pod, the idea of digital twins or simulation systems: what if you had a digital twin for your membership? And your membership, through all the data and all the attributes you have across every system and every interaction you've ever had, was modeled into an effective digital twin of your association's membership, which essentially means it's a simulation of how your, you know, 10, 20, 30,000 members would behave to different stimuli, uh, down to the individual node, to individual members.

[00:29:28] So if this message goes across, this is how this member would react, and then what would happen to this other member, you know, because there's all sorts of chains of things that happen based on social and, and so on and so forth, right? So if you had that kind of a digital twin of your membership and then you had an evolutionary discovery algorithm tied into a system like this that could test out all sorts of different ideas, like how can we improve engagement?

[00:29:51] How can we get more people to events? How can we drive increased renewals? And I'm picking frankly, what will eventually be seen probably as fairly pedestrian problems, but they're what occupy our minds today if those are our issues. But if you could test all these different hypotheses from an evolutionary algorithm against the digital twin, it's not a hundred percent real.

[00:30:07] But it could get pretty darn close to giving you a good prediction about which algorithms might be good for you and which ones might not be good for you.
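
As a thought experiment only, here is what "test it on the twin, not on your members" could look like in a few lines of Python. Every attribute, campaign field, and response formula below is invented for illustration; a real digital twin would be fit to an association's actual data rather than hand-written.

```python
import random
from dataclasses import dataclass

# Toy digital twin: each simulated member responds to a campaign based on a
# couple of made-up attributes and a made-up response model.
@dataclass
class SimulatedMember:
    attends_events: bool
    years_of_membership: int

@dataclass
class Campaign:
    emails_per_quarter: int
    event_discount: float  # 0.0 to 1.0

def renewal_probability(member: SimulatedMember, campaign: Campaign) -> float:
    base = 0.6 + 0.02 * min(member.years_of_membership, 10)
    if member.attends_events:
        base += 0.15 * campaign.event_discount
    return min(base + 0.01 * campaign.emails_per_quarter, 0.99)

def simulate(campaign: Campaign, twin: list[SimulatedMember]) -> float:
    """Predicted renewal rate if this campaign ran against the whole membership."""
    return sum(renewal_probability(m, campaign) for m in twin) / len(twin)

# Build a synthetic membership and score a handful of campaign variants against it.
twin = [SimulatedMember(random.random() < 0.4, random.randint(1, 20)) for _ in range(10_000)]
variants = [Campaign(e, d) for e in (2, 4, 8) for d in (0.0, 0.1, 0.25)]
best = max(variants, key=lambda c: simulate(c, twin))  # tested on the twin, not on real members
```

In an evolutionary setup like the one Amith describes, the winning variants would then be mutated and re-scored against the twin, and the results from the one or two campaigns you actually run would feed back into the model.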

[00:30:14] Mallory: Wow. I laughed when you said digital twin, Amith, because I think you and I are evolving into the same person slowly, because as you were talking, I'm like, oh my gosh, what if someone had a digital twin of their association?

[00:30:25] And then you said it and I said, wow. Okay. We're, we're obviously very in sync. Totally. Well, I wanted, I wanted to dig into that, because it seemed like the business application example you gave was a bit more theoretical. In actuality, maybe not right in this moment, but with AlphaEvolve, let's say six to 12 months from now.

[00:30:44] Is that something an association could use it for? Like if they gave it access to some data source and could run all these theoretical outcomes?

[00:30:52] Amith: I think this is probably more of like a, by the end of the decade kind of thing, as opposed to six to 12 months. I think the amount of compute you need, um, and the level of sophistication you need will be different.

[00:31:02] Um, you know, uh, let me actually zoom in on, on a little bit different version of the problem I started to describe. What if you wanted to make a decision on where to have your next annual conference, and you have five different choices, you know, in terms of contenders, and you're, you're having a hard time making the decision.

[00:31:17] Well, you know, that's something you could say, well let's, let's test that through this process with the digital twin. Let's simulate how that's gonna happen. Um, and then once you pick a site, you might say, well, what's the best marketing campaign for this conference? And there might be essentially an infinite number of variations of how the marketing campaign might work in terms of timing and messaging and sequencing and, you know, channels and all this kind of stuff.

[00:31:40] You, you could have essentially an evolutionary algorithm with AI come up with a whole bunch of different hypotheses for what's the best campaign, and then test them on your digital twin and see which one performs the best. And then the feedback loop, of course, would be, okay, you pick one or two of them to, you know, run against your actual real population.

[00:31:59] You get the continuous loop of feedback from that, and that feeds right back into the process. So the algorithm is, is hypothesizing and testing the outcome against a population of, you know, essentially fake or digital versions of each of your members that, in aggregate, are a digital twin of your association's membership.

[00:32:16] And then that feedback loop is all synthetic. But then when you get real data based on using the experimental idea, once you think it's, it's good, um, that further reinforces the cycle. So I think these things are going to be real, but this is, this is not stuff that's gonna happen anytime in the near future.

[00:32:31] Partly 'cause the data prerequisite is where most associations will have a tough time, you know, coming back down to Earth for a minute. Um, you know, most associations just have a hard time answering a question like, you know, which of my members receive these publications and have taken these courses, uh, and have been to our website more than once in the last three months.

[00:32:47] Right? Because that data is stored in their CMS and their LMS and their AMS, and they have a hard time getting that resolved. So people need to solve for these more basic challenges first, then feed into the kind of thing we're talking about. So what we're talking about today, everyone, is not science fiction. It is totally real and it's super exciting.

[00:33:05] Um, but it's more about helping train your brain on where things are going so that you can anticipate this and say, you know, I remember on the Sidecar Sync back in 2025, I was hearing about this really cool AlphaEvolve thing, and now it's 2027. You, you just have that noted in your brain somewhere and you're like,

[00:33:22] we're designing our next system that does this. By then, AI has doubled in power, you know, 3, 4, 5 times. So what can you do then? Maybe this is, like, you know, just something you can click a button on, on a website to get at that point.

[00:33:35] Mallory: Yeah. Wow. Absolutely mind blowing. You're right, it does sound like science fiction, but it, it's real life folks.

[00:33:41] I wanna talk about how Google is also using this internally, so not just for mathematical challenges, but they're using it to improve their own business. Um, it's been deployed within Google to optimize large-scale systems such as Borg, which is the company's cluster management system. Its new scheduling heuristic

[00:33:58] improved the utilization of Google's global computing resources by 0.7%, translating into millions of dollars in savings and significant energy resource conservation. The AI has also contributed to hardware design, so optimizing circuits in Google's tensor processing units, or TPUs, and improving the speed of key AI training kernels by up to 23%, resulting in a 1% reduction overall

[00:34:24] in model training time. Uh, so that's one piece, how Google's using it internally, which I think goes back to what you were saying, Amith, about maybe how potentially an association could use this internally as well. I also wanna mention that, uh, in terms of the future, DeepMind is developing a user interface for AlphaEvolve and plans to launch an early access program for select academic researchers, with the possibility of broader distribution in the future, which is something we'll keep an eye on.

[00:34:51] Um, I'm curious, Amith, to talk about the downstream effects for associations that have, perhaps, researchers as part of their membership, and how this obviously will change a lot within the realm of research. How an association can kind of grapple with this, provide education on it, things like that.

[00:35:12] Amith: Definitely, I'm, I'm glad you brought that up, Mallory, because you know, I think that for an association's internal business operations, this is interesting, but not the most immediately pressing or really available thing like we were just discussing. I mean, if an association really wanted to test ideas like this and was pretty far along with their, like data aggregation and AI data platform and all that kind of stuff, they could totally do experiments with technology like this.

[00:35:34] Not AlphaEvolve specifically, but, um, there's ways of emulating these concepts today. Uh, but most associations aren't gonna be playing with that for internal operations. But to your point, many associations have communities full of people that are doing scientific research, or doing other things that are kind of in a similar vein, that this discovery, this capability, uh,

[00:35:57] would be extremely relevant for those folks. So I think associations need to be the conduit through which their members, their community, learns and continues to stay informed on what's happening in the world of AI, and contextualizing it for each of those communities, just like we attempt to do here for our association friends.

[00:36:15] So there's an amazing opportunity, I think, for all associations, quite frankly, to be the center of AI learning for their communities, whether it's communities of mathematicians or computer scientists, or if it's communities of teachers or doctors or lawyers or whatever it is. Um, the association knows the context of that space probably better than just about anybody.

[00:36:37] And so to be able to bring AI content into that world and to contextualize it so that it's helpful and it's relevant is an amazing opportunity both to advance your mission as an association, um, but also to drive revenue. Because if you are consistently providing great content on this topic, we can tell you from our own experience it generates a lot of interest, which is exciting.

[00:36:58] And then from there, there's opportunities to develop, um, you know, member value adds, where you can perhaps provide some content as part of membership, but certainly to develop courses and deliver an incredible amount of value to your members. And so, yeah, I mean, if you think about this topic and you're listening to it and you are anywhere even a degree or two separated from, let's say, a scientific realm, but certainly those that are directly in it, um, this is absolutely a topic you should make sure your members are aware of.

[00:37:23] To not do that would be, you know, really problematic as the association for your space, is, is the way I would put it. So I think it's an incredibly important thing and, and a big opportunity.

[00:37:34] Mallory: Mm-hmm. I wanna look at the human element of that opinion too, Amith, because this is obviously very new. Uh, you said it was predictable.

[00:37:43] I, I would say many of our listeners, including me, right? I don't think I would've predicted this per se, but you spend a lot of time thinking in this realm. And so given that it's new, given that it's a bit hard to understand, it's a bit ethereal, and if you imagine an association of computer scientists or people that we deem highly technical, I could imagine our listeners saying, well, we cannot produce content on something that Google DeepMind has been studying for years and years.

[00:38:09] What do you say to that, if an association feels intimidated by the, the level of expertise that it seems this requires?

[00:38:19] Amith: Well, so a couple things is that, first of all, at a minimum you can make sure they're aware of it. So you can share a link in your newsletter that doesn't take that much effort at all, right?

[00:38:27] So that's one thing you can do without trying to assert any level of expertise. Uh, you can also partner with people, um, to deliver AI content, people who have deep expertise in AI that can help you develop content, develop learning modules, things like that. We actually do that, by the way. For those of you that aren't aware, we partner with a lot of associations to help them with training for their audiences, where we create content specific to their industry.

[00:38:49] Um, but there's, there's tons of people who can help you with that. That is an option as well, to use some of your resources to develop that content with outside expertise. But, um, the one thing I want to point out about what you said about people who are in technical realms, doctors, it might be math, science, it could be engineering:

[00:39:07] um, a lot of times the association staff are, in fact, intimidated to even raise a topic that's even somewhat technical, because they assume that their, you know, triple-PhD average member is already way up to speed on all that stuff. And my experience has been that the people who are, like, super deep in a particular realm, they might have, like, a conceptual understanding of how AI works.

[00:39:30] And this is even true for computer scientists, and particularly even AI researchers in computer science. But they're so deep in their one area that they often don't see the pattern. They don't recognize the macro trends, and a lot of times they assume things that might be based upon something that they researched years ago.

[00:39:48] So, uh, a lot of times the folks that are deeply technical in, in particular narrow fields don't see what's happening overall. And that's part of your job, as far as I see it, as an association, is to, to take that somewhat uncomfortable stance of saying, listen, we think it's important that we provide, you know, an AI intro course for our engineers.

[00:40:06] Um, even if they're in a field that's adjacent to computer science, or adjacent to even AI directly. Uh, it might be a little bit uncomfortable, but that's where the growth opportunity is, right? If you just keep doing what you've always done and stay in a comfortable lane, well, that lane may eventually just end.

[00:40:20] Maybe that lane goes off the side of a cliff 'cause it's not needed anymore. That lane may not be the place to stay. See, sometimes you gotta, you gotta switch lanes, and, uh, this hits home for me 'cause I'm teaching my youngest right now how to drive. And, uh, she's, oh God, she's not a big fan of lane changes, but, you know, I

[00:40:35] try to make sure that she does plenty of those, 'cause when I get outta the car, when she turns 16 shortly, I wanna make sure she's safe. So

[00:40:42] Mallory: just get her an e-bike, you know, to push it off a little longer.

[00:40:46] Amith: Actually, I'm, I'm pretty excited about her driving. It's gonna be great for her.

[00:40:49] Mallory: Uh, Amith, I wanna double down too on something you said just a bit earlier about

[00:40:54] the association having context, and being arguably perhaps the best entity in the world in terms of context for their profession and industry, at least within their, their geographic region. So don't doubt yourselves; you have that context. A singular computer scientist, maybe a technical member, may understand how AI neural networks work in theory, but you've got the greater context on kind of all of that put together, if that makes sense.

[00:41:21] Amith: Totally. Uh, I think that's a really good way to, to phrase it. And, you know, I think one of the things that we need to do a good job of, and I, I have a hard time with this a lot of times, is to, uh, zoom back out from time to time and retest our assumptions, retest our beliefs, our views on what is and isn't, what can be and what

[00:41:42] can't be. Uh, we have these, you know, deeply calcified systems of assumptions as people, and they're used as essentially heuristics to give us shortcuts, uh, so that we don't have to reprocess everything we think we know. But sometimes those heuristics or shortcuts essentially lead us down a false path.

[00:41:59] And it may have been true even six months ago, but it's no longer true today. And it's really hard in an environment that's changing this fast. But I find that the people who hold on to those assumptions most dearly are the people who oftentimes are the most intelligent, best educated, and deep in some space, uh, because they've always been told they're the smartest person in the room.

[00:42:19] And so it's your job as the association to say, maybe you are, but you haven't paid attention to this. And you can say it in a much nicer way than that if you want, but sometimes it's good to go knock people on the head and say, listen up, listen up. You gotta look at this. And I do think that's the job of the association.

[00:42:32] Your job isn't to just kind of, like, be the bystander and just say, hey, I'm gonna hang out here and just give you the same stuff I've always done. Um, your job is to, uh, optimize for the success of your members and to help them do their work, which ultimately influences the wellbeing of the world. And that's what I find motivating about helping this space.

[00:42:50] So I don't think it's a matter of, well, no, our members would never want that. And, you know, our members can't tolerate that. Our members have never used that. You know, we've been hearing that a ton over the last couple years with another one of our companies, Betty. Uh, Betty's our knowledge agent. And, you know, that company, um, has worked with close to a hundred associations at this point and is growing really fast.

[00:43:09] I. And many of them are very, very technical organizations. Uh, tons of medical societies, engineering societies, nursing societies, um, accounting organizations, people that have extremely deep technical content and subject matter and consistently one after the next, after the next, after the next in deployment, come back and they are told by their most experienced members, this is amazing.

[00:43:31] This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster, and that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently. And so there's that assumption set. If you can show that people would use a tool like that

[00:43:48] to inform the decisions they're making in their field, right, in their field, their expertise area, they most certainly will be open to the idea of the association giving them some insights on AI as well, and contextualizing it for their field. So it's a massive opportunity. If you don't do it, somebody else will.

[00:44:05] You know, as an entrepreneur, I look at this and say, the brand asset and the cornered resource in terms of data and content that associations have, it's such a compelling opportunity to build, uh, to build businesses, to build franchises within your world, where you have distribution, where you have relationships, you have content, and, and you have this incredible brand value. To not use that is nuts to me.

[00:44:27] And it's just, it's basically sitting there waiting for you to go capture it. You can generate a lot of revenue from this if you think about your business model in a creative way. Um, and you can, you know, do an incredible job serving your members.

[00:44:41] Mallory: I would say you have a knack for predicting things pretty well, or at least on the podcast.

[00:44:47] I feel like a lot of the things you predict, Amith, have come true in some regard. So I'm curious, looking ahead with AlphaEvolve, what do you expect to see near term, long term, by the end of the decade? What, what kinds of challenges will we find solutions to? How do you expect to see this play out?

[00:45:02] Amith: So what I think is gonna happen with this particular technology is everyone's gonna replicate the concept, um, and it's going to find its way first into coding tools.

[00:45:11] So tools like Cursor and Windsurf and Claude Code, and all these other tools, they're going to incorporate these concepts, um, and it's gonna make code better, even better, even smarter, better at solving problems that the developers that are training these tools, or guiding these tools, I should say, uh, don't know how to solve.

[00:45:28] So that is going to be very powerful. Uh, and I think it's gonna have a compounding effect because where the code goes, everything else follows, you know, and that's why coding tools have been such a, a natural place for these companies to focus on. It's, it's one way to make a lot of money in the world of AI today.

[00:45:42] It's also super competitive, uh, but it's a direct, you know, use case of all these technologies that is insanely productive. I mean, there's things you can do with one engineer now that would've required a team of 20 last year at this time, and I'm not exaggerating that. And so if that's, like, one person can do what a team of 500 could have done.

[00:46:01] You know, that's, and that's gonna come from this type of improvement. It's not just faster, better models, it's this kind of additional capability. So you're gonna see it there. I think people will rapidly adopt this in other specific domains, uh, like branches of medicine or things like that. So I think it's gonna be really interesting.

[00:46:17] I think about, like, the number of cases where people say, I don't know how to solve this. I don't know how, I don't know what this issue is that this patient has. Um, and, or maybe you know what it is, but you don't know how to cure it. But maybe something else does, right? Or maybe there's some novel cure that, you know, an AI can come up with.

[00:46:31] So I think there's just so much more opportunity in front of us. There's this, you know, space of unexplored territory that's dramatically bigger than what we know. It's like, I think most people realize the percentage of discovery that's left to be had in space and then even in our own oceans, is vast, right?

[00:46:47] Like, what we know about marine biology relative to what is knowable is a tiny fraction. And that's true with AI, certainly. So I think all this exploration's gonna get accelerated, which is, is fundamentally exciting.

[00:46:59] Mallory: Mm-hmm. Well, everybody, thank you for tuning in today. Hopefully you learned a little bit more about AlphaEvolve and how it might pertain to your association sooner than you think.

[00:47:09] We will see you all next week.

[00:47:14] Amith: Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world.

[00:47:37] We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.

 

Post by Mallory Mejias
June 2, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.