
Timestamps:

00:00 - Welcome to the First Episode of 2025
04:07 - Discussing Mindfulness and Productivity in a Fast World
10:07 - What is Google Gemini Deep Research?
16:01 - Use Cases of Gemini for Associations
23:02 - AI-Driven Event Personalization Insights
32:41 - Introducing Carbon-14 Diamond Batteries
37:21 - Applications of Diamond Batteries in AI and Beyond
45:14 - Closing

Summary:

In the first live episode of 2025, Amith and Mallory dive into groundbreaking innovations reshaping the association world. They explore Google's Gemini Deep Research, a cutting-edge AI tool revolutionizing how associations can generate actionable insights. Then, they discuss the futuristic potential of carbon-14 diamond batteries, which offer a small but steady trickle of power for thousands of years. From enhancing member engagement through AI to powering the tech of tomorrow, this episode blends scientific curiosity with practical association strategies. Tune in for insights, inspiration, and a glimpse of what's ahead in AI and energy innovation.


🔎 Check out the Sidecar AI Learning Hub:
https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:
Gemini Deep Research ➡ https://gemini.google.com/app
ChatGPT ➡ https://openai.com/chatgpt

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey. Follow Amith on LinkedIn.

Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Follow Mallory on LinkedIn.

Read the Transcript

Amith: 0:00

People are going to figure out how to stretch the boundaries of what this thing can do. It'll be used for all sorts of applications. Imagine a laptop that never has to be plugged in. Imagine a phone that just works forever. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings and welcome to the Sidecar Sync, your home for all things association plus artificial intelligence. My name is Amith Nagarajan.

Mallory: 0:52

And my name is Mallory Mejias.

Amith: 0:54

And we are your hosts, and we have your first official 2025 Sidecar Sync episode here for you today. This is the first episode Mallory and I are recording in 2025, is what I mean by that. We did air something this week, but we did record it last year, so this is our first time recording this year, and we'll get into all sorts of cool stuff. We've got some great content for you. Before we dive into that content, though, let's just take a moment to hear from our sponsor.

Ad V/O: 1:24

Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents, and more. Through the Learning Hub, you can earn your Association AI Professional Certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI. Sign up as an individual or get unlimited access for your entire team at one flat rate. Start your AI journey today at learn.sidecar.ai.

Mallory: 2:20

Amith, the more we meet and do this podcast, the more I realize we should probably start recording right when you and I hop on the call, because we tend to have really interesting convos right before we press record. Just now, we were talking about, and this is a scary statement, how Digital Now is 10 months away, and how is this possible? I feel like I just did Digital Now in 2024. But we were talking about how life passes by so quickly, and you kind of have a factual theory behind that, if you want to share.

Amith: 2:50

Yeah, I have my perception on it anyway. So, first of all, actually, on Digital Now, yeah, it's 10 months away, less than 10 months away. 10 months from now, Digital Now 2025 will be over. So what are we doing, November 2nd through 5th, is it, Mallory? Yes, yeah. And it's in Chicago at the Loews Hotel, right? Yes, indeed. I've never been to that property. I hear it's really cool. If you're Chicago-based, you probably know the property. It's really, really nice. So I can't wait for that. That's, you know, not even 10 months away, and I'm sure we'll have amazing community conversations. We definitely will have some great keynotes, we'll do some exciting things around town, so I'm pumped about it. I love Chicago. Chicago is one of my favorite cities.

Amith: 3:33

But talking about time zooming by, it's interesting because the holidays are always a time at least where there's an opportunity for some reflection. I don't always partake in reflection. I tend to go, go, go, but sometimes I do, particularly when I'm on a chairlift. I'm a big skier and if I'm sitting on the chairlift, especially if I'm flying solo on a ski run or two, I tend to just kind of meditate or think and look around and enjoy the natural beauty, and sometimes, you know, I zoom back out. I'm like what's going on in the world and in my life and all that.

Amith: 4:09

And to your earlier point about time zooming by, I don't know, in my mind for maybe the last few years I've had this theory of the universe and my own simplistic way of thinking about it, which is, I think there's like a denominator of time and a numerator of time. If you think about your perception of the speed of time, you kind of make a fraction out of it and say, well, how long have you been alive and how much time is passing by? So if you're 10 years old and you're talking about the next year of your life, it's 10% of your lifespan, and so for a 10-year-old, a year seems like a really long time, and then for an old guy like me it feels like a tiny fraction of time, a tiny sliver of time. So it's kind of funny, because I do think that is definitely how our perceptions reflect reality.
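Amith's numerator-and-denominator idea can be written down as a one-line formula. This is just a playful sketch of the theory exactly as he states it in the episode, not anything from the show notes, and the function name is made up for illustration:

```python
def perceived_year_fraction(age_years: float) -> float:
    """The coming year as a fraction of all the time you've lived so far."""
    return 1.0 / age_years

# A 10-year-old's next year is 10% of their whole life so far;
# a 50-year-old's next year is only 2% of theirs.
print(perceived_year_fraction(10))  # 0.1
print(perceived_year_fraction(50))  # 0.02
```

The fraction shrinks every year, which matches the feeling that each year passes faster than the last.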

Amith: 4:49

All of us experience time, you know, in actual terms, of course, in the same way. Otherwise the universe would be really screwed up if we all actually experience time differently. But our perceptions, I think, reflect that. And so for me, as I look ahead and I say, well, this year is going to zoom right by, I think that's part of it, I don't know. What do you think of all that?

Mallory: 5:08

I think that makes a ton of sense. I like how you strategically did not include your own denominator in your example, fair point.

Amith: 5:14

It's a very large number at this point.

Mallory: 5:16

I think that makes sense, and what I was getting into before this call with you, Amith, was sometimes life seems like it's passing by so quickly that we are kind of just like viewers of our own lives, or that things are happening so quickly we're not necessarily active participants. And I think that probably relates to what a lot of our listeners feel in terms of workloads and especially, given all this AI stuff, like it's happening so quickly. How do you become kind of an active participant in that? How do you slow things down? I don't know that we have the answers to that, but that's where my mind goes.

Amith: 5:48

Well, you know, there's a lot going on all the time, every day, every week. This week, CES is going on in Vegas, which is a big, big event. It historically has been around consumer electronics, but more and more it's become kind of the kickoff to what's happening in AI at the beginning of the year. And Jensen Huang, who's the founder and CEO of NVIDIA, someone who's done a lot of amazing things, one of the things that happened while he was being interviewed, I believe, on stage at CES: someone asked him why he does not wear a wristwatch, and I don't know if you know this quote, Mallory. He said the reason he doesn't wear a wristwatch is because the most important time is now. That's pretty deep, right? It centers on the idea of being present and kind of experiencing things and kind of noticing more, right?

Amith: 6:46

So it's kind of, if you close your eyes, you hear more. If you really take time to breathe, things work a little bit differently. So I find that to be interesting. I don't do nearly enough of that in my own life.

Amith: 6:59

I have a meditation practice every morning, but I do it sometimes, I don't do it sometimes, so I'd like to get better at it this year. But I think it's interesting because the time to be present to reflect, to think more is, in fact, a really interesting aspect of what's happening in the world of artificial intelligence research right now, because people are giving these models an opportunity to actually, you know, step back and take a deep breath and say, you know, am I really completely hallucinating or am I doing something that makes sense? So I think that's also it ties into our world of AI for sure.

Mallory: 7:33

Yeah, I like that. I like the idea of being present, I have my own goals. It feels like every quarter of my life I'm like I'm going to get better at meditating and then I struggle with it. But I will say something that is seeming to have worked for me, and maybe you've tried this, amit, but I'm using a journal. I think the brand is called Best Self, but it's essentially quarterly priorities, but for your life. So it's kind of like having OKRs or top priorities, daily, weekly, and then you kind of analyze them on a quarterly basis, and that is seeming to resonate with me pretty well. Because I'm so used to that concept, at least in work, I'm taking it and applying it to my own life, which has been fun.

Amith: 8:08

That's fantastic. I mean, it's the old adage about priorities or goals. If you just assert a goal to yourself, there's a certain level of probability that you'll achieve it, which is obviously variable by individual. And then if you write it down, compared to just asserting it, you increase your likelihood of achieving it. And then if you share it with other people, then you're even more likely to achieve it. And then if you put a post-it note or some other way of remembering it somewhere where you can't ignore it, like, you know, you take a goal and you write it on a post-it note and you stick it on your bathroom mirror so you see it every single day, multiple times a day, right? Then it reinforces that you said this was important to you.

Amith: 8:51

So writing a goal down, I think, by itself is magic in a way, and very few people do that. They just say, oh, I want to get better at X, Y, and Z, and then do nothing about it. So I think it's just a very simple system of reinforcement like that, and I get away from that from time to time and I come back to it, and then, you know, I try to do some of those same things that I talk about. But it's very powerful. So kudos to you for getting a journal like that started. That's awesome.

Mallory: 9:13

Well, we'll see how it goes. I'll report back at the end of the quarter and let you know my percent attainment of my goals. But I really like the quote you said, Amith, which is "the most important moment is now." Is that the quote?

Amith: 9:26

Yeah, that's the paraphrasing. I think Jensen Huang said the reason he doesn't have a wristwatch on is because the most important time is now. Makes sense to me.

Mallory: 9:59

Ethan and I for a couple weeks now. We're excited to talk with you all about it. And then the second topic is a little different. We're talking about diamond batteries, and if you want to know what that means, you've got to stay tuned to find out. So, first and foremost, Gemini Deep Research is an AI-powered research assistant developed by Google, available to Gemini Advanced subscribers for $20 a month at the time of the recording of this episode. It transforms how users interact with information by autonomously exploring a vast range of sources, analyzing relevant data, and synthesizing findings into comprehensive multi-page reports within just a few minutes. So the step-by-step on how this works is: a user submits a research question or topic through the Gemini interface. Gemini then creates a multi-step research plan, which the user can review and approve, or even modify. Once approved, Gemini begins searching the internet using Google's search framework to find relevant information on the topic. The AI refines its analysis over several minutes, mimicking human research behavior by searching, identifying interesting information, and initiating new searches based on what it learns. And I do want to share my screen and show you all a little bit inside Gemini Deep Research. For those who are tuning in audio only, I will do my best to walk you through that process. So I am inside Gemini right now. You'll want to navigate to 1.5 Pro with Deep Research to access this feature.
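The plan, search, refine, report loop Mallory describes can be sketched as a toy agent. Everything below is simulated: a canned corpus stands in for Google search, and every function name is hypothetical, since Gemini's actual internals are not public:

```python
# Toy sketch of an agentic deep-research loop: draft a plan, let the
# user review or edit it, "search" each step, then synthesize a report.
# The CORPUS dict simulates web search results; nothing here is a real API.

CORPUS = {
    "AI case studies": "Coca-Cola and UPS have published AI case studies.",
    "association AI outlook": "Surveys report rising generative AI adoption.",
}

def make_plan(question: str) -> list[str]:
    # A real system would have the model draft search steps from the
    # question; here we just enumerate our canned topics.
    return list(CORPUS)

def deep_research(question: str, user_edit=None) -> str:
    plan = make_plan(question)
    if user_edit:                                 # user reviews/modifies plan
        plan = user_edit(plan)
    findings = [CORPUS.get(step, "") for step in plan]  # "search" each step
    body = "\n".join(f"- {f}" for f in findings if f)
    return f"Report on {question!r}:\n{body}"

report = deep_research("AI opportunities for associations in 2025")
print(report)
```

The `user_edit` hook mirrors the step Mallory highlights next, where she modified the plan before approving it.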

Mallory: 11:28

I ran a quick little experiment yesterday with some time when I was prepping for the podcast today, and I asked it to create a research report on AI opportunities for associations in 2025. I said be specific and be detailed, which was something I pulled from a LinkedIn user who did the same experiment, or a similar experiment, and they said that that would help with the output. So I entered in that prompt, and then Gemini comes back to me and kind of outlines its plan. So it's going to research websites, and it gets pretty granular here with how it's going to do that. It's going to find research papers and articles on AI opportunities for associations in 2025, find case studies of associations that have implemented AI, so on and so forth, all the way to finding information on the future outlook for AI in the association sector. It will then analyze all of those results and create a report, and it took about a few minutes. But as I was going through this process, I decided to edit that research plan just a little bit, because I realized there probably are not a ton of case studies out there publicly right now of associations using AI. So I said include case studies outside of the association industry, and then clarified why. It provided an updated plan, and then I said, okay, that's good to go. And I will say this whole process, once I clicked Start Research, took honestly five minutes or less.

Mallory: 12:55

I actually clicked out and started working on some other tasks, so I don't have the exact time there, but it generated this lovely report, which I will share with you, opening it in Google Docs so you can get a more comprehensive overview of it. But it breaks down the potential benefits of AI for associations, from increased efficiency, member engagement, reduced cost, and enhanced security. It talks about challenges and risks particularly for associations, like data privacy, bias, cost, and accuracy. Then it breaks this down into specific opportunities for associations as it pertains to AI, case studies of successful AI implementation from Coca-Cola, UPS, and a few other examples there. It talks about specific AI technologies that might be relevant to associations, like natural language processing and machine learning. And then what I really like here, I'm going to scroll down, is we get this nice kind of table. Amith, I remember, and you can talk about your example as well, but the one you shared with me had a really nice table where it broke down tasks by how difficult they were and then what the impact would be, which I thought was really neat.

Mallory: 14:07

In my own report it breaks down survey sources, and then year, and then the key findings of those. So this is going back to 2023, because I'm not sure if we have that 2024 data just yet. But, for example, one key finding is that 60% of organizations with reported AI adoption are using generative AI. And then, scrolling to the end, I've got a conclusion on that report, and I also have all of the works cited there. And I've got to point out somewhere in here, oh, yes, yes, oh wow: Top six AI guidelines for associations to follow. That is us, Sidecar, with a link to our website. So I'm pretty proud that we're being included in the works cited here, and I did not know that was going to happen. Just a heads up.

Ad V/O: 14:58

Good job Google.

Mallory: 15:06

Good job, Google. That is a really quick overview of what you can expect with Gemini Deep Research. Amith, as I mentioned, you tested this out yourself. I'm curious if you can kind of share, and you've probably run several reports at this point, what you've done with it and why you were impressed.

Amith: 15:17

So what I did with it was it's similar in some ways. I asked Google's deep research to help me determine where AI agents would be most impactful in the sector. So we at Blue Cypress are looking at launching a minimum of four new AI products for the association sector this year, essentially one per quarter. Then we want to go up-tempo in 2026 and do two new products per quarter and then the following year we're going to go even harder. So our plan is to keep going really, really aggressively.

Amith: 15:48

Some of the stuff we've done so far that a lot of people have heard of are just, to us, the most obvious use cases of AI for associations: personalization and knowledge agents, data agents, and so forth. But there's a lot of very specialized, different kinds of functionality, like managing abstract submission or being able to automatically do speaker room assignments, things like that, that are massive efficiency gains. So we tend to work at Blue Cypress with a lot of input from the community and also, obviously, our collective intelligence and experience, having been in this space a long, long time across a lot of our team members. But we said, you know what, let's think a little bit more about the data in terms of where the efficiencies would be greatest and where the implementation complexities could be ranked from high to low. So it's kind of a classical two-axis grid where we're saying how much of an impact would this thing have versus how much effort would it take to do it right?

Amith: 16:43

And so I asked Google's Deep Research to do that, and it produced a pretty compelling report. I mean, some of it was pretty generic, like the one you just showed. You know, it's a starting point, right? There's a lot of work in there that would probably have taken somebody hours to put together, doing all those searches and pulling it together.

Amith: 17:07

So I always think about, with these tools, it's not so much that, okay, that would have taken four hours of work or eight hours of work and I don't have to do that. I think what this is going to do is allow us to do more creative work, because we'll be able to ask more creative questions. We're limited. Our mental pipelines, right, collectively across our teams, are limited by how many labor hours we can put in, and therefore we oftentimes take a lot of shortcuts to get to the answers we get to. But if we can ask more questions and deeper questions and get better answers, I think that's going to help inform our decision making. It's going to get us to be more creative. So I view this as a creative tool as much as anything else.

Mallory: 17:49

I like that idea a creative tool. What do you see as some potential use cases for a feature like this for associations? In terms of which areas might they need to be thinking about doing this kind of research?

Amith: 18:02

Well, I think for staff of the association, being first of all aware, you know, awareness is where everything starts with this stuff, that this tool exists. It goes way beyond a ChatGPT or a Claude-type experience. Those chatbots are wonderful. They do so many things, but they're pretty much giving you an answer almost immediately. Even o1 and the soon-to-be o3, the reasoning models from OpenAI, they're not as detailed as this. They don't take minutes of time to do deeper research and compile results.

Amith: 18:31

Google's Gemini Deep Research product to me is an agentic system, so I view it as an application as opposed to a model, and being aware of the fact that there's a consumer-grade access tool to a research assistant like this is step one. So anytime you have to go deeper than just a quick search on Google or something where ChatGPT can answer it, this is a tool that potentially can give you a deeper, better thought-out answer to something you're asking. It's also the synthesis of AI's intelligence, with really heavily leveraging search. Obviously, Google has the strongest search capabilities in the world, at least at the present time, and so their ability to leverage that Google search function within Gemini Deep Research is really powerful, and you saw that in the citations and kind of the breadth of content it's able to pull in. It's really amazing. And so I think first step is awareness. Then the next thing beyond that is where do you want to use this?

Amith: 19:28

And I think that, imagine yourself in a sector like accounting, and you say, okay, I'm working with a bunch of CPAs. I want to see what do CPAs think of the possibility of tax reform coming up here in 2025 with the change in administration at the federal level. How do people feel about that? What do they think is going to happen? Well, I can start reading all the different reports that are out there. Lots of big firms have put reports out there in terms of their position on what they think is going to happen in 2025. A lot of people reported on this. But what if we wanted to compile all that together and say, I'd like to know how the industry feels, or how do people in my state feel about it? Google Deep Research probably could help with that. I don't know for sure that it would be great at it, but I think that's the kind of thing you could throw at Deep Research that wouldn't be something a regular chatbot would be good at answering.

Mallory: 20:19

I'm thinking as well, potentially event location research, or even maybe if you needed to look up certain rules and regulations by state or even by community, like within a state or parish, county, whatever that may be. That could be helpful.

Amith: 20:32

Sure.

Mallory: 20:33

Now the sources are cited, as we've seen, which is great, but it's still an AI model, so we all know that hallucinations are certainly possible. I don't know how to ask this. I don't know if you would give like a percent confidence on this, but how much weight should someone give to a report that Gemini generates?

Amith: 20:52

I mean, first of all, the hallucination problem has dramatically improved since ChatGPT launched just over two years ago. So back then it was very common to have this problem. But the models themselves, just on a standalone basis, have gotten far better at reducing hallucinations. And then the systems that sit on top of the models, that use citations and check their work and all that, are much, much better. So I would put a lot of credence in the Deep Research product's ability to be accurate, because it's using cited works and it's actually doing an iterative, agentic-type loop where it's checking its work in compiling that output that it gave you. The output it gave you is the product of many cycles of this agent working on the process and figuring out what it should do, quality checking it, all that kind of stuff. So I think that Deep Research probably is one of the best tools in terms of accurate content. It may or may not be exactly what you want, but it's likely to be very close to accurate, at least based on the citations.

Amith: 21:51

If it has a citation that's false, then of course that could lead it to giving you an answer that's incorrect. As far as I know, it's not, at the moment, cross-referencing and comparing multiple citations to check facts, things like that. It'd be great to test it out and say, hey, for each citation you bring in, fact-check it with at least two other sources that either corroborate or dispute that particular statement, right? So there's a lot of things like that. I think journalists could use that. There's a lot of cool opportunities here. So I wouldn't go out there and bet that it's that great for everything. I'd always check the work of the AI, just like I'd check the work of a colleague that gives me something to read.

Mallory: 22:30

Yeah, and now that I'm thinking about it, that might be why this report that we generated was quite general, because I feel like the amount of resources out there on AI for associations is probably pretty minimal. That's just my guess, so maybe that's why we kind of got that generic output.

Amith: 22:46

I think that's right. I think your update during the process of what you demoed, where you prompted it and then you told it to change its approach, since AI case studies in this market are somewhat limited, was good, and that's the kind of thing that leads to better results. But it's still pretty high level. So one other example I'd give you that we've been playing with: another one of our products is focused on personalization, and we have this particular use case we're really excited about. So last year, with several associations, we did a test of personalizing event content really across two dimensions, number one being session personalization. So recommending to Mallory or to Amith that these are the sessions you should consider attending at our upcoming event, both to recommend things so that people will register, but also, once someone has registered, to tell them which sessions might be the best fit for them. That's a classical problem for event managers: how do you engage people through the content of the event, both to get them to register but, once they've registered, to give them the best possible experience? And if you have a show or an event where you have hundreds or even maybe over a thousand concurrent sessions and keynotes and breakouts and so forth, it's really, really hard to get people to the right session, and AI solves that completely. And so we experimented with that and had unbelievable results in terms of the happiness and engagement people got when they were doing that. And then we also tested the same kind of concept, but with networking. So being able to say, hey, Mallory, you really need to meet this person or these three people. And if you have a great experience with that and you're able to connect with people at an event in a good way, that can be incredibly valuable. The value creation the association is helping create there is really amazing.

Amith: 24:31

So the reason I'm mentioning this is because what we wanted to do was research whether or not there's been any studies in the market that suggest that if people have really good engagement at an event meaning they connected with other professionals that they didn't already know they had a good experience if that would lead to a higher probability of returning to a future event, right. So we were looking for research. Did MPI or PCMA or ASAE or somebody study this problem and say, hey, if we do a deeper dive and we connect people really well at our events, that has an ROI right, and we can prove that through longitudinal analysis of a bunch of events. We did not find this, but we asked the research tool to help us collate data. It found a bunch of things that were kind of on the edge of that problem.

Amith: 25:20

Really, what we're looking for is kind of this obvious use case ROI to say hey world, listen, if you do a great job with AI-driven networking recommendations and session recommendations, your event's going to be better. People are going to want to come back. I think it's intuitively obvious that that's true, but we wanted to put an ROI to it. So I think stuff like that, where you're searching for an answer, is where this tool can be very powerful.

Mallory: 25:45

The last thing I want to dive into a little bit, Amith, is when you've talked about Gemini 1.5 Pro with Deep Research going step by step, that we are allowing for a kind of System 2 thinking within the AI model.

Amith: 26:18

Very much so. I mean, if you think about, when we talk about, we were talking about this back when it was called Strawberry, that then became o1. We've talked about this in other episodes in different contexts. But if you give these systems, and I'll say system is a loose term, which might mean model, or it might mean an entire software system that is built on top of a model, like the Deep Research product is, the system ultimately has more resources to devote to a particular task, and so it's able to actually think through the problem and do more research or just think about it longer, like in the case of o1 and the soon-to-be-released o3 that OpenAI announced in December.

Amith: 26:58

These are reasoning models where, baked into the model itself, they've trained the model to take more time and it's a tunable parameter. You can tell it how much compute to use either a little bit or a lot and the answers, unsurprisingly, get better the more compute you give it, because you're essentially saying take more time to figure this out. That's like saying, hey, I'm going to give you a math quiz. I'm going to give you one second to answer this math problem, or I'm going to give you 60 seconds to answer this problem, or I'm going to give you two minutes to answer this problem, and so if I give you one second, it's just instantaneous reaction.

Amith: 27:27

You look at the problem. You have to guess what the number is, right? It's more like predicting the next token. It's like that's the version of you that you get, whereas if you have a minute, you maybe can work through the problem. Maybe you don't check your work that well, but you work through it.

Amith: 27:39

If I give you two or three minutes, maybe you check your work a couple times, maybe you think of different approaches because you have more time it's really the same thing, and so whether that's happening in the model itself or if it's happening through the system, iterating with the model multiple times, ultimately, I think that's an implementation detail that doesn't really matter to most of the people listening to this.

Amith: 27:59

The idea, though, is that these systems are capable of realizing that they need more time to solve the problem. You know, if you give an eighth- or ninth-grade math problem to adults who have been through college, they can probably solve it in a few seconds. But if you give them a 12th-grade problem and they haven't looked at that math in a while, they might need to go pull a book off the shelf or look up a quick refresher on YouTube. These are things that we all do in life, whatever the domain is, and I think we're just essentially allowing the AI to have a little bit more time.

Amith: 28:30

So to me, that's all good, because we expect instantaneous answers from computers, because they're computers, and we just assume they're right about everything and that they work instantly. But in reality, these are complex issues.

Mallory: 28:43

So more compute for the AI models, or for the software working with the models, creates more time for them to process the input and create better output. I'm going to ask this because maybe someone listening or viewing has the same question: we've talked about Groq chips (G-R-O-Q) and how incredibly fast they are when you see AI models run on those chips. Can you explain how that is different from this? How are those linked?

Amith: 29:14

Well, think about it this way. Groq chips are amazing. They're language processing units, which is a novel, fundamentally different hardware architecture than GPUs, and they can run AI models, in some cases, 100x faster than GPUs, and in many cases 10 to 20x faster. It isn't that they're changing the way the models work; it's that the fundamental unit of computation, which is inferencing the AI model, is way faster. To give you an example, think about tokens per second, where a token is roughly equivalent to a word; that's all you really need to know. When you inference with OpenAI's GPT-4o model, the maximum speed you tend to get is somewhere between 30 to 50 tokens per second, sometimes a little faster. If you compare GPT-4o on the OpenAI platform to the Llama 3.3 70-billion-parameter model, which was released in December and is comparable in terms of its intelligence, inferencing on Groq gets you roughly 1,000 tokens per second. And they have a new version of it coming out, based on a technique called speculative decoding, that will be over 3,000 tokens per second. So it's a radically different experience, both because the model is substantially smaller and because it's comparable in intelligence; the size of the model really doesn't matter if its capabilities are equivalent. Groq is an enabling technology. Because inference is so much faster, and so much cheaper as well, people will build applications that are smarter because they use more inference, more compute. Ultimately, though, the way I would describe it is that this ability for the system to use more compute is about the system designer saying, I'm willing to invest more time, more inference, more compute cycles into this problem. It's like the teacher saying, hey, Mallory, spot check, give me the answer instantly when I call on you in class, versus saying, here's 60 seconds, or three minutes, or 15 minutes to solve this problem on paper. It's a similar kind of analogy. Zooming out from this conversation a little bit, my observation is this, particularly in the association market, but I think it's true for everyone: all of this terminology at some point is going to fade into the background.
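For a rough sense of what those throughput numbers mean in practice, here is a back-of-the-envelope sketch. The rates are the approximate figures quoted in the episode, not official benchmarks.

```python
# Approximate decode rates discussed in the episode (tokens per second).
RATES_TOKENS_PER_SEC = {
    "GPT-4o (typical API speed)": 40,     # roughly 30 to 50 tok/s
    "Llama 3.3 70B on Groq": 1000,        # ~1,000 tok/s
    "Groq + speculative decoding": 3000,  # projected ~3,000 tok/s
}

def seconds_to_generate(num_tokens: int, tokens_per_sec: float) -> float:
    """Time to stream num_tokens at a fixed decode rate."""
    return num_tokens / tokens_per_sec

# A long, report-style answer of ~5,000 tokens (a few thousand words).
report_tokens = 5000
for name, rate in RATES_TOKENS_PER_SEC.items():
    t = seconds_to_generate(report_tokens, rate)
    print(f"{name}: {t:.1f} s for {report_tokens} tokens")
```

At 40 tokens per second, a 5,000-token report takes over two minutes to stream; at 1,000 tokens per second it takes five seconds, which is why faster, cheaper inference makes compute-hungry designs like deep research agents practical.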

Amith: 31:38

It's super interesting right now, because it's a new frontier where people are figuring out all sorts of stuff. This stuff's changing really fast, but ultimately what you care about is the utility, the value creation for you. When you think about a software tool that you're using, be it HubSpot or Salesforce or Microsoft 365, you don't really think about, like, whoa, what kind of storage technology are they using, how fast is the network, and what kind of computer is it running on? Those are just parts of the solution.

Amith: 32:07

All you think about is: does it work, is it available, does it solve my problem? And AI is going to become the same thing. All this stuff is going to become super commoditized, instantly available, very inexpensive, and that's already happening right now. So I think it's good for people to know how these things are constructed, because it gives you a window into what might be possible with these systems. But I think the average business user just needs to know that these systems are getting smarter, not just because the fundamental algorithms are getting better, but because we're giving more resources to the computer.

Mallory: 32:41

That makes sense. Gearing up for topic two, today we're talking about diamond batteries. Scientists and engineers from the UK Atomic Energy Authority and the University of Bristol have achieved a groundbreaking feat by creating the world's first carbon-14 diamond battery. So what are the components of a diamond battery? Well, you've got the radioactive source, carbon-14, a radioactive isotope that serves as the center of the battery, and then you've got the diamond encapsulation around the carbon-14; it's a synthetic diamond structure, I want to add that as well. Its functioning is similar to that of solar panels, but instead of converting light particles into electricity, it captures fast-moving electrons from within the diamond structure.

Mallory: 33:30

How does this compare to the lifespan of a normal battery? Well, this battery has a half-life of around 5,700 years, meaning it would take that long for its power output to drop by 50%. That lifespan is close to the age of human civilization itself. On the flip side, conventional batteries (I didn't know this) like standard alkaline AA batteries, which are designed for short-term use, would run out of power in about 24 hours if operated continuously. Now, while the carbon-14 diamond battery produces less power than conventional batteries in the short term, its longevity is unparalleled. A single carbon-14 diamond battery containing one gram of carbon-14 could deliver 15 joules of electricity per day, compared to a standard AA battery that has a higher initial energy storage rating of 700 joules per gram but depletes very quickly.
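To put those figures in perspective, here is a quick sketch of the math. The half-life and energy numbers are the approximate ones quoted above; the exponential-decay formula itself is standard physics.

```python
# Figures quoted in the episode (approximate, illustrative only).
HALF_LIFE_YEARS = 5700          # carbon-14 half-life
DAILY_OUTPUT_J = 15.0           # joules per day from 1 g of carbon-14
AA_ENERGY_PER_GRAM_J = 700.0    # quoted storage rating for an alkaline AA

def remaining_fraction(years: float, half_life: float = HALF_LIFE_YEARS) -> float:
    """Fraction of the original output left after `years` of exponential decay."""
    return 0.5 ** (years / half_life)

# After one half-life, output has fallen to exactly half.
print(remaining_fraction(5700))  # 0.5

# Cumulative energy from the diamond cell over a century, ignoring decay
# (output falls under 2% in 100 years, so this is a close approximation):
cumulative_j = DAILY_OUTPUT_J * 365 * 100
print(f"~{cumulative_j:,.0f} J over 100 years vs {AA_ENERGY_PER_GRAM_J} J/g from an AA")
```

So although 15 joules per day is a tiny trickle of power, over a century one gram of carbon-14 would deliver hundreds of times the total energy stored in a gram of alkaline AA battery, which is the whole trade-off: microwatt output, but for millennia.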

Mallory: 34:25

You might be wondering: this all sounds great, but what could it be used for? The idea is applications where replacing batteries is impractical or maybe even impossible. So think medical devices like pacemakers, hearing aids and ocular implants; spacecraft and space exploration equipment; and extreme environments on Earth where battery replacement is challenging. This groundbreaking technology offers a safe, sustainable way to provide continuous microwatt levels of power for thousands of years, far outlasting any conventional battery technology available today. So, Amith, you were the one who shared this news article with me; I don't know if I would have seen it otherwise, to be totally honest. Why did this pique your interest? There's a lot going on here, but what stands out to you?

Amith: 35:13

Well, first of all, thanks for doing such a great job with the overview. I think you broke down the science in a very consumable way, certainly for me, because I am not a scientist or a physicist, and I don't have any background in nuclear technology at all. I just find it fascinating, in large part because AI is so power hungry. Our world is consuming more and more power, in fact growing at a rate that is unprecedented, and more and more of what we do is on the move. So being able to have battery technology of various kinds, various shapes and various application profiles is really important. Some types of batteries, like the lithium-ion batteries used in all sorts of things (they're in your phone, they're in electric cars), are capable of discharging a large amount of power really quickly. They have a limited number of cycles, meaning the number of times you can recharge them, but they have applications that are different than, let's say, this technology, at least in its initial incarnation. The idea of an essentially limitless power source that's nuclear powered, safe and compact enough to fit into lots of small applications is really, really interesting. From an AI perspective, remember that AI models are becoming smaller, faster and cheaper. Imagine a world where you could take a very small AI model, something under a billion parameters, maybe 100 million parameters, but a model that's based on advanced neural networks and all of the things that have happened over a number of years in the world of AI, and that model, let's say, is capable of language translation, real-time audio-to-audio as opposed to text-to-text.
We take that, embed it into an implant, power it through one of these batteries, and it becomes part of your ear; you don't even know it's there for the rest of your life. That device could give you the ability to hear in other languages and, you know, respond back. There's more to it than that; I'm kind of making up a sci-fi scenario, but this is a component that would enable that kind of sci-fi scenario. AI is pretty cool, but I think human creativity at the moment is still a lot better.

Amith: 37:36

But we tend to find ways to do things that the technology wasn't initially thought to be suitable for, and a great example of that is, actually, going back to lithium-ion batteries. If you look at that technology and ask, was it initially designed for electric cars? The answer is no, not at all. A single-cell lithium-ion battery is very small; it doesn't contain a lot of power; it's suitable for maybe a portable radio at largest. What ended up happening is people started chaining them together into systems, and these battery packs, which make up a very large percentage of the mass of an electric vehicle, are capable of doing the things you see them doing on the road every day.

Amith: 38:10

But people did not believe that lithium-ion was a technology that would scale in that way. They thought it was unsafe because of the potential combustibility. They thought it was not cost effective because, at the time, lithium-ion batteries were super expensive, even for a very small cell. But with economies of scale through manufacturing, along with process improvements and incremental improvements in the tech itself, it is now affordable. Now, there are lots of issues with lithium-ion batteries, don't get me wrong. But the point is that if this technology, at the fundamental science level, is shown to be effective and safe, then it will scale over time.

Amith: 38:49

Something else that's really important for our listeners to understand is that we're talking about encapsulating this radioactive material inside an artificial diamond, which essentially seals it in a way that makes it totally safe, at least according to what the science suggests. That would be tested lots of different ways before it was used in any applications, particularly in embedded applications in our bodies. But assuming that ends up being true, what's going to happen? People are going to figure out how to stretch the boundaries of what this thing can do. It'll be used for all sorts of applications. Imagine a laptop that never has to be plugged in. Imagine a phone that just works forever. There's a lot of really cool stuff that comes from a limitless power supply in the form factors of the devices we care to use.

Amith: 39:35

Now getting back to AI.

Amith: 39:38

AI is a big problem from an environmental perspective.

Amith: 39:41

Right now, it's consuming a crazy amount of energy and growing really, really fast. That's why you hear stories like Microsoft working on a deal with Constellation Energy, if I remember correctly, to restart Three Mile Island; that would never previously have been thought to be a thing. But because AI is so hungry for power, it makes sense to say, hey, we're going to do that. In other news, Meta has an RFP out right now for an upcoming data center project, where they're asking a company to build a massive nuclear reactor just for them, something that could power New York City, for their data center. So nuclear is attractive in general because it puts off zero emissions, and it is, at least in theory, something that should be very cost effective relative to other forms of energy. There are obviously issues, downsides and risks, but people are pursuing this really aggressively, and when there's this much demand for something, you tend to find really creative solutions.

Mallory: 40:45

It seems like, over and over on the pod, now that I'm reflecting in the new year, we keep coming back to this point that power is a bottleneck for technological innovation, and we're constantly trying to solve for that. Would you say the biggest bottleneck at this point is just figuring out ways to power things?

Amith: 41:03

Yes, and I think we have lots of reasons to be optimistic. I mean, if you think about just the energy received by planet Earth during a single day, we receive a hundred times the energy the entire planet needs for the entire year. We just have no idea how to harness it, store it, distribute it and consume it in a way that takes advantage of that natural form of energy, which is, of course, completely clean. There's lots more we can do with wind. There's more we can do with the motion of the ocean. There's more we can do with nuclear. There's more we can do with ways of cleaning up fossil fuels. So there's a lot we can do, and that's not even to speak of fusion. So there are a lot of reasons to be optimistic. AI is going to compound and accelerate scientific research and, ultimately, discovery, which is the most exciting aspect of it, whether it's in biology, physics or material science, which we've talked about quite a bit on this podcast. This is an application of a number of those things coming together. There are lots of unsolved problems here, but we have both more intelligent people and more intelligent machines working on these problems than ever before, so we're going to see unprecedented scientific discovery. I wouldn't be surprised if this particular problem was completely solved by the time we're in the next decade, so in the next five years. I'm not predicting that; I just would not be surprised by it at all.

Mallory: 42:32

We'll wait and see for our prediction episode four years down the line. We'll see.

Amith: 42:36

Yeah, that'll be episode 400 or something.

Mallory: 42:38

Exactly. So, Amith, this is an incredibly interesting topic, and it makes me feel good to hear that you're optimistic. This is not something I know much about, certainly, but what do you think is the key takeaway here for our association listeners? What does this mean for them?

Amith: 42:55

It's more of a macro topic, to hopefully give people insight into yet another really amazing scientific discovery and engineering opportunity. It should give people optimism that we're going to have really good solutions for some of these issues, because I know a lot of associations are considering the overall responsibility of AI adoption; the more we adopt it, the more energy we consume. Of course, we don't really have a choice other than to adopt it, but at the same time, people are thoughtful about that, which I admire. I think it's really important for people to be thinking about their carbon footprint, or really their total energy consumption footprint, which is the better way to think of it, because hopefully it's not carbon-based, or maybe it's carbon-14-based instead of traditional carbon. So I think it's really important for people to just have that macro insight. Sometimes we cover topics around economics or fundamental science and research, and a lot of times we cover topics like the earlier one, where it's like, hey, here's a practical tool. To me, this isn't so much something you need to go out and act on. If you find it interesting, share the pod, give us a like, subscribe on YouTube or wherever you listen to pods, and share this with other people.

Amith: 44:04

My point of view is simply that it's helpful for people to know more about what's going on, and that these things are not like scientific research historically, where you might hear about some discovery in the 1980s that only saw the light of day as a practical application in 2020 or something like that. All these cycle times are compressing, and that's why I'm optimistic we're going to see solutions. So, again, this doesn't solve the energy problem we have at this moment in time, and this particular technology probably won't solve our macro energy consumption needs, but it serves to reinforce what we talk about on this pod over and over, which is that these exponentials are feeding into other exponentials.

Amith: 44:45

Material science is begetting smarter AI. Smarter AI is begetting smarter compute. Smarter compute, of course, is then compounding all of that, and that's not to speak of biology and other forms of innovation. So I like to get people thinking about that kind of stuff. We like to geek out on this stuff a bit on this pod, but even if you don't find it fundamentally interesting to geek out on, I think it's just good to be aware of it. That's really what it is.

Mallory: 45:12

This is fundamentally interesting; I'm just going to say I think diamond batteries are a fundamentally interesting topic, everyone. Thank you all for tuning in today and for being present with us on our first live episode of 2025. We're excited for a great year with all of you, and we will see you next week.

Amith: 45:39

Thanks for tuning in to Sidecar Sync this week and for joining us on your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.

 

 

Post by Mallory Mejias
January 9, 2025
Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.