Summary:
AI is evolving at a breakneck pace, and in this episode of Sidecar Sync, hosts Amith Nagarajan and Mallory Mejias break down the latest advancements that are pushing the boundaries of what's possible. From the game-changing hybrid reasoning capabilities of Claude 3.7 and Grok 3 to the emerging field of quantum computing, they explore how these technologies are shaping the future. They also discuss Microsoft's Majorana 1 quantum chip and what it means for computing, AI, and beyond. Whether you're curious about scaling laws, the potential of digital twins, or the next steps in AI-driven learning, this episode is packed with insights that will get you thinking about what’s next.
Timestamps:
00:00 - Welcome to Sidecar Sync
02:06 - The Importance of Work-Life Integration
05:58 - What’s New with Claude 3.7 & Grok 3?
16:36 - Interactive AI Experiences & the Future of Learning
35:45 - Quantum Computing 101
45:28 - What Is Microsoft’s Majorana One?
49:02 - The Potential Risks & Challenges of Quantum AI
57:22 - Closing
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
Attend the Blue Cypress Innovation Hub in DC/Chicago:
https://bluecypress.io/innovation-hub-dc
https://bluecypress.io/innovation-hub-chicago
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🛠 AI Tools and Resources Mentioned in This Episode:
Claude 3.7 ➡ https://www.anthropic.com
Grok 3 ➡ https://x.ai
DeepSeek R1 ➡ https://deepseek.com
OpenAI GPT Models ➡ https://openai.com
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias
Amith: 0:00
Remember that what you have in your hand today, that it's the worst AI you will ever use, full stop. So three months from now, six months from now, 12 months from now, the AI that you have at that point will be dramatically better. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings everybody and welcome to the Sidecar Sync, your home for content at the intersection of associations and artificial intelligence. My name is Amith Nagarajan.
Mallory: 0:56
And my name is Mallory Mejias.
Amith: 0:59
And we're your hosts, and we have another crazy episode, this time a really crazy episode, lined up for you, and you'll find out why momentarily. But before we get into the exciting topics that we've picked for today...
Mallory: 1:18
Let's just take a moment to hear a word from our sponsor. If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future by heading to learn.sidecar.ai. Amith, I know you were fresh off a few days of skiing. Tell our audience a little bit about how that went.
Amith: 2:13
It was fantastic. You know, skiing is a passion of mine and got to go up to Utah and hang out with a bunch of friends and ski really hard for three days, not think about really anything, because I skied so hard every day that I basically couldn't do anything other than ski. So in fact I could barely ski by the third day. So mission accomplished.
Mallory: 2:33
That's awesome. Do you feel like, because I feel like your mind is kind of always going, I'll get those periodic, like, early morning Teams messages or late night ones, Amith, like wait, what if we did this on the pod? But do you find that when you're skiing your brain is kind of totally clear, or are you, like, working through business problems?
Amith: 2:51
No, no, no, no, that would be a mess. You know, I get into trouble on skis because I'm not quite as young as I used to be. Sometimes I'm out there on the mountain and I see something I'm interested in going after. But you know, if I was thinking about AI or if I was thinking about associations or business in general while I'm skiing, I'd probably be wearing some kind of cast right now. That's one of the things I like about it, that, and mountain biking and water sports and anything really, is that it's an opportunity to really clear your mind. I actually find that if I have a period of time like that where I'm able to really disconnect, I come back and all of a sudden all sorts of, you know, new ideas do come to mind. So I think it's a really important concept.
Amith: 3:35
You know, in our culture at Blue Cypress and at Sidecar, we talk a lot about this idea of work-life integration, which to us is not a euphemism for saying there's no work-life balance, by the way, just to be clear, because some people hear me talk about it and they're like, ah, you're full of it, basically that just means everyone works all the time and never does anything else, and that is not the case. We do work really hard over here, but the point is to be able to weave together the components of work and the components of life in a healthy way, in a balanced way, in a sense. But really, balance, I don't like that term, simply because it implies that you're out of balance at all times.
Amith: 4:06
And this, like, theoretical construct of balance means that you're always kind of having to separate the two on opposing sides of, kind of, the pendulum. But the point I'd make here is, you know, if you can integrate work and life the way I try to myself and set an example, you can, you know, go and do other things and then weave that in. And then, you know, sometimes work pops up at weird times and that's cool too. Especially if what you're doing is something you enjoy doing, it actually works out great.
Mallory: 4:35
Yep, yep. Do you feel like that's something you've always been good at, that work-life integration, or has it taken time?
Amith: 4:41
It's taken time. I mean, in my earlier years I was just basically flat out all the time, every day, all day, every night, you know. So it doesn't work that way anymore, but you know, I've got a lot of other responsibilities and a lot of other interests these days, so I think it's really important to be able to weave things together like that.
Mallory: 4:58
For sure, and I think you set a good example of that, because, as much as you are kind of always thinking, it seems there are also times where you're, you know, unavailable, or you go to Europe for two weeks, or Amith is skiing for three days, so he cannot respond to this Teams message. So I think you do a good job of it.
Amith: 5:15
Yeah, and you know Siri has a habit of, you know, starting to announce things at times when you're doing stuff like skiing and you start to tell Siri go away for a little bit.
Mallory: 5:24
Yeah, we'll get back to the podcast on Monday.
Amith: 5:27
Yeah, sounds good.
Mallory: 5:29
Well, today we've got some exciting topics lined up for you all. We're talking about Claude 3.7 and these new Gen 3 models that are emerging, and then we are talking about something that I'm going to tell you all is kind of breaking my brain a little bit. We're gonna talk about quantum computing and Majorana 1 from Microsoft, and we're going to really break it down, because it's complicated for me even in all this research I've done to prep for this pod. So I'm excited to kind of hash this out with you, Amith. But first and foremost, Claude 3.7. We are seeing the emergence of a new generation of AI models, specifically Claude 3.7 and Grok 3. That is, Grok with a K this time. These Gen 3 models are trained with at least 10 times the computing power of GPT-4. And this leap in computational scale has enabled these models to excel in tasks like coding, math and reasoning, while also demonstrating creative and anticipatory abilities. So Claude 3.7 Sonnet is the first publicly available hybrid reasoning model, offering two distinct modes of operation. In its standard mode, we see an improved version of Claude 3.5 Sonnet, providing quick responses for general inquiries, but then we also see an extended thinking mode where the model engages in more detailed step-by-step reasoning for complex problems, improving performance on tasks like math, physics, instruction following and coding. Users can toggle between these modes, and API users have fine-grained control over the model's thinking time, quote-unquote, allowing for a balance between speed, cost and answer quality. Interestingly enough, Claude 3.7 can also generate interactive visualizations, create functional programs from natural language prompts and even produce creative outputs like interactive experiences with minimal input.
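For listeners who want to see what that mode toggle looks like in practice, here is a minimal sketch using Anthropic's Python SDK. The model string, token numbers and prompts are illustrative assumptions, so check Anthropic's current documentation before relying on them.

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Standard mode: a single fast pass, no extended thinking.
quick = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # illustrative model string
    max_tokens=1024,
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)

# Extended thinking mode: the same model, now given an explicit budget
# of "thinking" tokens to reason step by step before it answers.
deliberate = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,  # must exceed the thinking budget below
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Plan a 50-state conference tour that minimizes travel."}],
)
```

The budget is the knob Amith discusses later in the episode: a bigger `budget_tokens` buys a more careful answer at higher cost and latency.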
Mallory: 7:21
The models seem to be getting better basically every day, and not just better but drastically better, and so when we're thinking about this and how we can explain this trend, we really need to dial in on two scaling laws to explain it. The first scaling law is training compute. This law states that increasing the computational power used during the training phase results in more capable AI models. So, essentially, larger models with more parameters tend to be more intelligent. A 10x increase in computing power typically yields a linear improvement in performance. That's the first scaling law. The second one is inference compute. This law focuses on the computation used during the problem-solving, or inference, phase. So allowing an AI model more time to think or compute during problem solving improves its performance. This discovery led to the development of reasoner models, and reasoner models can perform multiple inference passes, working through complex problems step by step. So the new models that we're seeing, like Claude 3.7 and Grok 3, leverage both scaling laws, combining massive training compute with the ability to scale during problem solving. These scaling laws continue to drive the development of more powerful and versatile AI systems, pushing the boundaries of what is possible in artificial intelligence.
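To make the "exponential spend, linear gain" shape of the first scaling law concrete, here is a toy sketch. The numbers are invented for illustration and are not fitted to any real model.

```python
import math

def capability_steps(compute_multiplier: float) -> float:
    """Toy model: one capability 'step' per 10x increase in training compute."""
    return math.log10(compute_multiplier)

for multiplier in (1, 10, 100, 1000):
    print(f"{multiplier:>5}x compute -> {capability_steps(multiplier):.0f} capability steps")
# 1x -> 0, 10x -> 1, 100x -> 2, 1000x -> 3: each step costs ten times more compute.
```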
Mallory: 8:47
So, Amith, so much to talk about this week. I want to dial in, I want to talk about Claude 3.7 and Grok, but I want to talk about this idea first, of these Gen 3 models. What do you think is most exciting there?
Amith: 9:49
Well, I think I might start off by saying I would love to have been a fly on the wall at Anthropic when the marketing team was thinking about version numbers, because 3.7, it's like, where did that come from? I mean, this is a pretty big deal, and 3.5, honestly, probably should have been 4.0. I mean, I know that they are trying to be modest in terms of the gains, I guess comparatively speaking. But it is interesting to see how AI labs, as brilliant as they are in all things, it seems, and they have access to pretty good AI here, they don't seem to get the marketing side very well. But in any event, not that I'm a marketing expert, but it just seems a little bit off. Anyway.
Amith: 10:36
So what I like about Claude 3.7 specifically, and more broadly this Gen 3 model category, is the keyword that you mentioned earlier, which is hybrid. So I put a LinkedIn post out there earlier this week when the Anthropic team announced Claude 3.7. I was really excited about it because it was, from my perspective, the first mainstream, accessible model that you can get to that is hybrid. Now let's talk about that for just a minute. So when we have a problem to solve, we don't stop to say, okay, should I use my instantaneous reactive feedback capability, right, my fast thinking? Or should I use my slow thinking capability and really think through the problem step by step and break it down? We just kind of know which of the two we should be doing, right? Like, Mallory, if I gave you two plus two, you'd say yeah, four.
Amith: 11:27
Right, whereas if I gave you a problem that was like a fairly complex formula, you'd probably say, well, I need to go and kind of refresh my high school algebra and break it down step by step, right.
Mallory: 11:37
Yep.
Amith: 11:38
I wouldn't need to say, hey, Mallory, make sure that you use, like, Mallory version one, which is, like, the Mallory that is only, you know, capable of doing instant reaction. If I gave you the other problem, you'd basically do your best guess, right? You'd say, oh well, I've seen problems that kind of look like that, so the answer is X, right, or X equals seven, right, or something like that, whereas if you had time, you could think through and break it down. So that's the difference between, like, the models that think fast versus thinking slow. And essentially what's happening in hybrid models is they're capable of choosing. So in the case of, like, o1 and o3, which we've talked about, or in the case of R1, which is the DeepSeek model everyone was going crazy about a couple weeks ago, those are pure-play reasoning models, or reasoners, whereas GPT-4o, Claude 3.5 Sonnet, these are not reasoning models. They're single passes at inference, which means they just give you the fastest possible response.
Amith: 12:32
Now, what software developers have been doing since reasoning models have become available, really in the last few months, is to build intelligence into their software to say, hmm, should this step of my agent or should this step of my program use a reasoning model, or should I just use a regular model, right, kind of a classical LLM? And you would make that choice kind of somewhat deterministically in your program or in your code, right. But now the models are smart enough to be able to make that choice for themselves, which is a really powerful advantage. And it kind of makes sense, because you're essentially trying to create, like, the equivalent of a full brain, and you know, a more sophisticated brain is going to know when it needs to just react, because it's an obvious question, versus when it needs to think more deeply. So that's the number one thing: these hybrid models blend the best parts of reasoners with classical LLMs and are therefore able to automatically think longer if they need to, or think with less time.
Amith: 13:30
Now, the part you mentioned that I think is also important to note is the budget. Like, how much time do you spend? So if you were in a math exam back in high school or in college and someone said, hey, Mallory, you can have an infinite amount of time to complete this test, you'd probably spend quite a bit of time, maybe not infinity, but you'd probably spend a lot of time to make sure it's the perfect answer. But if I said, hey, you have to complete 50 problems in 90 minutes, you would give yourself an internal budget, maybe a minute or two per problem or something like that, right? So it's the same idea: we want to give constraints to the model so that it spends the amount of time that we think is reasonable. So that's the part that I also think is important to be aware of. Those things all are helpful for end users in the Claude application, but they're also very powerful for application developers that are building systems on top of Claude.
Mallory: 14:22
That is very helpful. I love how your brain goes to math with the thinking fast and slow. My mind immediately went to seeing, like, a cockroach in my house or something, and how, in that moment, I wouldn't go, well, Mallory, what should I do? Should I scream? Should I kill the cockroach? You know, that would be my instantaneous reaction. I don't like roaches, by the way. So that's where my mind went, but I think it makes sense what you're saying, Amith. I guess what I want to know is, one...
Amith: 15:04
Have you tried Claude 3.7 since it's been released, and then have you noticed any of these differences?
Mallory: 15:07
Like, as a user, I've tested it out a bit and everything seems roughly the same.
Amith: 15:09
So I'm looking forward to checking it out. And so, from my perspective, I think the key to it is, like, if you compare it to what's happening in ChatGPT right now, where you have to choose the model, you have this menu of GPT-4o or GPT-4o mini or o1 or o3-mini or o3-mini-high, right? Like, so complex user interfaces never win. I'm sure OpenAI is all over this and they're going after it really aggressively, but you know, my point of view is that simpler is almost always better, and so that's why I think Claude's going to pick up a lot of, well, they already have a lot of fans, but I think they're going to pick up more fans from this.
Mallory: 15:54
Yeah, you all know I'm a big Claude fan. Claude is my preferred model these days, except I don't like the usage caps, those are the things that drive me nuts. But other than that, I really am a big fan of Anthropic and Claude. I know you haven't used it, but I know we both shared an article with one another from Professor Ethan Mollick where he shares, like, an interactive experience that he created using Claude 3.7. Have you watched that video, Amith?
Amith: 16:26
Yeah, and I think the interactive experience part of what you mentioned earlier in the overview is super interesting. I actually thought specifically about this in the context of just education in general. So when you think about associations, one of the primary things that associations do to create value with their audiences is deliver content, and specifically learning content, and that comes in a lot of flavors. That can come in, obviously, text in the form of articles on a blog or in a journal, obviously in the form of conference content, but interactivity potentially could be super interesting for simulations or for kind of reactivity. So, a user does this, what happens then? It's those kinds of interactive experiences. Of course, that can be done through building software, where you could say, build me a simulator that asks someone to choose from one of several prompts and then give them different reactions using an AI. So that's the first thing my mind went to. What use cases did you have in mind when you saw the visualization or the interactive experience?
Mallory: 17:24
Well, I'll share with you all the video in case you haven't seen it. But Ethan Mollick did a time travel example where he created kind of this interactive experience, and he could click time periods historically, go to those time periods and then see, maybe, what was happening. I will say the images were very rudimentary and it was, like, a very simple interactive experience, but very exciting in terms of what's to come. I would say my mind went in the learning direction as well, but I was kind of stumped, to be honest with you, because I saw that and then was thinking, what does this mean for business? I don't know exactly.
Amith: 18:04
Well, I think one of the things, from a learning modality perspective: if you think about the prevailing form of digital education, which, you know, we also subscribe to with Sidecar and our AI Learning Hub, it's a form of asynchronous content, and that asynchronous content is largely one way. Essentially, there's videos and there's documents and there's assessments, and the learner goes through it, you know, not necessarily sequentially, but usually somewhat sequentially: watches a video, listens to some audio, maybe reads a document, perhaps does an assessment. Maybe there's an interactive exercise, but these things are, like, hand-built painstakingly by learning content folks, and so they're very rare. Actually, in most learning environments they're extremely rare, because they're very, very expensive to build. So the concept might be that if you have an AI that can build interactive experiences dynamically, like this, and if you can also have an AI resident in your LMS, can you provide an experience that actually feels a lot more like a traditional synchronous learning experience? Right, where a traditional synchronous learning experience, in the most effective form that we know, is a one-to-one session where, Mallory, I'm trying to learn a topic that you're an expert in, and you say, sure, I'll spend a half an hour with you and I'll talk to you all about that topic and answer your questions. And that type of session is extraordinarily powerful, but it doesn't scale, until now, right? And so an AI avatar and that kind of solution, which is different than what we're talking about right now, could be a component of that.
Amith: 19:45
But interactive experiences are also ways of helping people visualize complex ideas. So, for example, let's say that we're talking about educating perhaps a bunch of accountants on some concept. Let's say the concept has to do with how to recognize revenue for membership. Membership gets paid upfront at the beginning of a year or the beginning of a particular cycle, and you typically recognize it one-twelfth per month as the individual receives the value from the membership. It's a common concept in GAAP accounting. And so if we want to illustrate that, maybe we can show an example where numbers are literally flowing from one side of the accounting equation to the other in the context of this type of experience. Right, and of course this is a super simplistic concept. For most professional accountants, they heard about this back in their second year of accounting school. But the idea is, you have concepts that are best illustrated visually, where the instructor might use the chalkboard, but maybe sometimes the student goes up to the chalkboard to fill in part of the problem, right? And so that becomes part of the interactive experience in kind of the classical sense. Is there a way to emulate that with this type of tool is the question I would put out there. And to me it becomes another dimension, or another canvas, upon which we can craft new kinds of interactive storytelling and interactive learning opportunities. So I get excited about stuff like this.
Amith: 21:14
I also think you know we talk a lot about the whole idea of moving from a scarcity mindset to an abundance mindset. And this is yet another great example where you know we've talked about software a lot. Right, we say software has been super expensive and very hard to build. Now it's becoming less expensive and much easier to build, with AIs being able to build stuff for us. Theoretically, the cost kind of approaches zero over time.
Amith: 21:38
Well, in the case of interactive learning experiences, same thing. You know, it requires highly specialized skills, takes a long, long time, is super expensive, and so very rarely do they get used. But how can we reconsider those assumptions and, you know, kind of re-evaluate what we can and can't do if the assumptions are essentially invalidated? Right, we say, oh well, now all of a sudden you could have unlimited interactive experiences for no additional cost. It's just like a feature of your LMS, where you can tell the LMS you want an interactive experience to be dropped in here and give it some idea of what you want, and it builds it. Of course, under the hood it would use Claude or whatever. That stuff's coming, it will be there. I think, for now, you have to kind of manually stitch together the solution, but it's available to people who are, you know, really looking to do something different. I'd love for us to experiment with these kinds of ideas in the Sidecar AI Learning Hub, because I think, you know, we can show some leadership there, of course, but I think it would really enhance the learning experience.
Amith: 22:39
To me, ultimately, continuing on the learning track a little bit longer, the goal isn't about signing up learners. The goal isn't about people completing the course. The goal isn't about getting a certification. Those are all milestones, or essentially heuristics, to tell us whether or not someone has actually gained knowledge. The real goal is, did they gain knowledge and can they apply it in a productive way in their profession? And ultimately, one of the best ways to determine that is some form of interactivity, right? And that's why, going back to the one-to-one tutoring, it's so powerful, because dynamically a really good tutor can determine, is this student picking this up, and can they apply it or not, and then dynamically shift gears. So we're getting really close to being able to do that.
Mallory: 23:19
In my previous life, Amith, you know, I was a private tutor, so this really resonates for sure, the one-to-one teaching, and you're absolutely right, adapting as you go, gauging understanding, and I assume an AI would be better than a human at kind of gauging those things and identifying those patterns. Something that comes to mind with the interactive experiences is the concept of digital twins, which we've talked about on the podcast previously, the idea that you have, like, a digital version of your business you could make changes to and see what happens. Do you think, not at this point, right, Claude 3.7, these experiences are very basic right now, but could businesses start thinking about that, now that this capability is emerging, of, like, maybe using a model to create a digital twin of their business?
Amith: 24:04
Sure. Well, the model itself, I think, is going to be a component of the idea of a digital twin. The digital twin concept, you know, we've covered that in the past, and the idea there is really powerful. And for larger enterprises, whether they're government entities or you're modeling, like, a system, or if you're modeling a business or a factory or something like that, you can do these things, and they essentially are, you know, real-time simulations of the full complexity of the system. So what you might actually have is, you know, ultimately, like, a Claude 3.7 caliber model multiplied by 10,000 within your digital twin, because you need to really reflect all the different moving parts that exist. If you were to model, let's just say we want to model one department in an association, not even the whole association, let's just say the membership department, you know, there's a lot of moving parts there.
Amith: 24:54
If you were to break it down into, like, Lego building-block-type constructs, all the different business processes, all the different people who execute those processes, the customers externally that interact with them, those are all things that you'd model in the digital twin. And then you'd say, okay, well, I have essentially the model for that, the model being a generic model probably, but then, like, instructions on top of it that turn it into a process agent or some artifact like that. And then they all work together so that, you know, if you aggregate them all together, it forms, let's say, a departmental digital twin for the membership department. And then you can simulate, okay, well, what if we changed the way we approached our policy for member renewals? How would that affect the way our business operated? Right? And you change that policy, and then all the different components of that digital twin immediately reflect the change, and you can see how it would behave in operating the business. So it's essentially a very fancy simulation, but it's real time and kind of continuous.
Amith: 25:55
So that can be very powerful. These models are not quite ready for that in the most generic sense, but what you're describing is exactly the kind of thing these models could power. If people are wondering, well, that sounds really interesting, sounds maybe kind of sci-fi, but, like, how could that be valuable for me? Well, imagine if you modeled a digital twin of your entire membership, where you had the data for every single individual member or organizational member, and we essentially had a digital twin that models each of those people, how they might behave, how they might react to different stimuli, such as increasing your membership dues, or such as changing the location of next year's annual meeting or changing the dates. We have a lot of data that might indicate to us how each person may behave. And sometimes the dynamics are such that one person behaves in a certain way and other people behave based upon their reaction, because there are influencers in your community.
Amith: 26:53
So if you have 20,000 members, it's not just 20,000 individual decisions that are completely independent. They're actually interdependent. So these digital twins have to account for that, because, you know, one actor in that simulation makes a decision and then others may follow. And that's what happens in real life, right? Like, I say, oh hey, Mallory, we're both members of Association X, and you're a member and I'm a member, and I know you're going to the annual conference, and you know, you tell me that you're not going anymore because it's going to a city that you don't want to go to. I might say, oh well, I'm not going to go either. Or I might say, oh, Mallory's not going, I'm definitely going to go.
Mallory: 27:30
Okay, it could go either way.
Amith: 27:32
Yeah, so, like, with people's behavior, you can't necessarily predict it with 100% certainty. However, there's a lot to be said for being able to predict with a high degree of certainty, and of course, AI is ultimately a prediction machine. It's just a really good one. So we can do that at that level individually and then model the interdependent predictions of how the system starts to behave. Now we can simulate, okay, well, what would happen when the board is thinking through, should we raise dues? What you're describing and bringing that topic up, I think, is a great application. The models becoming smarter and smarter make the likelihood of that happening higher, but also make the quality of those kinds of simulations far better.
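As a rough illustration of the interdependence described here, the following is a hypothetical agent-based sketch of a membership digital twin. Every class, number and policy in it is invented for illustration; a real twin would be driven by actual member data and far richer behavioral models.

```python
import random

random.seed(42)

class MemberAgent:
    """One member: a private preference plus sensitivity to peer behavior."""

    def __init__(self, member_id: int, base_interest: float):
        self.member_id = member_id
        self.base_interest = base_interest  # 0..1, would come from historical data
        self.attending = False

    def decide(self, peer_attendance_rate: float, influence: float = 0.3) -> None:
        # Blend private interest with observed peer behavior (the influencer effect).
        score = (1 - influence) * self.base_interest + influence * peer_attendance_rate
        self.attending = score > 0.5

def simulate(members: list, rounds: int = 5) -> float:
    """Iterate until decisions settle: each round, members react to the last round."""
    rate = 0.5  # initial guess at attendance
    for _ in range(rounds):
        for m in members:
            m.decide(rate)
        rate = sum(m.attending for m in members) / len(members)
    return rate

members = [MemberAgent(i, random.random()) for i in range(20_000)]
print(f"Projected attendance: {simulate(members):.1%}")
# To test a policy change (say, a less popular host city), lower every agent's
# base_interest and re-run, then compare the two projected outcomes.
```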
Mallory: 29:43
That makes sense. I want to touch briefly on the scaling laws piece before we move on to good old quantum computing. It sounds like the idea is that throwing more compute, both in training and in inference, creates better models. That's kind of the simple overview. In theory it sounds simple enough, right, to be like, well, let's just throw more compute at it. If that's kind of the bottleneck, let's do more of that. But can you kind of just explain what the holdups are at this point in creating more compute power?
Amith: 30:14
Well, I mean, there's a lot of constraints around compute related to manufacturing, related to energy, related to building data centers, related to having enough people to run those data centers with additional compute. There's constraints in terms of the compute itself, in terms of the actual hardware being able to run more calculations per second in parallel. There's the interconnect, which is essentially like the mini internets that exist between all the chips in a data center, right, where, from one chip to the next chip, they have to communicate at really, really high speeds, and so there's scaling that needs to happen there. So there's lots and lots of constraints in terms of the physical and the economic aspects of compute. All that's being worked on, of course, across all of those different components I just mentioned. But ultimately, let's just say we had an unlimited amount of compute and we threw it all at training, right, which is what was happening for years in AI. We just threw more and more and more compute at training, and in some cases that meant also more and more data. And the question is, will training by itself continue to scale the way you described earlier in the pod? Which is to say that an order of magnitude increase in compute, aka a 10x increase, results effectively in a doubling in power, that linear improvement. Will that hold true forever? Right?
Amith: 31:42
And we were already starting to see signs that there might be limitations to that. There were algorithmic improvements that would kind of extend that, kind of like with Moore's law. People would say, well, you can't pack an unlimited number of transistors into a tiny chip. And then, lo and behold, we'd come up with a new process node in manufacturing that would result in smaller and smaller and smaller chips, and other ways of stacking transistors, like three-dimensionally and stuff. So I think there was definitely, and there still is definitely, headroom in terms of the first scaling law. There will be a lot of upside there.
Amith: 32:13
But lo and behold, along came the second scaling law, which was like, oh, if we give these models a little bit more oxygen in their tank, if we give them a little bit more time to think, what will happen? And that's when, like, this whole thing with Strawberry, which then became o1, was like big news. And then R1, and now the 3.7 edition of Claude. Really, all you're doing is saying, hey, model, think about it again. Does that make sense? First come up with a plan, then think about it some more, right? And we were approximating that with things like chain-of-thought prompting, but still, that was just more of, like, a better tip to the model that was still reacting instantaneously.
Amith: 32:50
Now the models actually have the ability to run multiple iterations. Essentially, that's really all that's happening is the model itself can iterate and think through the problem and say what's the problem? How should I break it down? Let me solve it step by step. Okay, here's the solution to part one. Here's the solution to part two. Here's the solution to part three. Cool, now let me look at that solution. Does it make sense that I've put it together, yes or no? Oh, it doesn't. Let me go and change part one. Okay, now does it make sense? Oh, yes, so now it makes sense. So now we're good.
Amith: 33:17
That's kind of the way we might work through a complex problem of whatever kind, right, writing an article or doing a math problem or whatever. Well, models can now do that. Prior to models having this test-time compute, or inference-time compute, scaling, where they can think more at runtime, that's essentially what agent builders were doing for several years, actually. So products like Betty and Skip are essentially doing that kind of iterative, multi-inference-pass work, and that's part of how they work. Now, more of that being done in the model, in theory, is great. There's still value in the system or the agent doing some of that for different reasons. There's different purposes to different kinds of approaches.
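For the technically curious, the plan-solve-check-revise loop described above can be sketched in a few lines. This is a hypothetical outline, not any vendor's implementation, and `call_llm` is a stand-in for whatever single-pass model API you use, not a real library function.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a single-pass model call; wire this to your provider."""
    raise NotImplementedError("connect to your model API here")

def solve_with_reasoning(problem: str, max_passes: int = 5) -> str:
    # Pass 1: plan, then produce a first step-by-step solution.
    plan = call_llm(f"Break this problem into steps:\n{problem}")
    answer = call_llm(f"Solve step by step, following this plan:\n{plan}\n\n{problem}")
    # Subsequent passes: critique the answer and revise until it holds,
    # or until the inference-time "thinking budget" (max_passes) runs out.
    for _ in range(max_passes):
        critique = call_llm(f"Check this solution for errors:\n{answer}")
        if "no errors" in critique.lower():
            break  # the solution survived review; stop spending compute
        answer = call_llm(f"Revise the solution to fix these issues:\n{critique}\n\n{answer}")
    return answer
```

Each extra pass is exactly the second scaling law in action: more inference compute traded for a better answer.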
Amith: 33:56
The real point I would try to make is this: we're just starting on the second scaling law. This isn't literally how it happened, but it's like all of a sudden the model developers are like, wow, if we give the model more time to think at runtime, or at inference time, inference time is just a fancy way of saying when you ask the model something, right, so if you give the model more time to think, it'll give you a better answer. And sure enough, it works better. It's just like we produce better answers if we have a little bit of time to think about it. So there's a lot of upside there. There's lots and lots of headroom for this second scaling law, which is also, by the way, why folks at inference computing companies like Groq, that's Groq with a Q, and others are so focused on saying that there's going to be this ridiculous surge in demand for inference computation, more so than training, because the base models are pretty darn smart already. What happens if we throw a crazy amount of inference at it? Well, you get really interesting results.
Amith: 34:52
So that's what's exciting about this: you know, these are engineering problems. They're not even scientific domain problems, meaning, like, we don't need a breakthrough in the algorithms in order to produce radically better AI. Like, in the next couple of years, you're going to see breakthrough after breakthrough, which are really more engineering breakthroughs, like DeepSeek R1 was an engineering breakthrough. You're also going to see some scientific stuff happen, which is cool. But in any event, with the second scaling law, the main takeaway for our association listeners who are not so interested in the technical details is this: remember that what you have in your hand today in terms of Claude 3.7, as cool as it is, or Grok with a K, the Grok 3 model that came out, which has some similar attributes, remember that it's the worst AI you will ever use, full stop.
Amith: 35:39
So three months from now, six months from now, 12 months from now, the AI that you have at that point will be dramatically better. So why does that matter? It doesn't mean that the AI you have now is actually bad. It's staggering how powerful it is. However, what you need to think about is, what will you be able to do if you have Grok 5 or GPT-7 or Claude 5 or whatever these guys choose to name their next generation of models? That's a really, really important thing to be thinking about. So one way to frame that as a little thought exercise for yourself is this: what can you do now with Claude 3.7 that you couldn't do before? So go figure that out, play with it and learn how to do some stuff.
Amith: 36:22
There's plenty of examples. You mentioned Ethan Mollick, who we talk about a lot and follow his work. He does a lot of this type of thing, saying you can now do X, Y and Z that you couldn't do with GPT-4o. Then put yourself in a time machine and go back to 12 months ago, or even six months ago, and say, hey, what were you struggling with?
Amith: 36:40
And then think about it and say, if you had known back then that GPT-4.5 or 5.0 or, in this case, Claude 3.7 was available in late February 2025 and it would do these things, how would that change the way you planned what you would do in your business, right? And then think about the gap in capability, then try to project that forward and say, okay, well then, six months from now, I will have this next set of capabilities. What will I do with it then? Because undoubtedly, you will run into things that Claude 3.7 cannot do. So, rather than, like, throwing up your hands in the air and saying, I'm frustrated, I can't do this, put that on a list and come back and test it again in two months, three months, six months' time, and most likely, you're going to see that these future models will be able to solve your problem.
Mallory: 37:31
Moving on to topic two of today, we're talking about Microsoft's Majorana 1. But before we dive into that, I think it's important for us to talk a little bit about quantum computing at a high level. Don't run away, don't turn off the podcast just yet. This was a learning experience for me too. Quantum computing is an advanced field of computer science that harnesses the principles of quantum mechanics to perform complex calculations and solve problems that are beyond the capabilities of classical computers. So, unlike traditional computers that use binary bits, zeros and ones, quantum computers utilize quantum bits, or qubits, which can exist in multiple states simultaneously due to a phenomenon called superposition. So superposition is how qubits can represent both one and zero at the same time, allowing quantum computers to process multiple possibilities concurrently. Also, there's entanglement and interference, which I'm just going to talk about really briefly. Entanglement: qubits can be correlated in such a way that the state of one qubit instantly affects the state of another, regardless of distance. And interference: quantum waves can amplify or cancel out certain computational outcomes, helping to arrive at solutions more efficiently.
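For anyone who wants to poke at superposition and interference directly, here is a minimal statevector sketch with NumPy. It models a single idealized qubit only, so it is a teaching toy, not a quantum computer simulation.

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes over the basis states |0> and |1>.
# Equal superposition: a measurement yields 0 or 1 with 50% probability each.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(state) ** 2          # Born rule: probability = |amplitude|^2
print(probabilities)                        # [0.5 0.5]

# Interference: applying a Hadamard gate to this superposition makes the
# |1> amplitudes cancel out while the |0> amplitudes reinforce.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(np.abs(H @ state) ** 2)               # [1. 0.] -- back to |0> with certainty
```

The second print is interference in miniature: two computational paths cancel, one amplifies, which is the mechanism quantum algorithms use to steer toward correct answers.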
Mallory: 38:58
So I have spent a lot of time looking into quantum computing, and it's one of those things for me, Amith, where I watch a video on it and, while I'm watching it, I think, yeah, yep, this all makes sense, I get it. And then leaving the video and thinking about having to explain it to someone else is where I realize perhaps I don't get it as much as I thought. So, let me backtrack. It sounds to me like there are some problems on Earth, challenges, issues that we don't understand, that couldn't even be solved by the biggest classical computer in the world. Like, we could have a computer the size of the Earth, and it could still not solve some of these problems that quantum computing could solve. Does that make sense?
Amith: 39:36
Yes, that's correct. And I will preface what I'm about to say with this: I am definitely not anywhere close to an expert on quantum computing. In fact, I know very little about it. I find it super fascinating, but my level of knowledge on this topic is definitely still in the novice category. You know, when I talk to people who are in this field, it's hard to even understand what they're saying a lot of times, especially when they get excited.
Amith: 40:00
So it's pretty cool, because this is definitely very sci-fi like. So if you're a fan of Marvel movies, go watch Ant-Man again. You know, there's a lot of quantum discussion there. But what I would tell you is, it is about being able to consider multiple possible things in parallel at the same time. We can kind of approximate that now with parallel processing with GPUs. So I want to talk about that for a minute and compare and contrast GPUs with quantum processing. So a GPU, or graphics processing unit, which is what powers AI, is really good at mathematical calculations that are executed in parallel. But they're executed in parallel because you have thousands and thousands of separate little processors within the big GPU. So it's like you have the GPU, but that really means you have 10,000 or 20,000 or 100,000 cores, and each core is capable of doing just one calculation at a given moment in time. So a GPU is massively parallel, but that's because it has lots and lots and lots of tiny little processors in there that are really good at these calculations. So it's not actually truly doing multiple things at once with the same physical piece of hardware. If you break it all down, a GPU is a package of many thousands of cores; a CPU might have a dozen cores or 50 cores, but GPUs are where we've seen this explosion of cores. And that, coupled with the fact that they do something called floating point operations really well, is why they're so suited for graphics, but also the same type of math that powers AI.
Amith: 41:50
Now, in comparison, quantum actually is doing multiple things in parallel with the same qubit, right? You're saying, hey, it can be zero and one at the same time. Again, this is not something that I can actually explain at any level of depth, but suspend disbelief, as weird as that sounds, and that sounds really freaking weird, right? Really, really strange, because how can one thing be two things? But the idea is essentially that there are multiple parallel states that exist in the natural world that we just can't contemplate, because, frankly, the vast majority of humans, including me, can't understand this, right? But in actuality, the states are actually in parallel, and so that's why in, like, sci-fi you hear about multiverses and all this other stuff. There's actually quite a bit of scientific research and foundational thinking that supports the idea that that may be a true thing. There isn't anything that's proven that yet, obviously.
Amith: 42:50
But the concept and the fundamentals of what you're talking about with quantum mechanics are part of that. And so there's the idea of qubits being zero and one at the same time, and even entanglement, which is super interesting. This has actually been shown in the lab, where, you know, at remote distances of thousands of miles, you can flip the position of a qubit and you can see the entangled qubit in another lab thousands of miles away instantly change, and you can prove that it's quantum entanglement because it's faster than the speed of light. Right, because the speed of light is, as far as we know, the fastest speed at which anything, like photons, can travel. But what we're talking about here is instantaneous, right? It's not discernible that time has passed when these qubits have changed, because they are interwoven that way. So again, I have no idea how that actually works, but that is a concept that's been proven, and quantum computing is building on that. So I guess the point I would make is this: this overview is cool, but what we're about to talk about in terms of Microsoft's breakthrough really makes it real, because over the last several decades, the research in this field has gone from pure theory to very, very simplistic basic prototypes. Like, Google's been big in this space. You've had environments where you've had a handful of qubits on a so-called chip, but the chip was really like a giant machine.
Amith: 44:09
And so what we're seeing is that if a qubit can represent multiple states simultaneously, and therefore can have these properties that allow for massive parallelization, way beyond what a GPU can do, which essentially just approximates true parallelization, right, we can solve problems in the natural world, and possibly elsewhere, that we just can't solve in any reasonable amount of time. So there are certain theoretical, mathematical problems that can be solved with quantum computers that can't be solved with classical. But think about something that's a little bit more tangible, like weather forecasting. We can do that right now, but we're taking a lot of shortcuts. We're using a lot of approximations, a lot of summarizations. What if you could essentially predict the change in the state of every atom in the atmosphere, in parallel, simultaneously, at scale, and then have, essentially, like, 99.99% accuracy in weather forecasting? Right, what would that mean? That requires an enormity of calculations that happen in parallel, and they're interwoven in this interdependent state. That would take an essentially infinite amount of time; if you added up all of the classical computers that we have on Earth, you wouldn't be able to solve for that. Nobody even tries to solve problems like that. They create heuristics or shortcuts, which are very helpful, but they're designed to solve problems with the tools we have today, which are classical computers.
Amith: 45:39
Now, the last thing I'll say, really quickly, before you get into the Microsoft breakthrough, which I think is fascinating, is that classical computing doesn't go away when quantum computing eventually does come online, which, to be clear, is not about to happen tomorrow. It's, you know, maybe by the end of this decade at the earliest. But classical computing based on zeros and ones with traditional bits, whether it's GPUs, CPUs, or a mixture thereof, will be around for a long time, because it solves a set of problems that quantum computing is actually not well-suited for. So, for example, things like having a database that tracks your customers. You don't need a quantum computer for that, and in fact, a quantum computer would be fairly ill-suited for that kind of deterministic process. So it's an interesting scientific concept.
Amith: 46:22
The reason we chose to feature it here, which I think is worth mentioning before we go further, is that it's a cool scientific breakthrough that will have ramifications for AI, for the world and, of course, for your association as a byproduct of that. And it's not that far off. It's not like it's 50 years from now. It's likely to happen certainly in our lifetimes, probably within our careers, and quite likely within the next five to 10 years there'll be some commercial impact from these technologies, based on what you're about to describe that Microsoft has revealed to the world. So I'm excited about it for that reason, because it's actually a practical reality that we're going to be facing fairly soon, and it simply accelerates what we're already seeing with AI.
Mallory: 47:06
Well, Amith, if you're a novice and you just said all that, I think I'm like a caveman when it comes to quantum computing, but I'm getting better. All right. Microsoft's Majorana 1 quantum chip utilizes a groundbreaking new state of matter called a topological superconductor, which is artificially created and does not occur naturally in the universe. This novel state of matter forms the foundation for a potentially revolutionary approach to quantum computing. This topological superconductor material enables the creation of topological qubits, which Amith was just talking about. These are the building blocks of Microsoft's quantum computer. These qubits are designed to be more stable and error-resistant than traditional qubits, potentially solving one of the biggest challenges in quantum computing: maintaining quantum states long enough to perform complex calculations.
Mallory: 48:00
The technology could be applied to solving complex optimization problems in various industries, accelerating drug discovery and material science research, enhancing cryptography and cybersecurity measures, and simulating quantum systems for advanced scientific research. So this is profound for many reasons, but I'll cover just a few of them. One, it represents a significant advancement in our understanding of quantum physics and material science. It could dramatically accelerate the development of practical, large-scale quantum computers, potentially bringing them from the realm of theory into reality within years, as Amith said, rather than decades. If successful, this technology could enable solutions to complex problems that are currently intractable for classical computers, potentially leading to breakthroughs in fields like climate modeling, financial analysis and, of course, artificial intelligence.
Mallory: 49:02
The ability to create and control this new state of matter demonstrates a level of atomic-scale engineering that was previously thought to be impossible, opening up new avenues for technological innovation. The development represents a convergence of fundamental physics, material science and computer engineering that could reshape the landscape of computing and scientific research in the coming years. So I'm glad we kind of broke that into two parts, Amith. I feel like that makes a lot more sense to me after what you said. Why are you so excited about Majorana 1?
Amith: 49:33
To me, what it represents is the scaling of the number of qubits and kind of the resilience of them. You know, you mentioned this earlier, but the quantum compute that we've had, you know, thus far from other labs, including Google, has been very interesting to look at, but it's been, like, I think, on the order of magnitude of hundreds of qubits at the largest. I don't remember the exact numbers, but very, very small. And Microsoft's talking about being able to put a million qubits on a single chip that can fit in the palm of your hand. Now, granted, the machine that's required to run that chip is quite substantial in size, but, as is the case with all things, as they get simpler and we get smarter about putting things into production, it'll likely shrink down. But a million qubits is enough horsepower, basically, to do some of these extremely complex calculations, if the qubits are stable enough, the way you described. So, you know, let's talk about one of the fundamental problems we have.
Amith: 50:29
You know, people talk a lot about AI in a negative way in the context of several dimensions. One is AI safety, which I think is a super interesting and really important topic. As much of a proponent of AI as I am, I'm also one of the people out there saying, yeah, it's going to be a big problem, we have to be on top of it. I don't think that means we can or should try to slow AI down, but it means we have to be constantly thinking about frameworks, governance, training, all sorts of things, to make AI safe and use it responsibly. The other dimension of AI negativity that I hear frequently is energy consumption. It is true that the data centers that are used to train AI models and the data centers that are used to run AI models are ridiculously energy intensive. Now, they are getting more efficient all the time. The chips that are being built become smarter, faster, more powerful. The models are becoming smaller as well, so there's going to be some help on the way, so to speak, based on the progression of the industry. However, demand is growing at a ridiculously fast pace, like we've covered in prior episodes, where, as the quality of the product increases and the cost goes down by orders of magnitude, kind of all at once, demand explodes, because the utility of AI is enormous. So energy consumption is a problem. We've got to figure that out. And so people talk about the impact on climate, how not all data centers are being run in a green way, and the reality is, even the ones that say they're run green really can't be right now, because they need way more energy than they can get from sustainable sources in most cases. So what do we do about this, right? When you have breakthroughs in AI, and if you have breakthroughs in quantum, potentially you can also solve for some of the currently intractable problems in energy generation.
Amith: 52:13
Probably the grand prize in all of it is fusion. And you think about, like, what are some of the problems stopping fusion from being a reality? There's a lot of fundamental science that has to happen to make fusion, you know, a household device. I don't know if we'll ever get to the point where you have, like, a fusion reactor in your neighborhood, but maybe. And in order to do that, there's a lot of science that has to happen. In order to do lots of science, you know, to do it in parallel, to do massive amounts of it, to test a lot of hypotheses, you need a lot of creative ideas, because science always starts with curiosity, which leads to creativity, which leads to hypotheses, and then people test these things out. The scientific method works really well. It's just kind of slow. And so what if we could, in parallel, simulate and test millions or billions or trillions of different ideas? Right? With quantum, you have the computing horsepower, and with AI, you have, potentially, the creativity you need to do a lot of this.
Amith: 53:05
So the mixture of the two could lead to breakthroughs in fields like fusion, or maybe other ways of thinking about solar or geothermal or wind power to make things more efficient, and in material science. We've talked previously about room-temperature superconductive materials that could transmit power more efficiently, with less loss of energy after we generate it. That's another grand challenge of material science. This type of compute, coupled with the intelligence that AI is giving us, means you have unlimited intelligence on tap, and if you also have unlimited raw compute on tap for these styles of problems that require massive parallelization, you can probably solve a lot of problems. The current ways we conduct scientific discovery are going to feel like pre-computing, like something from the 1500s or even further back, compared to what we're about to get. So that's what's so exciting about this: it's going to lead to new discoveries, which will have a compounding effect that fuels more discoveries. Of course, the reason I'm so excited about AI is that intelligence on tap this way would lead to a greater compounding effect than any of the other things.
Amith: 54:17
But if you think, for example, could we solve for fusion with a combination of really advanced AI, let's say the AI we have five years from now, coupled with the first generation of truly useful quantum? Maybe we could. And if we can solve for energy, pretty much everything else is downstream if you think about it. So what are some of the other fundamental problems we have on Earth? Well, we have problems with access to clean water. But if you can solve for energy, and you can do it in a carbon-free way, a sustainable way, and at low cost or potentially free at scale, you can solve for clean water. You can then solve for food, because land that's suitable for agriculture is limited, but how do you solve for that? Well, we actually know how to solve for that in a lot of ways with things like vertical farms, but we need more advanced AI, more advanced manufacturing, more materials, more energy, more water. So again, energy is kind of like this input that, if we can solve for it, lots of other good things happen.
Amith: 55:14
So I get excited about these fundamental things because I can visualize the downstream effects: solving a lot of the fundamental problems we have around the globe. And it gets exciting, right?
Amith: 55:24
And so why do we want to talk about this with associations? Well, association folks, you are citizens of planet Earth, just like Mallory and me, and it's important to be an educated citizen, in my opinion. At the same time, hopefully this is interesting, hopefully this gets you excited, but hopefully it also gets you thinking about what happens to your space, to your industry, to your sector, to your profession if these things come online, maybe not even in a few years, maybe in 10 years' time. What does that mean? That's the future we're heading into. The way we see our job at Sidecar is to help break these ideas down. In some cases, like Claude 3.7, it sounded kind of cool when we were covering it, but compared to this, it seems kind of simple, right? We need to cover the things that are immediate, obvious, and useful to you today, but also think about where we're heading.
Amith: 56:17
And one of the best ways to think about where we're heading is to try to put our heads in these environments, with the people who are trying to build that future, and then try to distill it down and ask: what does this mean? We're going to be wrong way more often than we're right about the timing and about what these things can and can't do, but the fact that these things are happening right now, in front of our eyes, is truly remarkable.
Mallory: 56:34
Yeah, this is incredibly exciting, one because I feel like I have a much better grasp of it, and two because, correct me if I'm wrong, but it seems like we can generally be pretty bullish on quantum computing. It almost sounds like it could solve all the major problems we have in the world. Is there anything scary we need to be worried about, like with AI taking over? It sounds like quantum is just good.
Amith: 56:57
Well, yes, we should be excited. However, all technologies with sufficient power are dual use. So what could bad guys do with quantum? If it came online and was highly available, and you mixed really good AI with the horsepower of quantum, potentially really bad things could be done as well. Quantum-proof cryptography is a major area of research, because once quantum comes online, the cryptography behind things like current blockchains, which is fundamental to why Bitcoin is a secure digital currency, is gone.
Amith: 57:31
The current form of encryption could be cracked by quantum computing at a scale much smaller than a million qubits. So there are a lot of really smart people in cryptography figuring out how to create quantum-proof algorithms. That isn't so much about making keys dramatically longer (that doesn't really work, because then classical computers can't encrypt and decrypt in reasonable time, and quantum can run the classical attacks so much faster anyway), but rather about coming up with algorithms that quantum computers can't solve, which is a category I really can't speak to, other than to say that I find it fascinating. That's a problem, right? If quantum comes online before we have mainstream quantum-proof encryption, we've got some major issues, and aside from Bitcoin, government secrets and your association's secrets, everything all of a sudden becomes public. So we have to solve for that.
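To make the encryption point concrete, here's a toy Python sketch, purely an illustration with textbook-sized numbers: RSA-style public-key encryption is secure only because factoring the public modulus is hard for classical computers, and Shor's algorithm would give a large enough quantum computer an efficient way to do exactly that (brute-force factoring stands in for Shor's here).

# Toy RSA with absurdly small numbers (illustration only, not real crypto).
# The public key is (n, e); security rests on n being hard to factor.
p, q = 61, 53                    # secret primes (real RSA uses ~1024-bit primes)
n = p * q                        # public modulus: 3233
e = 17                           # public exponent
phi = (p - 1) * (q - 1)          # Euler's totient; computing it needs p and q
d = pow(e, -1, phi)              # private exponent (modular inverse; Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key

# The "quantum break": factoring n recovers the private key. Brute force
# stands in here for Shor's algorithm, which does this step efficiently
# on a sufficiently large quantum computer.
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_cracked, n) == msg  # attacker reads the message

The post-quantum schemes Amith alludes to, such as the lattice-based algorithms NIST has been standardizing, sidestep this by building on problems with no known efficient quantum attack.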
Amith: 58:19
There's a lot of work happening there. In general, if you think about the combination of really smart AI with limitless parallel computing power for science, it's tremendously exciting, and it could be used for extremely bad purposes as well. So we have to have our eyes wide open with this stuff. For now, it's a practical reality with AI that tools like Claude 3.7 and DeepSeek can be used by anyone on the planet, basically for anything, in spite of the supposed guardrails that exist in these models. It's been proven that even the publicly available models from the major labs can very quickly be modified to suit ill intentions. So we have an issue with AI, and quantum is kind of a multiplier for that, in a sense.
Mallory: 59:07
Yeah, well, it was nice while it lasted. Now I'm thinking about all the bad stuff. Well, if you're still listening, the Sidecar Sync pod could become a quantum computing pod in the future. I guess we'll see how that goes. Everyone, thank you so much for tuning in. We will see you next week.
Amith: 59:27
Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.