Summary:
CES 2026 brought the heat—and the robots. In this episode, Amith Nagarajan and Mallory Mejias break down the biggest stories from the Consumer Electronics Show, where AI made its move from code to corporeal. They cover Meta’s $2B acquisition of Manus, Groq’s $20B licensing deal with Nvidia, and how LPUs are set to revolutionize inference computing. From humanoid robots dancing across stages to Nvidia’s open-source models reshaping robotics and autonomous vehicles, this episode offers a rapid-fire tour of the tech shaping our future. Plus, Amith shares why AI pilots are more about learning than proof—and yes, we dream about robots doing the dishes.
Timestamps:
00:00 - Resolutions, Ice Cream, and AI Headlines
👥Provide comprehensive AI education for your team
https://learn.sidecar.ai/teams
📅 Register for digitalNow 2026:
https://digitalnow.sidecar.ai/digitalnow
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification
📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
CES 2026 ➔ https://www.ces.tech
Atlas Humanoid Robot ➔ https://www.youtube.com/watch?v=e73kf_iLAP0
Groq ➔ https://groq.com
Cerebras ➔ https://cerebras.net
Gemini by Google DeepMind ➔ https://deepmind.google
HeyGen ➔ https://www.heygen.com
🔗 Follow Sidecar:
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/in/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/in/mallorymejias
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:09:17 - 00:00:24:01]
Amith
My name is Amith Nagarajan.
[00:00:24:01 - 00:00:25:19]
Mallory
And my name is Mallory Mejias.
[00:00:25:19 - 00:00:30:06]
Amith
And we are your hosts and it is 2026. How are you doing Mallory?
[00:00:30:06 - 00:00:51:19]
Mallory
I'm doing really well, Amith. This isn't our first technical episode of 2026, but it's the first time you and I are recording in 2026. And it feels good to be in a new year. We were just talking about dry January, you know how some people opt to not have any drinks in January; Amith is having a no-sugar, or low-sugar, January. So we're keeping it healthy over here at the Sidecar Sync.
[00:00:51:19 - 00:03:43:03]
Amith
Yeah, I live in New Orleans. It's hard to do that at times, especially since we moved our Blue Cypress headquarters to a different location in New Orleans that happens to be really close to a really good ice cream shop. And that is one of my weaknesses. So we'll see how it goes, but that is my intention for the month of January. And January did not start slowly when it comes to the world of AI and the world of technology. In fact, before we get into some of the topics that you prepared, Mallory, I think it's worth noting that a couple of pretty high-flying companies have either been acquired or struck major licensing deals. One of which is a company called Manus, which came into general public awareness shortly after the DeepSeek moment last year. Manus was an early agent platform that was just acquired by Meta, and Meta acquired them presumably in order to increase their ability to target enterprise AI. And so that could be quite interesting and we'll see where it goes, but that was a $2 billion acquisition that has occurred in the last week or so. It might've been the very end of December or this week even, but that was interesting. And in my opinion, much more notably, our friends over at Groq, the one with the Q, not the one with the K, G-R-O-Q, just struck a licensing deal, but it sounds a lot like an acquisition to me: a $20 billion deal with Nvidia. And the backdrop is that Groq has had for some time a tremendous lead in technology around chips. And we've covered them a couple of different times on this pod. Groq provides something called LPUs, in contrast to GPUs. They're chips made specifically for running AI, not for training AI models, but for running AI models, what's called inference. And so this inference-specific workload that Groq excels at was a weak spot for Nvidia on a comparative basis. And so Nvidia swooped in and made a really good deal.
And the reason I think of it more as an acquisition than a licensing deal, personally, is because Groq's founder and CEO, Jonathan Ross, as well as the rest of the executive team, all moved over to Nvidia in senior-level roles. And it seems as though about 80% to 90% of the company did. Groq, as a standalone company, apparently will continue to operate their cloud, which is very important to us because we use a number of their pieces of technology in various products of ours. And I know a number of our clients do as well. So we're excited about that. But with Nvidia backing the LPU concept and technology, that's going to drive enormous scale. So I'm really excited about this because Nvidia obviously has an unparalleled level of clout in the industry, deep expertise, and manufacturing scale, or access to scale through their contract manufacturers. So very exciting times.
[00:03:44:04 - 00:03:56:05]
Mallory
Wow. And we'll be talking about Nvidia a good bit today in this episode. But Amith, we had Ian Andrews on the podcast before. At the time of that recording, I think he was the CRO of Groq. So are you saying he's at Nvidia now?
[00:03:56:05 - 00:05:21:09]
Amith
That's my understanding. I haven't caught up with Ian this year yet. But yeah, that's my understanding. And I think most of the team members from the C-suite over there are now in new roles at Nvidia. And they're going to do what they've been doing, but just at a dramatically higher scale. And my suspicion is by the end of this year, we will see enormous adoption of LPU technology. Maybe it'll be called something else. I don't know if Nvidia will choose to continue calling it LPUs. But the concept behind Groq's inference-focused specialty chips is going to blow up because they'll have access to scale through Nvidia. And Nvidia realizes that there's a simple dynamic. It's actually not so simple in reality, but it's simple to explain: the more inference there is, the more use of AI there is, the more that creates demand for training the next generation of models. And the more powerful the next generation of models are, the more demand there will be for inference. So Nvidia does play on both sides of that equation at the moment, because their GPUs power almost all of the inference that you and I are accustomed to using through products like ChatGPT or Claude from Anthropic. So that's great. But imagine if ChatGPT or Claude was somewhere between 5 to 20 times faster in responding to you. Now, as a user, you might say, well, you know, actually, it's pretty darn fast. It responds to me really quickly.
[00:05:22:13 - 00:07:45:04]
Amith
Perhaps so, but you're thinking about it only in the context of your own individual interaction. But what about a computer program that's trying to interact with ChatGPT or Claude to do something at scale? What if you said, I want to analyze every document that my association has ever published and look to extract certain content insights, and perhaps I want to do that across the board many times per year for different kinds of insights? Well, those are the types of operations at scale that right now would both be cost prohibitive and also take a really long time with frontier models, with the state-of-the-art, most powerful, most intelligent models. You can actually do those kinds of things with smaller models at scale on the Groq platform or Cerebras and others quite rapidly and for a very low cost. Or you can, of course, spin up your own inference environment. But the key to this is that with NVIDIA getting behind the LPU concept, they're admitting a weakness in the GPU technology, but they're also standing up and saying, hey, we're going to go big on inference-only chips. And that's a big thing. That's a really big thing. So it's going to result in better models, because you're going to be able to allocate the GPUs that you buy to just training. And you're going to be able to get better inference, because you're going to allocate the LPU technology to just that side of the house. So I think that's quite exciting. It's going to lead to enormous growth this year. And with a player like NVIDIA behind it, I suspect that someone like AMD, who's also a major player in the GPU space, or somebody else, might make a run at a company called Cerebras, which is another fast inference provider. Cerebras is a little bit different. They have something called wafer-scale compute, where the entire 300-millimeter wafer is cast as one single, giant chip. It's an enormously complex and very powerful chip.
It can be used for both inference and also training, which is a different part of the market than what Groq has been going after. But the point is that Cerebras does also offer a technology that is capable of providing much faster inference than traditional GPUs. At the same time, I would say that NVIDIA's core product with the GPU is not sitting still. There was an announcement this week at CES, which we'll probably talk about, that is pretty exciting, about their next generation of GPUs. So all this stuff keeps going on and on and on.
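To make the scale argument above concrete, here is a quick back-of-the-envelope sketch. Every number in it (archive size, tokens per document, and the tokens-per-second throughput figures) is an illustrative assumption for the sake of the arithmetic, not a measured vendor benchmark:

```python
# Back-of-the-envelope: wall-clock time to run an extraction prompt over
# an entire document archive, sequentially. All figures below are
# illustrative assumptions, not measured benchmarks.

DOCUMENTS = 100_000        # documents in a hypothetical association archive
TOKENS_PER_DOC = 2_000     # assumed average input + output tokens per document

def total_hours(tokens_per_second: float) -> float:
    """Hours to process the whole archive at a given sequential throughput."""
    total_tokens = DOCUMENTS * TOKENS_PER_DOC
    return total_tokens / tokens_per_second / 3600

# Assumed throughput for a large frontier model on general-purpose GPUs
# versus a smaller model on inference-specialized (LPU-class) hardware.
gpu_hours = total_hours(100.0)
lpu_hours = total_hours(1_000.0)

print(f"GPU-class run: {gpu_hours:,.0f} hours")
print(f"LPU-class run: {lpu_hours:,.0f} hours")
```

Even with made-up numbers, the shape of the argument holds: a 10x throughput difference turns a multi-week batch job into a weekend one, which is why inference-specific hardware matters most for at-scale workloads rather than individual chat sessions.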
[00:07:45:04 - 00:10:22:19]
Mallory
It's a perfect segue into today's episode. Amith just mentioned CES 2026, which is happening right now at the recording of this podcast. By the time that you hear it, it will be over. But if you're not familiar with it, CES stands for Consumer Electronics Show. And it's put on by none other than the Consumer Technology Association; it's the annual mega tech trade show where companies unveil what's coming next. And I did a little bit of research. It might be easy to dismiss some trade show announcements as hype, but CES does have a track record of previewing tech that later becomes mainstream, going all the way back to VCRs, CDs, HDTV, the original Xbox, and voice assistants like Alexa. What shows up at CES often ends up in our lives a few years later. So keep that in mind when all these announcements sound a little bit sci-fi to you. This year's theme was physical AI: AI leaving the screen and entering robots, vehicles, and everyday objects. So today we're going to talk about chips, robots, autonomous vehicles, and some delightfully quirky gadgets while the show is still ongoing. So our topic one, Amith, you nailed it, is the big picture. We're talking about the chip wars. Quick context on CES just for your knowledge: it ran January 6 through 9 of this year in Las Vegas with over 4,500 exhibitors and 140,000 attendees. Historically, it's been about consumer gadgets, TVs, phones, fun toys, but now it is increasingly about AI infrastructure. The biggest announcements this year thus far were not the next cool TV. They were chips and robots. Whoever controls the chips controls the pace of AI progress. So these announcements are setting the ceiling for what's possible over the next several years. Nvidia had a keynote. We saw Jensen Huang, the founder and CEO of Nvidia, the company whose chips, as Amith mentioned, power most AI systems. They're now worth over $3.5 trillion, just for your information. His keynote really set the tone for the entire show.
And his headline quote was, "The ChatGPT moment for physical AI is here, when machines begin to understand, reason, and act in the real world." They had a big product announcement, which was the Vera Rubin platform, their next-gen AI supercomputer named after the astronomer who discovered evidence of dark matter. It's a six-chip architecture claiming roughly 5x the performance of their current chips, with AI processing at one-tenth of the cost. But maybe more interesting was his warning: AI's next breakthroughs will be limited by compute infrastructure, not ambition.
[00:10:23:20 - 00:11:13:16]
Mallory
Demand is outpacing supply. His framing was that $10 trillion of legacy computing is now being modernized to AI-native systems. Amith, you also mentioned AMD. They had a keynote as well. Lisa Su, the CEO of AMD, Nvidia's main competitor, delivered the official CES opening keynote. Her big announcement was the Helios platform, AMD's answer to Nvidia's data center dominance. She called it the world's best AI rack, which was, of course, a direct shot at Nvidia. And the numbers she threw out were striking. AMD's next-gen chips will deliver up to 1,000x performance improvement, which is almost impossible to believe, by 2027. And she predicted 5 billion people will be using AI daily within the next five years, which would require a 100x increase in global computing capacity.
[00:11:14:19 - 00:11:35:16]
Mallory
A lot to unpack here, Amith. Jensen is warning us that AI progress is limited by compute, not ambition. I don't think that's really a surprise to us here at the Sidecar Sync podcast. But what do you think is the practical implication for associations trying to plan their AI strategy, maybe just starting to feel good about it when they feel like the infrastructure is struggling to catch up?
[00:11:37:01 - 00:14:17:06]
Amith
As a practical matter, first of all, I want to say this makes Moore's law seem quaint. Isn't that nuts? Moore's law has been the most stunning advancement in technological progress in human history over the last 50 years. And now it seems extremely modest. So I guess what I would say is, as a practical matter over the next couple of years, on the one hand, this is super exciting. On the other hand, it may not matter too much to you as a day-to-day leader in an association. And the reason I share that is, you should know that these things are growing fast, moving fast, getting smarter, getting faster. That is important. But the degree to which you are taking advantage of that next generation of technology, or are capacity constrained as an association leader, is likely to be not super, super high, mostly because there's a lot of headroom from where you're probably sitting today in terms of your internal AI adoption relative to what's currently available. And so on the one hand, I would want everyone to be excited, encouraged, and inspired by the pace of progress. And also, it's a reminder to get going if you haven't done a whole lot. But the other side of it is, even if you're fairly far along in your AI journey, all of us, ourselves included, are just learning how to utilize this technology in our business processes. And that's actually a lot of what Jensen and Lisa Su are saying: this capacity constraint presupposes that people are going to discover new use cases, right? Because it's not that we're going to be typing more messages to ChatGPT, or we're going to be talking to Claude more, or having more voice conversations. We will. Of course, we'll be doing more of that. But not to the order of magnitude we're describing, right? Even if you have 5 billion people using AI on a daily basis next year, it's still not going to be anywhere close to the level of growth that would outstrip a 1,000x increase or a 100,000x increase.
So what's assumed here is that because what we're talking about is a commodity called intelligence, as opposed to anything that's technology related, and abundant or effectively unlimited intelligence is a truly novel concept, we don't yet know. We don't yet know what the applications are. We don't yet know what the use cases are that are specific to you as an association leader five years from now. We do know, however, that there's many things you can do today. And those are the things that I would encourage you, as you start your year, to be thinking about. On the one hand, again, to be inspired by this future path, but at the same time to look pragmatically at what you have available to you today as a leader, which is enormous. It's incredible. The things you can do right now, this moment in time, far outstrip whatever it is that you're doing. I can say that confidently because I know that's true for even the most advanced AI adoption
[00:14:18:20 - 00:15:05:07]
Amith
organizations in the world, not even in the association market, that I've seen. So there's a lot of opportunity. So I think that it ultimately means that the applications we're going to get are going to be stunning. And things we can't expect; that's the thing that we have to realize, is that it's very hard to predict what these use cases are. The last thing I'll say is about the modalities that we currently consider hard to access, like video. And they're hard to access because they're slow. They're batch-based. Like Veo 3 from Google, unbelievable. It's an amazing video generation tool. You can do things where you can have interactions with live avatars through HeyGen, very slow, very clumsy right now. That's going to be dramatically different. Totally fluid, totally real time. And additional modalities we haven't even dreamt up yet.
[00:15:07:06 - 00:15:26:08]
Mallory
So it sounds like what you're saying to me-- and correct me if I'm wrong-- is that while it's important to stay up to date on the AI chip wars and see where we're heading, you feel that most associations can get a lot of benefits out of our current AI availability, kind of regardless of the fact that infrastructure globally, perhaps, is struggling to keep up.
[00:15:26:08 - 00:16:39:09]
Amith
Right. And I think that there are going to be some capacity constraints that you'll run into from time to time when you say, hey, let's go experiment with the latest model from Google, Gemini 3 or Gemini 4 or whatever comes next. You might run into some constraints where it's like, oh, well, we only have access to a limited amount of it for the next six months. And that might be a factor in the way you experiment and deploy AI, for sure. So being aware of it is very important. I think the most important strategic thought process you can apply, though, is to recognize that the experiments you run today are more about teaching you than they are about understanding what the capabilities of the technology are. Historically, when you've done piloting of technologies, you've been trying to essentially test whether the technology was sufficiently capable of meeting a particular business need. That's the whole idea of a pilot or a proof of concept. The challenge with that with AI is that it's almost guaranteed that the AI in a year, two years, certainly three years' time, will be sufficiently powerful to solve almost anything you can imagine. So it's not so much are you proving that it can or cannot work for you, but rather you're learning how you should use it, how your organization needs to adapt.
[00:16:40:09 - 00:16:58:17]
Amith
So a lot of it comes back to the same stuff we talk about here in terms of the importance around learning and experimentation and so forth. I think here what we're talking about is stunning progress. Super exciting. If we put on our 100% nerd hat and just want to geek out over how cool this is and all the flops that are being put into the world, that's awesome.
[00:16:59:19 - 00:17:15:09]
Amith
But I think as a practical matter for most of the leaders that I know in the association sector, they should be thinking about it more in terms of that arc. What does the curve look like? And how can they essentially anticipate where that curve may take them in six months or 18 months or three years time?
[00:17:16:16 - 00:17:40:14]
Mallory
I think that's really profound too what you just said. Pilots are more about teaching you than proving that the technology works. I don't know if you've ever said it in such a concise way on the pod, but I think that's really important. It's pretty much a given, as you said, very soon AI will be able to tackle most problems that associations face. But do you know how to utilize it to address those problems? That's kind of the headroom that you referred to earlier.
[00:17:40:14 - 00:19:54:15]
Amith
Yeah, and I certainly don't. I mean, I'm learning something new every day. Everyone in our organization across the board is constantly experimenting. I was just talking to a colleague over the break about some learnings that we had in 2025 working on this AI agent we have called Skip. And for those who haven't heard me speak about it, Skip is our data analytics agent, very, very advanced technology. It's basically able to have natural language discussions with you like Claude or ChatGPT or Gemini, and then it interacts with your data platform and is able to produce these amazing charts and graphs and dashboards. And Skip has been completely rebuilt from scratch many, many times over the last three years. We've literally thrown it away and restarted roughly every six months because the technology keeps completely changing the assumption set. And we just had a moment like that this fall where we realized the way we were approaching our architecture was really a late-2024-era approach that was cutting edge at the time. But by mid-2025, it actually was out of date. And we did not realize it till kind of the late fall. We looked at it and said, oh, wait a second. Actually, the assumptions we've been making about what the underlying language model can do per operation, for example, were wrong. And the architecture didn't need to be so granular in the way that it approached certain aspects of its processing. So once we zoomed back out, looked at it a little bit more, took a fresh breath outside kind of thing, we said, oh my gosh, there's an opportunity here to rethink this. And that's super hard because all of us form these deep channels in our brains in terms of how we think and our assumptions. And those patterns of life drive what our next behaviors are likely to be. So it's very, very hard work. And that's what those pilots and those proofs of concept are all about, in my mind: testing, experimenting, and teaching yourself.
Most people think of pilots as like, hey, let's prove that it works. And I like doing pilots that I think probably aren't going to work because I want to figure out like, OK, what is this thing good for? What do we learn? And what doors did we open up that we did not even know existed? That's part of the fun for me personally, but it's really where the discovery lies. And each organization has an interesting path to go and explore.
[00:19:56:00 - 00:20:14:00]
Mallory
And I remember a story you told, I think on the podcast, long ago. I think it was a software development story, maybe with Microsoft, how you didn't know there was this feature that was in there. And you Googled it, and it was 20 years old, but you had just functioned under the assumption like, oh, it doesn't do this. It even happens to you, Amith.
[00:20:14:00 - 00:21:44:11]
Amith
Yep, happens to me all the time. And yeah, it's something I think you got to admit that to yourself. It's tough, and you can think that you know a whole bunch about a lot of things and you do, but that's actually the problem. Because someone who comes who knows nothing, oftentimes they don't have that issue. Of course, they don't have all your experience, which is a different challenge. But if they don't have your assumptions baked into the way they think, they are free to not assume that and therefore explore the world with a completely fresh set of eyes. It's interesting because next week we're going to be onboarding a half a dozen new employees here in New Orleans. This is part of our fellowship program where we hire people right out of the university. They can be recent graduates from undergrad, like bachelor's programs, or can be master's or even PhD graduates. And we put them into a really intensive multi-year program where they learn the ropes of AI and they learn how to apply it into the broader not-for-profit sector and associations very specifically. And in that process, it's eye-opening for those of us that have been around a while because they approach problems differently than we do. They look at it and say, "Hey, what about this?" And we're like, "Oh man, that could be really cool." Yeah, of course, there's plenty of things that come up where it's like, you know, that doesn't necessarily make sense, but it's incredibly cool to have fresh eyes on it. So I think the challenge for all of us, myself included, is how do we all give ourselves fresh eyes? How do we, you know, refresh our own vision from time to time through some kind of intentionality? And it's hard.
[00:21:45:13 - 00:22:03:17]
Mallory
I want to move to our next topic from CES, which is robotics and physical AI. So robotics, as we mentioned, was the dominant theme across keynotes. More robots at CES this year than ever before. But what's notable is these weren't just impressive demos anymore. Companies were talking real shipping numbers and production timelines.
[00:22:04:18 - 00:23:00:11]
Mallory
Boston Dynamics is the robotics company famous for those viral videos of robots doing backflips and dancing. They're owned by Hyundai. And they did something rare at CES: a live demo of their Atlas humanoid robot. This matters because companies often only release edited videos so they can, you know, potentially hide some of the failures with their robots. Going live signals some real confidence. I watched the video, and I shared it with Amith. Atlas walked across the stage for several minutes, waved to the crowd, and moved pretty fluidly. An engineer piloted it remotely for the demo, but in production, it will operate autonomously. And maybe the bigger news was their partnership with Google DeepMind to integrate Gemini AI models into Atlas. The goal is robots that can learn new tasks in under a day, which would be a massive acceleration from current training timelines. Hyundai's target is 30,000 humanoid robots produced annually by 2028.
[00:23:01:11 - 00:23:57:17]
Mallory
Back to Nvidia, they are making an aggressive move to become the default platform for robotics. They're releasing open foundation models, Cosmos for world simulation and GR00T for humanoid control, that any robotics company can build on. And the partner list is impressive. Boston Dynamics, Caterpillar, LG, and others are already building on Nvidia's stack. Some other notable announcements here: we saw AgiBot, a Chinese company, announce that their U.S. market entry was happening, with 5,000 robots already shipped. They claim to be well past the prototype stage. Qualcomm announced a processor specifically designed for humanoid robots called Dragonwing IQ10, partnering with Figure and KUKA Robotics. And then LG, this is a fun one, showed a home robot designed to start laundry, fold clothes, and unload dishwashers. Love that. It's more concept than product right now, but it signals where consumer robotics is probably heading.
[00:23:58:20 - 00:24:26:14]
Mallory
Goldman Sachs is projecting that the humanoid robot market will reach $38 billion by 2035. And we've also seen manufacturing costs drop significantly, from between $50,000 and $250,000 down to between $30,000 and $150,000, which is part of what is making commercial deployment much more realistic. So Amith, first, I have to start with the fun thing. I shared the video of the Atlas humanoid robot with you. What did you think? What were your immediate takeaways?
[00:24:26:14 - 00:24:29:17]
Amith
I love the way it stands up. I think it's so cool.
[00:24:29:17 - 00:24:31:02]
Mallory
I don't love that part.
[00:24:31:02 - 00:24:56:06]
Amith
Really? Yeah, you've got to watch the video and see it, the full video, if you haven't. It's a pretty impressive kind of movement. And it's really weird looking if you think of it as a humanoid, which I do, but part of what they talk about is this idea of not being limited by the human form: the way it can move its head around and rotate its torso 360 degrees and all these other cool things.
[00:24:57:07 - 00:25:24:08]
Amith
So yeah, the most efficient way to stand up isn't the way we do it, but we also obviously have very limited joints on a comparative basis. So I thought that was super cool. I love the fact that they're partnering with Google. I think Gemini 3 is an amazing model family. If you haven't worked with Gemini 3 yet, I'd encourage you to dig into it. It is, at this point in time, my favorite model. I use it for both, like, you know, general tasks, but also for a lot of technical work. It's an incredible, incredible product.
[00:25:25:20 - 00:26:14:05]
Amith
And also, for those that aren't aware, Google actually used to own this company, Boston Dynamics, and spun it off. They acquired it, I think maybe 15 years ago, and then they spun it off maybe five years ago or something. I don't have the history quite right in my head, but then Hyundai picked it up. It's a great focus for them. And you know, I think just generally the broad theme is, you know, we're moving from the world of the screen, or the world of bits, to the world of atoms. And once we're in the physical world, that opens up a tremendous amount of opportunity. There's so many dangerous jobs that are out there. There's so many taxing jobs that are not necessarily dangerous in the way you'd think, like being near hot things or electrical things or, you know, toxic materials. Of course, there's those positions, but there's just a lot of things that are literally backbreaking labor.
[00:26:15:05 - 00:26:43:01]
Amith
That certainly would be great to put robots into the mix. Opens up, of course, all the usual conversations when there's a disruptive technological force in play. What happens to the people who do those jobs, right? Those jobs may be dangerous and difficult, but they pay the bills and they put food on the table for a lot of families. So how do you address that? That's of course, one of the things that comes to mind when you see this. And how quickly can you retrain those types of folks that have been doing those kinds of tasks for a long time?
[00:26:44:02 - 00:27:07:23]
Amith
Ultimately, though, the opportunity to make it possible for people to both live better lives through robots at home or in various aspects of their lives and then in the industrial setting, it's extremely compelling. And if you blend advances in material science, in robotics and electronics coupled with obviously the relentless progress in the AI models,
[00:27:09:03 - 00:27:34:17]
Amith
you're going to see pace probably considerably faster than what's being projected. I think the progress that's being projected is actually quite modest. 30,000 robots in production by 2028. Well, that's like at a very high end level. But to your other point, there are companies in China and elsewhere that are pumping out robots right now. They're probably not nearly as sophisticated as what we see in the video from Boston Dynamics, but they're capable.
[00:27:35:22 - 00:27:41:21]
Amith
I, for one, am quite excited about the idea of a dishwasher unloading robot as well, Mallory. Oh, me too.
[00:27:41:21 - 00:27:42:20]
Mallory
Me too, Amith.
[00:27:42:20 - 00:28:01:12]
Amith
In 2028, my youngest will be graduating high school and so heading off to college and that's one of her chores. And amongst many other things I appreciate her for, I do appreciate that she takes care of that around the house. So by the time she leaves for college, I'm hoping that we will have a robot that can take her place.
[00:28:01:12 - 00:28:05:09]
Mallory
We might be able to time that just right, Amith. I'd be so happy for you.
[00:28:06:11 - 00:28:10:01]
Amith
Yeah, pretty cool. Either that or I'll just use a lot more paper plates.
[00:28:10:01 - 00:28:13:01]
Mallory
We'll see. No, not good for the environment. Come on, Amith.
[00:28:13:01 - 00:28:13:09]
Amith
That's a good point.
[00:28:13:09 - 00:28:31:17]
Mallory
We'll just get a robot. It'll be fine. I know we've talked about this in the past, and I don't even know if we should mention it on the pod because I don't know if it'll happen anytime soon, but we have talked about how neat it would be to bring robots to digitalNow one year and maybe have some sort of demo there. Who knows, TBD, but that's something I would love to see personally.
[00:28:32:18 - 00:28:46:05]
Mallory
Amith, I wanted to ask you too about NVIDIA releasing these open foundation models. Do you expect the robotics model trajectory to play out the way we've seen the LLM trajectory, with open versus closed?
[00:28:47:08 - 00:30:48:11]
Amith
I think it's a super interesting discussion point. It's yet another smart move from NVIDIA. A lot of people don't realize that NVIDIA's moat isn't so much around chips; it's more around software. Of course they're a chip company. Of course they're an advanced networking company that interconnects chips, and fast bandwidth is super important for them. They do a lot of things with hardware. But many years ago, they introduced a platform called CUDA, C-U-D-A, and CUDA very quickly became pretty much the standard both for scientific use of GPUs across a variety of domains and, in computer science specifically, for machine learning and AI work. CUDA is proprietary software, and the world is built on top of it; the CUDA stack drives almost all work in the space. And of course, CUDA runs only on NVIDIA chips. That's how they've had as much of a lock as they've had. There are alternatives and competitors, and a lot of crazy things happening in the world, but that has been a major strategic advantage for them. With respect to what you're describing, putting a platform into the world that's an open-source model, it probably works either only or optimally with NVIDIA hardware, so it's also smart. Make it super easy for people to leverage your products to build applications, and they will, and that will drive a lot of downstream adoption in terms of units of GPUs and other technologies shipping out of NVIDIA. So I think that's really, really powerful. NVIDIA, by the way, and this is another broad point, has actually been introducing a number of different kinds of foundation models. They have a series of models called Nemotron. They have some pretty cool names; I really love Groq, by the way, that was cool.
But Nemotron is a series of models that are actually quite competitive with a lot of other companies out there in language processing. For example, there's a Nemotron 30-billion-parameter model that is roughly on par with o1-mini.
[00:30:49:11 - 00:31:34:17]
Amith
It's a very small model. You can download it and run it on your own computer. It only has three billion active parameters, and it's a reasoning model, so it's a pretty advanced piece of technology. They also just introduced an ultra-low-latency audio model, I think today or yesterday. So they're doing a lot of work with software. Some of this perhaps deepens their moat, but part of what they're trying to do is drive adoption and drive the commoditization of the software layer. They don't want any one particular company to be the leader in models. Their goal is to drive mass adoption, because that's going to drive overall growth. And since they're at the moment the leader in the space by a wide margin, that's going to drive growth for the whole sector, but particularly for NVIDIA.
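The "30 billion parameters total, but only 3 billion active" distinction can be sketched with some back-of-the-envelope arithmetic. In a mixture-of-experts layout, every token passes through the shared layers, but only a few of the expert blocks fire per token. The split below is entirely made up for illustration; it is not NVIDIA's actual Nemotron architecture:

```python
# Toy mixture-of-experts parameter accounting: total parameters include
# every expert, but "active" parameters per token only include the
# shared layers plus the top_k experts the router actually selects.
def moe_params(shared_b, expert_b, num_experts, top_k):
    total = shared_b + expert_b * num_experts
    active = shared_b + expert_b * top_k
    return total, active

# Hypothetical split: 1.5B shared weights, 16 experts of ~1.78B each,
# one expert routed per token (all numbers invented for illustration).
total, active = moe_params(shared_b=1.5, expert_b=1.78, num_experts=16, top_k=1)
print(f"total ~ {total:.1f}B params, active ~ {active:.1f}B per token")
```

With these invented numbers, the model weighs in around 30B parameters on disk while each token only exercises roughly 3B of them, which is why such a model can be downloaded and run on a single machine.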
[00:31:36:07 - 00:31:39:23]
Mallory
Nemotron. Every time you said it, I had to smile. That is a fun name, I'm not going to lie.
[00:31:40:24 - 00:33:04:07]
Mallory
And we're not done talking about NVIDIA. Moving on to autonomous vehicles: I wanted to cover this quickly because we don't spend a ton of time on it on the pod, but the intersection of autonomous driving and AI is certainly interesting. Self-driving had a moment at CES again, but the framing has shifted. It's less "robotaxis are coming someday" and more "AI-assisted driving is shipping now." NVIDIA, again, announced Alpamayo, a family of open-source AI models for autonomous driving. The pitch is that the cars don't just perceive the road; they reason about it. Huang called it the world's first thinking, reasoning autonomous vehicle AI. One feature that stood out is that the model explains its reasoning for each decision, which of course helps engineers debug and improve the system. That transparency is new. There's also a Mercedes-Benz partnership. The 2025 Mercedes-Benz CLA will be the first production car shipping with NVIDIA's full autonomous driving stack, including Alpamayo, and it's shipping Q1 2026 in the U.S. It's officially classified as Level 2+, which means it still requires driver attention, but the capabilities look similar to what Tesla offers with Full Self-Driving. Mercedes also announced the first in-car system that integrates AI from both Microsoft and Google, an interesting choice to go multi-vendor rather than exclusive.
[00:33:05:17 - 00:33:13:02]
Mallory
Amith, do you think autonomous driving, I mean, it's interesting to talk about, is something associations should keep an eye on? I don't know.
[00:33:13:02 - 00:33:41:00]
Amith
I think autonomous driving is an application that's been pursued for quite a while because it's incredibly interesting for a lot of reasons. One is that, generally speaking, not to single out any particular individual, we all tend to be pretty terrible in terms of our driving skill. We're dangerous drivers. We're inefficient drivers. We burn a lot more fuel than necessary compared to the way driving could be done.
[00:33:42:02 - 00:33:48:19]
Amith
Computers are just better at a lot of the kind of optimization that driving requires. So when you're talking about the function of transportation,
[00:33:49:21 - 00:35:55:23]
Amith
autonomous driving is going to save a lot of lives. People are certainly concerned about safety, which is incredibly important, but the bar people set for autonomous vehicle safety is radically higher than any human driver, or collection of human drivers, has ever even considered achieving. So I think there will be deaths and injuries through autonomous vehicles hitting people, getting into collisions, things like that. But at the same time, the number of people who die each year just in the United States alone from highway traffic accidents is astounding. So we're talking about an incredible opportunity to save lives. It's also a quality-of-life thing for those of us that have commutes. My commute, unfortunately for me, is on foot. But if you're commuting in a car and you're driving, it's mind-numbing after a while, which is part of what leads to safety issues. If you have to be in the car, and the car can drive for you, that's a lot more pleasant than having to pay attention to every start and stop along the road. So I think there are benefits there. The other interesting thing is that AIs are capable of working together more collaboratively than we can. You ever watch people behave on an interstate? It tends to bring out the worst in human collaboration, not the best. But AIs are quite good at collaboration. So cars being able to move as swarms, to join groups and peel off from groups, and to do that safely, is both more fuel efficient and can reduce traffic congestion dramatically, because a lot of what happens on interstates is people getting on the highway, accelerating really fast, trying to squeeze into a spot where there wasn't space for them, and all these other things, and it has this chain reaction.
And if these AIs can interact with each other in real time, that creates a lot of opportunity to potentially reduce congestion all over the world. So I think these things are all really exciting. But it's also a proof point, because it's a complex application. If you can make this work well, it has all the benefits I described and many more I didn't, but it's also a proof point for what AI can do more generally.
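The chain-reaction point can be sketched with a deliberately tiny toy, not a real traffic model. The idea: when each human-like follower reacts late and overbrakes slightly, a small speed dip amplifies as it propagates down a line of cars (this is the "phantom jam" phenomenon), while coordinated vehicles that all receive the lead car's braking signal at once can match it exactly. The overreaction factor below is an arbitrary stand-in for delayed human reactions:

```python
# Toy model of a braking wave moving through a line of cars. Human-like
# followers each brake a bit harder than the car ahead (factor > 1), so
# the speed dip grows geometrically down the chain. Coordinated cars all
# copy the lead car's dip exactly (factor == 1), so it stays flat.
def speed_dips(initial_dip_mps, n_cars, overreaction):
    dips = [initial_dip_mps]
    for _ in range(n_cars - 1):
        dips.append(dips[-1] * overreaction)
    return dips

human = speed_dips(initial_dip_mps=5.0, n_cars=10, overreaction=1.15)
coordinated = speed_dips(initial_dip_mps=5.0, n_cars=10, overreaction=1.0)

print(f"dip at car 10, human-like:  {human[-1]:.1f} m/s")        # grows down the chain
print(f"dip at car 10, coordinated: {coordinated[-1]:.1f} m/s")  # stays flat
```

In the real literature this is called string instability; the toy only illustrates the qualitative claim that shared, simultaneous information damps the wave that delayed individual reactions amplify.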
[00:35:55:23 - 00:36:15:07]
Mallory
I'm glad we hashed this out, because I never even thought about the idea of the AIs collaborating with each other and the cars moving as a swarm. I think that's a really good point. I mean, as a futurist, do you think there's any world in which all driving is not eventually done autonomously by machines or AI?
[00:36:16:12 - 00:37:05:18]
Amith
Well, I was very careful to say that when you're considering the function being transportation, getting from point A to point B, I don't think there's a place for human drivers in the not-too-distant future if you're going to consider safety, efficiency, et cetera. But I personally find driving to be one of the most enjoyable recreational activities out there. I love it. I've been driving forever, and there are a lot of people who feel that way. Not everyone does. I think I saw you shaking your head there. A lot of people are like, no, no, no, driving is purely an appliance. It is 100 percent about getting me from point A to point B. I have no relationship or emotional connection with my vehicle. And I get that. You view your car the way you might view a toaster.
[00:37:07:00 - 00:38:21:23]
Amith
But I have relationships with my cars. I love cars, and I will want to drive cars for as long as I am able to. I think there are a lot of people who feel that way; there's a whole passion around it. That's a different thing, though, and that's totally fine. The two can coexist; it's just a different concept. What we're trying to solve for is global transportation, and it's transportation of people, of course, but it's also transportation of goods. How do we get things moved around more efficiently? There's a lot of efficiency that comes from transporting things on container ships and obviously on trains. But when it comes to trucks, local trucks, and delivery, you have autonomous vehicles coupled with what people call the last mile of a solution. Oftentimes that refers to, hey, how do you get that product literally from the distribution center to the doorstep of the business or the individual's home? This is going to help a ton with that. Couple that with autonomous robots, and you're talking about the last few feet of the problem. It can lead to a lot of great things. It can also lead to a lot of the challenges we've referred to over and over. But I think the autonomous driving piece is a grand challenge, and it's one that I think we're very close to seeing solved.
[00:38:23:05 - 00:38:35:11]
Mallory
And I need you all to know as well that he's not joking about enjoying cars and long rides, because I'd say pretty much every year, Amith, you make the long drive from Louisiana to Utah in May and back at least once. Is that right?
[00:38:35:11 - 00:39:12:16]
Amith
Yeah, I do. And I don't mind it at all. In fact, I enjoy it. I look forward to it. I will say that the stretch of driving between Louisiana and Utah, particularly going through Texas, can get a little bit tedious after a while, because you don't see much other than road and tumbleweed for hours at a time. But once you get into some scenery, like up in the mountains somewhere, I find it incredibly invigorating, actually. And I love to stop in random places that I haven't been to before and explore some new town, or I'll see a sign that there's some site, whatever, and I'll go read the sign and stop for it. I just enjoy that. I find it to be a lot of fun.
[00:39:12:16 - 00:39:37:24]
Mallory
Yeah, and I would venture to say, because I've always known when you're making those trips, you know, it might be during a weekday, during the workday, that a lot of your good conversations and a lot of good ideas come out of your long drives, because you'll call somebody up and be like, hey, let's just talk about this thing. I would dare say maybe one of the Blue Cypress companies has come out of a long drive. So there's a lot of value in that, I think.
[00:39:37:24 - 00:40:12:17]
Amith
Yeah, I feel the same way. And it's fun. Look, I think this is one of the things that AI in general, and technology even more generally, gives us: opportunities to explore our passions and figure out what we enjoy the most, because they take the tedious aspects out of an activity. You say, well, this is kind of a combined function: it's potentially enjoyable, at least to some, but it also serves a functional purpose, like getting you safely from one place to another. You take away one part of it, which is perhaps the lion's share of what people do with a particular device, in this case the car.
[00:40:13:23 - 00:40:37:12]
Amith
You can still have other use cases for it. Another example, a little bit different, is programming. Most code can already be written by AI, and I can say that's true. But that doesn't mean the art of programming is going to be lost, or that people won't write code in certain cases, or that people might not write code not for commercial purposes but because they enjoy it. There are a lot of different things like that, and our perspectives are just going to have to shift. That's just life.
[00:40:37:12 - 00:40:38:11]
Mallory
Yep.
[00:40:39:12 - 00:40:50:13]
Mallory
Well, I want to move into the fun part of the episode. It's all been fun, but I think this is especially fun, because we're going to talk about some of the non-AI, well, actually, some of this is AI, but more of the quirky gadgets that were announced at CES.
[00:40:51:24 - 00:42:03:15]
Mallory
One, I'm not sure how to say it, is Kokomo, which is a fuzzy, egg-shaped robot pet that follows you around the house and warms to 89.6 degrees Fahrenheit when you hug it. During extended cuddling, it heats up to 102 degrees Fahrenheit. And wait, it's designed to combat loneliness, which sounds silly until you realize how many companies are now building products around that problem. I don't know, I wanted to share that one with you all. We've also got An An, which is an AI panda designed for elderly care. It remembers your voice, learns your preferences, reminds you about daily tasks, and can alert caregivers. I would say that seems pretty useful. We've got Razer's Project Ava, which is a five-and-a-half-inch holographic desk companion where you choose your character, an anime girl or a muscular guy, is what it says. It acts as a gaming coach slash life assistant that you keep right on your desk. We saw a robot vacuum that can actually climb stairs using two wheeled legs. It also jumps, apparently. And then we saw a giant Samsung TV mounted on a giant easel. To me that was not so impressive, but I know some people are into TVs.
[00:42:04:19 - 00:42:19:22]
Mallory
The companion robot trend is worth watching. As I said, multiple companies are positioning AI pets and emotional support robots as real products, not novelties. So, Amith, I wanted to ask: are you going to have a holographic work coach that sits on your desk with you? How do you feel about that?
[00:42:19:22 - 00:42:26:07]
Amith
I probably won't do that. I will say that you definitely got my attention with the 130-inch TV.
[00:42:26:07 - 00:42:34:20]
Mallory
I didn't even say the size. I just saw the TV, and I was like, I feel like they could have done better than a giant TV on an easel, personally.
[00:42:34:20 - 00:42:41:14]
Amith
I mean, it sounds pretty cool to me. I think that'd be pretty fun with the NFL playoffs starting this weekend.
[00:42:42:15 - 00:43:41:15]
Amith
That could be kind of cool. I don't know how big I want the NFL players to be on my screen; that's a good question. You know, I think the other thing to draw from all of this is that consumer device makers are constantly experimenting with all sorts of different things. So to the extent that you see something like Kokomo, this fuzzy, egg-shaped robot pet that warms up, and just blow it off as a silly, ridiculous thing, actually think about it. First of all, what you described is a real problem, and this is perhaps a solution, which is interesting. But if you think about something like that, or any of these other technologies you mentioned, the fact that they can exist also means there are applications for the underlying capability that aren't necessarily in that form factor. There are ways you can apply it, and you should be thinking possibly about your personal life, but also professionally, about your association's business and how you think about your strategy. So it's really good to keep an eye on this stuff, not just the things you think are intuitively interesting, but the stuff you think is ridiculous.
[00:43:42:16 - 00:44:22:00]
Mallory
Yeah, yeah, you nailed that. Well, everybody, CES 2026 is still going on, so more announcements may come, and we'll flag anything major on the pod. But I would say the through line is that AI is leaving the screen and entering the physical world: robots, cars, appliances, and companions. The gap between demo and deployment is closing, and we're seeing real shipping numbers, real production timelines, and live demos instead of edited highlight reels. What makes this moment different, I think, is that these advancements are compounding: better chips enable better AI models, which enable better robots, which generate more data, which improves the models. A huge flywheel.
[00:44:22:00 - 00:44:27:12]
(Music Playing)
[00:44:38:05 - 00:44:55:04]
Mallory
Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.
[00:44:55:04 - 00:44:58:10]
(Music Playing)