Summary:
Are we ready for a world where AI agents write our blogs and robots clean our homes? In this episode, Amith Nagarajan and Mallory Mejias dive deep into the rise of agentic AI and the future of robotics. They explore how digital workforces powered by open-source platforms like MemberJunction are already transforming productivity, especially in marketing. Then, they pivot to the physical world, breaking down NVIDIA’s fascinating vision of the “Physical Turing Test” and simulation-based training for robots. You’ll hear about digital twins, robot chefs, and what it all means for associations navigating this new frontier.
Timestamps:
00:00 - The Robots Are Coming
🎉 Thank you to our sponsor
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🤖 Join the AI Mastermind:
https://sidecar.ai/association-ai-mas...
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
🛠 AI Tools and Resources Mentioned in This Episode:
MemberJunction ➡ https://docs.memberjunction.org
Claude ➡ https://claude.ai
ChatGPT ➡ https://chat.openai.com
Groq ➡ https://www.groq.com
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence, and associations.
[00:00:14] Greetings, everybody, and welcome to the Sidecar Sync, your home for content at the intersection of AI and associations. My name is Amith Nagarajan,
[00:00:23] Mallory: and my name is Mallory Mejias.
[00:00:26] Amith: We're your hosts, and today we're gonna be talking about the robots. The robots are coming.
[00:00:31] Mallory: The robots are coming.
[00:00:31] Amith: How does that make you feel, Mallory?
[00:00:32] Mallory: You know, I go back and forth with "the robots are coming," because they freak me out a little bit, I think, which is understandable. However, when I think about robots making me a really good dinner at night, or robots helping me with the laundry, I'm into those robots. It's more the ones that have malicious intent.
[00:00:51] But I guess it's like, you gotta take 'em all, you know?
[00:00:54] Amith: Yeah, I kind of feel the same way. I think it'd be great to have some help around the house. I think, uh, there's lots of [00:01:00] great uses for 'em, but they are kind of creepy in a lot of ways too, at least for those of us who have been around a while, uh, you know, thinking about robots as sci-fi, when they enter the world for real.
[00:01:10] It'll be very interesting, uh, to see what happens, actually. I sense, though, that a lot of people, even those who have some hesitance, uh, might quickly find themselves adapting to the world where robots are doing a lot of their work and helping them out, when they find the utility in them. Uh, so we'll see.
[00:01:26] I think it'll be interesting. The other thing that'll be fascinating to see is the kids who are growing up with robots from the very beginning. So kids that are born in the coming 10 years, that grow up with robots, they won't know a world that didn't have them, right?
[00:01:39] Mallory: You know, I think every individual goes through this phase of, you know, when I'm older, hopefully, if I'm lucky enough to get older, like, what will be the things that I talk about to younger people?
[00:01:49] And I guess that'll be one of mine, you know, "when I was little, we didn't have robots," and they'll say, "okay, grandma." You know, it's funny. It's hard to imagine a world with robots; even though I [00:02:00] feel like we're always looking ahead on this podcast and at Sidecar, it's very difficult for me to envision that world, even though it's probably quite close.
[00:02:08] Amith: Well, I started my first software company before the Netscape browser came out, so most people had no idea what the internet was, even though it technically existed for some years before then. So, uh, yeah, there's a lot of these transitions that I can relate to, although the physical manifestation of AI, uh, in your home potentially, or even in your workplace, I think is gonna be different.
[00:02:27] It's just a different animal, so to speak, than anything we've ever adapted to. So, quite fascinating.
[00:02:34] Mallory: I remember you telling me, and maybe you've talked about this on the podcast, but you telling me kind of the formation story of your old company, Aptify, and how, I think, you used to have to mail things to people, like, if they bought
[00:02:48] Aptify, or whatever it was called at that point. Like, you mailed an actual, I don't know, disk? See, I can't even talk about it because I wasn't there, but it sounds like a different world, Amith.
[00:02:57] Amith: Yeah, Mallory, there used to be these things called disks that [00:03:00] we would mail people, and, uh, man, they'd stick 'em into the drive in their computer, and, not even upload,
[00:03:06] they'd, like, you know, copy it to their computer and they'd run the program. And before we had apps, we had these things called programs, which of course, apps and programs are the same thing. But, uh, yeah, there was a lot of that. So, uh, you know, I think that's a reflective moment, but it's also an opportunity to think about, like, well, what kind of adaptations will we all make?
[00:03:24] Um, mm-hmm. You know, we've talked about agents a lot on this podcast in the past, and interestingly, I think the agentic AI world we talk about, um, in the world of bits, right, the digital world, also is super important. And we'll get into that in this episode, in the world of physical
[00:03:42] AI, which is how robots, you know, really add value. But, um, agents are a big thing for everyone to be thinking about. As a quick recap for those that are newer on their journey with us, it's where AI takes action. That's how I like to describe agents: AI models, AI tools, ChatGPT and [00:04:00] Claude and so forth, you can talk to them, they can think, they can do a lot of really great work. But if you want them to, let's say, take that great blog post you collaborated with Claude on,
[00:04:08] and post it to your WordPress website, uh, or post it to your HubSpot CMS, that's called an action, or tool use. And, uh, you can start to do those things actually in consumer AI like Claude and ChatGPT, which is really exciting. We've talked about MCP servers in the past, and that's the way you connect
those types of AIs to various tools. But what if you could do that at scale? What if you could build your own workforce, a digital workforce, and each of these agents had a capability set that you got to define? You got to define their job description, you got to define their standard operating procedures and processes, and you got to define which tools they had access to.
[00:04:47] Well, that digital workforce could then actually take action and do a lot of work. And that's really what agents are all about. And of course, robots, uh, take that idea of agentic behavior into the physical world.
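The "digital workforce" Amith describes, agents defined by a job description, standard operating procedures, and an explicit set of permitted tools, can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not the MemberJunction API; the class names and the `post_to_wordpress` stub are made up for the example:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of "define the job description, SOPs, and tools" --
# not the MemberJunction API, just the shape of the concept.

@dataclass
class Agent:
    name: str
    job_description: str          # the role prompt, written in plain English
    operating_procedures: list[str] = field(default_factory=list)
    tools: dict[str, Callable] = field(default_factory=dict)  # actions it may take

    def can_use(self, tool_name: str) -> bool:
        # An agent only acts through tools you explicitly grant it.
        return tool_name in self.tools

def post_to_wordpress(title: str, body: str) -> str:
    # Stand-in for a real CMS action (e.g., one exposed via an MCP server).
    return f"posted: {title}"

copywriter = Agent(
    name="copywriter",
    job_description="Write clear, on-brand blog posts for association audiences.",
    operating_procedures=["Draft", "Self-review against brand voice", "Hand off to editor"],
    tools={"post_to_wordpress": post_to_wordpress},
)

print(copywriter.can_use("post_to_wordpress"))  # True
print(copywriter.can_use("send_invoice"))       # False: not in its capability set
```

The point of the sketch is the capability boundary: the agent can think about anything, but it can only act through the tools its "job description" grants it.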
[00:04:59] Mallory: Well, [00:05:00] before we dive into the physical world, Amith, you shared a video demo recently internally, so it's not publicly available yet, of this agent platform that I think is built on top of MemberJunction.
[00:05:12] Is that correct?
[00:05:13] Amith: That's right.
[00:05:14] Mallory: And it was like a 15-minute demo. Blew my mind. Everything Amith just talked about. I know a lot of our listeners are probably thinking, okay, one day, right? One day we'll have a team of agents that can do this work for us. Well, that one day is, um, here, tomorrow, maybe, maybe next week, because that's what happened in the demo.
[00:05:29] Amith, I'll let you, if you can, talk a little bit about what you shared. Um, I'm obviously more so fixated on, like, the marketing agent demo that you showed, which I can talk about, but I wanna give you a chance to share with our audience.
[00:05:42] Amith: Sure. Um, I'll, I'll share a couple thoughts. I'd love to hear your, your thoughts on the video too.
We shared internally across our team, across all of Blue Cypress, the little demo video Mallory mentioned. Um, you know, a lot of the work that I do across our family of companies is, you know, the what's-next kind of conversations, and building and helping build, uh, software and [00:06:00] frameworks and tools that will empower, you know, the next 10 projects that we put into the world, or how we help clients that are
[00:06:06] really looking ahead. And agentic AI has been on the agenda, and something we've been thinking about for actually quite a few years. Uh, we're just at the point now where the underlying AI models are smart enough to power really good decision making. So the prerequisite for agentic AI, and there's several of them, but the most important one, is that the science has to catch up with the idea.
[00:06:25] You have to have AI models that are really smart, that you are comfortable handing over key day-to-day decisions to. Otherwise, if they're coming back to you constantly to say, hey Mallory, can I do this, can I do that, uh, that's like having, um, an employee that asks you to approve every single little thing they do.
It's not super helpful after a while. Um, and so, uh, what we really need is a smart underlying model, and also models that are scalable in terms of cost and speed. Uh, so we're actually utilizing a lot of open source models, inferencing on Groq, for those of you [00:07:00] that have heard us speak about Groq before; they're the fast inference provider.
[00:07:03] Uh, we use a lot of their stuff, and, uh, we're using a model called Qwen 3, which is a really powerful reasoning model, approximately at the level of power of a Claude 4 Sonnet or a GPT-4o, GPT-4.1, kind of in that caliber. It's not at an o3 Pro level, um, but it's good enough for a lot of the day-to-day decisions that agents have to make.
[00:07:21] So, coming back to the idea, uh, as I started to describe it earlier: um, when you think about agents, think about a workforce, think about building your own team. So for marketing, uh, what we did in our demo, and by the way, everything I'm describing is part of MemberJunction, which is a free, open source software toolkit that you can download and use yourself.
[00:07:39] Um, so it's, uh, something we're doing for the association community, the nonprofit community, and really anybody else who wants to use it. It's available on GitHub. You can go to docs.memberjunction.org and you can learn about it, and, uh, the software is out there in the world. So, um, the idea basically is this:
say you had a marketing team, and you wanted the marketing team to have several capabilities: you need to [00:08:00] be able to create content like blog posts, uh, perhaps ads on platforms like Instagram or others. Uh, maybe they need to be able to create social media strategies. Perhaps there's a paid media element.
[00:08:11] Uh, there's also a strategy element of what your marketing team does, and so you want the marketing team to be able to achieve certain types of deliverables or outcomes. And when you build a team like that, you might say, okay, well, who are the best people for each of the roles that
might need to exist? Like, what are the roles? And so a marketing team might have, like, a copywriter expert, someone who's incredibly good at writing copy. You might have a world-class, uh, brand strategist to help you think deeply and empathetically about your audience and their pain, and, uh, where the market is moving, and the kinds of things that are necessary.
Uh, you're probably gonna have people who do editing. You're probably gonna have people who do social strategy, and the list goes on and on. And so the idea is, you simply define prompts that characterize, um, the role and the responsibilities and the capabilities that you want each of these team members [00:09:00] to have.
[00:09:00] Just like you would write a job description, and perhaps a policy or process, you know, a standard-operating-procedure kind of thing. If you were training an employee, you might do something like that, or maybe just, like, you know, record a video and share it with them. You do the same thing with the AI, and then you have a team.
[00:09:14] Then that team of people, quote-unquote people, uh, are available to just do whatever work you want them to do, and this marketing agent comes to life. And in the demo, what I showed is the output of that, which is to ask the marketing agent to perform some fairly complex tasks. And what it does is it automatically invokes sub-agents to say, oh, I need to write the copy, then I need to have the editor review it,
[00:09:34] I need to do SEO work on it, I need to do a number of other things. So that's the basic idea: to not tell the AI, uh, you know, what you're doing from the perspective of, like, a little small task, but, like, a bigger-picture objective. And then that AI, this agentic framework, keeps iterating. It keeps looping and saying, did I solve the problem?
[00:09:55] And it keeps working on the problem using all of its tools, all of its [00:10:00] sub-agents, its cast, essentially, to ultimately solve the problem. That's the idea behind the MemberJunction agent framework. Uh, and then we can plug in models from any providers, um, into this architecture.
So it's pretty exciting. It's designed to make it possible for associations to build their own agents. That's the whole goal. Um, it's maybe a little bit technically involved. It's not coding, it's not programming, unless you consider writing prompts coding; I guess it is a form of, uh, programming, but you're writing in English, or if you prefer another language, do that.
[00:10:28] So it's, it's pretty exciting.
[00:10:31] Mallory: It was amazing to see it play out. As you all know, I'm pretty heavily involved with blog writing for Sidecar. I'm very open about the fact that I use generative AI throughout that process. However, I feel like even though I'm using Claude primarily to write these blogs, they still take me, you know, sometimes
[00:10:50] more than two hours for a single blog, depending on how much I'm going back and forth: how much, I don't really like this angle, can we pull in this, maybe I'm doing a little research as well. So it's not ever, like, write this [00:11:00] blog, I get the blog, and we post it, uh, at least at this point. So with what Amith was talking about, there was a marketing agent, and then there were these sub-agents, so I think it was copywriter, SEO, brand guardian,
[00:11:11] editor, maybe, like, one other one. And all you did was prompt the marketing agent: um, I need to write a blog about X, Y, Z. I think what's really neat was what you pointed out, Amith, the ability for the AI itself to invoke the sub-agents. So it's not like it was pre-programmed to first run by the copywriter and then the editor and then the brand guardian.
[00:11:32] It was deciding to do that all on its own. Basically, I just would've done all of those steps myself with Claude. So I would do the copywriting, then I'm doing the editing, I'm guiding the SEO. So seeing that process play out pretty much automatically was really neat. Uh, I definitely had a moment of a sigh of relief to think, wow, that's coming soon, right?
[00:11:54] I don't need to spend several hours on a single blog with AI in the near future. I'll also admit, though, [00:12:00] there was a moment of fear too, where I thought, well, you know, that's my job; that's the thing that I spend a few hours working on. Um, so I can acknowledge that as well. I think it's pretty much how I feel with everything we talk about on the podcast.
[00:12:13] Excitement, fear as well. Because what if you do have a team of copywriters in your association? What does that mean? Now they're gonna have a lot more time, I can tell you that. Right? What are they gonna do with it? You need to be thinking of that.
[00:12:26] Amith: Totally. And I think, you know, even if you have smart, creative people, of course everyone's a little fearful of this.
[00:12:32] I am as well. I mean, I'm building this stuff, but it freaks me out sometimes, because it is a lot of jobs. It's a lot of, uh, a lot of things, right? Um, but it is happening, and it is possible. And if it's possible, it will happen, and you might as well be on the right side of that equation. And what you wanna do is teach those copywriters how to use this so they become
a hundred-x copywriter. Um, a few weeks ago, up here in Utah, I had a group of, uh, 10 of our development folks from across a variety of our companies, [00:13:00] and we had this hackathon. And I mentioned this on a pod, uh, earlier, uh, that we probably had, like, a hundred-x productivity gain, because we're using a lot of agentic AI for coding: Claude Code, other tools.
[00:13:11] It's just unbelievable. I mean, seeing this, this is, again, going back to being really old at this point, the concept of when I started my first company and how slowly things happened. And we thought it was going pretty quick back in the day, but, you know, compared to now. So seeing that in my career is mind-boggling.
[00:13:27] And it's a little bit scary, but it's super exciting, because I look at it and say, okay, well, yeah, we were all worried about, like, you know, we spent 15 hours of labor to produce one article. Well, what happens when the AI can frankly do just as good of a job, or perhaps even a better job, and you spend 10 minutes of labor
just to review it and approve it before it goes to the website, um, and you get things that you wouldn't have gotten? So the example I was running through when I was testing this was a recent article that I wrote with help from a number of Sidecar team members, uh, comparing the situation in the late 18[00:14:00]hundreds, with, uh, the Edison and Tesla argument, essentially, over, uh, the type of current to use for the electrical grid, the AC/DC wars. I, uh, I wrote a whole article about this and, you know, talked about the band as well, and it was super fun to write 'cause I just went to an AC/DC concert with one of my kids.
[00:14:17] And, um, that took me, I don't know, I probably spent a total of four hours on that project, between thinking about it, talking to Claude, working with team members, refining it, and I think we have a really nice piece. I'm proud of it. I like what we wrote. Um, we're trying to illustrate that,
[00:14:32] listen, you know, Thomas Edison was the one who was sticking with the old technology, basically, and we're trying to compel people to say, hey, just because you have someone of that stature, of that brilliance, of that wealth, of that importance in society, doesn't mean they're right. You have to rethink things.
[00:14:46] Well, of course, I wrote a brief and I shared that with our marketing agent, and I said, come up with a bunch of different angles. And it came up with several versions of the article, all these different lines of thinking, really cool ways of tying in [00:15:00] different songs from the band into the piece that we didn't do, um, some really creative ideas.
[00:15:05] So it was super cool to see, and I'll certainly be using this a lot going forward. But the idea then is to say, okay, now the agent is essentially a reusable, um, piece of your infrastructure. Where else do you use it? How do you plug it into other processes? You can do this for member service. You can do this for, um, education processes.
[00:15:23] You can do this in a lot of areas; anywhere where there's thinking, decision making, and action, you can plug in agents. So it's an exciting time, and we're super happy to put this out there as an open source, free software tool that anyone can use. I'm hoping that we get a ton of adoption with it. Uh, we're obviously here to help as well, but, um, we wanted to make this part of that AI data platform framework we've put in the market so that there's no cost barrier to people using this, 'cause it's literally free and owned by the community.
[00:15:52] Mallory: Now we're moving into physical AI. So we're diving into a fascinating concept from NVIDIA about the future of robotics and what they call [00:16:00] the physical Turing test. We'll explore this new benchmark for AI, how simulation is revolutionizing robotics training, and NVIDIA's vision for a physical API that could transform how businesses operate in the physical world.
[00:16:13] This does get into some robotics techy territory, but I think you all will find this as mind-bending as I did. So this framework comes from Jim Fan. Amith shared with me a great video from YouTube; we will be linking that in the show notes. He is NVIDIA's Director of AI and one of the leading voices in what's called embodied AI,
basically AI that can interact with the physical world through robots. To understand his breakthrough concept, we need to start with the original Turing test, proposed in 1950, which measures whether a machine can engage in conversations that are indistinguishable from a human's. Well, you might be thinking we've largely conquered conversational AI. ChatGPT,
[00:16:52] Claude, and other large language models can pretty much fool people in conversations, at least for a while. But Jim Fan [00:17:00] introduces a completely new challenge: the physical Turing test. So instead of judging conversation, it asks whether a robot can perform real-world physical tasks so seamlessly that you can't tell if a human or robot did them.
Youtube Video: So a couple days ago, I saw a blog post that caught my attention. It says, we passed the Turing test and nobody noticed. Well, the Turing test used to be sacred, right? It's the holy grail of computer science, right? The idea that you can't tell the difference between a conversation from a human or from a machine.
[00:17:35] And then it just so happens that we got there. We just got there, and, you know, like, people are upset when, um, o3-mini took a few more seconds to think, or that Claude is not able to debug your nasty, nasty code, right? And then we shrug off every LLM breakthrough as just yet another Tuesday. You guys in the room are the hardest crowd to [00:18:00] impress.
[00:18:01] Mallory: Fan provides this example: imagine you come home after hosting a big party. Your house is a complete disaster. Dishes everywhere, spilled drinks, food on the floor. You leave for a few hours, and when you return, your house is spotless and there's a beautiful candlelit dinner waiting. The physical Turing test question is: could you tell if this was done by a human housekeeper and a chef, or by robots?
[00:18:24] This reframes robotics around useful, ambient intelligence rather than the flashy demos we've seen in recent times. So think about what this requires: spatial reasoning, delicate manipulation, understanding context, adapting to unexpected situations. Current robots are not anywhere close to this. Uh, most can barely navigate around a banana peel on the floor, let alone clean up after a dinner party and prepare a meal.
But if robots could pass this test, it would fundamentally change how we live and work. Uh, Amith, as we get into this conversation, the physical Turing [00:19:00] test, I just have to ask you: do you think we've passed the original Turing test? Do you think we're good on that? Gold star?
[00:19:07] Amith: I don't know how we could not have passed it at this point.
[00:19:09] You know, I think on multiple different levels, whether it's just text, or if it's audio, or even, frankly, video, I think we're at the point where AI outputs are indistinguishable from human. Frankly, in many cases they're so much better. And that might be one way to tell: like, yeah, actually, I don't normally get this great of a piece of writing from
[00:19:27] Mallory: Yeah.
[00:19:28] Amith: You know, a person. But, uh, yeah, I think we're pretty far past that at this point.
[00:19:32] Mallory: You mentioned, when you shared this with me, uh, incorporating this idea of robotics and Jim Fan's talk into your digitalNow keynote, which, I don't know if you will; we're a few months out from that at this point. What sparked that for you? For you to put it in your digitalNow keynote means you think, ah, this is important.
[00:19:50] This is an area that we need to keep a close eye on. So can you talk a little bit about that?
[00:19:54] Amith: Well, I think, you know, the physical world is obviously important. It's, uh, it's what we inhabit, and, uh, digital is great, [00:20:00] but, you know, we're creatures of the physical world. And that's true for all of us.
That's true for all of our members. It's true for everything we do in our personal lives, and certainly in our work. And so, whether an association itself is going to be deeply utilizing robotics in the operations of the association, which I think there's plenty of opportunities for, um,
[00:20:19] whether or not they are, their members are likely to have something happen in their profession or sector where robotics are relevant. So for the association to be aware of this, and to understand it, and perhaps to even embrace it in some ways, I think is an important concept. I don't think a lot of people are talking about this.
[00:20:36] It's viewed as, oh, that doesn't matter to us; we're digital workers, we're not physical workers, so robotics is more about, like, construction, or, uh, industrial settings, or warehouses, and we're not super into that, so we're not gonna really pay attention. So my thinking was, and I still think this will be the case, that come November 2nd through 5th in Chicago, when we're out there for digitalNow, uh, I will be speaking at least [00:21:00] in some detail about robots.
And hopefully we'll have some interesting things, uh, ready to show, not so much the things we're building, obviously, but just ideas that we think could be relevant for a lot of associations.
[00:21:10] Mallory: Mm-hmm. I also just wanna add a note here; this just came up for me, but I realize, when it comes to autonomous driving,
[00:21:18] perhaps you could argue one way or the other, but you can't really tell, like, a Waymo driving on the street versus a human driver. Uh, Waymo, if you all don't know, is an autonomous driving company; I think it's under Google. I took my first Waymo in San Francisco a few months ago, and I talked about that on the podcast.
[00:21:38] I actually prefer it to human Uber drivers, which might be a little controversial, but I felt quite safe in the Waymo. Um, so it seems like maybe the physical Turing test in the realm of driving is a little bit more fuzzy, but this is more so talking about, like, every other aspect of the physical world, you could say.
[00:21:57] Does that sound right?
[00:21:58] Amith: For sure. Yeah. And the ability [00:22:00] for an AI, uh, in the form of a robot, to do some of the things you mentioned, like the dexterity of dealing with, um, very fragile or, uh, you know, movable objects, things that change shape as you touch them, things we don't really think about because they're no big deal to us; there's a lot of progress happening in all these areas, as we're gonna be talking about.
[00:22:20] But, um, I think it's just a general-purpose robotic capability to achieve the outcome that you want. Right? You mentioned not so much, like, watching the robots do their work and being wowed by it; that's super interesting, but, like, will the outcome be what you want? Will your house look the way you want it to look when you get home a couple hours later?
[00:22:37] And will it be indistinguishable from having had a crew of 10 humans doing that work? Um, so that's interesting. I think that there's so many other layers to this too. Um, you know, we think about the human labor displacement issue, which we've touched on here, and we touch on in pretty much all of our podcasts, but the other side of it is the unmet demand.
[00:22:56] You just think about, for example, senior care, and you think about [00:23:00] the number of people that need more in-home help, or help in institutional settings, that don't have it, and the ability for robotics to both dramatically improve the quality of care and take a literal load off of
[00:23:15] a nurse's back. That's an amazing opportunity, right? And that's not in the home, but it's an opportunity, uh, to think about this. If we can scale it and make it work at a level that's at least as good as us, that's exciting.
[00:23:24] Mallory: I wanna talk a little bit next about the data training bottleneck that we've experienced in the past with robots. So unlike language models, that can train on pretty much all human text available digitally, robots face a massive data scarcity problem.
[00:23:40] The data robots need, how much force to apply when gripping an object, how to navigate obstacles, coordinating movements, none of this exists on the internet. It has to be collected manually through teleoperation, where humans control robots using VR to demonstrate tasks. So a human operator might spend an entire day [00:24:00] generating just a few dozen training examples,
[00:24:03] compared to language models processing millions of text examples in a second. But NVIDIA's breakthrough solution is simulation at massive scale. So instead of collecting real-world examples one at a time, they create detailed virtual worlds where robots can practice millions of scenarios. Think of it like a hyper-advanced video game engine for training robots, running thousands of simulations in parallel on GPUs.
[00:24:28] The key breakthrough is domain randomization: intentionally making simulated worlds varied and unpredictable to force robots to learn robust, generalizable skills. Nvidia uses three approaches that I think have some fun names: digital twins, which we've talked about on the podcast before,
[00:24:47] digital cousins, and digital nomads. Digital twins are exact replicas of real robots and environments, perfect for testing specific scenarios. Digital cousins use generative [00:25:00] AI to create similar but varied environments. So instead of one kitchen layout, you might get thousands of different configurations.
[00:25:07] Digital nomads are the most advanced approach, using video generation to create completely imaginative worlds where robots practice skills that somehow transfer to real-world tasks. The goal is zero-shot transfer: robots trained entirely in simulation that can perform tasks in the real world without additional training.
[00:25:29] Jim Fan gives an example of a robot dog that learned to balance on a ball in simulation, then immediately succeeded on a real ball. This works because simulation forces robots to learn underlying principles rather than memorizing specific scenarios. This breakthrough essentially solves the data scarcity problem that's been holding back robotics for decades.
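As a concrete illustration of what domain randomization looks like in code, here is a minimal Python sketch: each simulated episode samples fresh world parameters, and the "policy" learned is simply the worst-case grip force seen across all of them, so it holds up in any world from that range. The function names, parameter ranges, and toy physics are all invented for illustration; this is not NVIDIA's actual tooling.

```python
import random

# Toy sketch of domain randomization: every training episode gets a
# freshly randomized world, so the learned behavior cannot overfit to
# one fixed environment. All names and numbers here are illustrative.

def randomized_world():
    """Sample physical parameters for one simulated episode."""
    return {
        "object_mass": random.uniform(0.1, 2.0),  # kg
        "friction": random.uniform(0.4, 1.2),     # surface friction coefficient
    }

def required_grip_force(world):
    """Toy physics: force needed to hold the object without slipping."""
    g = 9.81  # m/s^2
    return world["object_mass"] * g / world["friction"]

def train_policy(episodes=10_000, seed=42):
    """'Learn' one grip force that works across all randomized worlds by
    tracking the worst case encountered in simulation."""
    random.seed(seed)
    worst_case = 0.0
    for _ in range(episodes):
        world = randomized_world()  # a new "digital cousin" each episode
        worst_case = max(worst_case, required_grip_force(world))
    return worst_case

if __name__ == "__main__":
    print(f"robust grip force: {train_policy():.1f} N")
```

Training across enough randomized episodes converges toward the theoretical worst case (a heavy object on the slipperiest surface), which is the same intuition behind a robot dog that balances on a real ball it never saw.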
[00:25:51] Amith, the simulation approach seems like a fundamental breakthrough, and to me it's very linked, almost [00:26:00] interchangeable, riding on the coattails of generative AI. I feel like you've been talking about this truly for years at this point, but can you talk a little bit about the idea of exponential curves kind of compounding and combining, and why gen AI was a necessary step to get to this breakthrough?
[00:26:17] Amith: For sure. Uh, you know, I think exponentials are uninteresting until they're extremely interesting, right? Because they look like a flat line until they look like a vertical line. And so that's where we're at now. We're seeing these things happen: the convergence of exponentials algorithmically in terms of AI progress at the neural network level, but also the amount of data we're capturing, the processing that we're throwing at it.
[00:26:37] Nvidia, obviously being a manufacturer of GPUs, uh, has access to resources that are enviable, and they're able to do a lot in this way to set up environments and simulations that are super, super cool. Um, and so I think it is exactly this: it's the exponentials that are converging that are driving this kind of capability that wouldn't have been possible even a couple years ago.[00:27:00]
[00:27:00] It might be helpful to kind of, uh, rewind in time through even the last few decades in terms of what's been happening with, broadly speaking, this idea of spatial intelligence. Where we think about, like, language intelligence, we think about, you know, the predecessor to when humans
[00:27:15] started speaking and certainly writing was seeing the world. You know, for 500-plus million years, organisms have evolved sight, and through sight there's been this explosion of capability in the three-dimensional world. Um, and it reminds me a lot of the things that Fei-Fei Li, who's the founder of a company called World Labs, a Stanford professor, famously known for
[00:27:38] ImageNet, talks about. A lot of her talking points relate to what NVIDIA's talking about: there's a need for spatial intelligence at scale in order to drive the next generation of AI to truly understand the world. And some of it's like, you know, the physics engine of what you'd find in an advanced video game tool or virtual reality tool.
[00:27:56] But it's the data problem you're describing. Let's take that little, [00:28:00] uh, brief trip backwards in time for a moment. Think of the time around 2009, when Fei-Fei Li started this project called ImageNet. And for those of you that are unfamiliar with it, at the time it was a very ambitious but little-known project in the world of AI, where what she was doing was trying to catalog the world's images, essentially.
[00:28:24] So if you kind of think about what had happened in the 10 years preceding '09, you had the internet come to scale with lots of consumer users, and you also had digital photography, so people were taking lots of pictures of the world, right? So there were lots of photos of cats and dogs and mountains and homes and people and so on and so forth.
[00:28:41] And the problem she was trying to solve for was, hey, can we crowdsource algorithms that can do object detection in images? So, can I tell you, Mallory, that this image is a picture of a dog or a cat, or a dog sitting on a sofa, which would be a more advanced version of just the individual [00:29:00] objects?
[00:29:00] Mm-hmm. Just like dog, sofa, that'd be the simple one. Dog sitting on a sofa, or dog relaxing on a sofa, is a little bit more advanced. But she was trying to solve for, like, can AI have a usable level of accuracy in object detection? So what she did was she and another researcher at Princeton set out on a goal of cataloging
[00:29:18] and forming a taxonomy, essentially, of all the images that were available online. And she was able to do that through working with Google, uh, getting Google images, downloading them, and then using people to tag these images as training sets. So there were millions and millions of images, and they paid thousands of people all over the world to tag these images.
[00:29:39] And so that formed this catalog. And ImageNet was both a dataset that was open-sourced to the world, that AI researchers could use to try to solve this object detection problem, and also a competition. So the crowdsourcing competition piece of it is super cool. She said, hey, we're gonna have an annual contest where researchers can submit their entries and we're [00:30:00] going to, uh, have a different test each year.
[00:30:02] So the dataset kept growing, but she would say, okay, well, the actual test we're gonna do is gonna be different. So kind of like what you're saying: in simulation you could test against whatever, but in the real world... her real world was a test set that was different each year.
[00:30:16] So people didn't know the test. How good is the AI at solving problems of, in this case, very simple object detection, detecting a cat in a photo to be a cat correctly, at a high percentage? Right. It started off, I think they were like 30% accurate, then 40% accurate, and they were, like, kind of stuck for a couple years, 2009, '10, '11, and they had a lot of people interested.
[00:30:37] The problem was super interesting, um, but, you know, nothing really happened for a while. And then 2012 came along, and in Florence, Italy, they had their symposium, and they had a bunch of people come together. And there was a presentation that they had reviewed prior to the actual in-person event, um, from a team in Canada that had submitted something called AlexNet.
[00:30:58] It wasn't called AlexNet at the time; [00:31:00] SuperVision was the name of the submission. And this was using actually a technology from the 1980s called convolutional neural networks. And CNNs had existed for years and years and years. The problem was they didn't have enough data.
[00:31:15] The problem was they didn't have enough compute. And so the key innovations were Fei-Fei Li's work on cataloging the world's images at the time, and the team led by Geoff Hinton in Canada, who had developed AlexNet, which was the first real example of deep learning where they were using more than one GPU in parallel to get the kind of compute needed
[00:31:36] to be useful. But the algorithm was only slightly tweaked from the original algorithm written about in the eighties. Um, so what happened was AlexNet blew the world away, because now all of a sudden you had very high accuracy of object detection. And in the subsequent couple years, one problem after the next started to fall, because this deep learning technique was generalizable and usable across [00:32:00] domains.
[00:32:00] So I take us back in time simply to point out a couple things. One is that the exponentials sometimes make old ideas new again, and we're seeing that to be the case now. Simulation is not a new idea. We've been doing it for decades with computers, but it's becoming more possible at scale, and more and more possible for corporations:
[00:32:18] right now, big ones like Nvidia with lots of resources; soon, lots of other people, because of the exponentials at work. And it means that we shouldn't discount old ideas just because CNNs didn't work in the eighties or the nineties or the early two thousands. They did start to work again in 2012.
[00:32:35] The concepts there then changed the world. That led to the transformer architecture, which is what all modern language models are currently being built on architecturally, which is, you know, kind of a descendant of that work. So, very exciting stuff. I would say that in, you know, 13 years, we've gone from being amazed that we could detect a cat in a photo to what you just described, which is being able to, in real time, you know, work in four dimensions, right?
[00:32:59] 'Cause the fourth dimension [00:33:00] is time. And so we're talking about the three-dimensional world through time, because the robot has to work in real time and manipulate the world. Being able to do things like you're describing, and then not only understand scene detection, but understand an outcome and objective... you need all these crazy things to come together, right?
[00:33:16] All this compute, all this data, all of these capabilities that are truly science-fiction-type ideas brought to life. I think within the next handful of years, you know, by the end of the decade, um, we will see this stuff, like, literally in our homes and in our offices. So I hope
[00:33:33] that was helpful in terms of the background. But when I think about the exponentials in just the last 13 years, going from being amazed at detecting a cat in a photo to what you just described, you know, this robotic dog being able to balance on a ball immediately with zero-shot transfer...
[00:33:49] You know, we're there. We have the compute, and the compute's increasing; we have the data, and the data's increasing; and the simulation you're describing is generating that missing piece, that data. So I find [00:34:00] this extremely exciting.
[00:34:02] Mallory: That was a fantastic history lesson, Amith. I mean, you knew a lot of detail. I'm very impressed.
[00:34:08] Uh, it's hard to believe that was just, what'd you say, 12, 13 years ago that we were impressed by simple cat image detection. And also, just as a note, it's funny how quickly we humans adapt as well, because, like, I tell you, as soon as the new model's out, I'm like, ah, you know, it's not as good as I thought it was when it first came out.
[00:34:29] Claude 4? It could be better. So it's really crazy how quickly, as humans, we say, um, this is our new benchmark, and how can we get past it?
[00:34:37] Amith: Totally. Yeah. And, you know, Fei-Fei Li, I'm a giant fan of her work. I think she's an amazing lady. She's done just unbelievable stuff. Uh, she's got just a really good mindset around it.
[00:34:47] I hope her new startup World Labs is a super big success. And, uh, I happened to have read her book two or three years ago. I think I talked about it on the pod. It's a book called The Worlds I See. It's [00:35:00] extremely well written. It's her history as an immigrant to the United States: coming here as a teenager, having, like, zero ability to speak English, learning the language growing up, uh, with really challenging, you know, situations, working through that, running a laundromat while she was figuring out how to pay for her education as an undergrad at Princeton, and then all the things she did professionally along the way.
[00:35:18] It's beautifully written and I highly recommend it. It's a great story, and it's just a really interesting set of insights in terms of innovation, so relevant to what is going on with this story about NVIDIA and robotics.
[00:35:30] Mallory: Mm-hmm. Well, if any of our listeners happen to know Fei-Fei Li, we would love to chat with her on the Sidecar Sync podcast or get her to digitalNow,
[00:35:37] so please, please let us know. Amith, one more question here on this topic. Uh, we've talked about digital twins on the podcast before. Do you think there's any value in the concept of digital cousins or digital nomads for associations, and how something like that could impact their business model, or could be something that they use, um, to make [00:36:00] predictions and whatnot in the future?
[00:36:01] Amith: Totally. Yeah. So digital twin being an exact replica, a digital cousin being similar, but dissimilar enough to be interesting in terms of seeing if the skills transfer and then a digital nomad being so different that it doesn't necessarily resemble the original, but it is perhaps something that you might find in the real world.
[00:36:17] And these concepts of, like, something that's exactly the same, somewhat similar, and totally different are very useful when we think about scenario planning. Um, and we think about the idea of what a digital twin for an association would look like. It's different. It's not necessarily the physical manifestation of the digital twin, Mallory, of your kitchen,
[00:36:36] um, and making sure the robot is trained on your kitchen. Or then how about my kitchen, which may be a digital cousin? Or what about, like, you know, an alien kitchen that might be a digital nomad, something totally outside of the realm of what you might find in the training set? Uh, but if you think about it in a little bit more abstract terms, the digital twin is a replica of a system.
[00:36:55] Your kitchen is a system of sorts, right? It has a physical manifestation, but it has certain capabilities, [00:37:00] certain functionality. Uh, can we think about this concept of using these simulation models, essentially, in other ways? So what if we were to model every aspect of your association digitally, in terms of your annual conference, let's say, which is a portion of your events, and you were to say, hey, what would happen
[00:37:20] If we had more sessions of this type and fewer sessions of this type, what would happen if we used a totally different format? Um, how would people behave? And part of the digital twin is also modeling each of your members, perhaps individually a digital twin of them to the extent you have the data to say how would they behave in this scenario?
[00:37:36] Um, and then the digital cousin might be, you know, various flavors of that, right? So the digital twin would say, hey, this is exactly what we did in 2024 for our events. Okay. Uh, and then the cousins would be similar versions, and then you might randomly generate totally alternative realities and see, well, what would the outcome be?
[00:37:51] Because the goal is to figure out, like in the case of robotics, you're looking for skill transfer of what you've trained in the purely digital world. Um, the [00:38:00] analog here is you have a scenario where you're saying, oh, well, you know, we wanna see what would happen if we modified our current plan. We wanna have a simulation, a forecast, if you will, of what would happen if we did these other things.
[00:38:11] So I think the concept is super applicable. Um, these ideas are not new. Uh, they've just all historically been very low-resolution, meaning the digital twin would not really look like you. It would look more like a Minecraft character than a picture of a person. Uh, but that's changing quickly. We're coming into focus, coming into deeper resolution rapidly, and the compute,
[00:38:31] the AI intelligence, and all of this is coming together through this exponential convergence. What I'm describing is probably still out of reach, except for maybe the largest associations who wanna throw a few million dollars at a project like this; that's probably what it would take right now to do it at a scale of any usefulness.
[00:38:46] But very soon, that won't be true. Next year it'll probably be a few hundred thousand dollars. The year after that, it'll probably be, you know, something you can do with consumer AI tools, probably, who knows? But, um, again, question your assumptions over and over and look to these [00:39:00] examples and say, well, how does this apply to me?
[00:39:02] Maybe you don't deploy robots in your workforce, but maybe the concepts, as you point out, of digital twins or other pieces of this could be super relevant.
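One way to picture the twin/cousin/nomad scenario planning Amith describes for an annual conference is the toy Monte Carlo sketch below. The attendance model, session counts, and thresholds are entirely made up for illustration; a real digital twin would be fit to actual member and event data.

```python
import random

# Toy "digital twin / cousins / nomads" sketch for an annual conference.
# Each simulated member attends if the session mix appeals to them.
# The model and all numbers are invented purely for illustration.

def simulate_event(workshops, keynotes, members=1000, seed=0):
    """Estimate attendance for a given program mix."""
    rng = random.Random(seed)
    attendees = 0
    for _ in range(members):
        pref = rng.random()  # this member's taste for workshops vs. keynotes
        appeal = pref * workshops + (1 - pref) * keynotes
        if appeal > 10:      # arbitrary attendance threshold
            attendees += 1
    return attendees

# Twin: exactly last year's program.
twin = simulate_event(workshops=12, keynotes=8)

# Cousins: nearby variations of the same program.
cousins = [simulate_event(12 + d, 8 - d) for d in (-2, -1, 1, 2)]

# Nomads: randomly generated alternative programs.
rng = random.Random(1)
nomads = [simulate_event(rng.randint(0, 30), rng.randint(0, 30))
          for _ in range(5)]

print("twin:", twin, "best cousin:", max(cousins), "best nomad:", max(nomads))
```

The point of the analogy is in the last two lines: cousins tell you whether small program changes transfer, while nomads occasionally surface an alternative no one would have planned.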
[00:39:12] Mallory: The last thing I wanna talk about here, which kind of this whole conversation we've lined up on the episode has been building toward, is Jim Fan's vision of what he calls a physical API: essentially treating robots like programmable interfaces to the physical world.
[00:39:26] So instead of writing software code, you'd program physical actions, like having a robot set up your conference room: you could do layout equals theater. Uh, Claude helped me write out some, like, physical API code in my outline. Or you could have your robot prepare dinner: style equals Italian, which I love, guests equals four, for four people at your house.
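The calls Mallory describes (layout equals theater, style equals Italian, guests equals four) might look something like the sketch below if a "physical API" existed. Every class and method name here is hypothetical; no real robot SDK is being referenced.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Jim Fan's "physical API" idea: physical tasks
# exposed as ordinary function calls. All names here are invented.

@dataclass
class Robot:
    """Stand-in for a robot that executes skills in the physical world."""
    log: list = field(default_factory=list)

    def set_room(self, layout: str) -> str:
        action = f"arranging room: layout={layout}"
        self.log.append(action)
        return action

    def prepare_dinner(self, style: str, guests: int) -> str:
        action = f"cooking: style={style}, guests={guests}"
        self.log.append(action)
        return action

robot = Robot()
robot.set_room(layout="theater")                 # conference-room example
robot.prepare_dinner(style="Italian", guests=4)  # dinner example
print(robot.log)
```

In the "skill economy" framing, a chef's trained skill would just be another method any licensed robot could expose.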
[00:39:46] This creates what Fan calls a skill economy, where humans could teach complex skills to robots once, and then those skills become available as services anywhere. So a master chef could train robots to cook their [00:40:00] signature dishes, then offer Michelin-level dinners as a service globally. The end vision is ambient intelligence:
[00:40:07] robots so seamlessly integrated into our environments that they become invisible helpers, handling physical tasks as automatically as a smart thermostat manages temperature. If this vision plays out, it could be as transformative as, maybe even more than, the internet was for information. Industries built around physical services (hospitality, facilities management, food service, cleaning, maintenance) could see fundamental changes in their business models, but it also creates new opportunities as well.
[00:40:35] Companies could offer specialized physical services without geographic constraints, and entirely new categories of physical software could emerge. This was the thing for me, Amith, in that video where I said, well, I've truly never... I have moments like this sometimes on the podcast. I've never thought about a physical API.
[00:40:53] Maybe you have. I have not. So that was quite eye-opening. What do you think happens [00:41:00] to industries built around physical services in the world of the physical API?
[00:41:06] Amith: I think you've gotta rethink your delivery model. Uh, you know, when you think about something that was once expensive and constrained,
[00:41:13] a scarcity kind of concept, now being abundant, how do you still add value in that world? And, uh, there's a lot of different, you know, ways to think about it. So, uh, you mentioned cooking. So you say, okay, well, if you could have a robot of your own that cooked whatever you wanted all the time, and, you know, you were in a situation where it made sense for you to buy such a robot and have it in your house.
[00:41:33] Sure, it's great. Um, but maybe there's a company that says, hey, you know, we have our robots that have been trained by the best chefs in the world, that have these capabilities and the knowledge of their recipes and their techniques, perhaps as intellectual property that the robots have been trained on and are licensed to use.
[00:41:49] And this company says, hey, we will send the robot to your house. You don't need the culinary-expert robot at your home all day, every day. Maybe you don't need it every day, but some days you do. And so we'll send the [00:42:00] robot to your house, and the robot will do the work and prepare the meal and do all this stuff, and you'll pay a fee per meal.
[00:42:06] Just like you'd pay a fee for delivery, you'd pay a fee for a private chef to come to your home, or whatever. Uh, but now, instead of, you know, having a private chef come and cater a meal at your home for you and your family for $10,000 or something crazy, it might be $50 or a hundred dollars. It might make it affordable for the masses. And you might not want to invest $30,000 or a hundred thousand dollars, or right now it's probably more like half a million dollars, to have a robot like
[00:42:29] this and to have to worry about upgrading it and maintaining it and doing all those things. So, you know, the robotics doesn't mean that the business goes away. It means the business model goes away, or the business model changes, at least. And I think that applies to associations as well. It's like, think about it always in terms of the end customer and the value they're receiving.
[00:42:47] The end customer in your example is, you know, you and your husband having a delicious meal at home that you didn't have to prepare yourself and that was done in a way that you guys would want. Um, that's the end value creation. So how does it end [00:43:00] up being? There's lots of ways to achieve that outcome. Yeah,
[00:43:03] even without robots, there's lots of ways to achieve that outcome. Right. Um, and so I think that you have to be creative in terms of: what is the pipeline to achieve that value? What's the process to achieve that value? AI can be your friend; it can also tear apart your business. Of course, most companies won't adapt to this.
[00:43:17] Most companies that do catering traditionally, or, you know, traditional restaurants, are gonna have a hard time adapting to this. But I think there's an opportunity here for people to say, like, what are we best in the world at? And how do we deliver that value to the end customer in a way that is compelling to them?
[00:43:32] And so, in a sense, what you're doing is digitizing yet another thing that was physical-only previously and is now digitized, um, and can be scaled and delivered in any way. So there'll be all sorts of disruption and all sorts of opportunity created with all these things.
[00:43:48] Mallory: Yeah, this is quite a surreal conversation, talking about the physical API and the fee and the robot coming to my house, which sounds really nice overall. But what do you think is the takeaway?
[00:43:58] I feel like we've [00:44:00] veered off on this episode into territory that we haven't explored so deeply yet, um, being the physical world of AI and robotics. What do you think is important for association leaders to take away from this conversation?
[00:44:14] Amith: To not ignore it, first of all, and to get familiar with it, even if it's just purely an intellectual curiosity, which, you know, to a large extent, it is for me.
[00:44:21] I mean, I don't know that this is directly applicable to our business right now. I'm sure it will be, though, at some point. I just don't know yet. And that's an okay answer. Just because you don't know the answer doesn't mean you shouldn't pursue learning more about the topic. So I sense that this will affect every business and every life in the next 10 years.
[00:44:40] And so, you know, if you aren't paying attention to it, you're at a disadvantage. That's one thing I'd say: the more you learn about something, the more obvious ideas become. You know, when we launched Betty, uh, which is our knowledge-assistant AI, into the association community, it was non-obvious to a lot of people. To those of us that were working on this, we're like, hey, this is a super obvious idea.
[00:44:59] [00:45:00] This is gonna be the killer app for associations right now with this level of technology. And sure enough, it has become exactly that. But, um, you know, there's a first-mover advantage in a sense. But the first mover comes from having been the first thinker, the first tinkerer, the first
[00:45:16] curious one, right? The first people who are kind of, like, messing around with the idea. So, um, that's really what it's about to me: this topic right now is a curiosity. It's an interesting thing. I think it's interesting and scary and all that stuff. Um, but it's gonna affect all of us.
[00:45:32] Our lives in some way, and so it will affect your association. We don't know exactly how. I do think there's some examples we talked about here that are relevant to be thinking about, but those examples might be throwaway examples. The examples that might matter might only be discovered by those that are thinking more deeply about this in the next several years.
[00:45:47] It really will be discovered by those people thinking deeply about it. So I would encourage all of our listeners and viewers to at least take a stab at watching some of these videos. Check out the Nvidia video that you mentioned; it'll be in the show [00:46:00] notes. Uh, spend time just thinking about this. Have a conversation with your board or with your staff about it.
[00:46:05] Um, I think it's just worth letting the idea circulate.
[00:46:09] Mallory: Mm-hmm. You hit the nail on the head, going from an idea being obvious to someone versus non-obvious, and Betty was a prime example. And it's something I've told you, Amith, many times: things that seem obvious to you may not seem obvious to everyone else, but it's
[00:46:27] because of the work you put in to think, to tinker, to listen to these podcasts and have the thoughts circulating, so that when it's time, oh wait, I'm gonna tie it back, you can shoot your cannonball. And if you don't know what I'm talking about, we just did, uh, an episode on, I think it's called Navigating the aics.
[00:46:44] It's one of our most recent ones, where we talk about the idea of cannonballs and bullets. But you have to have that education piece, be thinking about this ahead of time. This episode was already one step in that direction, so maybe if a physical AI association idea is gonna come about, it might be you who [00:47:00] has it.
[00:47:01] Everybody, thank you for tuning into today's episode, and we will see you all next week.
[00:47:09] Amith: Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.