Sidecar Blog

Exploring Interactive AI Simulations and World Models with Thomas Altman | [Sidecar Sync Episode 97]

Written by Mallory Mejias | Aug 28, 2025 5:35:24 PM

Summary:

Thomas Altman, co-founder of Tasio and Betty, joins Mallory Mejias for an electrifying deep dive into the future of AI through the lens of world models—AI systems that don’t just respond but simulate environments and predict outcomes. From Genie 3’s physics-aware interactive worlds to the transformative potential for associations in training, events, and strategy, this episode unpacks the profound shift from reactive AI to systems that “understand” the world. Thomas also shares insights from his new book The Association Knowledge Cycle, and offers practical advice for staying ahead in a rapidly evolving tech landscape.
Thomas Altman started working with associations after earning his graduate degree in applied data modeling techniques. One of the first things Thomas noticed when working with associations was that, while they do a great job of collecting data, hardly anyone is really putting that data to use. In an effort to leverage association data and content, Thomas developed Betty. With Betty, Thomas aims to utilize advanced AI-driven solutions to help associations deliver value to their members.

https://www.linkedin.com/in/thomas-altman-tasio/

Timestamps:

00:00 Introduction to Thomas Altman  
01:13 Thomas's Journey from Data to AI  
07:06 What Are World Models?  
13:04 From Image to Interaction: The Genie 3 Breakthrough  
17:57 Why Persistence Matters in AI Simulations  
27:31 World Models vs. LLMs: The Convergence Toward AGI  
30:40 Associations & Simulated Environments  
37:04 Forecasting Events with AI Agents  
42:46 Practical Advice for Association Leaders  
47:29 Thomas’s New Book & Final Thoughts  

🎉 Thank you to our sponsor

https://cimatri.com/

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

Genie 3 ➡ https://shorturl.at/ndooP

Gemini ➡ https://shorturl.at/fVVHA

Betty ➡ https://meetbetty.ai/

Tasio ➡ https://tasiolabs.com/

The Association Knowledge Cycle ➡ https://meetbetty.ai/#download

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Voice-Over: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence, and associations.

[00:00:15] Mallory: Hello everyone, and welcome to today's special edition episode of the Sidecar Sync Podcast. My name is Mallory Mejias, and I'm one of your hosts, along with Amith Nagarajan, but today we're joined by a super special guest: Thomas Altman, who, if you all have listened to this podcast before, Thomas, I bring you up probably once every couple episodes because I'll have some story.

[00:00:36] Oh, Thomas showed me this. Thomas showed me that. Thomas was actually the first person to show me generative AI ever, all the way back at digitalNow 2022. Blew my mind and also kind of changed the trajectory of my career, so no pressure. Thomas, welcome to the podcast. How are you doing today?

[00:00:52] Thomas: I am doing great.

[00:00:54] I'm so happy to be here. I love, I love the podcast. It's always fun to [00:01:00] hear what you guys are talking about. So excited to actually be talking about it with you today.

[00:01:02] Mallory: I know I was telling you before the recording kicked off, I can't believe this is your first time on the podcast. I know it definitely won't be the last, but I'm excited to have you on to be talking about this really fascinating topic of world models.

[00:01:13] But before we kick that off, for our listeners who don't know who you are, I don't know how, but for those who don't know you, can you tell us a little bit about your background and what you do now?

[00:01:24] Thomas: For sure. So I think it's, it's pretty easy not to know who I am. So if you don't, that's probably more common by far than not.

[00:01:31] Um, so yeah, my name's Thomas Altman. I am co-founder of a company called Tasio, um, which does a lot of experimentation around AI and associations, kind of that, that overlap. And the way I got to doing that was, um, you know, prior to all of this, I went and got a master's degree in applied statistical modeling.

[00:01:49] Right. Specifically statistical modeling for business use cases. Right. So it's an MBA with a hard science background behind it. And kind of got pulled into the association world [00:02:00] in that way, and started working really in the AMS space. So when I got in there, I started looking at like, these just massive data sets that associations were sitting on top of.

[00:02:09] And kind of with my, my background in data science, started seeing just like waste, right, or not waste, but potential not yet fulfilled. And I got really interested, like, how can we start taking all of this data, all this transactional information that associations, all of, all of you guys are sitting on top of, and make it actionable using some of the more cutting-edge techniques.

[00:02:31] So that led me to kinda spinning up my company with, with my co-founder Dre McFarland, where we just started experimenting, we started trying different things and around 2020 we were kind of, we're trying to solve this one issue that was kind of the member retention problem using data. And kind of playing around with that.

[00:02:50] We started to see like being able to do predictive modeling itself wasn't enough. You had to be, you had to be able to enable people to take action, and we just couldn't solve that problem until we [00:03:00] found out about this thing at the time called GPT. So we went all in really hard on what, at the time, was just like autocomplete, right?

[00:03:08] But it was autocomplete that you could trick into like seeming kind of smart. And we just developed this sort of set of techniques to, to kind of get everything to work for associations. Um, around that time, something called ChatGPT came out. Actually, prior to ChatGPT coming out is when Mallory, you and I kind of

[00:03:26] Mallory: mm-hmm.

[00:03:26] Thomas: Got together and started talking about AI and, you know, geeking out about it all.

[00:03:30] Mallory: And it was the GPT playground at that time. Exactly.

[00:03:33] Thomas: Yeah. Yeah. And sort of like, it was, it was before ChatGPT, right?

[00:03:36] Mallory: Mm-hmm. Mm-hmm.

[00:03:36] Thomas: But when ChatGPT happened, there was almost like this wave of interest in the AI space. Before that, it was sort of me and a few other people, like Amith, screaming into the wind, like, hey, AI's coming.

[00:03:47] And then all of a sudden everything was coming together. So, um, it's been really, really fun kind of getting to dive into all these use cases and actually like, see this what was for a long time lonely [00:04:00] field, really be embraced by the association space and see how associations are starting to run with it and do really, really cool things.

[00:04:06] And that's kinda my background and, and why I get excited to have conversations like these.

[00:04:10] Mallory: I remember having many conversations with you, Thomas, about artificial intelligence well before ChatGPT came out, and talking about what Tasio did, because at that time I was working for Blue Cypress, kind of the parent company, and I so felt that Tasio was just slightly before its time, uh, because the things you were talking about were so insane, but it was before people really knew what generative AI was.

[00:04:33] So I can imagine that was a tough sell, but it really perfectly positioned you and Dre and Tasio to kind of like be ready to take that next step when ChatGPT was released and the community was more receptive. Um, and now you are, so you're still with Tasio, but you're also with Betty. So can you tell us about that too?

[00:04:52] Thomas: So the cool thing that happened there is, so we developed, over years, this kind of, sort of expertise around how to use these generative AI [00:05:00] models, really within the context of associations. And then ChatGPT came out and like, you know, a light bulb went off in my head. A light bulb went off in Amith's head simultaneously, Dre's head simultaneously.

[00:05:11] We kind of huddled together and we were like, how can we, how can we make something that is ChatGPT-like, but one, doesn't hallucinate, right? Like that's really, really important. Two, is grounded only in the type of content that an association has. And three, kind of interacts with the members in a way that allows people to come and discover all this, like this vast body of knowledge that all associations sit on top of.

[00:05:35] In a lot of ways we, we kind of realized that the knowledge base that associations create is kind of one of the key value propositions, if not the key value proposition, for many organizations. But the discoverability of that content, the usability of that content, the sort of engagement with it was, was always lacking.

[00:05:54] And in fact, as people created more content, it made it harder to find any individual piece, right? [00:06:00] So me, Amith, Dre sort of, kind of cobbled a bunch of different ideas together for how to solve that problem with AI. And over the course of maybe a month, month and a half, sort of Betty was born, which is, um, for those of you that are not familiar, an AI-based knowledge assistant that.

[00:06:15] It does basically that, right? Kind of goes, crawls across the entirety of your knowledge base, serves as this sort of like mini super librarian that can hunt down the exact right content, surface it back to the member, prompt them to engage, so they kind of stay engaging with the content, following their own path through it in ways that, um, are really, really, really interesting.
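A minimal sketch of the grounded-retrieval idea described above: return only content that actually exists in the knowledge base, and return nothing rather than guessing. The sample articles and the keyword-overlap scoring are purely illustrative and imply nothing about how Betty is actually built.

```python
# Toy "mini super librarian": answers are grounded in a fixed content
# base, so an empty result means "not found", never invented content.
# All article names and text here are made up for illustration.

knowledge_base = {
    "membership-renewal-guide": "Steps and timelines for renewing membership.",
    "annual-conference-faq": "Dates, venue, and registration details.",
    "certification-handbook": "Requirements for the certification program.",
}

def find_content(query):
    """Return article slugs whose text overlaps the query words."""
    words = set(query.lower().split())
    hits = []
    for slug, text in knowledge_base.items():
        text_words = set(text.lower().replace(",", "").replace(".", "").split())
        overlap = words & text_words
        if overlap:
            hits.append((len(overlap), slug))
    # Best matches first; an empty list means "I don't know", not a guess.
    return [slug for _, slug in sorted(hits, reverse=True)]

print(find_content("conference registration dates"))  # ['annual-conference-faq']
print(find_content("refund policy"))                  # [] - nothing invented
```

Real systems use embeddings and semantic search rather than keyword overlap, but the grounding contract is the same: every answer traces back to a real piece of content.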

[00:06:35] And people stay. Instead of just finding the thing, getting the article, and leaving, they stay on these pages for like 7, 10, 15 minutes, just like, looking for content and getting excited about it. Uh, it's just really, really fun. So yeah, it was basically that, that little nugget, and then me and Amith and Dre sort of just came together and sort of made this thing happen.

[00:06:52] Mallory: Man, I feel like you're so humble too. Betty is an incredible product, and I'm not just saying that because I've worked alongside these, these brilliant minds, but [00:07:00] they're experiencing explosive growth right now, truly. So I've just gotta shout out what you and the team are doing. But today's topic is not Betty.

[00:07:08] Maybe it'll tie back into it just a little bit. We are gonna be discussing world models, and specifically Genie 3. So you all, if you're listening to this podcast, I think that you're familiar with LLMs, or large language models, like ChatGPT, Claude, and Gemini. So you type something and it responds.

[00:07:26] World models are fundamentally different. These are AI systems that can create internal simulations of environments and predict what might happen next, kind of like how you might mentally rehearse a difficult conversation before having it. So instead of just text responses or image responses, or even video, they can simulate entire environments with

[00:07:47] physics and spatial relationships. The key difference is moving from AI that just reacts to AI that can actually understand and reason about the world. This is being called a crucial stepping stone toward [00:08:00] AGI, or artificial general intelligence, because it gives AI the ability to plan and experiment before acting.

[00:08:06] So Thomas, I'm gonna kick it to you now. That was a very short and sweet overview of world models. Can you help our audience understand: what exactly is a world model, in simplest terms?

[00:08:18] Thomas: So I think it's, I think it's important to place sort of where we are now in kind of the, the larger historical context of mm-hmm.

[00:08:24] AI research, right. Sort of in, in sort of plain, plain enough language, where we are now kind of really goes back a little bit over 15, 16 years ago, and the, the key thing that actually made machine learning, deep neural networks, all of this possible, was the, um, kind of brainchild of this researcher called Fei-Fei Li.

[00:08:44] So she was a researcher, I think, at both Stanford and Princeton, and she had this idea of creating literally a catalog of 14 million, 14 million images, all of which were hand classified. So a [00:09:00] human being would look at an image and then, in a sentence or two, describe what was in the image, right? So you'd have, you know, two dogs running across a frosty field, and someone would type that in.

[00:09:08] Or you'd have like, you know, a hot balloon, a hot air balloon, kind of soaring over a pyramid, and someone would type that in, maybe add some colors and stuff like that. And she did all of this. She led this, this program, which is called ImageNet, like with the assumption that eventually sometime in the future, people would be able to do something with it, right?

[00:09:27] And she created this competition around it. First it was classification, right? So to, to get where we are now, it really started not with generation of, of images or video or these immersive environments. It started with taking an existing image and having a machine be able to understand what was in it.

[00:09:46] And she built this image set five, six years before the machines were really powerful enough to do, to do that task. But what happened was, eventually, we got to the point where all the, all the ML researchers [00:10:00] in the early 2010s were trying to study this problem, right? Where they could classify it. So you would have like the two dogs running across a frosty field, and you could say, you know, go through and classify what's in this.

[00:10:09] Is it a dog, is it a cat? Is it a hot air balloon? Is it a pyramid? And there were a couple breakthroughs around 2012 that took that from hit or miss to like really, really, really, really good. And when that happened, and kind of sidebar, have you ever watched the, the, the series on HBO, Silicon Valley? Yes.

[00:10:28] They had this like hotdog, not hotdog. I don't know if anyone remembers that.

[00:10:32] Mallory: I don't know if I remember that. But man, it's a good series. Especially when you're like around startups, man. It's a good one.

[00:10:38] Thomas: It's a little too close to home, honestly. Sometimes it relates a little too hard.

[00:10:42] Mallory: Yeah,

[00:10:43] Thomas: it's a little too close to home. But so even they were, they were making fun of it, like at the, the time, of like,

[00:10:48] Hey, we built this thing that says this is either a hot dog or not, right? And it was like AI that determines that. But that's, but that's sort of what started where we are now. So what happened is we started to get a sense, machines started to [00:11:00] learn sort of what was within an image, and we were able to take that, the, the same learning and then use it to create an image, right?

[00:11:08] So, you know, we would have Midjourney or DALL-E or some of these other sort of early, early experiments around kind of this image track, right? So in parallel, we had this LLM track, which I think the pod has talked quite a lot about, right? So in the image track, we started to take them, and the breakthrough there, the thing that's important, right?

[00:11:27] When people see them, it's like, oh cool, we can do like marketing, or we could do sort of these, you know, interesting, like, you know, graphic design things with it, which is cool, right? I think that it's a, it's a legitimate use case, but to me the interesting thing is, in order to go from classifying to generating, to creating, what did these models have to learn?

[00:11:48] Well, they had to learn something about what a consistent world looks like, right? Mm-hmm. They'd have to learn sort of, this is what a dog would look like if it's jumping up in the air. You know, this is what [00:12:00] a ball would look like if it's on the way down versus on the way up. If it's like splashing into the water, what would that look like?

[00:12:05] Right. Actually, it started to learn how to capture elements of our physical universe in a way that large language models just can't, 'cause they're not vision based. They're, they're text and word based. So we had this sort of parallel track developing, where in the image form you could do it in like a still, but then once we moved into things like Sora or some of these other video creation models, you actually start to get reliable physics in there.

[00:12:30] So if you could type into Sora and say, I want two sailboats, like, sailing in a cup of coffee, not only did it have to know what a sailboat is and what a cup of coffee is, it had to be pretty accurate around what a sailboat going through water looks like, and how fluid dynamics work and gravity works, and like air blowing into a sail.

[00:12:48] Like, what would that, what would that do, right? To do that accurately, you had to capture this sort of underlying sense. You're starting to model what the physical world [00:13:00] actually is like, even in ridiculous scenarios like two sailboats in a cup of coffee. So kind of as we're getting past this video stage, where we are now, right, is we got to a point where you could do video.

[00:13:14] Mm-hmm. And if you played with Sora a little bit early on, you could get into like weird scenarios where you'd have someone walking through maybe a room and they walk into a kitchen and then they turn right. And then they like look at sort of whatever's on the right and then they turn back left and now all of a sudden they're in the bathroom.

[00:13:32] And what's happening is, the way that these work, I think, is interesting. It's not creating a computer program or simulating a world in advance. It's simulating them frame by frame. So on the fly, it's creating one frame at a time that kind of has maybe a little bit of memory. So it's taking the previous five frames and saying, okay, what could a realistic next frame look like?

[00:13:52] So in the same way a large language model says, okay, these are the first words we're starting with, what's the most likely next word? This is starting with [00:14:00] a, a list of five to 10 frames and asking, what's a, a realistic, most likely next frame? But once you get out of the window, if you like, looked right and then back left, it forgot what the, what the original world looked like.
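A toy Python sketch of that sliding-window behavior: each "frame" is generated only from the last k frames, so anything that scrolls out of the window is lost and re-invented on the next look. The frame representation and the rollout function are invented for illustration; no real video model works at this level of simplicity.

```python
from collections import deque

def rollout(actions, initial_view, k):
    """Toy window-limited 'video model': each new frame is generated
    only from the last k frames, so scenery that scrolls out of the
    window is forgotten and re-invented on the next look."""
    context = deque([dict(initial_view)], maxlen=k)
    current = dict(initial_view)
    for action in actions:                    # e.g. "look left"
        direction = action.split()[1]
        seen = {}
        for frame in context:                 # everything still in the window
            seen.update(frame)
        if direction in seen:
            # Scenery is still in the context window: re-render it faithfully.
            current = {direction: seen[direction]}
        else:
            # Out of the window: the model has to invent something new.
            current = {direction: "newly invented scenery"}
        context.append(dict(current))
    return current

start = {"left": "smiley face on the wall"}
moves = ["look right", "look right", "look left"]

print(rollout(moves, start, k=2))  # {'left': 'newly invented scenery'}
print(rollout(moves, start, k=4))  # {'left': 'smiley face on the wall'}
```

The k=2 run reproduces the "looked right, then back left, and the room changed" failure; the larger window is the brute-force fix, which gets expensive fast as worlds grow.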

[00:14:11] So like there was a model of the world, but there was no persistence, right? It, it wasn't reliable. You couldn't really do anything interesting with it, right? So, the breakthrough, the thing that we just got pretty recently, that I think we'll dig into a little bit more, was something called Genie, right? So Genie is GENerative Interactive Environments: G-E-N-I-E.

[00:14:33] And what they've done, which is really, really cool, is added in that persistence layer. So not only are you able to sort of create video, but you can, through text, say, paint a smiley face on the wall. Look to the right, look to the left. The smiley face is exactly how you left it, right? So you're able to actually start to get interactivity with these, these simulated worlds that have [00:15:00] persistent memory.

[00:15:01] And that's the kind of the cool thing there is now not only are we starting to simulate sort of underlying physics of sort of a larger reality, we now have interaction with that to see, you know, what would happen if I did this? So that's kind of the cool thing where we're at now, is being able to create these, these simulated worlds where you can poke them and they react to it and they remember.
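One way to picture the persistence layer Thomas describes, as a toy Python sketch (invented for illustration, implying nothing about Genie's actual architecture): keep a world state that outlives any frame-by-frame context window, write every painted or generated object into it, and consult it before inventing anything new.

```python
def rollout_persistent(actions, initial_view):
    """Toy rollout with an explicit persistence layer: everything ever
    painted or rendered is stored in a world state that outlives any
    frame-by-frame context window, so looking away and back is safe."""
    world = dict(initial_view)   # persistent memory of the simulated world
    current = dict(initial_view)
    for action in actions:
        if action.startswith("paint "):
            _, direction, content = action.split(maxsplit=2)
            world[direction] = content           # edits become part of the world
            current = {direction: content}
        elif action.startswith("look "):
            direction = action.split()[1]
            if direction not in world:
                world[direction] = "newly invented scenery"  # invent once...
            current = {direction: world[direction]}          # ...then stay consistent
    return current

actions = ["paint left smiley-face-with-initials",
           "look right", "look right", "look left"]
print(rollout_persistent(actions, {}))  # {'left': 'smiley-face-with-initials'}
```

The point of the sketch is that persistence is a property of the world state, not of the context window: the smiley survives any number of look-aways, no matter how long ago it was painted.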

[00:15:24] Hmm. Yeah.

[00:15:27] Mallory: Seems simple enough, right? Right.

[00:15:28] Thomas: Exactly. It's Genie, a world that

[00:15:29] Mallory: you just poke and it remembers. I wanted to ask you how this was different from AI generated video, but you did a really good job already kind of capturing that. So it sounds like that persistence piece is the breakthrough with Genie.

[00:15:41] But in theory, it doesn't seem like it would be that difficult to provide like an AI-generated video tool with the ability to remember. I'm sure it's way more difficult than I can imagine, but why? Do you understand what I'm saying? Why is that the breakthrough?

[00:15:55] Thomas: Yeah, so kind of the, the parallel to this, the thing that I think feels like, [00:16:00] haven't we already solved?

[00:16:01] This? It's like, you could have, you know, SimCity came out in the nineties. I remember playing that, right. Love Sims. Yeah. That's the best, right? Uh, or you've got Minecraft or Roblox, right? Mm-hmm. You've got all these worlds right now, and in order to create those worlds, though, you need tens, if not hundreds of thousands of programmer hours, like people sitting down designing what this world looks like and going through sort of a quality assurance process.

[00:16:24] I created a, you know, pyramid over here. Did it break some other thing in the system way over there? Um, but the persistence is like hardcoded in, right? Mm-hmm. That, that's the thing. Mm-hmm. The thing that we're getting with, with Genie, and to some extent some of the, the, the video models, is the ability to create that persistence in real time.

[00:16:45] So we're not, we're not hard coding the entire world all at once. We're just encoding the next frame based off of what we've already seen. So that's one thing here, is being able to like extend back sort of the amount of stuff that's already in there and recall it and, and paint [00:17:00] it again in real time and make sure it's the same thing. That, that's really, really hard. Like, look right, look left,

[00:17:05] I, you know, painted a smiley face with my initials on it. Mm-hmm. And it's the exact same smiley face with my exact same initials. Maintaining that through this interactivity is, is, is really, really hard. The other thing is to actually interject yourself into these environments. So the difference with something like a Sora or some of these other video models is, like, in real time, as you're moving through this world,

[00:17:28] You can say jump, right, or a fox runs by, and then like, that happens. Like, being able to interject sort of your idea onto the world in real time as it's being built on the fly allows you to do kind of a lot more cool stuff that I kind of wanna dig into a little bit later. But that's kind of the breakthrough.

[00:17:48] So you can, one, have this persistence, but then two, create something new in real time and then it stays. It's actually a part of the environment going forward.

[00:17:57] Mallory: And you can also create these interactive [00:18:00] environments with just a text-based prompt, right? Mm-hmm. So you don't need any programming ability. I could go in right now and say, create a world of, um,

[00:18:07] you know, hot dogs and flowers, and it could do that.

[00:18:10] Thomas: Yeah, exactly. That's exactly right. I mean, it's a pretty fascinating preview, but you can't, no one besides Google can do it. But eventually, in

[00:18:17] Mallory: theory, once I have access to it, I could. So it sounds like there are a few parts to this. Um, there's obviously the memory piece and then there's also a vision piece and the controller piece, so mm-hmm.

[00:18:29] Kind of prompting the world. Um, can you help explain maybe like the inner workings, what it would take to have a world model? You gave that really helpful background with Fei-Fei Li, right? And ImageNet. But what, what were kind of the core innovations that had to come together to get us to this point?

[00:18:45] Thomas: So the interesting thing is it's learned sort of real world physics or an approximation of real world physics, right?

[00:18:52] So no one went in, again, if you were trying to build Minecraft or Roblox or something like that, there is sort of a physics engine behind the [00:19:00] scenes, where like people went in and they kind of, actively, a human being wrote down the laws of physics that are represented in sort of the game, right? Um, that is not, that is not what happened here.

[00:19:11] So what happened here is a machine learning model learned to approximate those, close enough that it's indistinguishable to us as we watch it, right? So no one actually went in, okay, and said, you know, this is what gravity looks like, or this is what fluid dynamics looks like, or whatever. I'm, I'm not a physicist, so there are more complex ideas here.

[00:19:31] No, no one went in and explicitly programmed it. What happened is, over a lot of trial and error, in order to get good at sort of recreating a, a physical environment, somewhere in the model, this sort of black box, we don't know exactly where it is, it learned to approximate those things in, in high fidelity.

[00:19:49] So being able to do that is itself a, a, a pretty radical breakthrough, because you have to have, one, tons of training data. So, like how I was talking about, Fei-Fei Li [00:20:00] hand-coded 14 million sort of images; you need way more than that for video, and in order to sort of faithfully represent it, you probably need on the order of like hundreds of billions, right,

[00:20:12] of videos, to be able to kind of go back and, and start to simulate it. Um, the other thing that I think is important is, in order to have the memory, sort of the interactivity, and then the persistence of memory throughout that, you just need tons of computing power, right? So that amount of computing power just did not exist last year.

[00:20:33] Like you have to have just tons and tons of computers creating this, which is why I think neither of us are allowed to use it right now. It's very expensive. So, um, so being able to do that I think is, is pretty important. But I think the important thing here is we finally got enough computing power and enough example data to realistically and faithfully model an approximation of the laws of physics.

[00:20:57] Which means you can do a lot of really, really cool stuff. [00:21:00] Right. There might be stuff that the machines have learned about the laws of physics, if you scale this up, that humans might not know yet, right? It might, it might actually have something buried in there that could unlock sort of some creativity or scientific progress, just because you're trying to create, like, a Minecraft game.

[00:21:16] Mallory: Yeah. And I wanna get into that next, because if you look at, and we'll link this in the show notes, if you look at the Genie 3 announcement, there are some really neat examples that they share. Uh, little videos, but most of them, to me, kind of look like video games. Yeah. Which is cool, fun. Huge market for video games.

[00:21:32] But I wanna start talking with you, Thomas, 'cause I feel like you're very good at like bringing things back down to the ground. What, I guess, why is this important for us as a society? Like, before we talk about associations even, what could functional world models being available to the masses, what could that enable?

[00:21:51] Yeah.

[00:21:51] Thomas: Right. So the, I mean, one, it's cool, right? So like there's, there's, there's the entertainment aspect of it, it'd be fun, right? [00:22:00] But that's the least exciting part of it to me. For me, it's once you can faithfully sort of create these environments on the fly, right? Um, so the alternative here is having a team of a hundred programmers, each with their own part of the world,

[00:22:13] You know, building it out, and it takes, you know, it's very expensive, it takes a very long time. Instead, if you could just prompt through natural language, like, the world you wanna see, you could actually start to create kind of these interactive what-if scenarios, right? So the, the immediate term would be allowing me to kind of simulate.

[00:22:32] Let's say I wanna do some safety training, right? For whatever that is, right? If I work in a high-pressure environment, uh, whatever that might be, we're doing, you know, pipelines or fire or whatever it is, medical research, um, physicians, you could create on the fly very faithful scenarios. And then, by being able to interact with those, you would have a way of kind of honing your skills or doing kind of what-if analysis: if this happened, what could be the expected result? And

[00:22:59] You [00:23:00] know, you could wipe the slate clean, start again, and see what happened if you did something else. So you get these immersive, interactive environments that respond to you in real time and faithfully represent what could conceivably happen in the real world if you, you interacted with them.

[00:23:17] So to me, it's beyond just playing fun video games. Mm-hmm. It really has a real world impact across society more broadly, as we can scale up these technologies and get it into. Different people's hands. It's really what do you wanna learn? What do you wanna practice? And can I create a, an environment where I can do that and see what would happen in this scenario or that

[00:23:35] Mallory: could we use something like world models to predict weather or make medical advancements?

[00:23:41] I mean, can we pretty much apply this across the board?

[00:23:44] Thomas: Yeah, I think weather's one I hadn't thought about, but I think as far as, you know, there are versions of this for like pharmaceutical and, and, um, different kind of, you know, machine-based things, like creating products that, that could be useful. But that's already in place and people are going that way. Where I think this enables us more is, imagine sort of a scenario where you're a surgeon, right?

[00:24:09] And you want to, you know, try a new technique, but you wanna do it in a safe way, or in a repeatable way, where you can try it over and over again and sort of see what's going on, and kind of create this immersive environment where you can see the outcome of that. Or if you're doing sort of some sort of safety training, right?

[00:24:25] If you're in some mechanical environment and you wanna simulate, say, a pipeline, right? That's something that comes to my head. All these things are around: what would happen if you did this? You wanna give your workers a way to practice these different techniques,

[00:24:41] to use them in a way where they can see what happens if you do this or that. Those sort of real-world environments, I think, are much more useful for the day-to-day, right? You don't have to be a PhD pharma chemist to make an impact. It could be [00:25:00] a regular person like us, trying different skills and learning how to interact with our environment in a repeatable and safe way.

[00:25:05] Mallory: Mm-hmm. We have talked a little bit on the pod, well, probably a lot, about artificial general intelligence and what that might look like. And we have mentioned world models too, as being maybe a necessary step or precursor to more advanced robotics and ultimately AGI. What is your take on that?

[00:25:25] Thomas: Yeah, so right now, if we zoom far enough back, we've got basically these two parallel paths, right? We've got the image-and-video world that we've been talking about so far, and that's the world model: developing this understanding of how the world would respond to interactions against it.

[00:25:45] Right? What that doesn't capture, though, is all the social intelligence that we've built up over time, right? All the language-based information that we've got, all of our history, all of our culture, all of the [00:26:00] mathematics, the research in biology and, you know, anthropology or the arts, right?

[00:26:05] All of those things are not captured in a world model. The world model really is about the physical world around you and how it responds to interactions. LLMs, though, LLMs do capture that, right? So we've got this parallel path that's building up all of our cultural and institutional knowledge, and all of our, you know,

[00:26:25] softer sciences are kind of built in that path, right?

[00:26:28] Mallory: Mm-hmm.

[00:26:29] Thomas: So right now, these two things are going along in parallel. Um, what's very interesting to me is this interactive environment. The sort of Genie version is

[00:26:42] intersecting those two streams of thought in a way that allows you to do some really, really cool stuff. So imagine you've got a very realistic version of the world that's interactive, right? You can do something, and the fact that you took an action persists. You know, for [00:27:00] right now, I think it's only a minute or two, but imagine

[00:27:02] five, ten years down the line, it persists for a century, right? Like, you can take a small action here: what effect does that have later on? Well, now imagine you've got an agent that is built on an LLM that knows how to interact with this world model, and you can compress the timeline. So you could have an agent exploring this realistic version of the world on a compressed clock.

[00:27:24] So maybe it lives a hundred years, but you run that simulation over the course of an hour, and you do that in parallel 10,000 times. What can you learn? What kind of counterfactuals, what kind of policy decision-making can you make, right? How does that affect a lot of the stuff you do, if you could have each of these agents, with their own personas, with their own cultural background and institutional knowledge, interacting with a very

[00:27:50] real simulation of the world in a compressed timeline, over and over and over again? So overnight, you let your agent live a thousand lifetimes. [00:28:00] You wake up in the morning and say, Hey, what did you learn? Right? So when these two things intersect, I think that's where we get really true AGI, exploring the world, getting smarter than all of humanity altogether.

[00:28:12] That kind of thing I think is where that starts to happen.
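The overnight pattern Thomas describes (run many compressed simulations in parallel, then aggregate what came back) can be sketched in a few lines of Python. This is a hypothetical toy, not a real world-model API: each `run_lifetime` call stands in for one compressed simulation, and every name and number here is invented for illustration.

```python
import random
from collections import Counter

def run_lifetime(seed: int, steps: int = 100) -> str:
    """Simulate one compressed 'lifetime' and return its outcome.

    Stand-in for a world-model rollout: the agent takes `steps`
    actions in a tiny stochastic world, each paying off or not.
    """
    rng = random.Random(seed)  # per-lifetime RNG so runs are reproducible
    score = 0
    for _ in range(steps):
        # Each action succeeds with a slight edge (55%).
        score += 1 if rng.random() < 0.55 else -1
    return "thrived" if score > 0 else "struggled"

# "Overnight, let your agent live a thousand lifetimes" (a plain loop
# here; real runs could fan out across processes or machines).
outcomes = Counter(run_lifetime(seed) for seed in range(1000))

# "Wake up in the morning and ask what it learned": aggregate the runs.
print(outcomes.most_common())
```

The interesting part is the shape, thousands of independent, seeded rollouts reduced to one morning report, not the toy world itself.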

[00:28:15] Mallory: That's really helpful. So, kind of the intersection of those two parallels that you're talking about. Uh, I wanna ask briefly too, and I don't know how much you know about this, Thomas, so even your best approximation will be helpful. When we're talking about world models and the understanding of the physical world, it seems like we

[00:28:31] in some ways had kind of mastered that through autonomous driving. So can you help me figure out how those two are different? Because isn't that kind of what the tech is, that the cars will live in simulated worlds and encounter all these obstacles until they learn how to drive?

[00:28:49] Thomas: Yeah, yeah. So you're right, I do not know too much about autonomous driving. But the bit that I have read up on is that these are [00:29:00] similar paths, right? So this is very much the next version of that, as I see it. Or taking lessons learned from that, because that's still high stakes too.

[00:29:09] Like, you don't just put it on the road without doing a lot of work first.

[00:29:12] Mallory: Right. Yeah. Um, I don't know if you've ever driven in a Waymo, Thomas.

[00:29:17] Thomas: Not yet. I really want to, though.

[00:29:18] Mallory: You've gotta do it. Yeah. I did it a few months ago in San Francisco, and it was so awesome. I actually think I prefer it to human drivers, just because, I don't know, I actually felt pretty safe in one.

[00:29:27] I don't know, maybe that was naivety on my part, but it was fun.

[00:29:30] Thomas: No, I would love to. That sounds really fun to me.

[00:29:33] Mallory: Alright, so now I wanna zoom in a little bit more on why this matters for association professionals, because right now it seems a bit ethereal, a bit abstract. I love what you said about simulated training.

[00:29:45] I think that could be really powerful for associations that have members in, um, you know, difficult technical fields or maybe dangerous fields. But extrapolating beyond that, why do you think this is really important to be looking at in [00:30:00] terms of the overall AI trend line?

[00:30:02] Thomas: Yeah, so the way I like to think about

[00:30:07] issues like these is, you know, how can we be ready for when this technology's real? Right now it's very much a research preview. In my head, what we're talking about right now is like going back to 2019 with, like, GPT, okay? It's about that level of sophistication. So not ready, not ready for prime time.

[00:30:25] Right. But if you were paying attention in 2019, 2020, you could kind of clearly see, like, okay, it's coming, it's gonna be here. How can I be ready on day one, day negative one, to take advantage of this technology that's clearly advancing at a rapid pace? So this is where I think expertise comes in: the overlap of knowing what's coming with your deep knowledge as association professionals around what your day-to-day is like.

[00:30:54] When you understand it, sort of break it apart: what can it do? If you start thinking now [00:31:00] about how that could affect your day-to-day, I think you'll be ready for it. So in my head, there's a couple. And this conversation is definitely not meant to be comprehensive or even scratch the surface of what's possible, but hopefully it inspires a lot of people

[00:31:13] listening to the pod right now to, you know, think of other things, right? And please share. Please let me know, reach out to me and tell me what you think about, because I want to hear it. But to start the conversation, at least, from my point of view, there's a couple things in the near term that I think we could be ready for.

[00:31:30] So, one: training, right? A lot of associations have training, and especially if you're with either a medical society or maybe a safety or standards board, right? Thinking through, you know, if you're a surgical society, for example: can you provide simulated environments where your surgeons could interact with them and understand this technique versus that technique, or this high-pressure situation, and run that over and over again? And what can they learn?

[00:31:58] Can you incorporate this into some of [00:32:00] the learning that you provide back to your members, to really advance the mission of your organization? Or if you think in terms of standards bodies, those standards are applied in real-world scenarios, right? So if you've got standards in a concrete setting, or in a highway or engineering setting, right?

[00:32:22] Can you come up with scenarios where those standards should be applied? Maybe it's a little bit vague, right? It's not clear, this standard or that standard, how they interact, or what you need to do here. Can you create simulated environments, or be ready to allow your members to access these simulated environments, to practice implementing these standards in a very real scenario, right?

[00:32:43] So they get a sense before they're ever on site, before they're ever in the OR, right? What are the choices they're gonna need to make? What kind of gotchas might float up that are very realistic, that might force them to choose wrong? Mm-hmm. Right? Can you start to think about those types of [00:33:00] things?

[00:33:00] So learning, I think, is a very clear one, in my mind at least. Hopefully one to start getting the ball rolling.

[00:33:05] Mallory: Mm-hmm.

[00:33:06] Thomas: But it's not just making a video game. It's not just marketing material, which both are cool, and marketing's a very important function, I don't want to say that's not an important one, but I think the promise goes beyond that.

[00:33:14] Mallory: No, I'm...

[00:33:15] Thomas: Just kidding. Obvious one. Um, the other one for me, though, is events. I could imagine a scenario where I get the floor plan of maybe the trade show, or wherever the events are, and we do a simulated setup. What does walking through the trade show floor look like in this scenario versus that scenario, right?

[00:33:36] Can you optimize what that looks like? Can you say, okay, actually, we're gonna design it differently, because we were able to kind of experience that well in advance without having to set up a trade floor, right? Or maybe the outline of where your different sessions are gonna be. Like, oh, hey, these sessions are actually way too far apart.

[00:33:54] Or, that's way too far away from the lunch building, right? You might not realize that until you provided the floor plan and [00:34:00] created a simulated environment based off of whatever convention center you're gonna be at, right? So things like that. I bet, you know, famous last words, three to five years from now, so 2028 to 2030.

[00:34:12] Mm-hmm. Uh, feel free to hunt me down if this doesn't turn out to be true, but I bet we'll be able to do something similar to that, you know? Or if you think in these terms, in 2028, 2030, you'll be ready for whatever actually comes, right? If you start thinking about this, maybe it's not exactly that.

[00:34:28] Yeah. By laying the track, by getting ready for something like that to happen and being prepared for it, whatever actually does happen, you'll be able to jump to it much more quickly than starting from nothing.

[00:34:39] Mallory: So I wanna clarify something, 'cause as you were talking, I realized I might be conflating two things in my head.

[00:34:44] You talked about simulating a floor plan at an event, which is a really smart idea and very relevant for associations. Do world models allow us to predict as well? Are they predictive models, in the sense that you could say, well, if I have this session here, what would the [00:35:00] flow of attendees look like versus here?

[00:35:03] Thomas: So that's where I think it's going. Okay, so this is what I'm really, really excited about. So imagine, right now, these models have this memory and persistence for a minute. By the time it gets to 2028, 2030, maybe it's 30 minutes to an hour. But imagine if it was a full day, right?

[00:35:26] And imagine if it was a multiplayer game. So right now it's just a one-player game, but I imagine we could see a version of this where there are multiple interactions with it, right? Then imagine if you had an LLM with a persona, like, I am this kind of a member, and I'm gonna

[00:35:45] create 30 of those, or 300 of those, and I've got 500 of these members and 500 of those members, all with their own backgrounds and personas and personalities, and see what that actually looks like, where they're making their own decisions. They're trying to navigate the [00:36:00] floor plan together, with an LLM pushing the buttons on the screen, right?

[00:36:05] Like I said, go to sleep, you know, log off at five o'clock, log in at eight o'clock, and you ran 10,000 versions of your conference with different setups and different people showing up and all of that, and then you get reports saying, okay, hey, this configuration led to these outcomes.

[00:36:22] That configuration led to these other outcomes. There's a lot that I think you're gonna be able to do. It wouldn't even just be predicting, it would be simulating, right? And doing it at massive scale, and then seeing, you know, what are the worst-case scenarios?

[00:36:36] What are the best-case scenarios? What's the average scenario? If we put this booth here, does that have a butterfly effect on all these other things? If we set lunch here, do these high-profile trade show people... does no one go to the booth 'cause lunch is way on the other side? Or something more surprising than that, right?

[00:36:55] Those are the kinds of things you'll be able to do. Kind of what-if planning, um, way more [00:37:00] effectively, I think, than really just traditional predictive analytics.
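As a rough illustration of this kind of what-if planning, here is a toy sketch in Python. Everything in it is invented for illustration (the layouts, the persona names, and the Manhattan-distance scoring); a real system would replace that scoring function with rollouts against an interactive world model.

```python
import random

# Purely hypothetical toy: score two conference floor-plan
# configurations by simulating attendee "personas" and measuring
# how far each one walks in a day. All names and coordinates are
# made up for illustration.

CONFIGS = {
    "A": {"registration": (0, 0), "keynote": (2, 2), "expo": (9, 9), "lunch": (1, 9)},
    "B": {"registration": (0, 0), "keynote": (2, 2), "expo": (4, 5), "lunch": (5, 4)},
}

PERSONAS = {
    # Each persona is a day plan: the sequence of areas they visit.
    "expo-hound": ["registration", "expo", "lunch", "expo"],
    "session-goer": ["registration", "keynote", "lunch", "keynote"],
}

def walk_distance(layout, plan):
    """Total Manhattan distance walked for one day plan."""
    stops = [layout[area] for area in plan]
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(stops, stops[1:]))

def simulate(config_name, n_attendees=10_000, seed=0):
    """One 'overnight' batch: many attendees with random personas."""
    rng = random.Random(seed)  # seeded so a batch is reproducible
    layout = CONFIGS[config_name]
    total = sum(walk_distance(layout, PERSONAS[rng.choice(sorted(PERSONAS))])
                for _ in range(n_attendees))
    return total / n_attendees  # average distance walked per attendee

# The morning report: one aggregate number per configuration.
report = {name: simulate(name) for name in CONFIGS}
print(report)
```

With these made-up numbers, configuration B comes out ahead because its expo and lunch sit closer to everything else. The point is the workflow, not the toy: simulate many attendee days per configuration, then compare the aggregated reports.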

[00:37:04] Mallory: Oh man, that's so fascinating. And it sounds so sci-fi and farfetched, but, I mean, hey, you put a year out there. I bet in the next, what, 10 years or so, do you think

[00:37:13] we'll see things like that?

[00:37:14] Thomas: I mean, again, the most certain I am is that whatever I'm about to say is not gonna happen, right? But it'll be around that, I think. Yeah. So if I tell you an exact scenario, I can guarantee you with 100% certainty that that exact scenario will never happen. Okay.

[00:37:31] Alright. It's the type of path that we're on, right? To know a very specific possible future, like, we're gonna get exactly this, I don't think I can say that for sure. But if you zoom out, right? If you look at, and I'm sure we've talked about it on the podcast a few times, the kind of power curves, right?

[00:37:49] Mm-hmm. Sort of exponential growth curves. Yep. These world models are on the same growth curve, right? You just have to zoom out and look, like, okay, it's gonna be smart enough to do something [00:38:00] approximately like everything that I just said, right? We're gonna have some version of that within the next, yeah, 10 to 15 years.

[00:38:06] Something will be able to do something like that. And what I would urge, as association leaders and people who are taking AI very seriously: think about those scenarios. Think about that what-if, right? You don't have to have a world model do this.

[00:38:21] You can do it just on your own right now, right? Think about what the world would be like if that happened. What kind of resourcing would you want in place? What would you start doing today to be ready for that in 10, 15 years? And then start doing that today, because whatever happens in 10, 15 years, taking those steps right now gets you ready to seize the moment when that time comes.

[00:38:44] And that's kind of where, if we go back to the beginning, where we were talking about the evolution of Tasio: the reason why we were able to create Betty within a month, month and a half of ChatGPT launching is 'cause we were ready, right? Like, we'd [00:39:00] been

[00:39:00] in the game, kind of laying track. And I couldn't have predicted Betty at that moment in, you know, 2020, when I started taking GPT a little bit more seriously. But when that happened, we were ready to jump in and seize the day, because we were in the space. So I would urge any association leaders in the moment: be ready for these types of scenarios. Or take what we're talking about right now and run with it.

[00:39:24] Go a little bit wild with your version of this. What could happen? Imagine the craziest future scenario. Think about what you would need in place today to be ready for that, or what kind of upskilling or work you would do to be ready for that world, right?

[00:39:40] Mallory: Mm-hmm.

I was gonna point out that exact same thing, Thomas: you really lived the thing you mentioned earlier, with GPT, or, uh, generative AI, back in like 2019. You lived that. So back in 2020, you were looking ahead, you were thinking what could be possible with this, and then you were ready for it. In actuality, though, that sounds [00:40:00] scary, it sounds intimidating, and I'm sure some of our listeners are thinking, we're running to the board right now about AI.

[00:40:06] Like, how are we gonna say, wait, we need to talk about world models as well, even though they intersect? But what's your advice in terms of, like, practical steps? I don't know, there's just so much noise out there, so figuring out what you actually need to focus on...

[00:40:21] Thomas: It is hard. It is really hard, because there's a lot of noise, and actually even more than noise,

[00:40:25] there's just a lot of possibility. Yes. And looking into the future, we don't know which of those possibilities are gonna take off and which are gonna be sort of dead ends, right? And then, of the ones that do take off, there's way more than any of us could choose. We couldn't do all of them, right?

[00:40:43] Mallory: Mm-hmm.

[00:40:44] Thomas: So I think at a certain level, it's gonna be: one, trust whatever fascinates you a little bit, right? And lean into that. If it fascinates you, it fascinates you probably for a reason. And again, you can't predict the exact future moment where [00:41:00] it's gonna matter, but by leaning into that, you will be ready for when that happens.

[00:41:06] Which leads to another problem, right? No one's gonna take you seriously. No one's gonna take you seriously. Or maybe some people will, but you should be prepared for people to not take you seriously, right? So how do you navigate having to keep the lights on, having to be a productive employee with a day job where you have to do the stuff you're supposed to do, while still being ready for that?

[00:41:35] And I think it comes with, one, staying engaged with podcasts like this, right? Just being aware, being in the conversation, being around people who are thinking in this way. Even if that's just 30 minutes to an hour of your day, a couple times a week, you're at least in the conversation, and that's gonna be an important part of it.

[00:41:55] But then two, experiment when you can, right? So we don't have [00:42:00] access to any of the world models yet, um, or generative interactive environments yet. But we will, you know, eventually, hopefully within a year or two, right? When we do, take these ideas that we're talking about now and just try it on your own, right?

[00:42:14] Do a little, you know, test, and then, if you have a manager or you've got your board, say, Hey, I just tried this, what do you think? Right? And start to build, based off of small wins and little experiments here and there, some consensus around how the usage of this makes sense for your organization, right?

[00:42:32] Take it and run with small little experiments that, you know, cost you no time. Or even just build up, honestly, with an LLM at this point. Some people are still afraid of, you know, ChatGPT. So if you can just show real, productive, real-world value, taking small bets and seeing what happens with them, I think you'll be ready for it.

[00:42:53] And if you think about these world models and what they might mean, right? What does it mean to have interactive [00:43:00] training, right? Interactive educational materials? What would it mean to have a floor that you could walk in real time and, like, ooh, actually, let's move the coffee table over here?

[00:43:08] Right? What else is there? What ideas do you have that might be inspired by those ideas, that no one else is gonna have? Just play around with that, and when you get a chance to try it, try it.

[00:43:19] Mallory: That was so well said, Thomas. I was like, how are you gonna answer that question?

[00:43:22] Lean in to what fascinates you. I love that. And again, digitalNow 2022, even before then, hearing you talk about AI, there was something in my brain going, that is interesting. Like, I know people weren't necessarily listening to Tasio at that moment, but I was a hundred percent confident that they would. Um, and I also think, too, a good example of that was that year's digitalNow.

[00:43:43] One day was about blockchain technology, right? And one day was about AI. So, point being, I didn't plan that event, but in the minds of the event planners, I'm assuming, it was, oh, okay, we have these two very innovative things, we'll see what's next. Blockchain is still around, but I definitely feel like AI is the thing that boomed.

[00:44:00] Um, and so you're right. Lean into what fascinates you, and kinda be ready for that.

[00:44:04] Thomas: That's such a good point, 'cause it really was. There was an AI day and there was a blockchain day.

[00:44:09] Mallory: Mm-hmm.

[00:44:09] Thomas: Mm-hmm. And I mean, you know, blockchain hasn't happened yet, if it's gonna happen.

[00:44:14] Right. It's not mainstream, right? But at the moment, these were equally possible, right? Both of them. We had no way of knowing which one of these things, or if either of them, was really gonna have its, you know, productive, actual, real moment.

[00:44:31] Mallory: Mm-hmm.

[00:44:31] Thomas: And it turned out that it was AI. But you couldn't know that standing in... you know, it was in New Orleans that year.

[00:44:36] Mallory: Mm-hmm.

[00:44:37] Thomas: Like, in the room, you'd have been 50-50 to get that answer, right? Um, and the blockchain people may still be right. That might have its moment in the future.

[00:44:48] There might be other things, but, mm-hmm, lean into the thing that fascinates you and see where it takes you. That, I think, is for sure the best career advice I can give. I don't know.

[00:44:56] Mallory: Yeah, no, I love it. I didn't think we were gonna get into career [00:45:00] advice on the pod, but I figured while we have you, right, we should ask. Um, Thomas, this has been a really interesting conversation.

[00:45:07] I'm sure we're gonna have many listeners ask where they can keep up with you and hear your thoughts. Am I also correct, did you just release a book as well?

[00:45:16] Thomas: Yeah. And we co-authored it: Dre, co-founder of Tasio, and then Rob Barnes, who co-founded Betty with us. So, okay, after we got the experimental version of Betty up, we needed to make it a real company.

[00:45:27] Yeah. Rob came in to help us make it a real thing. So we all kind of came together, and it's really about our experience of, one, interacting with associations' knowledge bases, and then our lessons learned by doing that, and how we can incorporate AI, not for AI's sake, but to help us become smarter, right?

[00:45:45] And the lessons learned along the way, and hopefully what you can take as association leaders to apply those lessons to your workspace.

[00:45:56] Mallory: Is it out now?

[00:45:57] Thomas: It is. It is The Association Knowledge Cycle. [00:46:00] Um, I think if you go to the Betty AI homepage, it's prominently displayed there.

[00:46:05] Mallory: We will link it in the show notes. And then Thomas, I know that you're on LinkedIn. I don't know if you're a big poster, but I'll probably drop that in the show notes too.

[00:46:13] Thomas: Yeah, please put it on there. I am not a big poster, but once this comes out, I will check pretty religiously for a couple weeks at least, and we'll see what happens.

[00:46:21] Mallory: For sure. Thank you so much for joining us. This was an awesome convo, Thomas. We appreciate it.

[00:46:25] Thomas: Always a pleasure. Yeah.

[00:46:28] Voice-Over: Thanks for tuning in to the Sidecar Sync Podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more

[00:46:39] in-depth AI education for you, your entire team, or your members, head to sidecar.ai.