
Summary:

 In this episode of Sidecar Sync, Amith Nagarajan and Mallory Mejias zoom out to explore the AI infrastructure quietly reshaping everything beneath the surface—from Microsoft’s new Maia 200 custom AI chip and the race to lower inference costs, to Google’s Project Genie and the rise of interactive world models that could transform everything from credentialing simulations to digital twins of annual conferences. They also unpack bold predictions about the future of work, including AI-driven performance measurement, shrinking workforces, gig workers armed with better tools than enterprise teams, and the shift from hyper-specialization to adaptable generalists. The throughline? Infrastructure, experiences, and workforce dynamics are all evolving at once—and association leaders don’t need to understand transistor counts to recognize that AI is getting cheaper, faster, and more capable by the day. 

Timestamps:

00:00 - Introduction
04:57 - Microsoft’s Maia 200 & the Silicon Arms Race
10:17 - Will Custom Chips Change Copilot Pricing?
16:53 - Project Genie & the Rise of World Models
23:08 - The ChatGPT Moment for World Models?
25:40 - The Wall Street Journal’s 20-Year Work Predictions
33:12 - Specialists vs. Generalists & The Future of Associations
36:46 - Companies as Classrooms & the Learning Organization
41:23 - Closing Thoughts 

 

 

👥Provide comprehensive AI education for your team

https://learn.sidecar.ai/teams

📅 Register for digitalNow 2026:

https://digitalnow.sidecar.ai/digitalnow

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification

https://learn.sidecar.ai/

📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

 Claude ➔ https://claude.ai

Microsoft Maia 200 ➔ https://shorturl.at/MVtAU

Microsoft 365 Copilot ➔ https://www.microsoft.com/microsoft-365/copilot

Azure AI ➔ https://azure.microsoft.com/en-us/products/ai-services

OpenAI GPT Models ➔ https://openai.com

Google Project Genie ➔ https://shorturl.at/ULiof

WSJ article ➔ https://shorturl.at/2jGSO 

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/in/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/in/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00:14 - 00:00:09:17]
Amith
 Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.

[00:00:09:17 - 00:00:25:19]
Amith
My name is Amith Nagarajan.

[00:00:25:19 - 00:00:27:10]
Mallory
 And my name is Mallory Mejias.

[00:00:27:10 - 00:00:46:17]
Amith
 And we are your hosts. And as always, there is a lot going on in the world of AI, but also in the world of associations, and we're here to help you cut through all the crazy and figure out what to do about all of this good opportunity that's in front of you in the world of AI, Mallory. How are you doing today?

[00:00:46:17 - 00:01:32:20]
Mallory
 Amith, I'm doing well. We were talking about, uh, my house before we started recording this call. If you all haven't heard, I just bought a new home with my husband. We're first-time homeowners, and we have not moved in yet, but we are in the thick of renovations, trying to renovate the kitchen, which of course is more than we anticipated and more than we realized we bit off. But I will say, in terms of AI, Claude is like my contractor, my designer. Claude is helping me pick my paint colors. I might have the first house heavily design-influenced by Claude. Maybe it's been done before, but I will say every decision I have to make, I'm going to Claude and saying, hey, take a look at this, look at my Pinterest board. So I'm very thankful to have generative AI in that regard.

[00:01:32:20 - 00:01:37:23]
Amith
 Yeah, no kidding. Now, if you could just get the AI to actually do the construction work for you.

[00:01:37:23 - 00:02:04:01]
Mallory
 That's what we were talking about, Amith. Construction, I get it. It's a tough industry. It can be hard to find good people, as with any industry, I'm sure, but we have been working hard to sort through all the noise in Atlanta. AI has been helping a little bit with the research process, but since we're newish here, only two years living in the city, we just don't have a big network of people we can rely on when it comes to referrals. So it has been interesting to figure that out.

[00:02:04:01 - 00:02:09:17]
Amith
 Well, you heard it here. If you know of great contractors in the Atlanta area, let Mallory know.

[00:02:09:17 - 00:02:23:13]
Mallory
 Send them my way. We have met with so many and honestly, you know, it's been a good experience. We've met some great people so far. We hope, we think, and I'm sure we will continue to do so. But yes, if you're in Atlanta and you have recommendations for me, send them my way.

[00:02:23:13 - 00:02:38:20]
Amith
 And I do think actually this is an area where physical AI or the world of robotics will eventually help where you'll be able to have contractors that show up on time and do what they're supposed to do and not leave cigarettes in your kitchen and stuff.

[00:02:39:22 - 00:03:13:22]
Mallory
 Speaking of, we had someone come out during the due diligence process to quote us on like fixing the ladder to the attic or something like that. And he was so alarmist, Amith, and he said, if you don't fix that ladder, it is going to kill someone. And I said, oh, my goodness, like that's so scary. And then when he left, we realized he spit his gum in our yard and then we started getting other quotes and they said, you know, that man was probably trying to scare you. That ladder is OK. Get it replaced in your own time. But I was really rubbed the wrong way by the spitting the gum in the yard, Amith.

[00:03:13:22 - 00:03:19:16]
Amith
 Yeah, that's probably not a great look when you're trying to sell something to someone or really ever.

[00:03:20:18 - 00:03:29:08]
Mallory
 But also when you're trying to sell something. So I think a nice little physical AI, a robot, I just can't imagine would do that. Amith, how have you been? How's it going in Utah?

[00:03:29:08 - 00:04:02:11]
Amith
 So I am doing great up here in Utah. We're up here on a trip to get some skiing in. And unfortunately, the West has not had a good year in general for snowfall. However, we're very fortunate, because this week it has been snowing, so we've gotten some fresh powder. And, you know, skiing is like the least AI thing you can do. You're out there in nature and you're just skiing. I'm sure AI was used to do all sorts of things at the ski resort, and probably to make the skis that I'm riding on, but it's fun to be out there in nature for sure.

[00:04:02:11 - 00:04:08:01]
Mallory
 Are you sad to be missing Mardi Gras season in New Orleans or are you happy to be getting your ski on?

[00:04:08:01 - 00:04:39:02]
Amith
 We had plenty of that before we left. So Mardi Gras was cool. We leave during Mardi Gras week every year, and by the time we leave, everyone, I think, has had their fill of all the Mardi Gras festivities and is ready to head to the mountains. So it's kind of our tradition to get out of New Orleans, I should say, because I'm not in New Orleans right now. But it's funny, because the Park City area basically becomes like New Orleans. You see tons of people from New Orleans all over the place in Utah during Mardi Gras week, because a lot of people had the same idea.

[00:04:39:02 - 00:04:55:15]
Mallory
 I've heard it's the same for Disney World, that there's a ton of New Orleans folks that go to Disney World over the Mardi Gras break. Well, I was sad to miss it. I saw all of the king cakes that were circulating through the Blue Cypress office in New Orleans, so I was a little bit bummed about that. But alas, that's what happens when you move, right?

[00:04:56:18 - 00:04:56:23]
Amith
 Indeed.

[00:04:58:00 - 00:07:17:09]
Mallory
 Well, as Amith said today, we've got a good episode lined up. We've got three topics, and all of them relate to the AI infrastructure underneath everything that is changing so fast, from the silicon powering the models to entirely new types of AI experiences to how work itself is being reshaped. So first, Microsoft just dropped a major custom AI chip announcement, Maia 200. We're going to talk about why associations should care about what's happening at the hardware level. Then, if you all have been around for a minute, do you remember when we did that deep dive on world models with Thomas Altman in episode 97? Well, Google's Project Genie is now actually available to the public. So we're going to revisit that and talk about what's changed. And then finally, the Wall Street Journal asked five workplace experts to predict how work changes over the next 20 years. And their answers have some big implications for associations. So starting off with Microsoft's Maia 200: this is a custom chip designed specifically for AI inference. And as a reminder, that is the process of actually running AI models and generating responses, not training them. This is Microsoft's most powerful in-house silicon to date. This matters because every major cloud company is now building its own AI chips. Google has TPUs, Amazon has Trainium, and now Microsoft has Maia 200. They're all trying to reduce their dependence on Nvidia GPUs, which have been the dominant hardware powering AI. And we've talked about this trend before on the pod. We've covered Groq with its LPU chips, which are designed exclusively for inference and can run models at stunning speeds. We also did a deep dive on the TSMC-Intel joint venture and the semiconductor landscape. Maia 200 is another chapter in that same story. So here are some key numbers. Microsoft claims roughly 30 percent better performance per dollar than the previous hardware in their fleet, and about 3x the throughput of Amazon's Trainium 3.
It's built on TSMC's most advanced three-nanometer process, with over 140 billion transistors. Maia 200 will power the AI services associations already use or are evaluating, including Microsoft 365 Copilot, Azure-hosted models and OpenAI's GPT models.

[00:07:18:09 - 00:07:25:09]
Mallory
 So, Amith, you sent me this Maia 200 announcement first. What caught your eye with this announcement from Microsoft?

[00:07:25:09 - 00:08:26:13]
Amith
 I think you said it well, Mallory, that this is the next chapter in that ongoing saga of hardware becoming both faster and less expensive. The lower we can make the cost per token of AI inference, and the faster we can make it, the more accessible it becomes, and ultimately the more demand will grow. And so everyone's trying to solve the same problem. So to me, it is an important thing to keep an eye on, because there are a lot of strategy implications here. Association leaders all the time are trying to make decisions on what they can and cannot do with AI. Some of that is guided by what they perceive as the capabilities of AI. But a lot of it is actually guided by cost. If you were to throw, let's say, Claude Opus 4.6, which currently is the most powerful model from Anthropic, at every problem, you'd spend a good bit of money, because it's a very expensive model. It's extremely smart, but you don't necessarily need Claude Opus 4.6 for everything.

[00:08:27:22 - 00:09:05:23]
Amith
 But wouldn't it be great if you could get that level of intelligence, or close to it, much faster and at dramatically lower cost? And so the idea here is that the more choice there is, and the more innovation that's happening at the hardware level, the better everything gets for everyone. And the reason I think this is such an important thing for us to keep covering here on the Sidecar Sync is that the assumptions we make in our strategy, both in terms of the capabilities of the models and in terms of what we're able to afford, shape what types of workloads we can build. Can we reprocess all of the content our association has ever written, and do it on a recurring basis as new ideas come up? Right.

[00:09:06:24 - 00:10:16:23]
Amith
 Would that be cost-effective with GPT-4 back in the day? Probably not. It wouldn't be cost-effective with Claude Opus 4.6 either, but maybe it would be cost-effective running on this new hardware that Microsoft's coming out with. Certainly in a year or two, with that trend line continuing as we project it will, inference costs will approach zero, or come very, very close to it. We're seeing it essentially unfold in front of our eyes. And that's the idea here: track this carefully, not because you care about the bits and the bytes necessarily, but because you want to understand that trend line, so that if you're planning a project for your association that might not go live for, let's say, six months, well, you're going to get probably roughly a doubling in AI capability. That would mean that the models that come out in six months that are open source and free to run, in terms of the software, might let you do things on a hardware platform like this that you otherwise would have thought you could not do, both cost- and speed-wise. So that's the reason we keep talking about it here. Yes, it's cool, it's interesting, it's exciting from a tech perspective. But the more important thing is it changes the strategy. It changes what you can do.

[00:10:16:23 - 00:10:28:18]
Mallory
 Do you think the trend of Microsoft building its own inference hardware changes the cost calculus specifically for Copilot or Azure AI services? Or is that something association leaders should just keep an eye on?

[00:10:28:18 - 00:11:09:15]
Amith
 I sure hope so, Mallory, because Copilot is pretty pricey, and Microsoft is doing a lot of hard work here. And it's not just that they're working hard; they have a hard problem. You see, Microsoft is kind of the enterprise standard for a lot of business processing for so many organizations that for them to throw Copilot in and just quickly iterate, the way Claude is or the way ChatGPT is, has a lot of downstream implications in the Microsoft 365 ecosystem. They have to think through and plan and test and do a lot of governance work that a brand-new startup can skip entirely. Right? You can spin up Claude Cowork in 12 days with zero human programming, which is what the Anthropic team did. It's amazing.

[00:11:10:21 - 00:12:26:10]
Amith
 But, you know, Microsoft can't move that fast, because they're the incumbent with this massive amount of technology debt, but also these data governance issues and other challenges. That being said, I think if they, and others in similar platform positions, like Google being the other one with Google Workspace, can really figure this out at scale, it's tremendous to have that level of AI in the environment you're already working in. The bottom line, though, is they better figure it out, because otherwise the platform will shift and people will be doing their work in Cowork and other places instead of in Office. And that would be really problematic for Microsoft. So I think this is part of Microsoft's strategy, but more broadly, Microsoft is looking ahead and saying, hey, over the next five to 10 years, what kinds of data centers do we need? What types of energy requirements will we have? And how can we do dramatically more inference at much lower cost, obviously passing on some of that cost savings in the form of lower prices that make them more competitive, but also ultimately driving better profitability, better results for Microsoft? So of course that's what they're after. Everyone's doing the same thing. Google's doing it, as you mentioned, with the TPU project; they keep driving that forward. Meta has custom silicon. And there's xAI, which is now part of SpaceX.

[00:12:27:22 - 00:13:46:11]
Amith
 Elon Musk was just recently quoted on a podcast talking about how they're not only building their own chips at xAI, as well as at SpaceX, but they also plan to launch experimental fabs of their own. So they're going down to the manufacturing level, saying, hey, even though we'd love to get 10 times more out of TSMC and Samsung and other providers, they can't deliver, and we need to move faster. It's actually kind of an interesting thing that Musk pointed out: the fab companies like TSMC have been through boom and bust cycles so many times, roughly every three to five years, where they end up with a glut of capacity and take a lot of hits. And so they're somewhat cautious, in spite of the incredible growth and the opportunity. TSMC and Samsung are both projecting their own growth, and the CapEx to fuel that growth, at what is from their perspective a very fast pace, but from the viewpoint of the demand side, a very, very slow one. And so Musk is saying, well, I guess I'm going to spin up my own fabs. He's probably one of the few people in the world who would just go do that. And he's saying, well, we'll try a small one first, and then if that works, we'll build 50 big ones. So we'll see how that works out. But there's a lot of capacity coming online from different players to meet the demand that's out there.

[00:13:47:17 - 00:14:00:04]
Mallory
 When Google and Microsoft have proprietary silicon optimized for their own platforms, do you think that will trickle down and increase switching costs for the average association to go from one to the other?

[00:14:01:08 - 00:15:29:00]
Amith
 I'm sure there is some line of thinking at some of these companies that they can provide an extra layer of benefit in some way that would cause people not only to use their platforms, the vertically integrated stack of hardware, models, et cetera, but then to optimize for those stacks. And everyone's always been trying to do this since the beginning of time, right? Even Nvidia, for example: part of the reason they continue to dominate is they own the supply chain, but the other part is that Nvidia has a software layer, and that software layer is very easy to use. They've perfected it over many, many years, and developers know how to build on top of Nvidia's stuff. It's called CUDA; it's their software platform. Running on top of TPUs, by comparison, is much harder; there are not that many engineers who know the TPU architecture. So the essence of what I'm describing is you have more and more switching costs at each layer. And sure, that's definitely part of what people are going for. I think they're going to have a hard time with it, though, because it's so easy to move from one stack to another. I'm not suggesting switching costs will be a thing of the past; I think there will always be novel ways to architect distinct customer value that cause people to want to enter into your little proprietary world. But I don't know that it's going to happen through sheer force of will. I think it'll be, over time, really nuanced. So it'll be interesting to see.

[00:15:29:00 - 00:17:38:00]
Mallory
 I want to shift gears to topic two and talk about Project Genie. So back in episode 97 of the Sidecar Sync, we did a deep dive on world models with Thomas Altman, specifically looking at Google DeepMind's Genie 3 announcement. At the time, neither Thomas nor I could actually use it; it was only available to Google's internal trusted testers. Google just launched Project Genie as an experimental prototype available to Google AI Ultra subscribers in the US. It's a web app that lets you create, explore and remix interactive worlds using text prompts and images. So this has gone from a research preview to something people can actually try. Quick refresher for anyone who missed that episode: a world model is fundamentally different from an AI image or video generator. It doesn't just create a static visual; it simulates an entire environment with physics, spatial relationships and persistence. Thomas gave a great example back in that episode. He said, imagine you're in one of these simulations. You can paint a smiley face on a wall, look away, look back, and the smiley face is still there exactly as you left it. And that persistence, the world remembering what you did, is the breakthrough. So what's new in Project Genie? There are three core features. First, you've got world sketching. You can use text prompts and images to create your world, preview it and fine-tune it before entering. Second, you've got world exploration. As you move through the world, it generates the path ahead in real time based on your actions. And third, you've got world remixing. You can take existing worlds and build on top of them, or explore a gallery of worlds others have created. Current limitations for now are that generations are capped at 60 seconds, controllability can be inconsistent, and some features from the original Genie 3 preview, like promptable events that change the world as you explore, aren't included yet.
This tracks with what Thomas told us in episode 97 about how computationally expensive this technology is. Amith, we've dabbled in world models a little bit on the podcast. What do you think is most exciting about this being available to AI Ultra subscribers?

[00:17:40:03 - 00:21:13:21]
Amith
 Well, I think getting this in the hands of everyday users. I mean, Ultra users are not like the most common Gemini subscribers, but I think it's actually an awesome package personally. I subscribe to it; I think it's well worth it. But the idea of people having access to it is exciting. You're going to see an explosion of world models coming to life and being available for consumers this year, and they're going to be exciting for people to play with to get a sense of where things are heading. The unlock that world models give us is partly what you described Thomas saying: that persistence of where you're at and understanding of the 3D world. A big part of it, though, is what you can't understand through words, or even through images and videos in the way language models have traditionally looked at these things. These world models are actually based on this idea of three dimensions plus time, where essentially they're able to see and understand physics in motion. And the reason that's important is that the real world, or in real life as we might think of it, is something the synthetic or artificial world doesn't really understand that well. So if you think about robotics, but even if you think about just simulating what might happen in the real world at a future point in time, what we have right now in language models, even the most advanced ones like Opus 4.6 and Gemini 3 Pro, is a very weak understanding of the physical world. So one of the reasons a lot of labs are so focused on this is that if you can unlock true, really high-resolution understanding of physics in motion, you're able to do a lot of other things downstream from that. One example of that is this idea of digital twins. We've talked about this on the pod a few times in the past, where a digital twin essentially is an artificial or digital replica of some thing. It could be a system or a process.
It could be a complex organism or a group of organisms. It could be a system like a factory floor. In the world of associations, a digital twin might exist for your annual conference where you have a digital twin, which is a very detailed digital replica essentially not only of the venue, that might be one part of it, but also every single detail about it, how you have rooms laid out, how the rooms are organized, which sessions are located in which areas of the building. And to be able to also have digital twins of every member who is coming to the event and the kinds of decisions they may make in real time and in 3D space so you can optimize not only the flow of traffic in the meeting, that's of course important, but also how you organize the meetings, where you put stuff and things like AV, like where you lay stuff out. And that might be a ridiculous use for this technology in terms of the sophistication of the tech relative to the use case. But I actually would argue that when you distill down a really novel technology like this into the hands of every user, you make it possible to just say, "Oh, well, you know what? I'll use the AI to do that." And then I have this ultra-realistic simulation of what's going to happen at my annual meeting, and I make better choices to create better, more engaging experiences, more inclusive experiences, thinking about things that might not be understandable without that type of tech. Another example is, well, in that digital twin, which would be brought to life through a world model, you could basically sit down in any chair in that auditorium and understand, "Oh, is the view blocked? Is this a spot that I can really be fully immersed in the engagement that I'm trying to design?" So there's a whole lot on top of that, but that's just one probably pretty straightforward example.

[00:21:14:22 - 00:21:43:07]
Amith
 Ultimately, I think what's going to happen is that these world models, which are computationally expensive and slow and pretty limited right now, will follow the same path we saw with language models over the last several years. It's going to explode, there are going to be novel innovations coming, and more and more use cases will come to life. That's what's exciting, really, coming back to your question, Mal. You're putting it in the hands of regular users. People will start doing stuff with these world models that Google would not have anticipated, and then it'll allow the thing to compound and get better.

[00:21:43:07 - 00:22:13:03]
Mallory
 I think the digital twin example of a conference is great. And in episode 97, Amith, Thomas mentioned things like having world models for simulations in high-risk professions, scenarios you may not be able to put an individual in because it would be unsafe, but that a credentialing process might not otherwise cover, all the various things that could happen while you're actually in the line of work. So I think that's another example of a way world models could be really useful for associations.

[00:22:13:03 - 00:23:04:00]
Amith
 Totally. Yeah, I think the physical world is underrated in terms of the work that associations do day-to-day. From an internal operations perspective, most of the work associations have done for generations has been information-driven, so it converts very naturally to the digital world. But much of what we do that matters to our members is in the physical world, because our members obviously live in the physical world, we all do, but at the same time, a lot of their work is interacting with physics as opposed to interacting with computing. So I think there's a lot of opportunity around this, and I just find it exciting. This could essentially be like a ChatGPT-type moment, when world models first get into the hands of consumers and people start exploring applications. I think we're maybe a little bit ahead of that; it might be later this year, maybe sometime in 2027, but we're getting close. It's exciting.

[00:23:04:00 - 00:23:23:03]
Mallory
 So you mentioned something I wanted to ask about. If this is indeed a ChatGPT launch moment, let's say we're in a November 2022 era for association leaders and world models, what would you tell them? What action do they need to take now to ensure they can reap all the benefits of this technology?

[00:23:23:03 - 00:24:16:07]
Amith
 At a minimum, I would get somebody to play with this tool, just mess around with it, and get their perspective on it. And again, things that are absolutely toys today could very well become unbelievably production-grade quality tomorrow. In a parallel universe, and probably something close to your heart, Mallory, there's the new ByteDance video generation model that's out there making waves on the internet, producing these ridiculously ultra-realistic, movie-grade videos. Granted, they're like 30 seconds right now, but with incredible audio and all that. It's the next thing; it makes Sora look like a dinosaur, comparatively speaking, and Hollywood's freaking out. And it's a toy until it really isn't. That's the same thing that's going to happen with world models, and it's what's happening right now with audio as another distinct modality that I'm really excited about this year. So I think it's just important to stay on top of this, understand it at a basic level and get some hands-on experience with it.

[00:24:17:07 - 00:24:34:09]
Mallory
 Speaking of ByteDance, we've officially crossed the threshold where I can no longer tell if a video is AI-generated or not. For a while, I was like, I can tell that that's AI-generated. But I've seen a couple and then had to look at the comments, where they're like, this is AI. And I go, oh no. So it's happened, everybody. Be safe.

[00:24:36:06 - 00:25:40:06]
Mallory
 Moving to topic three for today, we want to spend a little bit of time talking about the future of work in general. So the Wall Street Journal asked five workplace experts to predict how work changes over the next 20 years. There were five predictions, and we're going to cover each one briefly. First is AI-driven performance measurement. AI will replace annual surveys and self-reported data with real-time signals: how someone works hour to hour, who they collaborate with, and how patterns correlate with results. Imagine knowing a member services coordinator does her best work from 3:15 to 6 p.m. and aligning workflows around that. Yes, there are certainly big privacy implications there, but once AI governance foundations are in place, measurement could look entirely different. The next prediction is a shrinking workforce: fewer available workers in Europe, Japan, and the United States over the next two decades. The implications: skill-based over pedigree-based hiring, more vocational training, and companies becoming classrooms, with employers investing in employee skills rather than treating the relationship transactionally.

[00:25:41:11 - 00:26:01:16]
Mallory
 Next up is the changing role of management. Middle management could become a thing of the past as AI compresses tasks that took teams and months into minutes, but counterintuitively face-to-face connection becomes more valuable, not less. Emotional intelligence still sets leaders apart. Those who blend empathy with tech savvy shape the future.

[00:26:03:03 - 00:26:40:05]
Mallory
 Gig workers may actually have access to better AI tools than full-time employees. Corporate workers might be limited to Copilot, for example, to justify license fees, while contractors use the full suite from OpenAI, Google, or Anthropic. The prediction here is that gig workers deliver greater value than ever because they're not locked into enterprise tooling decisions. And then the last prediction: work shifts toward generalist roles, valuing connections across silos and creative problem solving. There's a reduction in strategic planning and analytics roles, and we'll see the emergence of new roles in scenario modeling and change activation.

[00:26:41:09 - 00:26:50:09]
Mallory
 I mean, some of these feel more obvious than others. None of these, I think, are too shocking, but are there any of them that are surprising to you or are they not at all surprising?

[00:26:51:21 - 00:28:00:10]
Amith
 I don't know that they're surprising. I think, Mallory, the way I look at it is that agility matters more than ever. It always has, but being able to quickly move, try things, and deploy them becomes valuable at a much greater order of magnitude than previously thought, certainly in the association market. The comment about gig workers, I think that's interesting, and it makes sense, because people who have more degrees of freedom to try different tools are going to try tools that are newer. They can say, "Oh, you know what? Nano Banana Pro from Google is a way better image generator than the corporate-approved one, and the policy hasn't been updated, so I'm not allowed to use it." But the gig worker doesn't care. They were just hired to create an outcome. They're not in your sandbox, so they can do whatever they want. So that's interesting. And it might be an opportunity, maybe something to think about experimenting with: if you want to run an AI experiment, using outside talent for some bits of it could be interesting, as a kind of isolated experiment. Of course, all sorts of challenges come from that in terms of data, how you manage those folks, and all these other things.

[00:28:01:13 - 00:28:06:06]
Amith
 And is the gig worker a human or is the gig worker an AI? Here's another question. Can you tell?

[00:28:06:06 - 00:28:07:21]
Mallory
 Mind blown.

[00:28:07:21 - 00:28:28:06]
Amith
 And I would actually suggest to you that I could probably build an AI gig worker to go on Upwork and do a lot of the tasks that are on there. I'm sure people are doing this right now. Now, this would be wildly unethical, and I wouldn't do it. But it could totally fool the average hiring manager of a gig worker into thinking that it was a real person. It would not be hard to build that with current technology.

[00:28:29:06 - 00:28:31:09]
Amith
 Wow. Yeah, it wouldn't be. You just have to think about it.

[00:28:31:09 - 00:28:35:23]
Mallory
 I mean, I believe you, but in actuality, you really think that it could fool people?

[00:28:35:23 - 00:29:15:06]
Amith
 Yeah, because a lot of the things that people do don't even require audio interaction. A lot of people will hire gig workers they've never even spoken to. And to the extent that they want to speak to someone, that can always be done through audio AI as well. So I don't think it would be very difficult, certainly for the really basic things. Not everything, obviously, but in terms of volume, when you look at the funnel of stuff on Upwork. Upwork, if you're not familiar, for our listeners, is probably the biggest gig work platform out there. You can hire people there to do marketing work, computer work, accounting work, and legal work. You can hire for pretty much any white-collar type of task.

[00:29:16:09 - 00:29:31:24]
Amith
 So if you want a contract reviewed, you can find an attorney that has expertise in that particular area of law. And if you want accounting work done, you can find CPAs and CFOs fractionally or whatever. So it's a great platform. We use it all the time for all sorts of things to augment our team at Blue Cypress. But

[00:29:33:00 - 00:29:43:19]
Amith
 my point is that many of these tasks, people say, "Okay, here's a task I want done." And different contractors will say, "Oh, I'll do that for $1,000 or whatever." And you never actually talk to the person.

[00:29:45:03 - 00:30:02:00]
Amith
 Mallory, for example, for our book, Ascend, that we publish and update every year, we have an awesome person who does the layout. I think she's human; I'm pretty sure she's human. She does an amazing job, and I have never spoken to this individual. I don't think you have either.

[00:30:02:00 - 00:30:04:03]
Mallory
 I know I haven't. I just assumed that you have.

[00:30:04:03 - 00:30:46:17]
Amith
 No, I've never spoken to her. But she charges a flat rate every time we take a final manuscript and say, "Hey, can you please make this print ready?" And she does an awesome job. We go back and forth exclusively over email and get a great result. I'm not suggesting her particular profession could be fully automated. I don't know anything about that type of work, but I assume AI could do some of it as well. What's a lot closer to my expertise, something I can speak to directly, is coding. I can pretty much guarantee you that for the 80th percentile of coding tasks people want done on platforms like Upwork, I could plug in an AI to do that. So coming back to this conversation about gig workers, human or AI,

[00:30:47:19 - 00:30:59:02]
Amith
 I think there's a lot to be thought about and explored there, both in terms of the variety of economic implications, but also opportunities for associations to explore experiments through that approach.

[00:31:00:12 - 00:31:25:01]
Amith
 The idea of specialists versus generalists that you mentioned, though, I wanted to come back to that. I think it was the last thing you threw out there, and I think it's a really important one, Mallory. For so long, we've ultra-specialized. There's been deep, deep specialization: you're not just an accountant, you're an accountant in a particular sub-subdomain. Or in medicine, there are hundreds of associations that deal with particular sub-specialties in, say, surgery.

[00:31:26:10 - 00:31:48:09]
Amith
 There's lots of reasons for that. We've been doing specialization of labor as a core human pastime for generations, for thousands of years now. I don't know that that goes away entirely, but if the AI is so good at some of those deeper, highly specialized things, do we need to be better at connecting the dots and being more of that generalist? What do you think?

[00:31:48:09 - 00:32:20:23]
Mallory
 This is the one that I was getting a little bit caught up on. I think in theory, it makes sense to be a generalist, to be able to connect different silos, to be really creative with teams that you work with. On the other hand, I'm thinking of myself being a podcaster who knows about AI for associations. That to me feels unique. That's my niche. That is my specialization, you could say, and that will help me get opportunities in the future as an individual, as opposed to just being a podcaster or someone who knows about AI. I don't know. Me at the individual level, I'm struggling with that, but I do think it makes sense.

[00:32:20:23 - 00:33:03:16]
Amith
 I think you're right. I think that's what people seek out. They want someone who plugs right into this exact puzzle piece that they're looking to fill. They don't want something close. They want something perfect. The question is, can you morph more easily into these specialist roles if you have good generalist skills? How does that work going forward? I think this is all super fascinating. There was a study recently released, I think in the last couple days, about the macroeconomic indicators actually catching up with what we've suspected would happen, which is around productivity specifically, that in the last quarter of 2025, we found that productivity actually grew as measured by economists by about 2x from the equivalent quarter in 2024,

[00:33:04:21 - 00:33:21:02]
Amith
 which is perhaps not super surprising for those of us in the world of AI, saying we're probably doubling our productivity every six months or something crazy like that. But sometimes things take time to reflect in the broader statistics. This is the diffusion problem at scale. I think that also applies here.

[00:33:22:04 - 00:33:29:15]
Amith
 Incentives drive behavior, and that's true at the system level, when you look at economies at the nation level, and it works at the individual level, obviously.

[00:33:30:20 - 00:33:46:24]
Amith
 When you think about incentive modeling, you're going to see people seek the most efficient path to get their job done. I think it's going to be an interesting blend. I think it's going to be a mixture of these things. Now, zooming back out for a second, I just want to talk for a moment about what this might mean for associations.

[00:33:48:01 - 00:35:21:02]
Amith
 The lens that we've been speaking about primarily is operational: internal to the association, how do you get your work done? How do you engage with your members? How do you produce the products and services that your members want, both the ones you've produced for generations and the new things you should be producing? This has implications for all those operational responsibilities at your association. But think a little more broadly for a moment and say, "Well, if specialization is reducing and generalization is increasing, what does that mean for the structure of our associations themselves? Will the associations that focus on ultra-specialized narrow niches cease to exist? Will they need to merge with other associations? Will the services they provide no longer be necessary?" I'm not suggesting that I think that is true. I'm asking because these are things that should be considered. Perhaps there's a broader meta or macro trend to consider, which is: are there opportunities for new associations to spring up that are more aligned with the workflow patterns and professions of the future? I have no doubt there will be, because that's what's always happened over time as new professions have sprung up; an association tends to be a fast follow to that. My question for all of you association leaders out there today who are leading niche associations in a particular industry or sub-professional area: how do you adapt? How do you look ahead and ask what happens to your niche? How can you prepare the people in your segment to be ready for that? And what does it mean for the future of your association?

[00:35:21:02 - 00:35:34:14]
Mallory
 I think that's a really important point, Amith. I also wanted to ask you, because I feel like this is probably music to your ears a little bit, but what do you think about companies becoming classrooms or we could say associations becoming classrooms for their staff?

[00:35:36:08 - 00:36:56:20]
Amith
 In the sense of a continuous learning environment, a continuous learning loop, I think that's the key to everybody being successful. We have to program ourselves to be on this continuous learning path. Unfortunately, the modern adult worker, in the United States anyway, has very little continuing education in their career. If you think about it, something on the order of single-digit hours per year of ongoing training is the norm, and that's been true for a long time. It's almost like an AI model: you learn what you learn through whatever your formal training is, through university or vocational training or whatever you've done, and that's it. And then you actually degrade, if you think about the way the curve works, because if you add 10, or even, let's be generous, 40 hours of new knowledge per year, you're probably declining in a lot of ways. Now, granted, your work itself does train you to be better at whatever it is you do. But it's a reinforcing loop of the same processes and the same tasks as opposed to new things. I think our systems are very front-end heavy in the way we think about training professionals. There's not enough emphasis on ongoing learning. Both for society's sake and for the sake of your profession, what if it was a quarter of your job, or a third, or half of your job to learn?

[00:36:57:20 - 00:37:07:24]
Amith
 And then you use AI to help you execute. So it's less about execution; you're just constantly learning. I think it's a really interesting thing to think about. What does that world look like? I think it could be deeply fulfilling for a lot of people.

[00:37:09:00 - 00:37:21:20]
Amith
 It could be a massive shockwave for people who just want to say, "Listen, I just want to learn the thing and I want to go do the thing." And a lot of people are like that. They just don't necessarily want to continually adapt. And I'm not sure what to do about that. I think that's actually probably the majority of the workforce.

[00:37:23:02 - 00:37:46:18]
Amith
 So we societally have a big challenge ahead of us to figure that out. But I don't think companies will be competitive otherwise, and I use "companies" in the broader sense of the word: that includes associations, that includes government, any organization. I do not think you will be a successful organization by any measure, financial metrics or otherwise, if you're not a learning organization in a very deep way in the next few years. I think you just fall off a cliff.

[00:37:48:11 - 00:38:19:22]
Mallory
 And to circle back to what we were talking about with generalists, if we zoom in on AI education, I've said this before on the pod, but I do think it's important to have your membership folks take the data course if, for example, they're in our AI Learning Hub, which I know many of you are; to kind of generalize that AI education. And while it is important to zero in on the day-to-day work you do as an association leader, realize there's a lot to learn from other segments of AI education that you might not think relate to your work, but they can.

[00:38:21:02 - 00:38:29:15]
Amith
 Totally. I'm reading this book right now about Grace Hopper, who's one of the most inspiring computer science pioneers. She did a lot of amazing work.

[00:38:31:04 - 00:38:44:04]
Amith
 It was interesting because early in her career, she was a professor of mathematics. And while she was a professor of mathematics, she made a habit of showing up and auditing courses in a wide variety of subjects: anthropology,

[00:38:45:06 - 00:39:46:08]
Amith
 literature, philosophy, all these other domains. And it taught her a couple of things. First, it broadened her perspective, so her approach to problem solving was different from some of her colleagues who were pure, hardcore mathematics people. Back then, there wasn't really a field called computer science; it was all math. She was a brilliant mathematician as well, but she was able to look through a little bit different lens because of that broader exposure. It also taught her that she could learn anything; it doesn't matter what the field is. And so her approach over time, through the latter parts of World War II, when she was contributing to that effort, and later on as computers kept evolving, really was different from a lot of her contemporaries, because of that broader mindset. That's one of the best things I'm learning about her in this book, that part of her background. I knew a ton about her contributions to computer science from my own years in the field, but I didn't know that aspect of her. And I think it's super relevant to what you just said. So we should all be inspired by Grace Hopper, for lots of reasons.

[00:39:46:08 - 00:39:54:06]
Mallory
 I love that. So it sounds like Amith, we need to get you into an acting class or something. We need to get you somewhere else outside of AI world, tech world.

[00:39:54:06 - 00:39:57:13]
Amith
 Yeah. As long as you don't try to make me sing or dance, that might work.

[00:39:59:04 - 00:40:06:10]
Mallory
 Well, everybody, the through line across all three topics, the infrastructure, the experiences, and the workforce are all shifting simultaneously.

[00:40:07:11 - 00:40:31:20]
Mallory
 Custom silicon is making AI cheaper and faster, world models are creating new kinds of interactive experiences, and the nature of work, who does it, how it's measured, and what skills matter, is being rewritten. For you as association leaders, the takeaway isn't that you need to understand transistor counts or world model architectures. It's that the cost of AI is coming down quickly, the capabilities are expanding into areas we haven't seen before, and the workforce your

[00:40:31:20 - 00:40:37:05]
 (Music Playing)

[00:40:47:23 - 00:41:04:22]
Mallory
 Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.

[00:41:04:22 - 00:41:08:03]
 (Music Playing)

Post by Mallory Mejias
February 24, 2026
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.