Summary:
In this thought-provoking episode of Sidecar Sync, Mallory and Amith dig into the fascinating world of semiconductors and how a historic joint venture between Intel and TSMC is reshaping the global tech landscape. They explore the underlying tensions between vertically integrated business models and specialization — a conversation that holds key lessons for association leaders navigating change in the age of AI. From reflections on the Innovation Hub Chicago event to an insightful breakdown of Llama 4’s powerful capabilities, this episode is a timely reminder that adaptability is everything — in both tech and associations.
Timestamps:
00:00 - Introduction to Sidecar Sync and This Week’s Topic
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🛠 AI Tools and Resources Mentioned in This Episode:
Llama 4 ➡ https://ai.meta.com/llama
GPT-4 ➡ https://openai.com
Claude 3.7 ➡ https://www.anthropic.com
Gemini 2.5 Pro ➡ https://deepmind.google/technologies/gemini
DeepSeek R1 ➡ https://deepseek.com
Acquired Podcast ➡ https://www.acquired.fm/episodes/tsmc
The Information (Article) ➡ https://shorturl.at/xymgG
https://www.linkedin.com/company/sidecar-global
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
⚙️ Other Resources from Sidecar:
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖
[00:00:00] Forget about what's happening to you and your association, but in your sector, what will people in your sector look for from the association? That's the open question. Figure that out, and then go build that. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place.
[00:00:26] We cut through the noise to bring you the most relevant updates with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everybody, and welcome to the Sidecar Sync, your home for content at the intersection of all things associations and semiconductors.
[00:00:52] Actually, I meant to say artificial intelligence, but today we are gonna be talking a little bit about semiconductors, and it's gonna be a lot of fun. And we'll tie it back to AI. We'll tie it back to all the things you're used to hearing on the Sidecar Sync. Uh, so we'll jump right into that momentarily.
[00:01:06] My name is Amith Nagarajan. And my name is Mallory Mejias, and we're your hosts. And before we jump in, here's a quick word from our sponsor. If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning?
[00:01:26] The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space,
[00:01:48] giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future by heading to learn.sidecar.ai. Amith, the intersection of associations and semiconductors. I never thought we'd be talking about this on the podcast, but I mean it when I say I'm very excited to have this conversation.
[00:02:14] I am too. I, I've been looking forward to this. I, I kind of geek out about this stuff and it's, it's fun to understand a little bit more about what goes on under the hood, so to speak. You know, you think about what actually powers AI and what powers really most things in the world, it's chips and how are they made, who makes them?
[00:02:31] These are questions a lot of people really don't know the answer to. And it's, it's a complicated answer. So I think that right now is a perfect time for us to spend a little bit of time unpacking that because, you know, we are in this really interesting moment in time globally. We're in this interesting moment in time in terms of like globalization or de-globalization is what I mean by that.
[00:02:51] And also with respect to, uh, what has been really the leader of the pack for many, many decades in the United States: Intel, uh, no longer being what they once were as an industry titan and really having, uh, fallen from that. Uh, and what happens there. So I think that's gonna be a really great conversation to have.
[00:03:09] Um, so I'm excited about it. But, you know, semiconductors power AI. Uh, we talk a lot about exponential growth. Uh, yesterday at the Innovation Hub in Chicago, I had the pleasure of, uh, opening the event and talking about exponentials. And you know, a big part of it is the underlying advancements that have happened in semiconductor design, but also in the manufacturing process.
[00:03:27] So it all ties together in this way. Mm-hmm. I'm excited to unpack this, but first, you mentioned the Innovation Hub in Chicago, which I haven't really talked to you about yet. So this will be the first time I'm hearing about it. You said you kicked off the event. You also said you talked to a good few people about the podcast.
[00:03:44] This podcast. Yes. So, um, how did it go? Like, what, what were the vibes at the Innovation Hub Chicago? It was awesome. I had a number of people tell me that they, uh, really enjoyed listening to the podcast, which I know makes both of our days when we hear that. And, uh, people were saying, yeah, it's just a really helpful way to stay in sync with the latest that's happening in the world, in a contextually relevant way for associations, which is, of course, our goal: to be with you on this journey.
[00:04:11] Um, and uh, that was awesome. And then as far as the, uh, attendance, we had record attendance both in DC two weeks prior and in Chicago. Wow. Uh, the, these started out as really like informal small community gatherings about three years ago. And this is, this is the third time we've done this event. Um, and the idea was, Hey, let's get together in the springtime, in both Chicago and DC just to have some small local events, uh, get the community together and share ideas around innovation.
[00:04:38] And we thought that would also be a, a good counterweight to digitalNow being on the calendar every fall. So it's an opportunity to engage, uh, our community face to face. And both in DC and Chicago, we actually had quite a number of people fly in for the event. Wow. Um, I wouldn't say it was surprising necessarily.
[00:04:55] It was just, it was, it was a delight to hear that people were making that investment in time and dollars and energy to join us, because they were saying the value was so strong. So that was awesome. Um, what I would say is, is similar to, um, how I felt about the DC event a couple weeks ago. Hmm. People are doing things, they're not just talking about things. Which, a year ago, some people were doing things of course, but most associations were kind of just, like, testing the water a little bit and maybe starting to learn a little bit, but not really doing things.
[00:05:23] Now I see associations, uh, actually running active experiments and starting to deploy technology at scale and, and seeing impact. Seeing members tell them that their quality of service is better, seeing their engagement statistics improve, seeing their financial results improve as a result of improved service, or
[00:05:40] services that they've been able to deploy, um, that would not have been possible without AI. So to me, that's really gratifying to see people do that in the community. Of course, I do realize that our community of folks that are coming together, that watch our content, um, and come to our events, are, you know, self-selecting, and are probably a little bit ahead of the general curve of the association market's adoption.
[00:06:02] Nonetheless, I think it is, you know, setting great examples for associations who, you know, may be a little bit, uh, less advanced at the moment in terms of their AI journey. Yep. Yeah, that's great to hear. My follow-up question was going to be, did you have the same takeaway, that associations are actually doing this innovative work, they're actually working with artificial intelligence? But it sounds like you did.
[00:06:22] Were there any kind of distinctions between the DC and Chicago events? It was much colder. Well, the weather. Chicago is colder. It was early April. We thought, oh, it's gonna be great, it's gonna be beautiful early spring weather in Chicago. And, uh, actually I would say on the day of the event, it was a beautiful morning.
[00:06:38] It was about 29 degrees, um, but it was crisp and the sky was blue and it wasn't super windy. So it was beautiful. And, um, we were very happy to, um, have the event hosted by the American College of Surgeons at their beautiful office, uh, on the top floor of their building. They had this unbelievable conference space
[00:06:57] overlooking the city, and you can see the lake. So it's, it's a beautiful space. Everyone seemed to enjoy that. Um, being in Chicago is always fun. I love the city. It's a great place to go. Tons of wonderful association things there. Uh, and it's just a fun place to hang out. I was, I was there a few days early, caught the Cubs home opener, uh, at Wrigley, which was super cool.
[00:07:15] And then, uh, my son was in town as well for, uh, checking out colleges. We're taking a look at some universities up there for him. So, uh, it was a great trip all around. Awesome. Awesome. Well, everybody, we won't have another round of innovation hubs until the spring of next year, but, but be on the lookout for those dates and sign up if you wanna attend.
[00:07:33] All right, our topics for today. Well, you know, we're talking semiconductors, particularly we're talking the Intel-TSMC joint venture, which is kind of shaking up the semiconductor industry. And then we will be talking about the release of Llama 4, which we're really excited about. So I've got to give you a little bit of deeper background, I think, for the Intel-TSMC joint venture.
[00:07:56] And then Amith will kind of do the same in our conversation after. So bear with me. Before we dive into this industry news, I wanna just make sure we're starting with kind of the basics and the foundations. What is a semiconductor? So a semiconductor is a material that has electrical conductivity between that of a conductor, like metal, and an insulator, like glass.
[00:08:16] These materials, usually silicon, form the foundation of modern electronics. When precisely engineered, with various elements added to them, they become the microchips that power everything from your smartphone to supercomputers. So now that we've got an understanding of semiconductors, let's talk about the business models that have been at play historically.
[00:08:36] So traditionally the chip industry has operated under two main approaches. First, there is the integrated device manufacturer or IDM model. Companies like Intel both design and manufacture their chips in their own factories, which are called fabs. They control the entire process from concept to finished product.
[00:08:56] The alternative is the fabless and foundry model, which separates these functions. So fabless companies like Nvidia, AMD, Apple, and Qualcomm focus exclusively on designing chips, while foundries, like TSMC, which is Taiwan Semiconductor Manufacturing Company, and Samsung, specialize in manufacturing those designs for others.
[00:09:18] About 20 years ago, more or less, we saw a major industry shift as companies like Nvidia chose to go fabless, focusing all their resources on design while outsourcing manufacturing to specialized foundries like TSMC. This approach reduced capital requirements dramatically, since building and maintaining modern chip fabrication plants costs tens of billions of dollars.
[00:09:39] It also allowed for faster innovation cycles and let each company focus on their core strengths. TSMC emerged as the dominant foundry, especially for cutting-edge manufacturing processes. Intel, however, maintained its traditional IDM model. While this offered certain advantages, it increasingly struggled to keep pace with the specialized expertise of TSMC's manufacturing capabilities.
[00:10:04] That context is essential for understanding what might be the most ironic twist in semiconductor history. Intel's financial crisis may be over with support from its biggest rival. So Intel and TSMC have reached a preliminary agreement to form a joint venture aimed at revitalizing Intel's struggling chip manufacturing business.
[00:10:24] Under this partnership, TSMC will take a 20% stake in Intel's US-based chip fabrication facilities, with Intel and other US investors controlling the remaining shares. This represents a seismic shift in the semiconductor landscape. Intel, which has faced declining revenues and technological setbacks, reported a staggering $13.4 billion operating deficit in its foundry division in 2024 alone.
[00:10:50] Despite efforts to transform under its IDM 2.0 strategy, Intel has struggled to compete with TSMC's advanced manufacturing capabilities. So this joint venture has huge implications for chip production capacity, and it ensures the US continues to have meaningful manufacturing capacity for advanced chips. With geopolitical tensions rising, particularly around Taiwan, where TSMC is headquartered,
[00:11:14] establishing robust American manufacturing capabilities has become a national security priority. So Amith, that's a lot of context. Uh, you sent me an email with this topic idea maybe a few days ago, and you kind of included a long blurb with it to help explain this to me, because I was not super familiar with this whole industry prior.
[00:11:34] And even with that email, I was a bit confused. So I went on a deep dive. I listened to the Acquired podcast episode on TSMC, which is fabulous. It's two and a half hours, and I know what you all might be thinking, like, oh, I don't know if I'm gonna listen to that. It's really interesting how they frame the whole story of, uh, the founder of TSMC.
[00:11:53] So, Amith, I wanna get your initial thoughts, but I kind of also want you to lay out the landscape on, like, what this industry has looked like in recent history. Well, I think you did a great job providing an overview of the, the competing models, where you think about what is the fundamental business model and what has worked historically.
[00:12:11] And if you go back in time, several decades prior to the era that you're describing, you know, which really kind of started in the nineties, when Nvidia, you know, at the time was a startup, and, uh, they started off as a fabless chip company and they never had their own plants, and many other companies, uh, followed suit.
[00:12:30] And so these companies, uh, as fabless chip designers, were kind of this first generation, um, that were able to go out there and essentially leverage outsourcing. And at the time, um, there were arguments against that saying, well, if you're not fully integrated, vertically integrated, where you have your own manufacturing, you have a weakness, you don't have an advantage there.
[00:12:50] Uh, but it allowed you to do what you described, which is the, the speed at which you could innovate. Um, when you have those separations of concerns, or really, another way to put it is, um, essentially specialties, right? Areas of focus. Um, and, you know, people have been doing various forms of specialization of labor since the beginning of time, and, uh, that's something you're seeing play out here in, in chips.
[00:13:12] So Intel, um, prior to that era when fabless manufacturing became a thing, um, was, you know, really the dominant chip manufacturer for, uh, the microprocessors specifically that powered the PC revolution and well into the internet boom. And, uh, their IDM model of both designing chips and then manufacturing chips
[00:13:32] was what AMD, their only other competitor, uh, in that particular category, followed as well. Right. They, they did the same thing up until actually the late two thousands, uh, when AMD spun off their entire manufacturing business. And we can, we can talk more about that later. Um, but the essence of, of that shift is really interesting to think about, because you had a company
[00:13:53] that had tremendous advantages in scale. They had advantages in process, right? Because Intel was known as being extraordinary in its manufacturing process. Um, and then over a period of time they started to have a decline, even though they had effectively a monopoly. Well, not exactly; there was kind of a designed duopoly, with AMD being kind of a manufacturer that had licensing access to their IP, so they could make, you know, basically these things called x86-compatible chips, which is, you could get a PC with
[00:14:22] AMD or Intel CPUs for a long time. And that was, that was really a government, uh, satisfaction thing, so that Intel wouldn't be broken up. But the essence of what I'm describing, though, is a shift in an industry, which is worth considering, because part of what was going on that enabled TSMC to rise is the maturity of the industry, um, the acceleration of the underlying technology that made it possible to manufacture chips at scale. Prior to that era,
[00:14:51] um, the chip manufacturing process was so incredibly specific to the exact chip you were building, um, that it was very, very difficult to, uh, consider, hey, I'll be able to manufacture chips for someone else, 'cause every single piece of equipment was super custom. Uh, it was very early days of that. So as that
[00:15:10] process became more and more sophisticated, and the machinery became more adaptable and more capable of producing different kinds of semiconductors, uh, it made it possible to say, hmm, well, what if we actually had a separate sector focused on the fab process, which is the manufacturing process?
[00:15:26] Um, and Intel completely missed it, right? So Intel, as strong as they were, um, really, uh, relied upon an aging architecture. Um, and while that architecture still powers the vast majority of PCs, um, they completely missed out on GPUs, which are the math processors that power graphics, but also power AI, which is a similar mathematical operation.
[00:15:50] Um, and they obviously had this fundamental issue in their business model, uh, of IDM versus going fabless. Um, so it's a really interesting thing to reflect on, both because it's relevant to the AI revolution that we're in the midst of right now, uh, but also because it represents an opportunity to reflect back on a dominant player.
[00:16:08] Not that long ago, uh, people would've thought of Intel to be a surefire blue-chip, you know, stock to invest in, and a company that you could rely on being great, uh, for, for decades to come, maybe. And now, you know, they're not being taken over by TSMC per this preliminary agreement, but it's not that far off from that.
[00:16:27] And in some ways, too, which we'll talk more about in a sec. But, um, I think associations need to pay attention, because in many respects, they're the dominant player in their space. Mm-hmm. They have a business model that's existed for decades or in some cases over a century. And they also live in a world that is rapidly changing, and so do they want to be the Intel, or do they want to potentially adapt and find a better model because the environment's changed?
[00:16:53] That's the main reason, actually, I wanted to talk about this. I think the technology's interesting. I think the sector is something we could all benefit from learning more about. Most people know very little about semiconductors, know very little about all this stuff, so that'd be an interesting kind of educational topic. But to me it's more about, hey, wait a second.
[00:17:09] There's a company that even 10 years ago, few people would've predicted the decline of Intel to this extent. And now here we are. Yeah. Oh man. I think this stuff is so fascinating, because it's easy to look back with hindsight and say, well, yeah, it makes sense that we would break this out, right, into chip design and chip manufacturing.
[00:17:27] It doesn't seem that novel from our perspective, but listening to that Acquired podcast that I mentioned, you realize at that time when TSMC was founded, they were creating a solution to a problem that didn't really exist, because all of the chip companies were designing and manufacturing their own chips.
[00:17:42] So they were going out to startups and saying, hey, now you can form your company, because we can help manufacture these chips. So it wasn't a given. Like, this seemed very controversial at that time to have this idea. Um, so I just wanted to play that up a little bit. So, Amith, would you say Intel, by going into this joint venture, preliminary joint venture, is essentially admitting defeat on the IDM model?
[00:18:05] Like, that is no more? I'm not entirely sure. I think that part of the thinking is Intel has a giant amount of manufacturing capacity in the United States. It's not necessarily all at the cutting edge, but some of it's close, and with that footprint, um, the idea is that TSMC can come in and help Intel
[00:18:24] build better chips, basically. So TSMC, as part of this deal, I think, is gonna own a chunk of the company, or it's, it's some relationship along those lines. It's a little bit murky based on the latest news that's out there. Uh, but ultimately, what they're contributing isn't so much capital as it is process expertise.
[00:18:40] So we talk about Hamilton Helmer's Seven Powers, and we talk about the seven powers, which are these routes to enduring differential returns, or really a strategic moat, as some people would call it. Um, one of them is called process power, and TSMC is an extraordinary example of process power.
[00:18:56] Um, probably best illustrated this way: if you took all of TSMC's equipment, and even all the documentation they have about how to run the equipment and how to run their business, I don't think it would be possible for other people to go into their plants and actually execute the same playbook, similar to, like, the Toyota manufacturing process.
[00:19:12] Um, so TSMC has developed this culture, this technology stack. Uh, they have, you know, an incredible concentration of PhD-level scientists who help them continually optimize what they do. So they're gonna go help Intel, in this agreement, make their manufacturing a lot better. Um, so now, will that mean that ultimately Intel's fabs become TSMC fabs, effectively?
[00:19:36] Maybe. And then is Intel just going to focus on competing more effectively with ARM, uh, on the CPU front, or competing in the GPU race, finally, in some meaningful way? Maybe. Uh, I think there's still a lot of questions to be had, but I do think it's exciting, um, from an American perspective. Whatever the ownership structure is,
[00:19:55] the revitalization of American semiconductor manufacturing capacity and being at the cutting edge is important, because as a country we're behind at this point. Like, on the manufacturing side, not on the design side, where we're still world-leading at the moment. But, uh, on the manufacturing side, we're clearly behind.
[00:20:10] The closest in terms of fabs behind TSMC would probably be Samsung, uh, which is in Korea. Uh, GlobalFoundries, which used to be part of AMD, has some advanced manufacturing capability, but not nearly at the level or scale of what, uh, TSMC offers. So, um, I think it's a good thing for Intel, because, you know, the alternative is to essentially fire-sale their manufacturing capabilities.
[00:20:31] And that's, I don't think that's good on, on a lot of levels. You were talking about this idea of specializing, Amith, this idea of expertise, and in the podcast I keep mentioning, the Acquired podcast, they, uh, had a phrase on there that I wrote down. It's something you've probably heard before, but the phrase was, you can only do one thing
[00:20:48] well. And that really stood out to me. I started thinking about associations. In what way do you think associations can learn from this joint venture, from the idea of specializing, from the idea of expertise? Do you feel like there's one area that associations do incredibly well that they should kind of funnel all their resources into?
[00:21:06] Or do you think the current model works? I think, let, let me come back to that in just a second. I just want to add one thing to what you said earlier. Mm-hmm. On the Acquired pod, for those of you that haven't actually had an opportunity to take in one of their pods, I would highly recommend it. The TSMC episode is great.
[00:21:21] They have a series on Nvidia, they have a recent episode on Microsoft, uh, and they also have episodes on a variety of other companies, like Costco and Hermes, and most recently on Rolex. So, um, if you're interested in kind of like a very well-told story about the history of a particular business that you might be a fan of, or that you are a consumer of,
[00:21:41] um, I, I would definitely recommend it. I, I, I absorb their pods, you know, in 15-to-30-minute chunks, usually while I'm out and about or in a, in a car. I don't typically listen to them, you know, end to end, 'cause they, they tend to be very, very long. Actually, the two and a half hours you mentioned, Mallory, on TSMC is one of their shorter podcasts.
[00:21:57] Okay. Um, the one I just listened to on Rolex, I think, is like five hours or something. It took me like a month to get through it. Okay. Because it's not a company I'm, like, personally super interested in. Whereas the Microsoft one, I think, was like four hours, and I listened to it in, you know, in two chunks.
[00:22:12] Yeah. 'Cause that's something I'm super interested in. So, in any event, um, for listeners, if you want to go deep on business history, uh, of a particular company or sector, highly recommend, uh, Ben and David at the Acquired podcast. They do fantastic work. Um, but one of the things they talk about that's similar to the way you described it is, in a lot of their episodes, they talk about, hey, what
[00:22:29] makes the company's product the best in the category. And, uh, one way they describe it is, what makes the beer taste better, right? Like, so if you're a beer producer, if you're a brewery, um, only do the things that actually make your beer taste better. So if it doesn't actually make your beer taste better, like
[00:22:45] owning your own fields of barley, that probably doesn't make your beer taste better than, than sourcing those grains from the best producer that's regional to you. Um, can you get an advantage by vertically integrating, or by owning more of the process? In the case of Rolex, um, Rolex actually manufactures their own steel, because they have a specification and a proprietary approach to it that they believe is the best way in the world to do that particular type of steel.
[00:23:11] Um, is that necessary? Maybe, maybe not, but, uh, in their particular case, uh, maybe that works. But vertical integration, as successful as it can be in specific instances, if you study industries more broadly, tends to not be the norm. You tend to see specialization of labor, a.k.a. a highly distributed supply chain.
[00:23:30] So if you look at, for example, auto manufacturing, or any kind of complex product, you tend to see this whole ecosystem of companies that highly, highly specialize in very specific things. Like in auto manufacturing, you might have a company that just specializes in wiring harnesses, for example. Mm-hmm.
[00:23:46] That's all they do. Um, and, and on and on and on there, for every single thing you, you break down. So the question I would ask of the association world is, does the association need to be a fully integrated delivery mechanism to deliver value through engagement? Right? If you think about why people are part of the association, they come for value
[00:24:06] to be delivered, right? And that's true for probably any, any business. In the case of an association, what's the value? Sometimes it's learning, sometimes it's content. A lot of times it's connecting with other people. It might be professional advancement, because the association helps them find new career opportunities.
[00:24:22] Those are some examples of value creation by the association. Um, but the association tends to have, like, this fully, uh, integrated stack of services, where they run events and they provide member services and they run technology. They do all this other stuff, and they tend to have, uh, a radical distribution of different specializations there.
[00:24:43] We often say there's an association for everything. Um, and so the question in my mind is, does it make sense for all associations to run their own technology stack? Does it make sense for all associations to manage their own events? Does it make sense for all associations to produce all their own educational content?
[00:24:59] In many cases it might, and in some cases it might not. Would it make sense for associations to outsource some of those things? Would it make sense for associations, maybe, to join forces, if they're in adjacent verticals, with other associations? Which of course is not a novel concept; that's happened for a long time.
[00:25:15] Um, but oftentimes you see people kind of repeating the same playbook and everyone's manufacturing their own steel. And so I would ask you, you know, do you need to manufacture your own steel? Do you need to have every single thing vertically integrated, uh, in your association? And a lot of times, and that would be like an extreme way to describe it, associations obviously use a wide variety of vendors and all that.
[00:25:35] But, um, my question is really just an open one. I don't have an answer for it necessarily. It's just, with what's happening with AI, does your current model for creating value for the end consumer make sense? Right? And are there other forms of value that you need to be able to create that you're not seeing, because you're so focused on making the steel, because you're so focused on that lower-level work of producing the event or creating the courseware or whatever it is?
[00:26:02] Does that make you somewhat blind to the direction of your sector, where your profession is heading, what people are actually in need of? Uh, and are you able to respond to market demand by creating the value that they want three years from now in the world of AI? That's really the question I have. And if you can anticipate those changes, um, and try to be where the curve is heading, right, in terms of that future
[00:26:24] creation of value, that offers opportunities for players who are able to think that way. I don't know if that means that their whole business model needs to change, but it, it could mean that. And the era of powerful AI that we're in now, that we're just starting in, means that there's more options.
[00:26:41] Hmm. Amith, actually, what you just said at the end there, now that we're in this age, this era of AI, I wonder if it is possible to do more than one thing well. I mean, my gut instinct is, focus on your one thing and execute like heck on that. But maybe with the advent of AI, maybe you can kind of juggle all these things.
[00:26:59] What do you think about that? I think that it's a great question, right? I look at Blue Cypress, our family of companies, and all the different things that we do across there. I think the only way that works, at least in our case, is that we have each business not isolated, but at least somewhat separated and defined, where Sidecar is its own thing.
[00:27:17] And Sidecar has its mission to educate a million people in the association sector on AI by the end of the decade. That's very clear, and everything Sidecar is doing is focused on delivering on that mission. Similarly, the folks on the rasa.io platform are focused on delivering personalization at scale for all associations.
[00:27:33] Right. And they're not thinking that much about AI education. They know about it. Mm-hmm. But there's a degree of separation that allows for separation of focus as well. It seems to be working for us pretty well. But we are ultimately one integrated company, on the other hand, because there's obviously common ownership, and these are related businesses, as we talk about a lot.
[00:27:53] But they're separated enough, so that seems to work in our case. I know that in a lot of other cases you have companies that have multiple different products and divisions and services, and it's really hard to see that work. I think associations do have a singular focus, in the sense that they're there to create value in the lives of their members.
[00:28:14] That's ultimately what they're trying to do. The question is, is there something disjointed under the hood? Do you need multiple fundamentally different skills to execute on the future of that vision? And what does that mean in terms of the best way to source those materials?
[00:28:30] If you think of it as a manufacturing supply chain: what's the best place to get my event production, or my event design, or my planning? And I'm picking on that one because it's top of mind, since we just ran an event. But for that particular critical process, does that make sense, or does it make more sense to do it a different way?
[00:28:47] Right. I think events are a critically important part of the association formula. That's a form of engagement that's likely to be durable for centuries to come, because we're all gonna want to get together. That's just a thing that people want to do. So I think associations are well positioned to deliver there.
[00:29:03] But I guess ultimately it's more of a question than anything else that I have in my mind. When you see something like this happen, when companies that you grew up with, that were the dominant behemoth blue-chip companies that people looked to, that people even said, hey, we're gonna base our management methodology on,
[00:29:20] like the whole OKR system people talk about, Objectives and Key Results. That's an Andy Grove thing from decades ago that he implemented when he was at Intel, actually before he was CEO. And it's still a fantastic methodology, by the way. We use it ourselves, and I highly recommend it to anyone who's looking for a better way to frame their priorities and execute on them.
[00:29:40] It doesn't mean that OKRs are bad all of a sudden. It means that something happened, right? This company that was at this unbelievable pinnacle lost its way, perhaps, or maybe the model no longer made sense. So that to me is the big opportunity, to ask that question openly. Mm-hmm. So maybe as an association leader, consider
[00:30:00] whether your organization might be the Intel, and whether there's something like a TSMC out there that's doing some element of your business in ways you couldn't imagine, that you could partner with. Exactly right. I think it's just being open-minded about it, because Intel identified, deep in its culture, as a manufacturing company as much as a semiconductor design firm, and still does.
[00:30:22] Right. They deeply identified as a manufacturing company, and a big, big part of their culture was the pride in that. And I think associations, all of us, right? We have the roots of where we came from. That isn't to say that's bad, necessarily. It might be the thing that you focus on, and you say, you know what?
[00:30:39] That's actually what's gonna differentiate us, because we're so good at producing world-class events. We do indeed do it better than anyone else, and by doing that, our beer does taste better, right? Mm-hmm. So that's what you have to figure out: what is it that people are wanting from you? And that's part of the key question I'm asking us all to ask: what will people want from you in two years' time?
[00:31:01] Forget about ten. Two years from now, given what's happening with AI, not just in your association but in your sector, what will people in your sector look for from the association? That's the open question. Figure that out, then go build that, and then determine whether or not building that means retooling the way you're structured.
[00:31:25] It might, it might not. You might be perfectly positioned for it, but I suspect there's a good number of us in this world of associations that might find it does make sense to reconsider existing business models. Hmm. That's a great place to leave this and move on to topic two.
[00:31:43] We're gonna be talking about the release of Llama 4, which happened on April 5th of this year, just recently. This release marks a significant step forward in Meta's open-source LLM ecosystem, with models designed to push the boundaries of multimodal and multilingual capability. So I wanna talk about the family of models in Llama 4, and I've got to
[00:32:04] give them a shout-out: the names are much better than what we typically see with these model-family releases. So we've got Llama 4 Scout, a compact model with 17 billion active parameters and a total of 109 billion parameters. It supports an industry-leading 10-million-token context window, which we'll talk about in a bit, making it ideal for tasks like multi-document summarization and reasoning over large data sets.
[00:32:29] We've also got Llama 4 Maverick, which also has 17 billion active parameters, but a total of 400 billion parameters. It excels in reasoning, coding tasks, and long-context processing. And then, aptly named, we've got Llama 4 Behemoth, which is still in development: a massive model with 288 billion active parameters and nearly 2 trillion total parameters.
[00:32:54] It is positioned as one of the most powerful AI models globally, but has not yet been released. Llama 4 models are natively multimodal, capable of processing text, image, and video inputs while generating outputs. And the models use a mixture-of-experts framework, or MoE, which enhances efficiency by activating only the necessary experts for specific tasks instead of the entire model.
[00:33:19] This design enables high performance while reducing computational costs. So when I was looking up the release of Llama 4, I watched a few YouTube videos, and it seemed on my end like some of the reception to the release was mixed. So I wanna cover a few of those items. Due to European Union data privacy regulations, Llama 4 cannot be used or distributed within the European Union, which I think is interesting.
[00:33:43] Also, reports suggest that Llama 4's release was accelerated due to competition from Chinese AI lab DeepSeek's cost-efficient models, which outperformed earlier Llama models, with some saying that perhaps this release was a bit rushed or panicked. There's also some commentary on the open-source side, as the restrictive open-weights license has drawn criticism.
[00:34:03] For example, large-scale commercial users require special approval from Meta. These limitations contrast with more permissive licensing models offered by competitors like DeepSeek. And some developers were disappointed by initial performance issues reported through APIs. Additionally, discrepancies between benchmark scores achieved by an experimental version of Llama 4 and the publicly released model sparked criticism of Meta's transparency.
[00:34:29] So that was at least what I was seeing, Amith. I know that you're excited about this release and very impressed by it. So what are your initial thoughts? Well, first of all, I agree with you with regards to the names. It's pretty easy to remember Scout, Maverick, and Behemoth. I particularly love Behemoth.
[00:34:44] I love it. I think it's fun. And they're very interesting. A couple things I'd point out to add to what you said. One is that they are natively multimodal. Llama 4 was trained on a combination of text, image, video, audio, et cetera. And so by fusing together the training content that the models are trained on, you can really see the model
[00:35:08] reason across modalities. We talked about this recently in the context of GPT-4o's new omnimodal image generation, and how that image generation is in the context of the whole conversation you have with ChatGPT, in that particular conversation. And that's because it's a single model that's doing both text output and image output.
[00:35:28] Now, at the moment, the two released Llama 4 sizes, Scout and Maverick, are text-output only. But I believe that's a temporary thing, because they are truly omnimodal models. My suspicion is that with the Behemoth release, which will probably be inference-only on Azure or other large-scale clouds since it's such a large model, they'll have a true omnimodal model in terms of output, and that'll be able to compete with the latest from OpenAI on the image front.
[00:35:58] But what's interesting is that the level of understanding the model seems to gain from being trained on this fusion of text and image and video is significantly better in a lot of other ways that are not related to the question of whether it produces images as an output. So that's one thing.
[00:36:14] Another thing is, we've been talking about mixture-of-experts, or MoE, architectures for quite some time. The concept is not new, but the technology continues to improve considerably. DeepSeek actually has done some tremendous work in improving MoE models, and they have open-sourced all of their stuff; all their research is out there.
[00:36:36] Llama 4 is also an MoE model family. So I wanna point out something that you mentioned earlier. For example, with Scout, you're talking about a model that has over a hundred billion parameters in total, but 17 billion active parameters: a fairly small percentage of the total parameters.
[00:36:54] So what does that actually mean? It means that for each token the model is looking at, only 17 billion parameters are active at that moment in time, for that token. For the next token, it might be different experts within the model. So what's happening there, essentially, is you have this high degree of specialization within the model that allows for really, really high skill in different categories.
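The per-token expert routing described here can be sketched in a few lines of Python. This is a toy illustration of top-k gating under stated assumptions, not Llama 4's actual implementation: the expert count, the router function, and the top-k value are all made up for the example.

```python
import math
from typing import Callable, List

def make_expert(scale: float) -> Callable[[float], float]:
    # Stand-in for a feed-forward sub-network ("expert"); real experts are
    # full neural-network layers with billions of parameters.
    return lambda x: x * scale

# Four tiny experts; a real MoE layer has many more.
experts: List[Callable[[float], float]] = [make_expert(s) for s in (0.5, 1.0, 2.0, 4.0)]

def router_scores(token: float) -> List[float]:
    # Stand-in for a learned gating network: score each expert for this token.
    return [-abs(token - i) for i in range(len(experts))]

def moe_forward(token: float, top_k: int = 2) -> float:
    scores = router_scores(token)
    # Only the top-k experts are activated for this token; the rest are
    # skipped entirely, which is why active parameters << total parameters.
    chosen = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    # Softmax over the chosen scores gives the mixing weights.
    exp_scores = [math.exp(scores[i]) for i in chosen]
    z = sum(exp_scores)
    return sum(experts[i](token) * (w / z) for i, w in zip(chosen, exp_scores))
```

The next token may route to a different pair of experts, so across a prompt the whole model gets exercised while each individual token only pays for a fraction of it.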
[00:37:17] And the model is smart enough to switch around dynamically within a given prompt to use multiple experts. What this means is it's more efficient. So it's a 109-billion-parameter model, which is roughly the same size as Llama 3.3 70B, which is what we talked about in December when it came out.
[00:37:34] This is a tiny bit larger, but it's actually gonna be way more efficient than that, and smarter, because of the MoE architecture and a number of other things. So that's super, super interesting. There's a lot of noise being made about the 10-million-token context window. Yeah. And I think appropriately so. There's a lot more testing that needs to be done on these models in the public sphere.
[00:37:54] Now that these models are out, the question will be, how good is Llama 4 Scout at doing things with that many tokens? Yeah. There's a thing called the needle-in-the-haystack problem, which is: can you find something very specific in a very, very large context? And just as a reference point, a token is roughly equivalent to a word.
[00:38:15] Not exactly, but for our purposes that's a close enough approximation. So it's about 10 million words, which is a lot. Right. That's equivalent to very large code repositories. It's equivalent to, I think, somewhere in the order of magnitude of a couple hundred books. So it's a lot of content.
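The back-of-envelope comparison here is easy to check with quick arithmetic. The words-per-token ratio and the words-per-book figure below are rough assumptions, not measurements; English text typically runs a bit under one word per token.

```python
# Rough sanity check on the 10-million-token context window.
# Assumptions: ~0.75 words per token for typical English text, and
# ~75,000 words for an average-length book. Both are ballpark figures.

CONTEXT_TOKENS = 10_000_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_BOOK = 75_000

approx_words = CONTEXT_TOKENS * WORDS_PER_TOKEN   # ~7.5 million words
approx_books = approx_words / WORDS_PER_BOOK      # ~100 books

print(f"~{approx_words / 1e6:.1f}M words, roughly {approx_books:.0f} books")
```

Whether you count a token as a full word or three-quarters of one, the window lands in the range of one to a few hundred books, consistent with the estimate in the conversation.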
[00:38:32] So what can you do with that? Well, in theory, the model has enough breadth of insight to look at a very large corpus of content, let's just say an association's entire knowledge base, or a very large chunk of it, and to be able to inference across that. In theory, what that would allow you to do is have an even more performant knowledge assistant, an even more capable database analyst, and so forth.
[00:38:55] So larger context windows should, in theory, give you better intelligence at the aggregate level. The reason I hedge that statement a little bit is because with models like Gemini, which have had long context windows for a while, performance has been good in certain cases, but it doesn't necessarily bump up the model's overall understanding of the content, because there are still some limitations in the underlying architecture.
[00:39:17] So it's valuable, but it's not necessarily the silver bullet we've been looking for that says, hey, you can have unlimited content. Also, if you did take 10 million tokens and drop 'em into Scout, the time it would take and the cost for inferencing would be unsustainable.
[00:39:34] So if you do use the full context window, or anything close to that, you're talking about processing a very large amount of content: it's gonna slow down, and it's gonna be way more expensive. So don't necessarily take that as being the ultimate solution to LLM shortcomings. It's not that, but it's still exciting.
[00:39:51] Nonetheless, it's a valuable tool. So those are some of my initial thoughts. I think it's exactly what we expected from Meta: right about a year ago they released Llama 3, and now it's time for Llama 4. Notable is that they do not have a reasoning model.
[00:40:07] So there is no equivalent to DeepSeek R1, Claude 3.7's extended thinking mode, or OpenAI's o1 and o3 models. And for those that haven't heard us talk about reasoning models, the quick synopsis is that these are models that are smart enough to realize they need to break down a problem into chunks,
[00:40:26] work through those complex problems step by step, and even check their work before producing an answer. But they do take longer to produce the answer than a model that's just taking a quick shot at answering your question as fast as it can. Llama 4's release was accompanied by a little bit of a cryptic message from one of their senior execs saying that reasoning is on the way.
[00:40:47] That isn't to say it's gonna be part of Behemoth necessarily, but it cannot be lost on the folks at Meta that that's a critical part of a frontier LLM offering at this point. So I find it exciting. I think you're gonna see a DeepSeek R2 very soon, maybe this month. Hmm.
[00:41:04] And DeepSeek R2 is probably going to leap past Llama 4 in some ways, right? And then there's OpenAI and Google. Google released Gemini 2.5 Pro, which is no slouch; it's a very powerful model. It's actually free for consumers to use through their UI. It's not free at the API level, but it's free for consumers.
[00:41:21] And then of course there's the folks at Anthropic, who make the Claude models. So this is just a crazy competitive landscape, and this is just the latest bit to keep track of. Mm-hmm. What it should tell associations is the same thing we've been saying for quite some time now: you have choice, you have optionality.
[00:41:39] You don't have to go down a singular path saying, oh, well, I've heard of ChatGPT, therefore I'm going to use OpenAI for everything. That very well may be the right solution for certain use cases, but there's so much choice. Many association leaders talk to me about privacy. They talk to me about data security, and they say, hey, I don't want to take all of my data and drop it into ChatGPT.
[00:41:59] I agree with that. You should be very cautious about whoever you share your data with, whether they're an AI vendor or a traditional SaaS vendor, or anything else for that matter. In the context of OpenAI or any other major lab, do you really want to have all of your data residing in their ecosystem?
[00:42:17] Now, their terms of use, their license with you, assuming you're a paying customer, does indicate that they cannot legally use your data for training future models. But that's just an agreement. Does that mean the agreement will be abided by? Some people believe it, some people do not.
[00:42:35] So that's up to you to determine. The reason I raise all of that stuff about privacy and data security is that if you are talking about open source, you have your choice of inference providers. So you can run Llama 4 on Groq, with a Q, G-R-O-Q. You can run Llama 4 on Azure. You can run it on AWS. You can run it
[00:42:54] in tons of different places; you could even set up your own infrastructure to run Llama 4. It's an open-source model; you can run it anywhere. And that's true for all models that have open source and open weights: you can run them wherever you'd like. Why is that important?
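The optionality point can be made concrete: many inference providers expose OpenAI-style chat-completion endpoints, so switching where an open-weight model runs can be as small as changing a base URL. The provider URLs and the model ID below are illustrative assumptions, not verified endpoints; check each provider's documentation for the real values. This sketch only builds the request so the shape is visible; it doesn't send anything.

```python
# Sketch: the application code stays identical regardless of who hosts the
# open-weight model; only the endpoint configuration changes.
# URLs and model IDs are hypothetical placeholders.

PROVIDERS = {
    "groq":        {"base_url": "https://api.groq.com/openai/v1",   "model": "llama-4-scout"},
    "azure":       {"base_url": "https://example.azure.com/openai", "model": "llama-4-scout"},
    "self_hosted": {"base_url": "http://localhost:8000/v1",         "model": "llama-4-scout"},
}

def chat_request(provider: str, prompt: str) -> dict:
    # Build an OpenAI-style chat-completions payload for the chosen provider.
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Because only the `PROVIDERS` table differs between hosts, an organization can move its inference from one vendor to another, or bring it in-house, without rewriting application logic.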
[00:43:07] Well, think about everything in terms of economics and incentives. Why would you be concerned about OpenAI, Google, or even Meta itself if you use their service, or DeepSeek for that matter, having your data, even if the agreement said they're not allowed to use your data for training? Why would you consider that a potential risk?
[00:43:29] Well, potentially there's an incentive to use your data, and everyone's data, to make new models, an incentive that's far larger than the downside risk of using that data, even violating agreements in some cases. Again, to be clear, I'm not suggesting that I believe anyone out there specifically is doing that.
[00:43:47] The incentive model is such that if you're the model developer and you are also the inference provider, because there's no separation of concerns, in theory there could be value in inappropriately using some of that data. That is something that could be argued. In comparison, if you're an inference-only provider, someone like Groq
[00:44:06] or some of the other providers out there, even some of the cloud providers that might have relationships with the model developers but aren't themselves the model developers, there's a separation of concerns there that may give you more comfort. So, for example, the folks at Groq, who we've talked about, do not
[00:44:23] train models. They do not build models. They don't have a horse in the race. They work with Meta, they work with Mistral, they work with a bunch of other model providers. They even have one of OpenAI's actual open models running on their cloud as well. So they don't really have an economic incentive to misappropriate your data at all.
[00:44:42] So there's that piece of it as well that I encourage people to be thinking about. So open source means optionality, it means cost reduction, but it also means better security. Mm-hmm. I was smiling there because as you were talking about model providers versus inference providers, I was thinking of Intel and TSMC and, you know, expertise, and how we're talking about the same thing in a circular way.
[00:45:04] Amith, you said the idea here, which we go back to all the time, is that associations have choice. I know we've mentioned on the pod before that several of the products that fall within the Blue Cypress family of companies use Llama models. I don't know if they still do. So what is your kind of personal take, or your business take, on Llama 4?
[00:45:23] Is that the direction you wanna continue going? Are you impressed with what you've seen, or are you a bit skeptical? Well, we haven't tested it in any significant way yet. Okay. We've played with Llama 4 already, but we haven't plugged it into Skip or Betty or any other products in the family in any meaningful way.
[00:45:38] So that'll come shortly. We are completely model agnostic. We're also inference-provider agnostic. That's the way we architect everything we do, again, to allow our clients, who are these associations, to have optionality. So with any of our products, you can choose different models on whichever inference provider you like.
[00:45:58] And that's a really powerful thing, to know you have that available. Now, different models have different capabilities. So if you were to say, hey, I want to use Llama 3.0 instead of 3.3 for some reason, some things in certain products aren't gonna work as well, or might not work at all. Right? So, for example, certain features within our data analyst AI, Skip, do not work with models that are weaker than GPT-4o or Llama 3.3.
[00:46:21] It just wasn't possible to deliver the current level of capability we have in Skip until those models got as smart as they've gotten. So we tend to look at it in terms of model class. We'd say, okay, Llama 3.3 is roughly equivalent to GPT-4o. Llama 4 is a step above that. And whatever comes next, like Claude 3.7, is probably a step above that as well.
[00:46:44] Mm-hmm. So we look at it more in terms of the power level of a model. Okay. And within that, we're constantly swapping models in and out. The way we've designed all of our software architectures is to be completely model agnostic, and you can literally plug and play these things a number of different ways.
[00:46:59] And sometimes our products use kind of a medley of models, where we use Llama 3.3 for certain things, but then certain tasks are way more complicated, so we use a reasoning model like an o3-mini or a DeepSeek R1. And then we tend to run our inference either within Azure or with other providers.
[00:47:21] We have these separations built into the way we've designed stuff, which I think is really important to have in place, because there's so much change happening right now that you cannot predict which of these providers is necessarily gonna be the best. I think there's an advantage to mainstream models from someone like Meta, whether
[00:47:37] that's Llama 4 itself or one of the many fine-tunes that will happen. When an open-source model is released, very quickly, especially for the larger ones, there will be dozens if not hundreds of fine-tuned versions that have been adjusted in different ways to make them better at certain tasks, like coding or whatever the task might be.
[00:47:56] So using those mainstream models sometimes is beneficial, because there's so much money and so much development going into them, and that can be helpful. But largely these things are commodities. It's more about what power level, if you will, these things are at, and you can plug and play 'em.
[00:48:14] So different models have different advantages, of course, but if you're getting to the point where these things are all really, really good, that optionality leads to radically lower cost, which is what we're seeing. The last question I wanna ask you, Amith, because I know you've got a lot of knowledge here:
[00:48:30] There was some criticism of its open-weights license. Are these models truly open source still? Can you kind of explain what that means? Yeah, well, Meta is essentially saying that people who compete with them can't use the model without permission. So what they're essentially saying is, hey, if you have more than, I think it's 750 million monthly active users was the specific language,
[00:48:51] if I recall correctly. That's certainly not us. Yeah. You know, maybe one day, but that's not us today, anywhere close to that. And that's not any association I know of. Okay. It's probably people like Snapchat, people like Amazon, people like Microsoft. So it's their competitors at that scale that are barred from having access.
[00:49:11] And it doesn't actually say they can't use it; it just says Meta has to give permission. Okay. Which they probably would not. But I also would think that those companies probably wouldn't wanna use Meta's stuff. So I think it is less of a big deal than some people are making it. I don't think it's unreasonable for a company to have some degree of restrictions on their open source.
[00:49:30] I don't think all open source needs to be a do-whatever-you-want, no-matter-what kind of license. There are lots of different flavors of open-source licenses out there, some that are actually far more restrictive than this, and some that are far, far more permissive.
[00:49:43] So I think it's totally fine for the association use case. Yep. I don't think you all have 750 million members, but you might, especially if we reinvent that business model. Everybody, thank you for tuning into today's episode. We will see you all next week. Thanks for tuning into Sidecar Sync this week.
[00:50:05] Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world.
[00:50:26] We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.