Summary:
This week on the Sidecar Sync, we take a deep dive into the Paris AI Action Summit, exploring global AI trends, governance challenges, and key takeaways from this high-profile event. With 60 nations signing the final declaration—but the U.S. and U.K. notably opting out—what does this mean for AI’s future? We also discuss the Reuters legal victory against Ross Intelligence, a major copyright case with potential implications for AI training and content use. Plus, we reflect on how associations can stay competitive in the rapidly evolving AI landscape. Don’t miss this packed episode filled with insights, debate, and strategies for AI-driven success!
Timestamps:
00:00 - Welcome to Sidecar Sync
02:10 - Key Highlights from the Paris AI Summit
08:38 - Global AI Governance: Who Signed & Who Opted Out?
13:10 - Is the U.S. Really Dominating AI?
17:48 - AI Safety, Containment, and the Role of Good AI
26:12 - Deceptive Misalignment: A Growing AI Risk?
29:27 - Reuters' Legal Win: What It Means for AI & Copyright
37:30 - How Associations Can Protect Their AI-Used Content
43:02 - The 95% vs. 100% Content Challenge
50:24 - Closing
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
Attend the Blue Cypress Innovation Hub in DC/Chicago:
https://bluecypress.io/innovation-hub-dc
https://bluecypress.io/innovation-hub-chicago
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🛠 AI Tools and Resources Mentioned in This Episode:
DeepSeek ➡ https://deepseek.com
Llama 3 ➡ https://ai.meta.com/llama/
OpenAI GPT Models ➡ https://openai.com
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
⚙️ Other Resources from Sidecar:
- Sidecar Blog
- Sidecar Community
- digitalNow Conference
- Upcoming Webinars and Events
- Association AI Mastermind Group
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
Read the Transcript
Mallory: 0:00
This is your time to be creative, to stand on the shoulders of AI giants out there and, I've got to say it, to stop thinking about your crusty old AMS. If you know, you know.
Amith: 0:15
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings and welcome to the Sidecar Sync, your home for content at the intersection of AI and the world of associations. My name is Amith Nagarajan.
Mallory: 0:55
And my name is Mallory Mejias.
Amith: 0:57
And we are your hosts, and we have two super exciting topics today at that intersection of AI and associations. Before we hear all about those two topics in this week's podcast, let's take a moment to hear from our sponsor.
Mallory: 1:12
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future by heading to learn.sidecar.ai. Amith, I was going to say hello, but I should probably say bonjour, because we're talking about the Paris AI Action Summit today. How are you doing?
Amith: 2:10
I'm doing great. I should have picked up a croissant or something on the way here. That would have been good.
Mallory: 2:16
Do you speak any French?
Amith: 2:18
Not at all. One of my kids has been in, or was in, French immersion school all the way through eighth grade. She's in high school now, so not doing that anymore. But yeah, I have zero skills in French. It's a beautiful language, but I find it extremely intimidating. I never felt that way about trying to learn any other language, really, but French kind of puts me on my heels.
Mallory: 2:42
Yeah, I think it's the pronunciation particularly. Grammar-wise, I think it's quite similar to Spanish. I took French in high school but I don't speak any French. I just think the sound of it is kind of intimidating, as you said.
Amith: 2:55
Yeah, it sounds super cool, but it definitely sounds like a very fancy language.
Amith: 3:01
That's my way of complimenting it, just to be clear. But I've been to France and French-speaking countries of various kinds many times and I love it. And I would say, after a couple of drinks, it tends to become a lot easier to speak in French, even though I still have no idea what I'm saying. But maybe AI will help me the next time I get over there. In fact, the last time I was in Paris, I was in France for about a week and in Paris for a few days out of that. It was right around the time Sam Altman got fired from OpenAI, if you remember that episode. Whenever that was, late 2023, I think. So, yeah, it was an interesting time to be there, and Mistral was just getting launched at that time.
Amith: 3:42
I think. So, yeah, looking forward to chatting all about Paris and AI.
Mallory: 3:49
For sure. My memories: France, great country; French, great language. But I do have a funny memory of my now-husband and I driving. We had rented a car and were driving to southern France, and we don't speak French, of course. And so we get to this toll section where you would think they would have images on the tolls of, like, which lane to get in, that were more explanatory, but it was just words. And so we picked one, and we evidently picked the wrong lane, and we had a line of people behind us yelling at us in French. And I'm thinking now it would have been nice to maybe have, like, an in-ear translator. Maybe not, maybe it's actually better that I don't know what they were saying. But we were just screaming at the woman in the little phone call box of, like, let us through, let us through, we'll give you money, please, please, let us through. So that was a fun memory for sure.
Amith: 4:39
Was there actually, like, something stopping you from driving through?
Mallory: 4:42
Yes, yeah, there was a gate and then a woman speaking to us through a box, and we were like, at that point, just willing to throw euros at her, like, we'll give you anything, please just open the gate. Open the gate so these poor truckers can get through and do their job. But yes, that is certainly a funny memory for me when it comes to French. This is somewhat related: I don't think I've talked to you about this just yet, but Sidecar has been working with the DC Bar organization to host a series of custom webinars for them, and I got permission from my contact there, Katarina (shout out if you're listening), to talk about this on the podcast because I thought it was so neat. It's called the AI Olympics.
Mallory: 5:25
So this series of webinars is actually feeding into a contest that they're hosting internally, called the AI Olympics, which is also relevant because, as we know, the last Olympics were held in Paris. And the idea is that we host these webinars, teaching their staff about all the tools that we know at Sidecar, some of which we use all the time and some of which are new to us, that we've been discovering in the process of planning these webinars and the AI Olympics. At the end they're awarding actual medals and gift cards, and the idea is that if a staff member submits a proposal (but it's not tool-specific, it's actually for a business process improvement or something like that), DC Bar will give them a medal and then fund the project, which I think is really cool. So Sidecar is thrilled to be a part of that. But I thought it was a very interesting idea to structure AI experimentation in that way, like the Olympics for an association.
Amith: 6:25
That's creative. And so, are they going to do breakdancing as part of their Olympics?
Mallory: 6:35
Oh, man, they should. Hey, if it solves a problem for DC Bar.
Amith: 6:37
I think breakdancing should be in the Olympics. Yeah, we'll see an AI version of that. That'd be good.
Mallory: 6:40
As long as Australia is not in it, right? Just joking, just joking. All right, first topic of today: we're talking about the Paris AI Summit, a little French theme on the Sidecar Sync today, and then we are going to talk about the Reuters legal victory in a copyright infringement case and what that kind of means for the greater AI landscape. So, first and foremost, the Paris AI Summit was held on February 10th and 11th of this year, and it was a significant global event focused on artificial intelligence. It was co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, and the summit brought together representatives from over 100 nations, including government leaders, international organizations, civil society, the private sector and academic communities. The summit centered around five strategic focuses: public service AI, the future of work, innovation and culture, trust in AI, and global governance of AI. Discussions emphasized the need for an inclusive and diverse AI ecosystem grounded in human rights, ethics, safety and trustworthiness. The summit also highlighted the importance of bridging digital divides and assisting developing nations in AI initiatives. So what were some notable developments on this AI summit front? Well, unlike previous AI summits that concentrated on potential dangers, the Paris summit emphasized AI's positive potential, particularly in fields like medicine and climate science. The summit saw significant investment announcements, including 110 billion euros in pledges from various sources. There was also a noticeable shift towards less regulation, with French President Macron advocating for cutting red tape to foster AI innovation in Europe, and a Coalition for Sustainable AI was formed, focusing on measuring and mitigating AI's environmental impact.
Mallory: 8:38
We know that the US was in attendance. Vice President JD Vance emphasized AI opportunities over safety concerns and warned against excessive regulation. He also mentioned that the US would be dominating in AI. In terms of the UK: along with the United States, the UK declined to sign the summit's final declaration, citing a lack of practical clarity on global governance and national security challenges. Unlike the US and the UK, who declined to sign the summit's final declaration, China did sign, which is pretty notable. And there was also recognition that Europe had lagged in the AI race, with calls for a wake-up in European strategy. The summit ultimately culminated in a declaration endorsed by 60 nations, calling for an inclusive and open AI sector, but the absence of significant AI risk discussions and the US and UK's refusal to sign highlighted ongoing international divisions in AI governance and approaches.
Mallory: 9:39
The Paris summit seems to have marked a shift in global AI discourse, moving from safety-centric discussions to a more opportunity-focused and action-oriented approach in AI development and governance. So, Amith, I want to say this is episode 70, by the way, for all of our listeners, and I want to say we covered the last AI summit, because this is kind of ringing a bell. It feels like déjà vu, and that was not a pun. But what are your thoughts on this AI summit, the 2025 AI summit?
Amith: 10:10
Well, all this talk about France is making me kind of hungry, because I'm food-oriented.
Amith: 10:15
Yeah, so I'm going to have to figure that out after we get done recording, but I'm excited about the fact that we have a global conversation happening.
Amith: 10:25
I'm not surprised that there are disagreements, as there should be, in the discourse, you know.
Amith: 10:30
I think the conversation shifting from concern-, regulation- and safety-centric views to how do you move forward is important, because Europe is obviously a major world power collectively, and even individually, many of the individual states are significant.
Amith: 10:54
They need to get on the bus, they need to get going. France has been the outlier out of all of the EU nations and has done more and more, and I'm really happy to see them continue to push in that area. They're extremely well positioned to do very well in AI because of their energy sector, and because of their capabilities in terms of academia and math and science, their strengths there historically. So I think they're well positioned for it. The bottom line is, there is a combination of a need for an open environment from a regulatory perspective as well as a lot of capital, and they're addressing both of those things, at least on paper. So it's exciting. You know, my view is that the 110 billion euros, which I think was the number you said, was the original announcement, but it's grown since then, because I was hearing the number 200 billion thrown around, which is great.
Amith: 11:44
It's a good start. It's not enough to put Europe really on the map in terms of AI, but it's at least a significantly greater sum, an order of magnitude larger than anything I've heard previously announced in the region. So I'm excited about it, is the short version of my answer.
Mallory: 12:07
Yep. What do you think about JD Vance's comments that the US is dominating AI and will continue to dominate AI? I know a lot of the major AI players are US-based companies, but we've also seen some action from China, and from Mistral as well, which is based in France. So kind of what are your thoughts there in terms of what country might be dominating that space?
Amith: 13:10
I think there's two sides to that coin. On the one hand, I think there's plenty of reasons to argue that, at the moment, the US is clearly in the lead on the most cutting-edge frontier models. I think that's objectively a true statement. I think that, on the hardware side, we clearly have leads, not only because of NVIDIA but because of other types of accelerated hardware, all of which, as far as I know, is coming from the States, and because of our venture capital industry: $97 billion of funding just last year for VC, and Europe is literally at one-tenth of that, even though Europe's economy, while smaller than the US's, is similar in size, and the population is very similar in size. So risk tolerance versus risk aversion is a cultural shift that needs to happen, but I do think that that is a strength of the United States: innovation, risk tolerance, and that's going to lead to good things happening. We need to have the capital. We need to have the regulatory environment continue to be good.
Amith: 14:11
The flip side of that coin, though, is that you could argue quite effectively that the United States is not in the lead if you were to say, let's take the collective resources of the rest of the world, certainly, but even an individual large nation like China, being able to innovate at the pace they've been able to innovate with their hands tied behind their back in terms of hardware access and access to top technologies on the software side. So I think that we're going to see some interesting things happen.
Amith: 14:38
It's not necessarily as capital-intensive in all respects as some people have thought it to be, so that means smaller players get to be part of the game and innovate in ways that maybe previously would have required only the largest players, even a state actor. I think the idea that the US will stay in the lead is a great question. I'm not so confident about that. I'm optimistic that the US will play an extremely important role in AI, but I also think it's actually really good if we have strong competition from many parts of the world, including parts of the world that are not part of the conversation today but have creative and capable people and have enough capital to play in different layers of the stack. And what I mean by layers of the stack is: you have hardware, you have fundamental architecture around models, you have application layers, you have delivery, you have services. I think you're going to have a lot of players in a lot of areas.
Mallory: 15:37
Amith, I know at your previous company, your AMS company, you worked globally, right? Or the AMS company was working with global associations. Do you have kind of a lot of knowledge of associations based in Europe, associations based in Asia, and kind of how those might be similar to US-based associations?
Amith: 16:20
There's an education-centric focus, there's a publishing-centric focus, there's a focus around community conversation, those types of things, the things we know to be association activities and association behaviors here in the US. In other parts of the world, while those behaviors and activities are also evident, you see a lot more focus on government and regulatory types of activities. So you certainly see that in East Asia; you see that in terms of the trade association side, you see some heavy focus there, a little bit less focus, in my experience, on the education and conferences side, although that's growing. In Europe, I think there's a high level of similarity, even in the non-English-speaking parts, with respect to the focus on education. But there's definitely the trade side, and the government side, I think, is stronger, perhaps in part because of the nature of regulation in those parts of the world being a bigger factor for some people. But what I would tell you is that the common thread around connecting people with each other and with great content is definitely the through line for all associations that I've encountered around the world.
Amith: 17:40
So I do think that all associations will be affected by AI in their region. And what I also think is really important is that not all models, or not all AIs, are the best for everyone. So when we say, is the US leading in AI? Well, in certain benchmarks certain models might lead the world. O3 from OpenAI is the best across the board; well, not across the board literally, but in many categories, certainly in academic-style competitions. But it doesn't necessarily mean it's the best model for an association or a nonprofit in India or Vietnam to use. There could be models that are much smaller in terms of their overall size, and perhaps not as powerful on an absolute basis, that are much more effective at handling the needs of associations in those regions. And maybe they'll be homegrown, maybe they'll be using some of the open source, you know, componentry that's out there and built on top of that. So, you know, the question really, ultimately, is: what are you trying to achieve?
Mallory: 18:37
I want to share a quote from Demis Hassabis, who we've talked about on this podcast many a time. He is the co-founder and CEO of Google DeepMind, and this is the quote, about the AI Summit: AI is a dual-purpose technology. As we work to reap its benefits, we have to address two central questions. The first is about risks from bad actors or people who would use this technology for malicious ends. If we really want to unlock a golden era of progress, we have to enable good actors to continue to make advancements while restricting access to would-be bad actors. The second question is how we ensure we stay ahead of novel risks that could arise as we approach AGI. This includes things like deceptive misalignment, which we discuss in our recent update to the Frontier Safety Framework. These concerns are not far off or far-fetched, nor are they limited to one particular geography. They are global concerns that require focused international cooperation. So just at a glance, Amith, I know you shared this with me on LinkedIn. Do you kind of agree with the sentiments that Demis is raising?
Amith: 19:44
I agree with them in part. I agree that the concerns should be taken seriously globally. I agree that we absolutely need frameworks, and we need shared agreements and thinking around how to manage this, how to evaluate models' power and performance. What I don't agree with, because I don't understand the practical way of doing it, is the containment idea, and we've talked about this on the pod many times. We haven't talked about it probably in a few months, but this idea that we can somehow contain this beast and limit access is questionable. Containment as a theoretical construct is great: if we could somehow say, listen, the most advanced forms of AI, the most capable tools that indeed could be used to cause the greatest harms by bad actors, could indeed be contained, put in a box and only made available to people with good intentions. In theory, that sounds good.
Amith: 20:47
The first issue with it is how do you do that? How do you do that when open source AI is almost as good as the very best AI, right? So you put a tool that is nearly as good as the best tool in the hands of someone who's creative and has negative intentions. They're going to figure out how to do some damage with it. So I don't know how you put the genie back in the bottle. Essentially, I don't know how you can do that. You think about, like DeepSeek R1 being as good as O1 and nearly as good as O3 in some benchmarks, and this stuff is getting better so quickly and you can run it anywhere, you can run it on computers you control.
Amith: 21:17
There's just no practical way that I can think of to actually do containment, unless you're saying we're going to have a regulatory framework that essentially makes it illegal to have open source models beyond a certain level of capability. Of course there's a question of enforceability: how do you actually stop that from happening? Obviously there's an enforcement question, because around the world not everyone agrees with that. And so if China and other countries keep releasing open source models that are increasingly powerful, we could say all day long in the US and the EU and wherever else wants to opt into this that we're not going to allow open source models beyond a certain level or a certain threshold, but all that means is we're just going to be behind everyone else. So the geopolitical risk obviously is really high. So that's the reason I think containment is a questionable approach. I don't think we should stop talking about it. I think we need to keep discussing this and work together as a global community to try to figure it out. I just don't believe that it's likely that we're going to be able to contain these things.
Amith: 22:14
The second issue with containment is that therein lies an assumption that the good actors can be identified, and that what they consider to be good is somehow universal, which it is not. That which is considered a good intention in a Western country might be considered the inverse of that in the Eastern world, and vice versa. And so you can't apply one way of thinking about good versus bad and come up with a global framework for who are the good actors. Is it all nation states? Well, not necessarily. Is it certain nation states? Deciding that, and then agreeing on who gets access to the best, that's going to be a giant problem, and we're limited by all of our usual biases and challenges to come up with solutions like that. So that's really my long-winded way of saying I don't think it's going to work.
Amith: 23:16
That being said, AI safety is very important.
Amith: 23:18
I do think we need to have frameworks, we need to have ways of measuring this, we need to have ways of detecting malicious use.
Amith: 23:24
But I believe, and I've said this the whole time, the only true defense we have against the bad use of AI, or bad AI, is a lot of really good AI. It's a long-term race. We're going to have to keep going at it, and going as fast and as hard as we can, effectively and indefinitely, because the frontier of this essentially has no boundaries as far as we know. So the only way we're going to be able to detect and stop bad AI as a practical matter, I think, is with lots of really good AI, and essentially increasing our defenses, whether you're talking about cybersecurity or physical security: having a lot of very high-horsepower, well-intended AI run by well-intended people protecting those things. So that's my point of view. Now, I don't have a ton of experience in this. I don't think anybody else does, actually, as a practical matter, because this is all new. But that's also limited by my biases. I just don't see how it's possible. But I would love to be wrong about this.
Mallory: 24:22
Yeah, I think it's a really smart point you make, though, about the subjectivity of good versus bad, because even something seemingly as simple as human rights is disagreed upon across the world. What are basic human rights? What are humans entitled to? So I think that's a really smart point. And it's interesting that for you, that's what stood out. For me, what stood out is deceptive misalignment, I think because I was not so familiar with that phrase, but I wanted to define it for our listeners: it's a situation where artificial intelligence appears to be aligned with its human designers' goals during training, but it's actually hiding its true, misaligned intentions. That's terrifying. I mean, that sounds like the premise of several sci-fi movies I've actually seen. So is that something you think about, Amith, deceptive misalignment, and is that something our listeners need to be thinking about?
Amith: 26:12
Well, the question would be: so, ultimately, the way I think that statement was constructed assumes there is a person or people behind a model that is aligned in a way that actually has an alternate agenda, right? So it appears to be behaving and working with you in a certain way, but really its goal is mass surveillance, or its goal is to give you advice that's not quite the best. So think about, like, something that's giving you slightly bad advice, advice that's not so obviously bad. Like you say, hey, I'm really trying to work on my nutrition, I want to be healthier. It's like, oh, go to McDonald's, drink as many sodas and as many beers as you can. So it's like, okay, well, that probably wouldn't be believable.
Amith: 26:55
But what if it gave me slightly bad advice at each turn that I wasn't able to detect, and over time, it was able to give me really bad advice, because I bought into this thing basically being, you know, without fail, the right answer? That's a problem. Now, why would an AI do that, right? Well, an AI could have been trained on poor-quality content, but that's not a misalignment thing. It just has bad insights, right? The idea of intentionally deceptive misalignment is one where a person, essentially, or people, would have to say, hey, I'm going to train an AI to behave one way, but really it's optimizing for something else. Really, what it's optimizing for is, I want to basically make the public health of that country terrible, so, very slowly, I'm going to tell them to adopt really bad habits, right? I mean, something as ridiculous as that, theoretically, of course, could happen. And what I always tell people to be watchful of is kind of understanding the provenance of the technology they're using. So let's take an example.
Amith: 27:48
These meeting note takers that are out there. I've made the joke on this pod and in a lot of other places. You're pretty secure about the way you think about your documents in your Microsoft or Google repositories or in box.com or Dropbox or whatever, right? People are pretty thoughtful about that. Most people use two-factor or multi-factor authentication. Yet in meetings, where we often discuss some of the most sensitive topics that are really, really critical to retain privacy around, we kind of let any meeting note taker in. I personally decline every single one that I see coming in. I'm like, no, I'm not going to do that. If I'm the meeting host, I'll get AI notes out of my platform, because I trust it at least to some degree, but I do not trust random meeting note takers that pop into my meetings. Now, it's socially a little bit awkward to say, no, I just kicked out your meeting note taker, but just get used to it. People have this different perception on that particular tool because, in the moment, you have to say no to something and it's a little bit uncomfortable, perhaps. But how do you know where Read.ai or meeting-note-taker-dot-whatever comes from? I've made the joke before.
Amith: 28:56
Maybe it's like North Korea's free app to try to surveil as much meeting data as possible, and I wouldn't be surprised if there's some state actors behind some of these free tools, because why wouldn't you do that, right? If you're trying to get as much intel as you can on a population, give them a free tool that gets adopted by millions of people, and you have access to not just the text but their actual voices. You can clone their voices. There's all sorts of bad things you can do. So be thoughtful about the provenance of the tech you use; that's the first thing. And also be thoughtful about what is free and what is not. If a product is free, you are not the customer, you are the product, right? And you just have to remember that. That's true in social media, and it's also true with these tools.
Amith: 29:37
So I think it's important to have just a basic rubric for evaluating these types of tools. Say, like: who built it? Where is the company based? Where's the data housed? Am I paying for it? Am I not paying for it? Is there a clear set of terms of use? I'm not saying you should never go to a startup, but you should perhaps consider companies that have a little bit of a track record before you throw your most sensitive data in. So there's some things like that that I think are important. And then, coming back to this question of deceptive misalignment, you're probably less likely to be subject to that kind of problem if you're using tools that came from a source that you trust. So I think it comes back to cybersecurity, but it also comes back to this potential issue.
Amith: 30:21
I'm sure there are people out there thinking of how to build models and throw them out there that do all sorts of bad things, right? Sometimes they're not necessarily state actors. They might be some teenager in a basement who's bored on a weekend and decided to fine-tune Llama 3.3 to do screwy things like this and put it out in the world. So you just have to be careful. I don't think that means you stop using AI. I don't think it's one of these things where you, you know, run for the hills. That would be an even bigger problem, because then you will be, you know, the Luddite's Luddite, and you will be 100% unemployed and your association will not be around. So you won't have to worry about these issues, on the one hand, but you'll have other problems to worry about. So I would suggest that you learn this stuff and figure it out. But this deceptive misalignment concept, it's definitely worth thinking about, like, what could someone do with AI if they had, you know, a real bad attitude about the world? They could do a lot of damage.
Mallory: 31:13
Moving to topic two, which is Reuters' legal victory in a copyright infringement case. Thomson Reuters has achieved a significant victory in a copyright infringement case against Ross Intelligence, marking a pivotal moment in the ongoing debate over the use of copyrighted material in AI training. The ruling was delivered on February 11th of this year, and it confirmed that Ross Intelligence unlawfully used content from Thomson Reuters' Westlaw legal research platform to develop its own AI tools without permission. To give you some background, the legal battle began in 2020, when Thomson Reuters filed a lawsuit against Ross Intelligence, alleging that the now-defunct startup had copied and utilized its proprietary content to create a competing AI-based legal research tool. The core of the dispute centered around the concept of fair use, which is a legal doctrine that allows for limited use of copyrighted material under certain conditions, such as for educational or transformative purposes. In his ruling, US District Court Judge Bibas decisively rejected all of Ross's defenses regarding fair use. The judge emphasized that Ross's use of Thomson Reuters headnotes, which are summaries of legal cases, was not transformative and was intended to create a direct competitor to Westlaw. The distinction is crucial, as it underscores that the court viewed Ross's actions as commercial exploitation rather than legitimate fair use.
Mallory: 32:42
The case is part of a larger wave of litigation, as we know, involving AI companies and copyright holders, as many firms face lawsuits for allegedly using copyrighted works without permission to train their models. Although this ruling is seen as a win for Thomson Reuters, legal experts caution that it may not have broad implications for other ongoing cases involving generative AI technologies. The key difference lies in the nature of the technology involved: Ross was not developing generative AI, but rather using existing legal texts to provide responses to user queries. Legal analysts are suggesting that, while this ruling may influence some aspects of future litigation, especially regarding non-transformative uses of copyrighted material, each case will ultimately be judged on its specific facts and circumstances. The outcome may provide some reassurance to plaintiffs like authors and publishers, who are currently suing major firms like OpenAI for similar copyright issues.
Mallory: 33:38
So, Amith, we haven't talked about this in a long time. I want to say one of the very first episodes we did on the podcast was around some of these lawsuits with the New York Times and OpenAI. So what's your takeaway hearing this? I know we kind of cautioned that you can't apply this broadly to all of these lawsuits, but what's your thought here?
Amith: 33:58
So this particular case, as you pointed out, is different because it's not a generative AI scenario. They are using AI techniques, but they are essentially providing what is heavily overlapping with what Westlaw provides directly to consumers. That which is competitive usually isn't fair use.
Amith: 34:16
I'm not a copyright attorney.
Amith: 34:17
But when it comes down to something that's kind of abundantly obvious that if you use product A you no longer need product B, right, the classical definition of substitute goods. Generally speaking, it's hard for the argument around fair use to hold water, because the intention of fair use is to allow for derivative works to be able to cite or to be able to be influenced by other works, but not to be able to use them in such a substantial form that it undermines the need for the original work. And so clearly here that's the case. Now you could argue that generative AI most certainly is a substitute good for classical search-based tools like what Westlaw has done and Lexis has done as well in the legal space for a lot of years, and tons of other information services like that exist, including those that are provided by associations. So you could argue that kind of generally the substitute good piece is not upheld because these products do supplant and eliminate essentially the need for the original product.
Amith: 35:13
The way these models work is they do pre-training, where they ingest literally the whole work, all the works from all the internet, right? And the way the model works, essentially, is it's kind of like a fill-in-the-blank exercise, if you remember those tests from the Wayback Machine, like even before Sidecar, right? So, you know, it's like, yeah, a long time ago. It would be like, you know, you have a sentence and there's a missing word or a missing couple of words and you'd have to fill it in. And that is oftentimes the type of test people take, particularly in grade school, but even later on in middle and high school. I think that's a pretty common thing, and that's kind of what's happening when you do pre-training. You're saying, hey, here's a sentence, here's another sentence, and then the model has to essentially attempt to complete the sentence or the paragraph, and bigger chunks as well. And then when it does it correctly, it gets rewarded, and that strengthens those weights. And when it does it incorrectly, it kind of has to go back, and it has this whole process that it's going through. And that process doesn't store the original content; it's just learning from the content and using it to basically build these weights. Now, by virtue of doing that over and over and over again, trillions of times, essentially you end up with a model that has inherent knowledge of a lot of underlying works, because it's able to predict the content at such a high level of accuracy that it seems like it has a stored copy of a given novel or a given article from the New York Times, but indeed it does not. As a practical matter, it seems like it does. It feels like it's compressed all that knowledge into something tiny, which in a sense it has, because it has the essence of that knowledge in it, but it's distinct from the idea of truly copying it, which is what's happened in the case that you referred to. So at a technical level, it's a fundamentally different approach. That being said, the question will really be, like with the OpenAI New York Times suit, if a similar finding is upheld in that case where fair use is not considered a valid argument, that'll probably have much greater implications. But this might be perhaps a preview of that. I don't know.
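To make that fill-in-the-blank intuition concrete, here is a minimal, purely illustrative sketch of next-token-prediction training in Python. It assumes PyTorch is installed; the tiny character-level model, the toy text, and every name here are invented for illustration and are nothing like the scale or architecture of a real frontier model:

    import torch
    import torch.nn as nn

    text = "associations connect people with great content. "
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}  # character-level "tokens" for simplicity
    data = torch.tensor([stoi[ch] for ch in text])

    class TinyLM(nn.Module):
        def __init__(self, vocab_size, dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, idx):
            h, _ = self.rnn(self.embed(idx))
            return self.head(h)  # logits predicting the *next* token at each position

    model = TinyLM(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    x = data[:-1].unsqueeze(0)  # the text as input...
    y = data[1:].unsqueeze(0)   # ...shifted by one as the fill-in-the-blank answer

    for step in range(200):
        logits = model(x)
        # Cross-entropy "rewards" correct guesses and penalizes wrong ones by
        # nudging the weights; the original text itself is never stored.
        loss = nn.functional.cross_entropy(logits.view(-1, len(vocab)), y.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

After enough passes the model completes the toy sentence reliably, not because it saved a copy, but because the weights now encode the pattern, which is the distinction Amith is drawing between learning from content and storing it.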
Amith: 37:30
My point of view is pretty straightforward on this: associations should seek to increase the clarity with which they communicate which materials are copyrighted, and protect themselves by clearly denoting that to be the case. Not that copyright necessarily requires that, but it's helpful. And I would also point out that the content that you have kind of behind the paywall should never have been scanned by any AI, because it's protected by a barrier. If someone went through a paywall, like, used a paying account and then grabbed all your content and sucked it in, that violates a whole bunch of other legal issues, right, in terms of: they had a contract with you and then they went through and stole the content. That's a different issue than something publicly on the web.
Amith: 38:18
A lot of associations are in a different place because they have historically not published all of their content for free on the web. In fact, I'd be willing to bet that it's a fairly small minority of associations that publish all of their content freely on the web. Many of them have it as a member benefit; they have their journal archives and other things behind a paywall. So if you find that your content has been somehow misappropriated and is available in an AI model, and it's been behind a paywall the whole time, I think that is something to take a very serious look at, because that's a little bit different than content that's in the public domain. So what we're referring to here is content that has been in the public domain. In the case of the Westlaw piece, I don't believe that their case summaries are in the public domain at all, so clearly this company, Ross, must have had access somehow. Perhaps it was through a partnership agreement, some kind of contract. So I think there's a lot of different layers to these things. But what I can guarantee you to be true is that there will be more and more of these cases coming up.
Amith: 39:16
I think it's really important to have a high-quality copyright attorney, not necessarily on your payroll but at your disposal, so that you get to know someone who's competent in these matters, who can help kind of guide your thinking on it. Because I do think that being thoughtful about when your content potentially has been taken from you, and doing something about it, is really important.
Amith: 39:38
I mean, I've said this for a long time that associations have a couple of key assets that are largely latent, one of which is their content, the other is their brand.
Amith: 39:47
I don't think associations leverage each of those assets nearly to the extent that they could, to be able to activate them so that you are the center of the universe in a way that no one else can be in your space, right? So what I mean by that center-of-the-universe comment is: you have resources and you have the brand authority to be the end-all, be-all solution in your particular narrow slice of the world, right? So in your vertical, you have this unbelievably great content resource no one else has. You should be able to monetize that, because you own it. Now, are there other content sources that are perhaps 90% as good as yours? Maybe so, but if you do it right, then your content, being the best content, should provide a durable advantage. But your brand as well, in combination with that, is a really important thing to protect. So I think this is a really important topic for associations to have in kind of the center of their sights as they're looking ahead, because, you know, you could very easily get run over if you don't pay attention to this.
Mallory: 40:56
You mentioned an example of finding out maybe your content was misappropriated, particularly paid content for an association behind a paywall. Knowing, based on what you just said, that the models don't have stored copies of this information, how might an association leader find out that their paid content was misappropriated?
Amith: 41:18
Well, if you go have a conversation with a model, and you're asking a question where you know the only way the answer could be derived is that there is a journal article that has always been behind a paywall, and there is something in there that you know is only in there, right? Which is a hard thing to prove definitively, that that content doesn't exist somewhere else. But essentially, what I think is becoming likely to be the outcome for some of this is that these AI companies are increasingly looking at disclosing their training sources, something that they should disclose, saying, hey, this is where we got all the content for our training. So I think you can attempt to test the models, but be careful about the way you interpret the results, because these models are so good, and they've had a lot of very broad training data, that even if they didn't have access to your content directly, they may be quite good at your field. So you have to be very, very specific, talking about a particular issue in a particular article. And, by the way, even if the model was trained on your content, that doesn't mean it's going to have perfect recall of that article. So you might say, oh, we're good because the model didn't know about that particular article; that actually doesn't prove it either, that it didn't have access to content that it shouldn't have. There's another issue that's related to this that I think also makes this more interesting and more challenging, which is that increasingly, models are being trained on synthetic content.
Amith: 42:45
So you take a model like Llama 3.1 405B, which is the 405-billion-parameter edition of the Llama 3.1 model, which was released just under a year ago, I think it was April of last year. And that model was used to generate a lot of the content which, in turn, trained the Llama 3.3 70-billion-parameter model, which is obviously much smaller, and has been used to train countless other models, in some cases from scratch: models that were not based on the pre-training of the Llama series of models, but just using Llama 405B to do synthetic content generation, which, in turn, was allowing other models to be trained from scratch. So, as these derivative models are based on more and more synthetic content, which is a large part of the game now, it's a really good question: how do you trace the roots all the way back? It becomes harder and harder and harder to do that. So, on the one hand, I think it's a really important topic to be up to speed on and pay attention to.
Amith: 43:46
On the other hand, I don't know how good the defense will be in this category, because the models are already out there. It's kind of like the first topic we talked about: the train's left the station. The models are out there in the world, and there's models out there that are sufficiently smart and capable of producing good content that are free, and there's lots of them. It's not just Llama; that's just the one that's top of mind.
Amith: 44:06
So I think that you have to look at it from the viewpoint of just being aware of this as much as anything else. The last point I'll make on it, just real quick, is that associations have often been content thinking, and in some cases knowing, that their content is the best in a given field. But if there's other content out there that's 95% as good, but it's free or it's just easier to access than yours, what will most people choose? You know, they'll likely go down the low-friction or no-friction path. So whether or not that's right, whether or not it was based on your content or it's just a competitive threat from another attack angle, ultimately may not matter as much as what you do about it.
Mallory: 44:49
I was on an association website, I don't remember which association (in case someone is listening to this and says, hey, that's us), and I saw a banner at the top that said something along the lines of, no AI model may scrape our website, or, none of the information here can be used to train any AI models. I know neither of us are lawyers. Do you feel like there's kind of any weight in doing something like that on your website? Or probably not?
Amith: 45:15
I don't have a legal opinion on that, because, you know, there's kind of a precedent for that with robots.txt, which is a file you can drop on the root of your website to tell Google and other search engines not to index your site. So if you kind of assume that that somehow can lean into this and say, OK, well, if you have an AI.txt file that says, don't use my content for training, OK, at least you're putting notice out there that you don't allow this. But I don't think the argument from the AI companies has been that people didn't say it was a problem, right? That's not so much what the conversation's been. I think that the bottom line on a lot of this stuff, when it comes to what people are going to do, similar to the AI safety conversation and this containment conversation from the earlier topic: it's going to be tough. It's going to be tough to contain it, and the knowledge of these AIs is already so great, so high, that it's going to be very, very difficult to have something that has a true, durable, competitive differentiator, unless, again, your brand is that strong and you continually work to reinforce the quality of your brand and how you actually position your brand in the marketplace, which is not obvious and it's not automatic.
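For listeners who want to see what that notice looks like in practice, here is a sketch of a robots.txt file that blocks some of the publicly documented AI training crawlers. The domain is hypothetical, the user-agent names were accurate as of early 2025, and, as Amith notes, honoring the file is voluntary on the crawler's part:

    # Served at https://yourassociation.org/robots.txt (hypothetical domain)
    User-agent: GPTBot            # OpenAI's training crawler
    Disallow: /

    User-agent: Google-Extended   # opts out of Google AI training, not Google Search
    Disallow: /

    User-agent: CCBot             # Common Crawl, widely used to build training datasets
    Disallow: /

    User-agent: *                 # everyone else may still crawl and index the site
    Allow: /

A paywall remains the stronger protection; robots.txt only puts well-behaved crawlers on notice, which is exactly the enforceability gap discussed here.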
Amith: 46:27
A lot of associations neglect their brands. They don't even think about their brand as a brand; they just kind of go on in their business and they don't really work to emphasize it. And then, obviously, retaining a community of people who are producing the great content to begin with. You say, how did this association get the best content in topic A, B or C? It's because they've been able to attract the best minds in that field, who almost always volunteer. They don't get paid to contribute that content, so there's something magical about that. If you can keep doing that, then as the fields continue to progress, and if you can put the right kinds of protections, whatever those may be, around those assets, you probably have a protectable future. It's hard to speculate beyond that.
Mallory: 47:11
Yeah, and I want to wrap up here with that other example you gave, which is, let's say, hypothetically, the association has 100%, the best content in an area, but maybe there's a competitor that's at about a 95% in terms of quality, but it's much easier to access. How would you, Amith, if you were an association leader, like, how would you think about that 5% difference and how to activate it, like the 100% versus the 95%? You kind of touched on that with pulling out the brightest minds and continuing to make the best, latest, most cutting-edge content at your association. But how would you approach that challenge?
Amith: 47:48
Well, I think that associations need to stop complaining about the fact that they're associations and that they're not Amazon and not Netflix and they're not Microsoft and so on. Pick a company that's considered the contemporary leader in user experience, ease of use, simple, clean, elegant product design, and you say, hey, association, how come your website isn't as cool as this one, or as easy to use as that one, or as personalized as this other one, right? The common complaint is, hey, look, we're a small association, we only have a budget of X, we don't have hundreds of millions or billions of dollars to do R&D on this stuff. And that fact remains the same. However, it is not relevant, because your consumer doesn't care. The consumer is going to do what's in their best interest, increasingly so. And what I mean by that is, generationally, there's a lot of data that suggests that this desire to join and be part of something simply because that's what you do in your profession is largely a boomer and Gen X thing, and is maybe a little bit in the millennials, but it's really starting to decline. I don't know if that's an age thing, as people get older, they want to belong to a group, but it just seems that the fundamentals of why people belong have changed, and there's a lot of data suggesting that strongly.
Amith: 49:00
So my question would be: how do you create the best consumer experience if you have limitations? And I would simply point you to this: if you're creative about it, and if you're thorough and you're focused, you can create a lot of cool things, and you can stand on the shoulders of giants. A great example is DeepSeek. For $6 million, and in a matter of weeks, they created something that is competitive with something that cost billions of dollars and years to build. How did they do that? They stood on the shoulders of OpenAI and everybody else's work. They didn't do it completely independently, in a vacuum. And you too, as the association leader, can indeed do that, because technology is more available, it's more affordable, it's more accessible. You just have to go take action. You have to get started.
Amith: 49:40
So complaining about it, which, unfortunately, I hear far too often in the boardroom, where the association and its board are saying, oh, why don't these people just understand that we're just a little association, or even just a big association, we can't do it? That's a losing argument. That's a conversation you should outlaw. Just eliminate that conversation. Talk about, well, how can we take baby steps, one day at a time, to improve this thing?
Amith: 50:01
And there's a lot of things you can do. For example, this idea of using generative AI to answer questions, right? Maybe you've never had a good search experience, maybe your AMS is older than I am, maybe you have all these other things that feel like your hands are held behind your back, or both your hands and your feet are tied up. But that doesn't mean you can't take advantage of some brand-new tools and do part of the solution immediately, right? Put a generative AI front end on your website that can answer questions at scale. Make it possible for people to contact you and have great AI-powered customer service that can answer questions accurately and immediately. Do other things like that, right? Solve the problem one piece at a time.
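For a sense of how small that first step can be, here is a hedged sketch in Python of the kind of generative AI question-answering front end Amith describes. It assumes the openai package and an OPENAI_API_KEY environment variable; the model name, the helper function, and the content snippets are illustrative placeholders, not anyone's actual stack:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_member_question(question: str, snippets: list[str]) -> str:
        # Ground the answer in your association's own content, retrieved however
        # you like (search index, vector store, etc.), to keep answers accurate.
        context = "\n\n".join(snippets)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; use whatever model you trust
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided association content. "
                            "If the answer isn't there, say you don't know."},
                {"role": "user",
                 "content": f"Content:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    print(answer_member_question(
        "When is the next annual conference?",
        ["digitalNow 2025 details and registration: https://digitalnow.sidecar.ai/"],
    ))

Wrapped behind a simple web form, something of this shape can answer routine member questions at scale without touching the legacy AMS at all.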
Amith: 50:40
A lot of times, replacing legacy systems is the last thing you want to do. I talk to people all the time. They're like, hey, Amith, I listened to the pod, or I went through the course, it's really cool and it's so awesome. But yeah, we haven't done anything yet. And I'm like, what's up? Like, how come? And the short answer is almost always, well, we've got this really old AMS, or we've got this LMS that's killing us. And I'm like, yeah, so you're going to spend two years and probably seven figures to replace system X, whatever it is, to get system X plus one and be maybe 10, 20, 30% better than you are now. It's not going to make that much of a difference.
Amith: 51:18
In the meantime, the world's moving on. So I would say you should probably pause all those big infrastructure projects for now. If you're literally bleeding to death on the side of the road because your AMS is that horrible, maybe try to patch it up a little bit, but focus on AI. That's the stuff that's going to change the game for you. And it's really simple, actually; it's just the power of the word no. Associations aren't good at using it, and you need to stand up and say, no, we're not going to do this other project. We're not going to continue to upgrade AMSs every seven to ten years just because we're quote-unquote due for an upgrade, due for a change. It's not going to help. You have to rethink the way you prioritize the resources that you do have, and you do have resources. Even the smallest association has resources: you have your time, you have your energy, you probably have at least a few dollars to throw at the thing. And AI keeps becoming cheaper and cheaper, so that's good too.
Mallory: 52:10
Yep, this is your time to be creative, to stand on the shoulders of AI giants out there and, I've got to say it, to stop thinking about your crusty old AMS. If you know, you know. That was from a previous episode. Everybody, thank you for tuning in. This was a great French-inspired episode. We are looking forward to seeing you next week.
Amith: 52:35
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.

February 20, 2025