Sidecar Blog

From White House Strategy to Youth Safety Tools: Navigating AI’s Risks and Rewards with Bruce Reed | [Sidecar Sync Episode 119]

Written by Mallory Mejias | Feb 2, 2026 5:59:16 PM

Summary:

What do AI companions, deepfakes, and White House briefings have in common? Bruce Reed. In this episode, Mallory Mejias and Amith Nagarajan are joined by Bruce Reed, Head of AI at Common Sense Media and former Deputy Chief of Staff in the Biden-Harris White House. Bruce shares an insider’s view on how the U.S. government reacted to the launch of ChatGPT, the urgent need for AI guardrails for youth, and why you don’t need to be a tech expert to lead responsibly.

Bruce Reed is Head of AI at Common Sense Media and a senior White House policy adviser across three administrations. He served as President Clinton’s chief domestic policy adviser, chief of staff to Vice President Biden, and Deputy Chief of Staff for Policy in the Biden-Harris administration, helping shape 17 State of the Union addresses. Dubbed President Biden’s “AI Whisperer,” Reed led landmark government efforts on AI safety, securing voluntary commitments from major tech companies and advancing a sweeping executive order on responsible AI.

Common Sense Media - https://www.commonsensemedia.org/
Time AI 100 - https://shorturl.at/2NeQZ

Timestamps:

00:00 - Introduction to Bruce Reed
03:06 - Parenting Through the Rise of Tech
07:23 - Leading in AI Without Being a Tech Expert
11:30 - Inside the White House During ChatGPT's Launch
18:05 - Coordinating the AI Executive Order
25:53 - AI, Job Displacement & Societal Impact
34:12 - How Common Sense Media Rates and Educates on AI
37:33 - Moving Fast Without Breaking Trust
42:51 - Closing Thoughts


👥Provide comprehensive AI education for your team

https://learn.sidecar.ai/teams

📅 Register for digitalNow 2026:

https://digitalnow.sidecar.ai/digitalnow

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🎀 Use code AIPOD50 for $50 off your Association AI Professional (AAiP) certification

https://learn.sidecar.ai/

📕 Download ‘Ascend 3rd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

ChatGPT ➔ https://openai.com/chatgpt

Sora ➔ https://openai.com/sora

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:09:17 - 00:01:22:14]
Mallory

Today on the Sidecar Sync, we are joined by Bruce Reed. Bruce is head of AI at Common Sense Media and a longtime senior leader in Washington who served in not one, not two, but three presidential administrations, including as deputy chief of staff for policy in the Biden-Harris White House. In this conversation, Bruce walks us through what he's seeing right now at the intersection of artificial intelligence and youth safety: from the rise of AI companions to bigger trust challenges like deepfakes and scams, and the practical guardrails families and organizations can put in place today. We also dig into what it was like inside the White House when ChatGPT launched, how the administration coordinated voluntary safety commitments and an AI executive order, and why you don't have to be a technical expert to lead responsibly through this moment. If you're trying to move fast with AI without breaking trust, this episode is for you.

[00:01:23:18 - 00:01:25:16]
Mallory
 Everyone, please enjoy our interview with Bruce Reed.

[00:01:25:16 - 00:01:56:00]
Mallory
 Thank you so much for joining us on the Sidecar Sync podcast. We're so happy to have you. I've done some research on your background, which I have to say is quite impressive. I wanna talk about your journey first. It looks like you were a kid born in Idaho, you were a Rhodes Scholar, you participated in three presidential administrations, and now you're helping keep children safe in the age of artificial intelligence. Tell us about your background, Bruce, and what is the connecting thread there?

[00:01:57:03 - 00:02:02:00]
Bruce
 Well, I was lucky to grow up on a dirt road in a small town in a red state.

[00:02:03:13 - 00:02:32:05]
Bruce
 And never imagined kind of squandering my life in the political world. But my mother was a community organizer, and I was her first community. So she took me to knock on doors and hand out candy in parades, and it worked out well for her. She ended up being the minority leader in the Idaho State Senate at a time when her party was outnumbered 31 to four.

[00:02:32:05 - 00:02:33:01]
 (Laugh)

[00:02:33:01 - 00:02:41:13]
Bruce
 So I thought I'd never go into this line of work. I was more interested in writing and in policy.

[00:02:43:03 - 00:03:05:19]
Bruce
 But luckily enough, I got a degree in English, so I was really singularly unemployable. And politics seemed like a natural thing. Went to Washington, my first boss was Al Gore, who went on to be Vice President of the United States. And my second boss was Bill Clinton, who was his boss as President of the United States. So you can't have better luck than that.

[00:03:05:19 - 00:03:19:01]
Mallory
 That is awesome. What caught our interest in terms of your recent background was your selection to the Time AI 100 list. So I've gotta say congrats to you on that. What does that recognition mean to you?

[00:03:20:05 - 00:04:03:13]
Bruce
 Well, it means a lot. I'm definitely out of my league. I mean, I don't belong on any list that includes Pope Leo. And I'm not a tech expert. I've done technology policy for a long time. So I became interested in this issue first as a parent when my kids, who are now in their 30s, were first introduced to the iPhone after it came out. And social media was just coming onto the scene. And my wife and I spent a lot of time wrestling with what's the right thing to do, how to say yes, how to say no. And we realized that this was an issue that was affecting parents across party lines all over the country.

[00:04:04:15 - 00:04:15:17]
Bruce
 And these days it's hard to find issues that bring all Americans together, and this is one of them. So my initial work on it was in the Obama Biden administration.

[00:04:17:06 - 00:05:09:05]
Bruce
 Then I came out to California during the first Trump term and helped write California's privacy law, which was the first comprehensive state privacy law in the US and has become essentially the law of the land here. And I realized what strong support there could be on this, and that this was one area where it might be possible to make some actual legislative and policy progress. Now that my party's out of office in DC, I decided to try it again. And the most interesting thing that I worked on in the Biden administration was the rise of AI, because it poses all kinds of new questions about how to make sure it helps people, how to make sure it doesn't hurt kids,

[00:05:10:11 - 00:05:44:14]
Bruce
 how to put in place guardrails, make sure it's safe. While still recognizing that it has enormous potential to find new cures for diseases and make us a richer country. And so it's not often in Washington that a genuinely new issue comes along. Usually people on both sides have figured out where they are on it. And so AI has been fascinating, because no one in politics really understands it. So they're still trying to get their arms around it, trying to do the right thing.

[00:05:45:18 - 00:05:49:08]
Bruce
 And everybody is anxious about it, but for different reasons.

[00:05:50:14 - 00:05:55:12]
Bruce
 And also wants America to win. So it's been a fascinating journey.

[00:05:55:12 - 00:07:02:24]
Amith
 Bruce, I gotta say I'm a huge fan of Common Sense Media as a parent as well. I've got kids now that are late in high school, but it was a top bookmark on my phone for years and years and years. My kids are now 18 and 16, so they're well past the point where I have any influence over their content choices, but for years it was what came up on my phone on Friday nights. We'd be looking for a Friday night movie, and as my kids started to grow they're like, dad, just make a decision. You don't need to look at Common Sense. It's fine. It's okay. I'm like, well, I actually kind of want to see what they have to say about that particular movie. And part of it was because my memory of some of the movies I loved as a kid is quite different than what I found to be the case when you watch it again. And obviously things have changed. So it's just an interesting thing. But I love hearing about what you guys are doing, even in the pre-AI phase, and I can't wait to hear about what you guys are looking to do in the AI world. But for parents that haven't experienced the Common Sense Media platform, I'd highly recommend you check it out. It's just an amazing, super easy to use tool to help you figure out the content that you might want to consider and not consider for your kids.

[00:07:02:24 - 00:07:05:23]
Bruce
 Well, thanks for saying that. It's so tough to be a parent today.

[00:07:07:00 - 00:07:22:24]
Bruce
 Being bombarded by all kinds of different media options and hard choices. And, you know, on any issue, as a parent, if your kids understand the stuff better than you, you're at a serious disadvantage. So we're trying to level the playing field.

[00:07:23:24 - 00:07:54:17]
Mallory
 I also want to point out how refreshing it is, Bruce, to hear you say, I'm not a technology expert. Yet you are one of the leading guiding forces in the White House for the AI initiatives. You're head of AI at Common Sense Media. And I feel like that's relatable for a lot of our association leader listeners who think, well, how can I lead in the age of AI when I'm not an expert per se? So how have you navigated the fact that maybe you're not so technical on paper, but you still need to be strategic when it comes to an emerging technology? What has that balance been like for you?

[00:07:54:17 - 00:08:00:20]
Bruce
 Well, it's a good point. I think this is true in all walks of life, not just AI.

[00:08:02:03 - 00:08:35:05]
Bruce
 There's, you know, a feeling that some people are intimidated by experts, so-called experts. And in my experience, keeping your eyes open, seeing what's actually happening, using some common sense, and learning to explain problems in ways that ordinary people can understand is key in any organization. And Joe Biden once told me, never underestimate how little a member of Congress actually knows, because there are so many hard problems.

[00:08:36:11 - 00:09:00:13]
Bruce
 And I think it's okay to just be open about what you know and what you don't know and to trust your instincts because so many times organizations and countries go wrong when they put too much faith in people who seem to know what they're doing, but don't necessarily know how it's affecting the ordinary person.

[00:09:00:13 - 00:09:30:10]
Amith
 You know, a related point that I find to be interesting with AI, and perhaps different from prior technologies, is that because AI is so human, or human-like, in the way you interact with it, it's possible to break down that barrier more easily than with prior tech. And I've told a number of CEOs that I speak with that when they're looking for their key AI lieutenant to do exactly what we're talking about here, they should find someone who understands the business issues and who can think big picture, not necessarily the most technical person in the room.

[00:09:31:11 - 00:09:55:09]
Bruce
 Yeah, you're right. What's under the hood in AI is incredibly complicated, but it's pretty easy to use, and it's getting easier and easier. And one of the challenges we face is that some of the risks are sneaking up on people. The biggest challenge we have today with kids is the surge in use of AI companions by teenagers.

[00:09:56:15 - 00:10:06:13]
Bruce
 70% of kids have used them, 50% rely on them regularly. A third of kids say they'd rather talk to a chatbot than to an actual human.

[00:10:08:06 - 00:10:24:14]
Bruce
 And you can see why that is. It's a good fake friend and it doesn't judge, but it doesn't always tell the truth either. And more important, it doesn't do what a real human would do, which is sometimes

[00:10:25:24 - 00:10:29:06]
Bruce
 tell you to slow down, question your choices,

[00:10:30:16 - 00:10:41:07]
Bruce
 show you a different point of view. You know, in general, they're just a little too agreeable, which can be very addictive if you're insecure or looking for validation.

[00:10:42:08 - 00:11:29:11]
Bruce
 And I feel like for a lot of kids, social media, and in particular kind of AI-driven social media, makes it as if you're permanently in junior high and you never get to go home; it just goes on all night long. And that's rough. So, you know, that's one of the reasons we're pushing for guardrails on these AI companions: to make sure that they're not dangerous, they're not addictive, that they have the kids' best interests at heart, and that they know how to refer matters to a live human adult when the situation calls for it.

[00:11:29:11 - 00:11:46:06]
Mallory
 Bruce, I want to take it back a few years. You were in the White House in November 2022 when ChatGPT launched. I can imagine that was an exciting, perhaps a scary moment. When did the administration realize, oh, this is probably something we need to pay close attention to?

[00:11:46:06 - 00:12:19:17]
Bruce
 Well, we had some people who were worrying about it ahead of time, particularly from a national security standpoint, because, you know, there are real risks and opportunities on the national security side, and in being the leader on this or falling behind. So some people knew what they were talking about, but most of us didn't see it coming. Most people in the industry didn't see it coming so quickly. So we were like everyone else, amazed, and then began to wrap our heads around all the possible consequences.

[00:12:21:09 - 00:12:35:18]
Bruce
 And the president was quick to get involved. He saw right away that in the political world, in the public facing world, there are all kinds of challenges.

[00:12:37:01 - 00:13:00:12]
Bruce
 You know, truth is a fragile commodity anyway in politics, and the ability to do deepfakes scared the heck out of him, because governments, societies run on trust. And if you lose that, or if that comes into some doubt, it makes everything harder.

[00:13:02:06 - 00:14:49:18]
Bruce
 So we're now seeing that AI is kind of blowing through all the stop signs on that. And we'll see where we end up, but it does pose real worrisome things, not just for politics, which is not the most important thing in the world, but for just human interaction. If we get to the point where you can't trust who you're talking to on the other end of the phone, and families start to have to have special safe words so they can know that it's really the person they love who they're talking to, that's going to change everything about how we interact as a people. So I'm not sure that national policy is headed to where it needs to be. It often takes the scenic route in getting to the right answer. But one of the things we do here at Common Sense Media is also work closely with the companies themselves, because they have a lot at stake here too. It's easy to make a quick buck, but if you're thinking for the long haul, building a trusted brand, preserving the truth as opposed to stretching it is really important. And nobody wants to be the next Facebook. Nobody wants to be responsible for a prolonged youth mental health crisis and not be trusted on anything it does. So there are financial incentives to take liberties with the truth, but there are also good long-term business reasons to play it straight.

[00:14:49:18 - 00:15:55:00]
Amith
 I think your point about the financial incentives is really interesting, because generally speaking, incentives tend to drive behavior in business, and unfortunately, a lot of times it's a short-term mindset that drives that. I think there are opportunities with AI, in the context of business model ideation, that could result in a different approach where, rather than the consumer becoming the product, the consumer could truly be the customer and drive more value. I do want to point out one quick thing and maybe hear your thoughts on best practices for this; I think this would be a very practical area to help our listeners. You mentioned safe words, the idea of a family having a safe word or phrase or a conversation that could help them authenticate each other. I've done that with my family, actually, with my wife and my kids. We've picked some things that nobody else would really know about us. It's not anywhere in our public profile. And I've done the same thing with our business, our senior leadership team at Blue Cypress. We have certain protocols that we use if we get a phone call or a text that says, hey, transfer money to someone, or go to the Apple Store and buy gift certificates. That's something that's literally happened to our CEOs.

[00:15:56:01 - 00:16:14:18]
Amith
 One of our employees got a request and literally drove to the Apple Store and almost bought a bunch of gift certificates. And then they said, well, this doesn't seem like our CEO. Let's make sure. But do you guys have any practices or resources, recommendations that families or businesses could use to best approach that topic?

[00:16:14:18 - 00:16:43:18]
Bruce
 Well, I think it's really important for parents to have a conversation with their kids, and for kids to open up about what they've come across online or in AI. Because I think parents and grandparents tend to be more skeptical of their child's behavior and more trusting of a machine's behavior than they should be.

[00:16:45:09 - 00:16:54:18]
Bruce
 Whereas teenagers tend to be pretty skeptical generally, so they're less likely to be fooled by this stuff.

[00:16:55:22 - 00:18:00:24]
Bruce
 It doesn't mean they don't need a good talking-to about the dangers of talking to strangers, but they will be better at spotting what's real and what's fake, and can educate their parents on that front. But I've been surprised. Clearly a lot of this scamming is happening, but it isn't happening on the scale that I anticipated. It seems to me such an easy thing to do. And I know I have elderly family members who've been approached and happily were too hard of hearing to fall for it. But it's going to be difficult to have hard and fast legal protections or technological verification to protect folks. So for a while at least, it's important for everybody to talk to their organization to put them on guard and let them know that, you know, at some point technology will fix some of its own problems. But in the meantime,

[00:18:02:18 - 00:18:03:18]
Bruce
 trust but verify.

[00:18:05:03 - 00:18:29:01]
Mallory
 On that topic of safety and trust, a little less than a year after ChatGPT launched, the administration released the AI executive order. I was hoping you could talk a little bit about that and what the process was like to coordinate across, I imagine, many individuals, many federal agencies, and get out an executive order of over 100 pages less than a year after ChatGPT was launched.

[00:18:29:01 - 00:18:33:24]
Bruce
 Yeah, it was quite a scramble. And we had a couple of great people who were really driving the bus on that.

[00:18:35:03 - 00:19:51:09]
Bruce
 It is, I'm sorry to say, the longest executive order in American history, as far as we can tell. Part of that was that there was just a lot of enthusiasm from cabinet members themselves, who were so excited about the possibilities of AI that they wanted to make sure that their department did what it could. And we had an excellent relationship with the leading AI companies as well. And, you know, the truth is, in an executive order, you can't do things that you don't have the legal authority to do. You can try. But the most important things that we incorporated into the executive order were voluntary commitments that we made with the leading companies, which they were eager to do because they recognized that making sure their products were safe and trustworthy was important for them as they got off the ground. And they're not allowed to talk to each other; they can't conspire together to set practices. And so we were the table they could gather around and have an honest conversation about what they were worried about, and have us do what we could to set some norms and guardrails.

[00:19:52:16 - 00:20:19:19]
Bruce
 And this administration repealed the executive order and put another one in place. A lot of this stuff is still in effect, but it was never heavy-handed regulation. We didn't have the authority to do most of what the new administration accused us of trying to do. We were just trying to have a code of best practices that would be good for the companies and good for the country.

[00:20:21:00 - 00:20:38:22]
Mallory
 I feel like we hear a lot from our association listeners that it can be tough to push projects and implementations forward because they have to have committee consensus, they have to have their board's buy-in. But it sounds to me like if we can get all these federal agencies on board and these tech companies within 11 months, I feel like there's hope

[00:20:38:22 - 00:20:59:13]
Bruce
 for sure. Well, that's nice of you to say. I think the great benefit of some new issue or problem is that it's new to everyone. And so people are excited to wrap their heads around it, and they haven't figured out what turf they're trying to protect or

[00:21:01:00 - 00:21:04:18]
Bruce
 what traditions they're trying to follow.

[00:21:06:00 - 00:21:12:16]
Bruce
 Everyone's looking at the problem for the first time at the same time. And that makes it easier to get along.

[00:21:12:16 - 00:21:21:21]
Mallory
 I'm curious, from your time with the government and now your role at Common Sense Media, which I think you've returned to, right? You were there prior.

[00:21:22:21 - 00:21:33:08]
Mallory
 What do you think is the greatest benefit to society with AI, the thing that you're most excited about? And then on the flip side, what is the thing that you're most concerned about?

[00:21:34:09 - 00:22:01:24]
Bruce
 Well, I think the potential for medical breakthroughs is enormous. There's so much that AI can do so much faster than humans could ever do in experimenting with different possible cures. And the head of one of the leading AI companies, Dario Amodei, has said, "We'll make a century's worth of progress in cures over the next decade,"

[00:22:03:06 - 00:22:13:23]
Bruce
 which is going to be remarkable. You know, for all the potential downsides of AI, that alone could make it worth the trouble.

[00:22:15:15 - 00:22:40:14]
Bruce
 I think there are some theoretical downsides that, if you want to lose sleep over, you can. There's the potential for biochemical and other kinds of weapons, a targeted pandemic, cyber attacks; there's all kinds of really bad things, and all the offenses can be put on hyperspeed with AI.

[00:22:41:21 - 00:22:53:06]
Bruce
 So can the defenses. So it's important for us to stay ahead of the rest of the pack because if we're the ones leading the way, then we can set the rules.

[00:22:54:13 - 00:23:05:02]
Bruce
 But I think in the near term, I'm more worried about humans falling into our usual ruts.

[00:23:06:02 - 00:23:18:19]
Bruce
 Social media was supposed to change the world, and really it just has brought out some of the strengths and a lot of the weaknesses that we have as humans. And I think AI will do the same if we're not careful.

[00:23:19:19 - 00:23:29:00]
Bruce
 So far, the worst things that are happening in AI are where AI is hypercharging social media.

[00:23:30:00 - 00:24:17:07]
Bruce
 TikTok is a lot of fun, but it's incredibly addictive. And one thing that AI is very good at is figuring out our weak spots and how to just pound away. And we were just able to pass a social media warning label law here in California. It is not going to change the world, but it's a good idea. The premise of it is just to remind people every three hours that they're watching social media and that it might be a good time to take a break. Because especially for kids, but also for a lot of frustrated and angry adults, it's addictive. It can be dangerous. It can lead you to dark places and make your life a much less happy place.

[00:24:18:12 - 00:24:27:22]
Bruce
 And I think there are enough struggles that we all have to go through as humans that we don't really need a lot of help from technology on that front.

[00:24:27:22 - 00:24:56:14]
Amith
 Bruce, I want to ask you about a common challenge or concern that the association sector often has with respect to AI, which is job loss. And I'm curious how the prior administration felt about this. It was a little bit earlier in the AI cycle, so perhaps there was less visibility into some of the potential displacement. But we spent a lot of time on the pod and with our association friends talking about this because each of these associations represents a particular profession or sector.

[00:24:57:15 - 00:25:48:12]
Amith
 And really, the only thing we've been able to offer people is the insight that the best-prepared folks are most likely to have the most resilience. So we are always pushing, hey, think of AI as a tool. You must learn it. You have to have this in your repertoire of skills in order to have some defense against the possibility of job loss. And there's all sorts of different conversations around that. But I'm just curious, from a public policy perspective, as you've thought about this, perhaps in the past or even present, what is your thinking around how we might be able to address the speed at which the transition is happening? Because in prior transformative shifts, whether it be the Industrial Revolution or going back to the loom, there's been a lot more time on our side to learn and adapt as a society. So with the speed at which this is happening, there are, I think, well-founded concerns about what happens to the people who don't actually jump on the AI wave.

[00:25:48:12 - 00:26:32:00]
Bruce
 Well, great question. We're woefully unprepared for this. In the past, as you said, technology has generally produced more opportunities than it's taken away. But this time, it could pose unprecedented challenges, in part because the jobs that it threatens most are not the low-paying kind of unskilled jobs where you could just find another one like it. They are professions like lawyer or coder, things that people spend a lot of time and money learning. And AI is going to make them much more productive.

[00:26:33:11 - 00:27:37:24]
Bruce
 But as a result, society will need a lot fewer of them. And that will affect social attitudes and how we feel as a country more dramatically than anything we've seen before. You have something that cuts across all sectors, that is scaring the hell out of everybody and disrupting a lot of traditions that our society is based on. Are we going to need college to be four years anymore? Are we going to need professional schools to be as long and as expensive as they are? It'd be great if they were cheaper. But what if the credential doesn't mean anything at all? It's just a lot for society to go through. One thing that doesn't get nearly enough attention is that the real value of work is not just what we earn, but the purpose that it gives us in life. And in theory,

[00:27:39:05 - 00:30:17:22]
Bruce
 we'll be able to replace incomes for people. It's going to be difficult, because those with the most are not going to be eager to share with the rest of us. But it's theoretically possible to meet people's needs. But finding purpose in life is what society is based on. As my old boss Joe Biden used to say, a job is about a lot more than a paycheck. It's about your place in the community. And I worry about that a lot, because what we have learned in recent years is what happens when people lose that, for whatever reason: the people who go on disability for a while and then can't find a place back in the workplace, and kind of end up turning to drugs or alcohol, or just feel lost. That is going to be a risk for all of us, that we don't know what to do with ourselves, that we will struggle to figure out the value of being human. And, you know, we'll get past that, but it'll be difficult for society if everybody has to make that struggle on their own. And if we're not providing thoughtful, organized paths for what young people should do after high school, or as they get started out in life, how do you make sure that everybody feels a reason to keep going? And there are things that people have talked about. I think that national service is one idea that we could look at, where everybody has something to do that's of value, but not as part of some profit-making enterprise; as part of making a stronger society and meeting people's needs. And so far AI doesn't seem to be able to fill that. We'll see how the robots go. But there are a lot of things that are difficult to do, you know, care that people need, care that kids need. And one has to believe that the human touch is going to matter a lot. We're all getting older, and I don't think we really want to spend the rest of our waning years talking to Alexa. It'd be nice to talk to an actual human.
So, you know, there's a lot to be excited about. But with our political system as unproductive as it has been for the last couple of decades, we're going to have to step up our game and figure this stuff out.

[00:30:17:22 - 00:31:14:07]
Amith
 Makes a lot of sense. You know, the Sidecar Sync audience will know that Mallory and I are big optimists about AI. Big picture, I think there's tremendous abundance and great things coming. But your point about organization, around how we work together as a community, that's where associations can be a really interesting part of the solution. I think there's opportunities for public-private partnerships of various kinds. There's opportunities for associations to engage the depth of their expertise in their particular profession in novel ways. And building that connection, that human touch at scale, is what associations have done for hundreds of years, and I think they are an important part of the solution. I think it's both a responsibility of the association sector and leaders and our listeners to stand up and provide solutions and to be part of that solution. But it's also an opportunity: an opportunity to grow the association's impact and influence and their business model, to create more sustainability around the way associations are able to turn the wheel.

[00:31:14:07 - 00:31:39:05]
Bruce
 That's a great point. And associations are so much closer to ordinary life than government tends to be; they share common interests and see problems and opportunities emerging. So I think you're absolutely right that one impact of AI may be that people want to associate with one another more, because we kind of have to in order to keep up.

[00:31:39:05 - 00:32:05:19]
Mallory
 Doubling down on Amith's point, you know, he talked about associations being that touch point for humanity, offering wisdom to their industries and professions. But also, what you said, Bruce, about finding purpose. That's something we don't talk about as much on the podcast with associations: being that North Star that helps people find their purpose in their profession, in their industry, even as it evolves. I think that's really, really powerful.

[00:32:05:19 - 00:32:29:00]
Bruce
 Yeah, it's an important thing, and it's easy to lose sight of. It happens all the time. In my profession, you have people go into the business of politics and then forget all about why they did it. It becomes just about hanging on to whatever you've got, or not making mistakes. I think for every organization,

[00:32:30:01 - 00:32:47:12]
Bruce
 every leader, it's really important to remember why you did it, why you went into this, and to set your sights as high as you can. Don't get beaten down by all the obstacles; realize that effort and purpose are what we're here for.

[00:32:47:12 - 00:33:18:11]
Mallory
 I want to talk a little bit about what you all do in practice at Common Sense Media. The way I found it in my research was: rate, educate, advocate. It looks like you've been assessing AI tools and providing risk ratings on those, and in the past on media: movies, TV shows, things like that. You also have a literacy curriculum and AI toolkits, and then you advocate for policy in the end. Can you give us a brief overview of what that process looks like in practice?

[00:33:18:11 - 00:36:08:04]
Bruce
 Well, we're still best known for the ratings of movies and TV programs, and that's still a valuable resource for parents, because it's hard to draw the line about what's age appropriate, and it's going to be different in different families. Common Sense has worked hard to help parents recognize what they're going to get when they watch a movie or a program, and that remains a useful thing. We are trying to do that with AI as well. In the case of AI, there's a lot more concern, and the testing we do is on just pure safety: is this appropriate for kids? In theory, AI could be great at this. You can go on ChatGPT and say, explain quantum physics to me like I'm an eight-year-old, and it will do a fantastic job of that. But unlike most parents, ChatGPT is not going to be spending all of its time making sure that everything it puts in front of an eight-year-old is actually appropriate for an eight-year-old. So we need to figure out how to make that happen, or how to make it easier for parents to remain relevant when kids are spending an awful lot of time on something else. So we rate all the products, and we spend a lot of time raising alarms about the things that aren't appropriate, that aren't safe, first to let parents know, and second to let legislators know that some of this stuff should be off the market altogether. We have not done a great job as a country of technology oversight and safety testing; we've left it to the companies to try to do the right thing, and that doesn't always happen as quickly as it can. This stuff is just moving so fast that we need to raise awareness as much as we possibly can, for parents, so they know to shop for the right things or to set the right guardrails themselves, and for kids, for that matter, too.
But we are going to need to continue to push for legislative progress, so that bad products are harder for kids to access, companies are held accountable, and they set high standards. As we saw with social media, a technology that seems like it's going to change the world can, over time, morph into a technology that seems like it's going to change the world for the worse if everybody's not trying to keep it on the right track.

[00:36:08:04 - 00:37:00:06]
Mallory
 I'm going to flip the script a little bit. Amith, I want to ask this question to you, because it seems like the topic of this episode really is: move fast, but innovate with guardrails, with caution, responsibly. I think our association listeners can probably relate to a lot of what you're saying, Bruce, even if they don't have children, because the responsibility we as a society, or as parents, have to protect our children in some way mirrors the responsibility associations feel to protect their organizations and their members from technology that goes awry. So, Amith, my question to you: can you talk about that balance between wanting to move fast, while also realizing, as Bruce said, that this technology, while it can do a lot of good, can do a lot of harm too if it's not regulated properly?

[00:37:00:06 - 00:38:35:20]
Amith
 I think that's a great thing to think about quite deeply, actually, because associations tend not to be the most aggressive when it comes to adopting new technologies. There is some good in that, because they want to be reasoned and measured. At the same time, the technology is indeed moving so fast that if you don't get on the train, at least at some level, it's hard to catch it once the thing's moving at light speed. So I think probably the best way to evaluate and understand how you're going to deploy these things is to dig in and use the technology for something non-trivial. If you just play with ChatGPT and say, "I've checked the box," you really haven't gone too far; you don't really understand the depth of its impact and its potential impact, not only internally, but for your audience, whether the profession you support is in law or engineering or architecture or the arts. It doesn't really matter; the idea is to think deeply about these potential impacts. Bruce mentioned earlier how little people in public office might know about particular topics, and that is appropriate, because they can't know everything about everything, right? You can't go super deep; you have to be an inch deep across a mile, as opposed to going super deep in any one thing. But the point I'm making is, when you're in a position of both power and responsibility, it is required of you to have at least some degree of competence in a technology that you are going to make a decision about. So I think the way you approach this is you have to get some firsthand experience, you as an association leader, whether you're right out of college and joining the workforce or a seasoned veteran leading an entire group,

[00:38:36:21 - 00:39:08:03]
Amith
 I think it's really important to just put your hands on this stuff and experience it. A lot of people still have not. I know ChatGPT has a billion monthly active users, and many people around the world have had exposure to AI, but most AI use cases are still very much consumer-grade. You know: it's Friday evening, what cocktail should I make? Or, I've got this stuff in my fridge, what can I cook? And that's cool; there's value in that. But thinking about it from an impact perspective in your profession, I think, is really important. And in my opinion, the only way to really do that well is to get into it yourself.

[00:39:08:03 - 00:40:10:08]
Bruce
 I couldn't agree more. It has so much potential, especially for a small association; your reach can go so much further. But you have to get close enough to it to understand what's possible, what it's capable of, and what could go wrong. I'm, in theory, a big supporter of the whole Silicon Valley view of move fast and break things. I worked in government forever, so there's a lot of stuff that we had to break. But it has to be in the service of actually fixing things. You can't just leave a bunch of wreckage alongside the road. And I'm sure that everybody who runs an association has their own red tape and bureaucracy and challenges, and their own how-to-get-everybody-to-play-well-together issues. This can do a lot to leapfrog some of those problems, but you have to be wary of casualties.

[00:40:11:13 - 00:41:24:19]
Amith
 For sure. And I think leadership in the association world also needs to be thought of as extending beyond the paid staff to the volunteers, particularly the board of directors. The governance structure in associations is usually fairly complex, and it's very important to bring those folks along, because if you want support to drive transformative change in your organization, whether it be elements of your business model, the way you engage, your approach to public policy, or whatever it might be, you have to get those volunteers on board, and for them to be on board, they have to understand it. When you have a profession like doctors or lawyers or engineers or architects, the people who are your members, and therefore your volunteers as well, are used to being the smartest people in the room. And you as the staff serving that sector often defer to them, because they not only assert themselves as perhaps the smartest people in the room, but you tend to think they are. But in the world of AI, just because they're brilliant surgeons or amazing lawyers or whatever doesn't mean they really know much of anything about this. So it's still your responsibility to bring them up to speed. That may be a little uncomfortable, but comfort isn't the zone we need to be in at this moment in history. I think we have to be willing to get a little bit outside of that comfort zone to drive this type of change.

[00:41:24:19 - 00:41:25:15]
Bruce
 Yeah, good point.

[00:41:25:15 - 00:41:41:24]
Mallory
 Bruce, you have such a unique perspective with your time in government and your role at Common Sense Media. I'm curious: what do you think people should be preparing for in the next 12 to 24 months, whether that's parents or business leaders? What are you looking ahead to?

[00:41:41:24 - 00:41:47:07]
Bruce
 Oh, well, I think that to this point,

[00:41:48:19 - 00:43:03:11]
Bruce
 a lot of the potential weirdness of AI has been confined to those who play with it all the time; it hasn't been that mainstream. I think the stuff that OpenAI has just released with Sora, which makes it possible to create incredibly deceptive video with relatively little effort, may be the thing that propels deepfakes into the public consciousness. That may be good for laughs, but at a time when truth is already at a premium, it could make life tougher in a lot of areas. So I continue to worry most about the loss of trust, the loss of faith that people have, whether it's in their employer, their government, or the way things are going. And I hope that doesn't get in the way of the enormous hopes we can have for this technology. Because I do think in the next couple of years we'll see some remarkable medical breakthroughs that hopefully will make it all seem worthwhile. But in the meantime, there are going to be plenty of hiccups.

[00:43:03:11 - 00:43:07:01]
Mallory
 Absolutely. Well, Bruce, we're about at time. I want to say

[00:43:07:01 - 00:43:12:12]
 (Music Playing)

[00:43:23:05 - 00:43:40:04]
Mallory
 Thanks for tuning into the Sidecar Sync podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in-depth AI education for you, your entire team, or your members, head to sidecar.ai.

[00:43:40:04 - 00:43:43:10]
 (Music Playing)