
Summary:

In this episode of the Sidecar Sync, Mallory Mejias and Amith Nagarajan dive into three seemingly distinct but tightly connected topics: a groundbreaking robotic hand from China that could redefine how machines interact with our world, the growing movement of mandatory AI literacy training sweeping through corporations and governments, and new data from Anthropic showing a widening economic divide in AI adoption. They explore what these trends mean for the future of work, equity, and innovation—and why every association leader should be paying close attention.
 
Timestamps:


00:00 - Introduction: Fall Plans & AI for Kids
05:41 - Breaking Down the Wuji Robotic Hand
10:03 - Why the Simulation-Reality Gap Matters
14:41 - Association Implications of Advanced Robotics
16:50 - The AI Literacy Mandate Movement
22:37 - Mandating Training: How Far Should You Go?
25:41 - Role-Specific vs. General AI Education
30:39 - Anthropic's AI Usage Index Explained
33:12 - Why Automation is Outpacing Augmentation
38:42 - Final Thoughts: Sizzle, Substance & The Road Ahead

 

 

👥Provide comprehensive AI education for your team

https://learn.sidecar.ai/teams

📅 Find out more about digitalNow 2025 and register now:

https://digitalnow.sidecar.ai/

🤖 Join the AI Mastermind:

https://sidecar.ai/association-ai-mas...

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:

https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE

https://sidecar.ai/ai

🛠 AI Tools and Resources Mentioned in This Episode:

ChatGPT ➔ https://openai.com/chatgpt
Claude ➔ https://claude.ai
Suno ➔ https://suno.com
NotebookLM ➔ https://notebooklm.google
Google Gemini Nano ➔ https://ai.google/discover/gemini
Anthropic Economic Index ➔ https://shorturl.at/23b42
Citi AI mandate ➔ https://shorturl.at/DLXZY
Wuji Tech Demo ➔ https://www.youtube.com/watch?v=LXVV-oErD8s

👍 Please Like & Subscribe!

https://www.linkedin.com/company/sidecar-global

https://twitter.com/sidecarglobal

https://www.youtube.com/@SidecarSync

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

🤖 Please note this transcript was generated using (you guessed it) AI, so please excuse any errors 🤖

[00:00:00] Amith: Welcome to the Sidecar Sync Podcast, your home for all things innovation, artificial intelligence and associations.

[00:00:14] Greetings, everybody, and welcome to the Sidecar Sync Your Home for content at the intersection of artificial intelligence and the wide world of associations. My name is Amith Nagarajan, and

[00:00:27] Mallory: my name is Mallory Mejias.

[00:00:29] Amith: We are your hosts and we have an episode for you today. It's gonna be a lot of fun. We've got some interesting topics and, uh, you know, we'll be walking through them shortly, but, uh, in the meantime, before we get there, how are you doing, Mallory?

[00:00:42] Mallory: I'm doing really well, Amith, having a good week so far. I was just telling you, I have family visiting this weekend, my sister, her husband, and their three young children, so I will be in full aunt mode, which I'm excited about. And I do feel like exploring a city as an adult is very different from [00:01:00] exploring a city when you have children.

[00:01:01] So I'll get to kind of see Atlanta through children's eyes, which should be fun.

[00:01:06] Amith: Yeah, that will be fun. And uh, you know, uh, you can kind of simulate what they're gonna do ahead of time with ChatGPT or Claude and then get some good ideas. So,

[00:01:15] Mallory: you know, that's a good idea. I haven't done that yet. I wonder now that you're saying that, I wonder if they've had any exposure to ChatGPT or Claude.

[00:01:23] So that might be fun. They're gonna hang out with us Friday night. We were planning to do a little Halloween fall evening, so maybe I'll blow their minds with some, uh, ChatGPT stuff.

[00:01:32] Amith: That'd be fun. Or show them Sora or, uh, make some songs together with Suno or, um, do NotebookLM or, you know... Yeah.

[00:01:39] Mallory: Or maybe, uh, Google Nano.

[00:01:42] I could like take their picture and like put them into a cool scene.

[00:01:45] Amith: Yeah,

[00:01:46] Mallory: that could be fun. I like where your, where your head's at, Amith.

[00:01:49] Amith: All this sizzle stuff with AI is pretty cool. I think it's great to get people's attention with it and, uh, like we've been talking about, you know, it's interesting because where the value gets created isn't necessarily where the sizzle is, [00:02:00] although the sizzle is definitely interesting and important to keep aware of because that's what opened our eyes to AI as a community.

[00:02:07] Right? I mean, a number of people, including myself, have been involved with AI for a long time preceding the launch of ChatGPT, that moment in time that everyone remembers in the fall of 2022. Um, but it does take moments like that to shift the pattern of interaction, the pattern of attention, because prior to that moment in time, and really probably six months after that, is when people in this market really started paying close attention.

[00:02:30] But prior to that, AI was just kind of hyped to people, or it was just like, yeah, that's just some technical thing people were talking about, but it's never gonna be real. So going back to the sizzle, I think getting young people's attention through really cool ways that they can engage with it, I think is important because.

[00:02:46] There is, uh, seemingly a divide among younger folks in terms of their interest and willingness to use AI. I was, uh, speaking at my son's school last year. He was a junior in high school last year and now he's a senior, and I was asked to go speak to, uh, the [00:03:00] computer science program there, and I was happy to do that.

[00:03:02] And I showed them some really cool AI coding things and I just asked them, like, how many of you are using AI? How do you feel about it? And I got a lot of different answers, even from kids that are taking computer science. Uh, some of them were pretty negative about it, and one of my kids is not particularly fond of AI.

[00:03:17] I think we've talked about that before too. So I think getting people aware of it is important because part of it is the thing we keep coming back to, Mallory, that the bottom line is, whether you like it or you hate it, it's in your world now, just like electricity and telephones and the internet. And so if you don't know how to use it, and if you don't know how to use it well, you've got a problem.

[00:03:35] It's kinda like showing up to work somewhere and saying, what's this thing called internet search? I've never heard of Google before. I don't know how to use Microsoft Word or similar tools. You don't stand much of a chance of being employed if you have that kind of a gap in your ability. So I, I think the same thing is true for AI now. It'll definitely be true

[00:03:54] By the time my kids graduate from college in a few years.

[00:03:57] Mallory: Yep. And maybe by the time my nieces and [00:04:00] nephews graduate. Maybe we'll invite one of them on the podcast. You know, maybe a seven-year-old's take on AI could be fun. Everybody listening, if you do have an interest in kind of understanding and hearing how young people are using AI, we have a really good episode with Connor and Finn Grin, which is a father-son duo. Finn, I think at the time, was 16 years old, in high school, and he was obviously a big AI advocate.

[00:04:21] And if I'm not mistaken, he will be, both of them will be at digitalNow this year, right? That is right.

[00:04:26] Amith: We're very pleased to have the two of them, uh, lead a keynote and be part of a panel that I think you're moderating Mallory, uh, later that day. And, uh, I didn't know if

[00:04:34] Mallory: Finn, I knew Connor was gonna be there, but is Finn actually giving a presentation too?

[00:04:37] He's, wow. It's super

[00:04:39] Amith: excited to have him, and I think he's going to be, um, a very highly sought-after person after he speaks and people understand, uh, what he's been doing with AI. So I won't spoil that further, but it's, uh, it's definitely another, another good reason for you to go to your web browser.

[00:04:54] Mm-hmm. Or go to ChatGPT and ask it to buy a ticket for you at digitalNow. So, um, [00:05:00] get going on that. Try it

[00:05:00] Mallory: out. Use some computer use. I've gotta say, Finn Grin is gonna have like the best college application this world has ever seen by the time he is applying. I agree. Uh. All right. Well, today, as Amith mentioned, we've got an exciting episode like we always do.

[00:05:15] We're gonna talk about three topics that might seem a little bit unrelated at first, but they're all kind of about the same thing: who's winning in AI and automation, and what happens to everybody else? So we're starting with a robotic hand from China that has engineers calling it a game changer. Then moving to major companies making AI training mandatory, and finishing with new data showing AI adoption is splitting along

[00:05:40] economic lines. So starting off first with the Wuji hand. A company in Shenzhen called Wuji Tech just unveiled a robotic hand that's turning heads in the robotics world, and turning hands too, I suppose. It represents a fundamental departure from how most advanced robotic hands work. Unlike [00:06:00] Tesla's Optimus hand, or most competitors that use tendon-driven systems (this is where motors sit in the forearm, pulling cables like puppet strings),

[00:06:08] the Wuji hand puts micro actuators directly inside each finger segment. It has 20 degrees of freedom, with four independent joints per finger. As a reminder, degrees of freedom refers to the number of independent movements a robot hand, or a robot in general, can make, which is determined by its joints, essentially making each tiny finger in this hand a little robot.
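To make the degrees-of-freedom count concrete, here is a minimal, purely illustrative Python sketch of a 20-DoF hand modeled as five fingers with four independent joints each. The joint names and per-finger breakdown are assumptions for illustration, not Wuji's published design.

```python
# Illustrative only: a 20-DoF hand as 5 fingers x 4 independently driven joints.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Joint:
    name: str
    angle_deg: float = 0.0  # one independently controllable motion = one degree of freedom

@dataclass
class Finger:
    name: str
    joints: List[Joint] = field(default_factory=lambda: [
        Joint("base_flex"), Joint("proximal_flex"),
        Joint("distal_flex"), Joint("abduction"),
    ])

hand = [Finger(n) for n in ["thumb", "index", "middle", "ring", "pinky"]]
total_dof = sum(len(f.joints) for f in hand)
print(f"Total degrees of freedom: {total_dof}")  # -> 20
```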

[00:06:33] The hand itself weighs less than 600 grams, or about 1.3 pounds. It measures approximately eight inches and can lift 44 pounds in a power grip, uh, and 11 pounds in a pinch with just two fingers. The technical advantage here is significant. Direct drive motors in the fingers eliminate the transmission losses and mechanical slack that plague tendon-based systems, giving much greater [00:07:00] control, precision, and repeatability.

[00:07:03] This dramatically reduces what engineers call the simulation to reality gap, the difference between how a robot performs in computer simulations versus in the real world. Tendon systems are hard to model accurately because cable tension varies, but direct drive actuation is predictable and consistent.

[00:07:21] So how does this compare in the competitive landscape? Well, it sits in a powerful sweet spot. Tesla's Optimus hand has 10 to 16 degrees of freedom using tendons and weighs about one and a half to two pounds. The Shadow Robot hand, a research-grade system costing over a hundred thousand dollars, has 24 degrees of freedom and advanced tactile sensors, but is complex and less focused on durability or scalability.

[00:07:47] The Wuji hand is more dexterous than commercial options, more robust than research prototypes, and potentially more affordable at scale. Experts are noting that Wuji could outpace Western models in [00:08:00] manufacturability and consistency. And the direct drive architecture is now influencing design choices in rival hands, pushing the entire field toward more reliable robotics.

[00:08:11] We'll include a link to the demo from Wuji Tech in the show notes. Amith, I think you've seen it. It's pretty jarring. I know it's just a hand, but it actually looks like someone just chopped off an actual human hand and it's moving its fingers, it's waving. It has a pen between two of its fingers, as you might do at your desk, so it's really insanely impressive.

[00:08:33] Amith, what was your initial take?

[00:08:35] Amith: The same thing. I think it was impressive. I think the fact that, you know, this direct actuators concept that you talked about is a big thing because making the, those motors essentially small enough and powerful enough to actually be useful is the key, right? So very, very small, uh, and very, very powerful.

[00:08:54] Relative to size, um, you know, requires innovation and there's some really cool things that have happened here. So, [00:09:00] um, that's gonna make everything much easier to scale in terms of the complexity of movements that this hand can, can make. Um. The hand is being focused on so much so in robotics because it represents, uh, perhaps the most complex aspect of our anatomy and how to replicate that so that robots can interact with the physical world that, you know, we've created, you know, to, to suit our bodies.

[00:09:24] So if we can make a hand very effective, that means that a lot of robotic applications. Um, in the home or in the office outside of industrial settings where, um, this is still very useful, but, but industrial settings are more predictable and you can use robotics in, in ways that don't necessarily require a human style interface.

[00:09:42] But, uh, I find it fascinating. I think that it just is another example of the compounding of multiple different exponentials, be it material science or, you know, the energy curve with batteries. Um, and the ability of course to have the algorithms move fast enough to take in all the data and respond. [00:10:00] So it's, to me, it's a very exciting thing.

[00:10:02] Mallory: Mm-hmm. I mentioned the reduction of the simulation-to-reality gap with this hand. Can you talk a little bit about that and why reducing that gap is going to lead to probably a ton of innovation and advancements?

[00:10:15] Amith: Sure. One way to think about it is if you have a degree of separation between something you're trying to do and something that you're controlling, the more of those degrees of separation, uh, the more opportunity there is for a loss of information potentially, or just a challenge in kind of controlling it.

[00:10:31] Think about, for example, um, if you've ever had a trailer attached to the back of your vehicle and you've had to, uh, back it up, you know you're going forward. It's not really that much more difficult to drive a car, uh, or a truck with a trailer attached to it. You have to be a little bit thoughtful about turns you make and how much room you give yourself.

[00:10:47] But if you're backing up, you kind of have to invert. What you've taught your brain to do, right? Because, uh, the trailer moves in the opposite direction of where the back of your car is going. And so when you think about that, you say, okay, well that's kind of like how it would be with a [00:11:00] ligament connecting to the actual joint you're trying to move.

[00:11:03] Um, it isn't necessarily the inverse, but it's a degree of separation that causes additional complexity, and there's more variables in the mix. So they were talking about, um, the, the fact that eliminating the tendon eliminates the variability in the, uh, overall strength of the tendon, but also the flexibility of the, of the tendon, things like that.

[00:11:22] And so that's a very important variable that's, that's hard to predict, uh, consistently. And when you have manufacturing processes producing these things, you're gonna have a lot more consistent performance. So it's, it's all of those things. It simplifies the model that the AI has to understand in order to control the robotic, um, you know, hand or.

[00:11:40] Any other, uh, type of device essentially.

[00:11:44] Mallory: Mm-hmm. We had an episode somewhat recently where we talked about physical AI and Jim Fan from Nvidia proposing the physical Turing test, which in his presentation was having maybe a party at your house, or I think it was a hackathon in his situation, [00:12:00] leaving your house a wreck, pizza boxes,

[00:12:01] boxes everywhere. You come home, you got a candlelit dinner made for you. The house is impeccably spotless, and you as a human can't tell whether that dinner was cooked, uh, and the house was cleaned by a fellow human or a robot. So I feel like this is perhaps a step in the direction of that physical Turing test.

[00:12:20] Do you agree with that?

[00:12:22] Amith: I agree. I mean, it's, you have a lot of different, to achieve that outcome, you have lots of different pieces that need to be built that don't exist today. So the type of, um, really nimble robotics that you're, that you need in order to navigate those environments and to be able to, you know, do all sorts of arbitrary tasks like that, uh, with enough strength to be effective, but not so much strength that it breaks things, right.

[00:12:43] So you don't want to have, uh, you come back and it's like, oh, well, but by the way, it did all of that. But in the closet. There is a whole bunch of debris from all the broken dishes and broken glassware and other things that the robot destroyed in the process of creating that meal for you. So, uh, [00:13:00] ideally, you know, some people may consider that a tolerable outcome, but I think that most people would require the robot to be at least as good at not breaking things as a human in that capacity.

[00:13:10] So, and of course, that puts a pretty high bar out there because, you know, we've gotten pretty good, collectively. I haven't at cooking meals, but we collectively as a species have gotten pretty good at cooking. I know how to make eggs. That's about it. You know, hey, that's

[00:13:22] Mallory: something. Eggs can be tough. I dunno, eggs can be kind of challenging, and I

[00:13:26] Amith: can only do scrambled eggs, so my range is limited.

[00:13:28] Okay, so.

[00:13:30] Mallory: You're, you're a one egg type of guy. Hey, that works. Yeah,

[00:13:33] Amith: I'm one and done. That's what I do. And then I'm done cooking for the day. So, but, uh, I think, I think a robot will be able to beat me. I don't think it'll be able to beat my wife, who is a professional chef and, uh, others who are somewhere between the two of us in terms of range of skills.

[00:13:45] But, um, I think that the other thing that obviously is as important is the speed at which, uh, AI models can inference on the stimuli they're receiving from a variety of sensors, including video and potentially radar or laser detectors, where you have a robot in the physical [00:14:00] world. The robot needs to continuously feed this data back to the brain, essentially, which is the AI model that's running.

[00:14:06] And so if it takes the AI model, uh, more time than real time to inference on that input and make a decision, uh, you have a problem. So you need really fast, localized AI to run there. You can't afford, you know, the latency of a network, and you need to have something that inferences in, you know, single digit or low double digit milliseconds in order to do a lot of this stuff you're talking about.

[00:14:29] Uh, so there's a lot of really interesting things, coming back to the AI conversation we usually have. Mm-hmm. But here we have many different, uh, innovations required to have this kind of, you know, physical manifestation of AI.
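As a rough way to see why single- or low-double-digit-millisecond inference matters, here is a back-of-the-envelope latency-budget check. The control rate and overhead numbers are assumptions chosen for illustration, not measurements from any particular robot.

```python
# Illustrative real-time budget: at 100 Hz control, the on-board model has ~10 ms
# per cycle to process sensor input and emit a motor command.

def budget_ms(control_hz: float) -> float:
    """Time available per control cycle, in milliseconds."""
    return 1000.0 / control_hz

def fits_realtime(inference_ms: float, control_hz: float, overhead_ms: float = 2.0) -> bool:
    """True if model inference plus sensing/actuation overhead fits in one cycle."""
    return inference_ms + overhead_ms <= budget_ms(control_hz)

print(fits_realtime(inference_ms=8.0, control_hz=100))   # True: 8 + 2 <= 10 ms on-device
print(fits_realtime(inference_ms=80.0, control_hz=100))  # False: typical network round-trip territory
```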

[00:14:41] Mallory: Mm-hmm. What do you think association leaders can take away from this, besides, that's a pretty cool robot hand?

[00:14:47] Amith: It's pretty cool. The video's also a little bit creepy. Uh, yeah, it, you should take away that this stuff is coming. That, um, you know, the other thing that we don't really fully appreciate here in the western world is how incredibly [00:15:00] advanced at robotics the Chinese are. And, uh, this is one example of many.

[00:15:05] Um, and there's a lot in terms of not only the research side, which we're talking about, that's quickly being commercialized, but the deployment of robots in industrial settings and otherwise, where we are radically outstripped. I was listening to a podcast, I'm trying to remember which one, uh, fairly recently, that talked about, uh, last year China deployed something on the order of magnitude of 500,000 industrial robots, and in the United States, we deployed something like 40,000.

[00:15:30] So, um. It's dramatically more adoption. And so when you adopt the technology at scale, you learn a lot more about it and you move faster and then you can iterate faster. So I think that it's, uh, a bit of a wake up call that number one, this is not 20 years from now, this is probably two years from now, maybe five years from now, and.

[00:15:49] In your industry, in your profession, this will have an effect. So let's say your association is in, uh, something medical related, uh, or maybe your association [00:16:00] is in the world of engineering, there's lots of physical things that happen in those worlds. And so if this capability exists all of a sudden, right?

[00:16:09] It's not all of a sudden, but it feels that way. Then it's gonna affect your members, it's gonna affect your profession. So at bare minimum, you as an association leader should have some general familiarity with the trend line of what's happening with robotics, so that you understand how it affects your members.

[00:16:23] And then of course, there's lots of things we do within association operations where robotics may be helpful. So I think there's a lot of opportunity here, but the, the more important thing is the trend line. Hopefully like the sizzle we talked about earlier, this is the kind of thing that might attract people into saying, Hmm, this is really interesting.

[00:16:39] I wasn't that interested in AI before, but for some reason this caught my attention. So that's, that's part of it as well, I think.

[00:16:45] Mallory: Mm-hmm. Definitely watch the video and I think it will catch your attention for sure. Alright. Moving to topic two, the AI Literacy Mandate Movement. Can y'all believe we're reporting on this?

[00:16:57] We are witnessing a fundamental [00:17:00] shift in how organizations view AI education. It's no longer about whether to train employees on AI. It's about when and how. Citibank just made headlines by mandating AI training for all 175,000 employees globally, requiring completion within just 60 days. Their program is called Asking Smart Questions, Prompting Like a Pro, and it uses adaptive learning technology that experts can complete in 10 minutes while beginners get 30 minutes.

[00:17:30] JPMorgan now requires AI training for all new hires as part of onboarding. Bank of America reports over 90% of its workforce uses AI tools after completing training. Wells Fargo has trained thousands through Stanford's human-centered AI curriculum. PwC has integrated AI literacy into company culture through gamified monthly tournaments and quizzes.

[00:17:52] This trend extends beyond corporate America, though. The EU's AI Act, effective February of 2025, legally [00:18:00] mandates AI competency training for all employees who interact with or operate AI systems. Companies must ensure personnel acquire sufficient AI knowledge, including technical understanding and context of use.

[00:18:13] Germany and other EU member states are implementing mandatory training protocols for compliance. Meanwhile, over 250 CEOs from companies like Adobe, AMD, American Airlines, and Cognizant signed an open letter urging mandatory AI education in all K-12 US schools. So the shift, I think, is clear: AI literacy is transitioning from nice to have to must have.

[00:18:37] Although I think at Sidecar we've always said it was a must have. Leading AI training firms report surging demand for customized mandatory learning paths, role-based corporate workshops, and compliance certification. Organizations are treating AI fluency the same way they treat cybersecurity awareness or workplace safety: as a baseline competency

[00:18:57] every employee needs, not an optional [00:19:00] skill for technical teams. Amith, is this just music to your ears, this whole topic?

[00:19:06] Amith: Yeah. Surprise, surprise that we picked this topic for the Sidecar Sync Podcast, where we've been known to take a fairly forward-looking stance. Some might say aggressive stance with respect to.

[00:19:17] AI training. So, uh, very much music to my ears that organizations are doing this, and not actually really for their benefit. I, I do like to see when companies improve themselves and become more effective and all that, but mainly because I think it's so critical that big organizations that have tens of thousands or hundreds of thousands of employees, um, treat their people right and lead them into the future.

[00:19:40] I've been known to say that if you are not, not only offering, but mandating AI training for your, for your people, you are committing leadership malpractice. And the reason that we have taken such a strong stance on this is because of a very simple fact. I don't think anyone would argue, even if they don't like AI [00:20:00] very much or they're not using it very much, most people would not argue that AI isn't going to affect the skills required by the workforce.

[00:20:06] We just talked about that earlier. So to the extent that you believe that AI is going to significantly affect the skills and experiences that are required to be an effective employee, basically anywhere in the world, it is really critical as a leader that you prepare your workforce for that, not only to serve your organization, of course that's important, but to serve those individuals' futures.

[00:20:29] That's your job as a leader. Your job is to grow people. That's the product that you're making in your organization. Whether you're a software company or a professional services company, or you're an association, your job is to grow people. And so as a leader, you are not serving your employees or your volunteers

[00:20:45] um, if you are not providing them AI training. And to the extent that, you know, you agree with this statement, um, to really, truly mandate it, to, to deeply drive a stake into the ground and say, we're doing this and you have to do this. Uh, it's optional to the extent that [00:21:00] you don't wanna work here anymore; we're not, we can't literally force you to do something.

[00:21:03] But if you want to stay employed here, you will complete this AI training. I think it's really important that leaders do that because the, the optional things, we've seen this over and over with the Sidecar AI Learning Hub for Teams: associations sign up, and they sign up everyone. We only sell that product on an all-you-can-eat basis, meaning there's no metering of

[00:21:22] seats. Everyone in your organization plus a hundred volunteers get access to the platform. And so the whole idea is to not make it a scarce resource, right? To make it available for everyone. But we see different adoption curves. When you have a leader that takes that stand, stands up tall, and says, listen,

[00:21:38] um, this is the most important technology shift we'll probably ever experience in our lifetimes, and everyone here has to get on board with this and understand it. Um, you have very broad adoption. We've seen organizations have a hundred percent participation rates, not just in taking the courses, but getting the AAiP cert.

[00:21:55] So that's super exciting. And then we also see organizations that only get 10, 20, 30% of their [00:22:00] employee base to go do it. And it's directly correlated to this idea of is it mandatory or not? Uh. A lot of leaders in this market are not comfortable with that. They're more comfortable with the idea of consensus or committee based decision making or opt-in.

[00:22:15] And I understand that, but this is not a time to focus on being comfortable. It's a time to focus on results for your people and for your organization. So to me, the Citi, uh, announcement absolutely was music to my ears.

[00:22:30] Mallory: I thought so, Amith. Really, we didn't, we didn't know Citi was doing this. The topic just kind of fell into our laps, but it worked out well.

[00:22:36] Sure. I mean, I've worked with a few, uh, association leaders, some of the teams that have gotten into the AI Learning Hub, and we talk about, we've taken kind of a firm stance on the idea of mandating versus voluntary learning. However, within mandating AI education, there's kind of different levels of that, different tiers.

[00:22:52] So I wanted to get your take. We've had some associations try to, uh, I guess, cherry-pick certain courses that they [00:23:00] would mandate, and then some would be optional. What is your opinion on having role-specific AI education versus more general AI education? Does everybody need to know the technical stuff? Like, what do you think about that?

[00:23:13] Amith: First of all, by the way, knowing that I like this topic doesn't take a particularly advanced LLM to predict that outcome. I guess, you know, it's, it's just, uh, if you've listened to me talk for more than about 10 seconds, you know that I'm, I'm really deeply passionate about this. I mean, our mission at Sidecar is to educate a million people in the vertical by the end of the decade because we think that is the single most important thing we can do on our end to affect that outcome.

[00:23:35] To drive important, uh, changes in this, in the sector. So, um, I would say that, to your, uh, question, the most important thing, Mallory, in my opinion, is just doing something. So you have to have some kind of AI training. It can be the free courses that are out there from Google or Microsoft, or OpenAI, and now Anthropic; they have, you know, free courses.

[00:23:56] Now most of these vendors, of course, their free courses do talk about their tools, but the [00:24:00] idea is it doesn't really matter. Just do something and make it mandatory. Obviously, we at Sidecar offer the AI Learning Hub for association people, which has role-specific training for membership and for marketing and for finance, and

[00:24:12] ethics training around AI specifically contextualized for associations. So the reason we built that is we felt there would be value in providing context-specific AI education, which is continually updated and all of that. But the goal here isn't to pitch you Sidecar. Lots of people are finding that it's a great fit, but

[00:24:29] to me the most important thing is people doing something. So yes, have some baseline, minimalist education. Citi's thing isn't a three-week training course. It's an hour or whatever you said earlier, right? It's under an hour, so it's very reasonable. You can't mandate something that's unachievable. You wanna mandate something that's quite reasonable.

[00:24:49] Hopefully there will be people in that group, the, the group of everyone, that are so intrigued that they wanna take more advanced courses. So you should definitely offer that. And in some roles you may [00:25:00] mandate additional training. Uh, in the context of the Sidecar, uh, content, you know, we typically see people mandating the foundations course and the prompting course.

[00:25:08] Those are the two most fundamental, I think, that give, uh, give a student a really clear understanding of what AI is, how it works, uh, how to apply it to their jobs and their association roles. So that's important. Uh, and then of course, having electives essentially beyond that, I think, allows, uh, the more interested learners to go further.

[00:25:27] Mallory: Mm-hmm. I think I've gotten some of your, uh, aggressiveness around AI education through osmosis from you, Amith, I guess, on the podcast, because I always would tell people, and maybe this is too aggressive, but I feel like AI is blurring so many lines. Like, if you traditionally work in marketing and have never been involved in software development, like, AI's blurring all those lines.

[00:25:49] So I would always tell people, like, I think you should mandate all of them, and they're already in the Learning Hub. It's really to no benefit of, like, Sidecar personally, but my thought being you never know what you can [00:26:00] learn from a course outside of your role. Totally. I don't know what you, what you think about that, but

[00:26:05] Amith: No, I think, I mean, look, first of all, I don't think there's such a thing as too aggressive Mallory.

[00:26:09] It's like saying there's too much ice cream in the world or something like that. I dunno if everybody

[00:26:12] Mallory: agrees with Yes, but you're right. I

[00:26:14] Amith: know, I know. I'm kind of ridiculous about that. But my, you know, I have, my role is to play that kind of, uh, function in this community and I, I really believe in it, obviously.

[00:26:21] I'm not doing it as theater. It's just, it, the reality is that, um, people have to get aggressive in order to drive change. You can't drive meaningful change, certainly quickly enough, if you take a passive approach or kind of a moderate approach. You have to be pretty aggressive about it. And that's where we've seen the best results, is where you have a leader or leaders, preferably a whole leadership team, standing up and saying, we're gonna go do this thing.

[00:26:43] Um, I think that what you're describing is a really important insight, Mallory, that it's not just about what you find to be the most obvious thing that's your area of expertise, like, oh, I'm a finance person, I'm gonna do the finance course, or I'm a marketing person, I'll do the marketing course. That's wonderful.

[00:26:58] But yeah, cross-pollinate that [00:27:00] stuff. Learn something that's a little bit outside of your wheelhouse. So I think that's important. I also think learning needs to be done in a consumable way. You can have a big push initially, but the most powerful thing is if you get people learning continuously. So what I always tell people.

[00:27:13] When I deliver keynotes and workshops and stuff is to say, listen, um, what you need to do is really simple. I'm talking to the individual at this point, not the organization. I'm saying, go on your calendar and block off 15 minutes every day as a recurring appointment, in the morning, in the evening, or sometime during the day.

[00:27:31] Make an appointment with yourself for 15 minutes and make it your AI learning block where you are going to dedicate that time every day consistently to learn something about ai. If you do that over and over and over and over again, then you will become very knowledgeable, and then you will become eventually an expert, and then eventually you'll become a world class expert actually, quite quickly, within a matter of a couple years of practicing a habit as simple as 15 minutes a day.
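For anyone curious about the arithmetic behind that habit, here is a quick, purely illustrative tally of how 15 minutes a day compounds into hours of practice.

```python
# 15 minutes a day, every day, adds up quickly.
minutes_per_day = 15
for years in (1, 2, 3):
    hours = minutes_per_day * 365 * years / 60
    print(f"{years} year(s): ~{hours:.0f} hours of deliberate AI practice")
# ~91 hours after one year, ~183 after two, ~274 after three.
```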

[00:27:57] So to me, that's what it's about, is it's, it's the [00:28:00] repetitions, it's the continuous focus on it, not just taking the course by itself. If you are determined not to learn anything about AI, you can go and complete the course and be on your way, and you'll still be effectively incompetent at using the AI tools.

[00:28:13] But if you go and tinker with them every day and couple that with some formal training, I think you're gonna do really well.

[00:28:18] Mallory: Mm-hmm. And I mean, something I've learned in my life, Amith, and I'm sure that you have too, and especially all of our association listeners, is education is one of the most beautiful gifts you can give yourself or give to others.

[00:28:30] So I think there's just a lot of power in doing that, especially in determining your future and your team's future.

[00:28:36] Amith: I've had a lot of people over time annoyed at me by making, like, employees at companies that I've been part of, that have said, hey, you have to go do this training or whatever. I've had plenty of people, you know, grumble about that, you know, ahead of time.

[00:28:47] I don't really recall, I might have selective memory here, but I'm pretty sure I, I have not ever heard a complaint after, right, someone's received the education. 'Cause if you do a decent job of picking something that's useful, people tend to find [00:29:00] value in that, even if it wasn't their choice to go through it.

[00:29:02] Mallory: Mm-hmm.

[00:29:03] Amith: There's still value accrued to them.

[00:29:05] Mallory: Yep. And that value just keeps compounding, right? Like, years after you take it. Totally. Moving to topic three for today. We're gonna talk about Anthropic's Economic Index. So Anthropic just released its third Economic Index report, analyzing millions of conversations across Claude.ai and API customers, mapping AI adoption across over a hundred and

[00:29:27] fifty countries and all 50 US states. They created what they call the Anthropic AI Usage Index, or AUI for short, which adjusts for population size to show who uses Claude more than expected. The findings reveal a pretty stark pattern: AI adoption splits largely along economic lines. For every 1% increase in GDP per capita, there's about a 0.7, or seven-tenths of a percent, increase in AI usage.

[00:29:56] Small, wealthy, tech-oriented nations like Singapore, [00:30:00] Israel, Australia, New Zealand, and South Korea lead adoption, while larger emerging economies like India, Indonesia, and Nigeria show far below average use. Within the US, the correlation is even stronger: a 1% higher per capita GDP means a 1.8% higher AI use.
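To illustrate what those elasticity figures imply, here is a small sketch assuming a simple constant-elasticity (log-log) relationship between GDP per capita and AI usage. Anthropic's exact regression specification may differ; the numbers below are for intuition only.

```python
# Constant-elasticity illustration: usage_ratio = gdp_ratio ** elasticity

def relative_usage(gdp_ratio: float, elasticity: float) -> float:
    """Expected AI-usage ratio when GDP per capita differs by gdp_ratio."""
    return gdp_ratio ** elasticity

# Across countries: roughly 0.7% more usage per 1% more GDP per capita.
print(relative_usage(1.01, 0.7))   # ~1.007
# Within US states: roughly 1.8% more usage per 1% more GDP per capita.
print(relative_usage(1.50, 1.8))   # a state ~50% richer -> roughly 2.1x the usage
```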

[00:30:18] This is exciting for everybody. Washington, DC leads with an Anthropic, uh, Usage Index of 3.82, followed by Utah, Amith, I know you're excited about that, at 3.78. You're probably single-handedly contributing to that when you're there. And then California and New York. The report reveals fascinating shifts over time as well.

[00:30:39] So software engineering remains the largest use case at about 40% of conversations, but knowledge-intensive fields have surged since December 2024. Educational tasks are up 40%, science is up 33%, while business and finance tasks declined proportionally. Higher-adoption countries use Claude for a wider variety of tasks, [00:31:00] including education, science, art, and administration,

[00:31:03] while lower-adoption countries focus more narrowly on coding and automation. Perhaps most striking, directive automation has jumped from 27% to 39% since December of 2024. For the first time, automation overall has overtaken augmentation, as people increasingly let Claude handle tasks autonomously rather than collaborating. The business data tells an even more dramatic story.

[00:31:29] API customers, mostly businesses paying per token, show 77% automation patterns versus a 50-50 split on Claude.ai. They're heavily concentrated in coding and administrative tasks, with 44% of API traffic mapping to computer or mathematical work. Businesses show a positive correlation between cost and usage, suggesting capabilities matter more than token expense.

[00:31:55] Anthropic frames this as a potential AI divergence, like [00:32:00] electrification or the combustion engine, where a general-purpose technology drives growth but widens inequality. The report warns policymakers about risks of digital and income divides from unequal AI diffusion. So Amith, what is most surprising here?

[00:32:16] Anything that you're like, yeah, that, duh. That's obvious.

[00:32:20] Amith: I think, uh, in Utah, I wonder if there's seasonality with respect to AI adoption because in the wintertime, you know, trying to use it to predict the best day to go skiing would be very high on my priority list. They're like,

[00:32:31] Mallory: every time there's a lot of snow in Utah, this one individual there,

[00:32:35] his AI usage spikes. It's crazy. Well,

[00:32:37] Amith: preceding that the AI usage would spike and then during the snowfall, the AI usage, you're right, would, would precipitously drop, right? Um, so I don't know who's doing AI while they're on the chairlift, but uh, in any event you can tell where my mind is heading, uh, at this time of the year.

[00:32:50] Um. In any event, um, I think that the divide you're describing is super predictable and a lot of it actually here doesn't have to do with money because AI is so inexpensive. [00:33:00] Uh, and that was quoted a couple times in what you said, but like, you know, it's not about the cost of tokens as much as it is about, uh, the adoption being correlated to, um, the wealth of the nation.

[00:33:10] And the other thing is, like, these smaller countries that have the ability to more quickly focus are able to drive change more rapidly. Um, it doesn't mean other people cannot do that, but education in general is how change is going to occur in any realm. It's not just technology, and what we're talking about here are people who are open-minded to using new technologies.

[00:33:31] There's this flywheel effect that the more open-minded you are about using new tech, the better at using new tech you are, which of course, in turn, makes you more open-minded to using the next technology. So people who've broadly adopted mobile or the internet, or computing in general, are obviously gonna be further along than people who have not yet adopted even basic digital technology.

[00:33:49] And so I think that is, um, kind of a self-reinforcing cycle. But I do think it's really critical that governments address this, that nonprofits address this, that it's thought [00:34:00] about very actively. I like the work that Anthropic has been doing here to try to raise awareness of this, uh, because it actually doesn't take

[00:34:07] massive amounts of resources, in terms of a shift in spending, to empower people anywhere in the country, anywhere in the world, to have access to basic AI tools. We're not talking about everyone having access to the very best AI, the most powerful frontier version of GPT-5 Pro in high mode or Claude 4.5 Sonnet, or whatever's the latest model as you're listening to this. It's about having good AI,

[00:34:31] which is maybe six to 12 months behind that, but very good and essentially very close to free. And so when you think about that, that's not the typical financial barrier to technology adoption. If you wanna adopt electrification, uh, in the developing world, you have a massive amount of infrastructure you have to put in place, which requires tons of capital, lots of time, lots of skilled labor.

[00:34:50] Here we have the distribution of the internet, which is generally available. Even in the poorest parts of the world, many people have mobile phones with, with the ability to connect to the internet. And we have the [00:35:00] ability to deploy this technology broadly and at fairly reasonable cost, but we have to go do it.

[00:35:05] That's the key. We have to go do this thing. And so that's gonna require a lot of people to agree that it's important and then to put some initiatives behind actually us working at it, right? It's, it's a labor thing more than it is about capital, in my opinion. So, uh, and that's just the very simplistic view of it, but the most important thing in my mind, just going back to our last conversation, is at the societal level, finding a way to bring everyone along.

[00:35:27] A company can do that all day long with their hundreds or thousands or tens of thousands of employees. And that's good. That's important for the company, that's important for those individuals' professional development. But for a nation and for the world, we have to bring everyone along. And so I think that's a lot of what this is talking about.

[00:35:43] Um, I think the gap here, just because that, that gap is similar to past curves doesn't mean that it's preordained, that it has to be so. We have the opportunity to influence that outcome if we're willing to put some focus there. I think this community of associations, and more broadly the not-for-profit market, [00:36:00] um, can in fact have an outsized role in that.

[00:36:03] Mallory: The index reports automation is outpacing augmentation for the first time. Amith, do you think that's more about users trusting AI more, or the models getting better? What do you think about that?

[00:36:16] Amith: I think it's probably both of those things. I mean, you have to have trust to hand over anything to anybody. So it's like, you know, Mallory, when you and I first started working together, um, I would say, hey, can you take care of this task for me?

[00:36:26] Or whatever. Mm-hmm. And you would go do it. And I'd say, okay, well I'm gonna have certain implied trust. 'cause we got to know each other during the interview process. I'm like, yeah, this is a, a good person, they're gonna take care of this thing. But a lot of people don't feel that way about working with an ai, right?

[00:36:40] Mm-hmm. So if you didn't know me, would you have asked me to do a whole bunch of things, as a human? But on the AI side of it, you know, how comfortable are you handing over, and with what degree of autonomy? But like with a person, you get to know them better and you get to go, you know, onto bigger and bigger projects. Like, you know, you've done most of the heavy lifting on the new edition of Ascend [00:37:00] and that's amazing.

[00:37:01] Congratulations on that. By the way, it's gonna hit the press here, hopefully in the next couple weeks. And if you're at digitalNow, you'll get a copy. Um, so that's gonna be pretty exciting. But my point is, is that, um, three years ago, four years ago, I don't remember exactly when it was that we first crossed paths professionally, but it's like, I don't think that it would've made sense for me to say, hey, take this whole book and go figure it out.

[00:37:20] For lots of reasons, right? I'm glad you

[00:37:22] Mallory: didn't at that time. It's

[00:37:23] Amith: capability. Yeah, it's knowledge, but it's also, as you get to know people better and you trust them more, you're like, oh, okay. There's a reinforcement loop happening, and I think people are experiencing that with ChatGPT and with Claude and with Gemini as well.

[00:37:34] They're getting more comfortable because they've had good results. If people weren't having good results, you know, do you think ChatGPT would have over a billion monthly active users at this point? Of course not. You know, it's got, it's, it takes a lot more than a novelty to do that, to have people come back.

[00:37:47] Uh, so I think that's part of it. The model capability is also important, but the thing I actually keep pointing people to is, think about the time horizon that an AI is able to work independently, right? So you can look at all these benchmarks like ARC-AGI-2 [00:38:00], and you can look at MMLU and all these other benchmarks that are out there of different capabilities of different models, and they're interesting.

[00:38:07] But they're not necessarily telling you what the outcome is, like what can it actually do for you? Whereas these metrics are looking at how long Claude Code can work on its own to produce a functional piece of software, which I think at the moment is something like seven to 12 hours, something like that.

[00:38:22] And GPT-5, I think, claims to be able to work for 48 hours on its own. Um, and, and the assumption, by the way, is that these things aren't just, like, looping, doing useless things, that they're actually doing valuable, productive work for that long. That tells you a lot, right? So an AI model two years ago most certainly would not have been able to work maybe even more than a couple minutes on its own.

[00:38:40] And most of that was because it was so slow, it was producing like three characters, you know, per second or something. So, um, that's a, a good thing to pay attention to, is how long can AI work independently. And so that's why you see automation outpacing augmentation. You know, people like to comfort themselves saying, you know, some people are still saying this.

[00:38:59] I actually [00:39:00] said this two years ago on stage a lot, that it wouldn't be AI replacing you in your job, but it would be someone who knows how to use AI well. I think that statement is now, two years on, like, expired. Um, because AI has been capable of doing the majority of tasks of many professions for some time, and now it will soon be able to do the majority of tasks in most professions.

[00:39:19] Doesn't mean the profession goes away, many of the tasks go away. Um, so why would you not automate if you could, if you trust the thing and if it's competent, do you want to do extra work? I think most people to that question would say no, they don't like doing extra work. So, um, I'm certainly seeing that with software development, you know, we're seeing more and more and more automation.

[00:39:40] Mallory: So Anthropic made this data publicly available on an interactive site, tracking AI usage by state and occupation. Do you think there's any, like, practical takeaways associations could get by looking at that map, especially if it's like a state association or something like that?

[00:39:56] Amith: I mean, if you have geographic concentrations or if you're an industry association in a [00:40:00] particular area like California, or a particular region of the country or region of the world, that's worth looking at.

[00:40:06] I think looking at it by occupation is super, super relevant to everyone. If you are a professional association and you work with a particular field, well then you should kind of get a good idea of what's going on there. That'll give you a much better insight than a member survey you did that had a hundred respondents or something.

[00:40:22] You're gonna get a much broader sample from actual usage data, and usage data is super important to understand because when you're talking about the context of, uh, people providing their own feedback through Q&A, which is the, the age-old survey, there, there is value in that. You can get some qualitative insights that way, but a lot of times people, um, unintentionally most of the time, sometimes intentionally, do not tell you the truth.

[00:40:46] For various different reasons, and we've, we've known this about surveys for a long, long time. People are either aspirational in their responses or guarded in their responses. If they feel it's a sensitive topic, they're less likely to be truthful. We've known this for [00:41:00] generations with surveys and responses, and we have to do, uh, corrections for that.

[00:41:03] There's all sorts of statistical methods to try to address those, those issues. But ultimately, they're all basically good guesses. Whereas with empirical results from actual usage, um, you know exactly what people are doing. They're not fooling you by acting differently in front of Claude and in order to like skew the data or something.

[00:41:20] And, um, it's, it's just something that you can get constant, continuous, realtime insight and, and there's no inaccuracies because it's actually what people are doing now. The methods used to aggregate the data, of course could have some flaws, et cetera. But I think the folks at philanthropic are pretty good at stats.

[00:41:33] So I, I'd trust that their report is, is pretty usable.

[00:41:38] Mallory: That's bringing us to the end of today's Sidecar Sync conversation. I'm curious, out of all of our topics, hmm, really between the first two, so the hand or the AI education mandates, what do you think has the most sizzle?

[00:41:51] Amith: you know. Hopefully the one about training because it's the most actionable thing.

[00:41:56] So, you know, to get back on that soapbox for a second, it's like, it's the thing out of all this [00:42:00] stuff that we've talked about that you can go do something about today. So that would be the one that I hope got people's attention, if not the sizzle in kind of the, the fun way. Um, but the commentary I'd make is, if you're not interested in robotics or any form of physical AI, um, you should get interested in it, because all of us, you know, inhabit, uh, bodies that are made of atoms, not bits.

[00:42:20] As much as we spend our time doing digital things, uh, we're still a biological species that exists in, you know, the physical world and interacts with the physical world. So it's something you should pay attention to, even if you haven't thought of it; it's something super relevant to you, and you don't need to be a sci-fi junkie or whatever.

[00:42:36] I'm actually not one of those, but you just have to have a little bit of curiosity. I think it's worth looking at because it will open up your mind to new possibilities. And the trend line across all, all these three different topics, they might seem very different, but they're, they're very related.

[00:42:50] Mallory: I have a feeling we're gonna be talking a lot more about robotics too on this podcast in the future.

[00:42:54] Everybody thank you for tuning in and we will see you all next week. [00:43:00]

[00:43:00] Amith: Thanks for tuning into the Sidecar Sync Podcast. If you want to dive deeper into anything mentioned in this episode, please check out the links in our show notes. And if you're looking for more in depth AI education for you, your entire team, or your members, head to sidecar.ai.

Post by Mallory Mejias
October 16, 2025
Mallory Mejias is passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.