
Summary:

In this special interview edition of Sidecar Sync, hosts Mallory and Amith sit down with AI consultant John Huseman to explore how associations can effectively adopt generative AI. John shares insights from his extensive experience advising Fortune 50 companies on digital transformation and AI strategy. The conversation covers how associations can shift from adopting AI for its own sake to strategically leveraging it for real business outcomes. They also discuss employee productivity, AI training, and practical applications like knowledge management and customer service automation. If you're wondering how to make AI work for your organization, this episode is a must-listen!

Timestamps:

00:00 - Introduction to John Huseman
04:49 - Early AI Adoption and Technology Shifts
08:18 - Three Key AI Use Cases for Businesses
13:00 - Where AI Implementation Yields the Quickest Wins
17:58 - Addressing Employee Fears About AI Adoption
22:31 - The Importance of AI Training for Associations
31:39 - Customer Service Automation with Generative AI
42:10 - Knowledge Management and AI for Associations
51:40 - The Future of AI Agents and Business Processes

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai/

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/ 

🛠 AI Tools and Resources Mentioned in This Episode:
Whisper ➡ https://openai.com/research/whisper
ChatGPT ➡ https://openai.com/chatgpt
Claude ➡ https://claude.ai

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

Follow Sidecar on LinkedIn

⚙️ Other Resources from Sidecar: 

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Read the Transcript

John: 0:00
Those who put their head in the sand are the most likely to be disrupted. I think there is going to be plenty of opportunity for those who understand generative AI, understand how to use it as a tool to augment their own ability, are going to be well-positioned to continue to succeed.

Amith: 0:17
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host.

Mallory: 0:46
Hello everyone and welcome back to the Sidecar Sync podcast. My name is Mallory Mejias and I'm one of your hosts, along with Amith Nagarajan, and today we're excited to be bringing you a special interview edition of the Sidecar Sync. We're talking to John Huseman, who's an associate partner at a leading global management consulting firm, where he works with some of the largest organizations in the world. And you might be thinking, what do associations have in common with some of the biggest companies in the world? It might be more than you think. I'm going to share a little bit more with you about John's background after a word from our sponsor.

Mallory: 1:19
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future by heading to learn.sidecar.ai.

Mallory: 2:08
As I mentioned, John Huseman is an associate partner at a leading global management consulting firm, where he has spent the last eight years advising clients across industries. He specializes in digital transformation, technology strategy and operating model design, with a focus on generative AI. Before consulting, John co-founded Rodolo, a software consultancy that built custom business applications, emphasizing user experience for underserved business units in larger enterprises. He holds an MBA from the Fuqua School of Business and a BS in business administration from UNC Chapel Hill. If you feel lost in your generative AI journey as an association, I think this will be a great listen for you. We cover a lot of territory in the interview, but to me, one point stands out: we must shift our mindset from "we need to implement generative AI because everyone else is doing it" to "we have specific business problems that generative AI could help solve." Enjoy this interview. John, thank you so much for joining us today on the Sidecar Sync podcast. We are so happy to have you. I'm hoping, first off, you can share a little bit about yourself and your background with our listeners.

John: 3:18
Yeah, absolutely. Amith and Mallory, thank you for having me. Yes, my name is John Huseman. I am a management consultant, and so a lot of my time for the last, call it, seven and a half, eight years or so has focused on working with large companies, think Fortune 50, Fortune 100, or portfolio companies of large private equity firms, solving whatever their most pressing challenges are. This has let me see a variety of different industries and capabilities. However, the last several years, most of my time has been in the consumer goods space and, likely most pertinent to your audience, on technology. And so this is everything from technology strategy or digital strategy and operating model, to what I think is probably more exciting to talk about: AI, and specifically generative AI, over the last few years. So that's where I spend most of my time today. Prior to that, in a previous life, I helped start up a software company that built custom business applications. So, while I'm not a native engineer, I know a decent amount about how products are created and how software is developed.

Mallory: 4:23
Awesome, awesome, and you were just sharing with me before we started recording how you and Amit know each other.

John: 4:28
If you want to talk a little bit about that, yeah, so that aforementioned startup was with Amit and I working together. So that was back in New Orleans, gosh, probably about a decade or so ago now, which makes me feel old, but yeah, we got to work together for several years, which was a great experience.

Mallory: 4:45
And were you all talking about artificial intelligence 10 years ago or not quite?

John: 4:49
I think Amit was questioning my actual intelligence more a decade ago.

John: 4:54
No, but actually it's funny, because a lot of what we were doing then was in the early days of cloud hosting, and so I remember, when we talk about new technologies and adoption, a lot of what we had to talk about with our clients then was, hey, are you comfortable hosting on this thing called AWS, or the cloud, and what is that? I'm much more comfortable on-prem. And so it's a different technology, but a similar paradigm: hey, this is a new technology that is disrupting how things have been done in the past, and how do you help people get comfortable with it and learn how they can use it in whatever else they're doing to make their lives better and more productive, and hopefully help the bottom line as well?

Mallory: 5:34
Do you feel like your perspective on technology implementation has changed from your startup days to now, working with large organizations?

John: 5:47
Yeah, well, it's far easier with startups in many ways, right? You just have an idea and you go do it. A lot of my time today is spent less on the answer, the technology itself, and more on the cultural and political issues that exist within large companies. Whenever there's something that's disruptive, it scares people, and so I'm helping them get comfortable with how this can empower them, how this can enable them to do things they haven't done before. And I spend a lot more time with legal departments than I used to, who are very worried about how this technology will work. I think that's a burden that large companies deal with that smaller companies don't have to. While there might be more resources at a large company, those constraints can actually slow them down, and that allows small organizations to move a lot faster, experiment and trial things, and see what works and what doesn't, which can often be a pretty big advantage.

Mallory: 6:38
Yep. Amith, I would say you agree with that, right?

Amith: 6:41
Yeah, I agree with all of that. I have fond memories and recollections of what John's describing with the startup here in New Orleans that we were involved in together. Ultimately that company had a nice ending, it was sold, and it was a great experience along the way, and it actually was an incubator that spun off a whole bunch of other software companies, which was really, really cool as well. And one observation I'd share with our audience, just to kick things off from my viewpoint: I've known John for a long time, as you just learned, and John and I recently reconnected. We hadn't chatted in a few years until probably three months ago or something like that, and we had this great conversation all about AI adoption and John's practice experience.

Amith: 7:24
He's quite modest in this interview. He's doing some very high-level work for some of the biggest companies in the world, working within one of the leading global management consultancies, and has done some very impressive things, and I thought to myself it'd be really great to get John to share some of his experiences, both the exciting parts and also some of the challenges. What he opened with there, in terms of the difference between his current life and what he did in startup land, is actually kind of similar to what associations deal with, even though they're much smaller than the kinds of organizations, John, you're dealing with now. They deal with a lot of red tape, a lot of bureaucracy, a lot of layers of volunteer leadership, governance boards, bylaws sometimes. So we'll get into that, I'm sure, in this pod. But working through the human factors and the change management, I think, always has been the biggest challenge and opportunity with technology disruption, and it still remains the case, in my opinion.

Mallory: 8:16
Well, I want to jump right into the juicy stuff. So obviously we're here to talk about AI and the clients that you've worked with. When do you feel like you started seeing AI make a real impact in companies that you're working with?

John: 8:28
Yeah, so I think it's important to separate AI and generative AI. AI has been a big part of clients for decades, certainly as long as I've been a consultant, and you see it a lot in terms of forecasting algorithms, pattern recognition or optical recognition. That's been around for a while, sort of the AI and machine learning realm. Generative AI is the new sexy stuff, really of the last few years. I think it was probably November of '22, something like that, when OpenAI first launched ChatGPT, which is where I think most people started becoming aware of this. And obviously it's funny, we sort of complain about the limitations, but it's kind of crazy this came from nothing to where it is in only a matter of a few years. It's been pretty rapid development. But where I'm seeing companies use it is really trying to use it everywhere they can. For my clients, there are really three primary ways they bucket the use cases or applications. And again, this is for the consumer product space, but I think it's probably relevant to most industries. First, how do I better engage with my end consumer and have a more personalized relationship with them, in a way that the consumer finds valuable? I am not just forcing it on them; they actually have to find value from this. Second, how do I work with my distributors or retailers, this middle layer that connects my products to the end consumer, to make them more effective and increase their satisfaction? And then, third, how do I just help my own employees be more productive, be happier, improve retention, things that are really relevant to any organization. Those are really the three main channels that we look at.

John: 10:35
There can be a trap when a new technology comes, especially one like generative AI, of, well, let's just start building stuff and see what happens, which is kind of cool in a demo, but you can't really see it on the P&L, you can't really see, did it actually drive a meaningful impact? Better to think in terms of, okay, how do we use this tech to actually change how we do the business and change how we run things? One way I help my clients think through this is not just focusing on what's today, but having these big, bold, ambitious bets that might be four or five years down the road: where are you trying to go with this? In the consumer space, that might mean, hey, rather than having one or two or three campaigns going at any given time, what if we have 8 billion, and we have a personalized, completely unique campaign that's bespoke for every individual? That's not something we can easily do today, largely just because of data issues, but that could be where we want to go. And if you have that sort of ambition, then you can think through, what are the stepping stones over the next several years to help get us there?

John: 11:31
What capabilities do we need? What talent do we need? And so that is an area where, as many companies are starting to think, hey, we're spending a lot of money on this, what do I have to show for it, they are starting to be a little bit more structured and thoughtful about where they are trying to go to actually see those results, whether financial or however else you're measuring, rather than where it has been up to this point, at least in my experience: let's build something and figure out the business case later.

Amith: 11:58
Yeah, it's really interesting because I think those three categories are directly relevant to associations and the nonprofit sector.

Amith: 12:04
I mean, the one that you described, obviously the end customer, the end consumer, it's the same thing: it's the member, it's the constituent. And in the other two, obviously they have staff, they have employees, in which I would also include kind of their close-in volunteer leadership that contributes a lot of energy. The middle layer is interesting, because in the context of CPG, you might be dealing with distributors and wholesalers, retailers, all that, the full supply chain to get to the end consumer. In the case of associations, you typically aren't dealing with that kind of a scenario, but I still think there are lessons we can learn from some of your experiences there, because associations do distribute their value through partners in a lot of cases, sometimes through partnerships and affiliates, sometimes through structures like chapters. So ultimately, I think there are some really good parallels. I just want to quickly draw those for our listeners so they understand how your world might relate to the way that they're thinking about things.

Mallory: 13:01
Do you feel like any of those buckets have easier wins than others, in terms of the engagement piece, the middle layer that you mentioned, and then employee productivity? Is there one that you always like to start with?

John: 13:14
I think the employee productivity, or anything internal. You just have more control over it, right, and legal departments stress less about it. Whenever you're actually directly engaging someone outside your company, you have to be very careful. There are all sorts of ethical issues. You need to make sure it's not saying something you wouldn't want to say, protecting the brand, and so that just gets more challenging, in that you have to put a lot of thought into how you're going to mitigate any potential bad actors or bad outcomes. Whereas, while you obviously don't want to give bad information to your employees,

John: 13:45
it's more of a contained risk than it is when it goes external. And so oftentimes we encourage my clients to start there, both because it is a little bit easier and it feels a little bit less dangerous.

John: 13:59
It also allows them to help build excitement and engagement throughout the organization. It's one thing to hear, hey, we're doing this cool thing that helps consumers, and obviously I want the consumer stuff to be great, but I'm in finance, I don't really ever see them. Versus, oh, this is something I can actually see and touch, and it helps me make my job easier or do something I don't want to do. That's pretty exciting. And to Amith's point earlier on the change management aspect here, a big part of the battle is just getting buy-in: this is something that is useful to me, and it's worth me putting up with a little pain of trying it out and recognizing that new tools have bugs and they don't always work as we thought. Getting those wins starts building that advocacy and building those champions, and getting people just able to use it and see how it can help them is often a big part of the initial battle.

Amith: 14:51
John, how frequently do you find in these larger organizations there's resistance from employees due to fear of job loss?

John: 15:44
Quite a bit. It also depends on what the use case is. I think it's less so on the specific use cases we're discussing, because often we try and develop those with the teams, and more what they're reading in the newspapers. You see all these headlines of, like, 70% of white-collar jobs disappearing, and people worry about that. And this is not new. Creative destruction has been a big part of our economy for the last century, if not more. So I think there is recognition that there will be disruption here, and that worries them. Where I tend to land, and this is not a unique view to me, is that those who put their head in the sand are the most likely to be disrupted. I think there is going to be plenty of opportunity, at least until the robots completely take over, for those who understand generative AI and understand how to use it as a tool to augment their own ability. They are going to be well-positioned to continue to succeed wherever they go.

John: 16:47
This is just, I think, a little bit like 25, 30 years ago, people who refused to use the internet. They said this is a fad, or, I don't use email. It does replace part of your job, right? You'd spend less time on research, you'd spend less time on maybe writing letters, whatever the case is. But it ultimately allows you to do a lot more things, and it augments your ability to be far more productive. And so that's the message we try and get across: how are you going to use this technology to upskill yourself and allow you to be actually a much more productive and attractive employee wherever you go? And recognizing that some things will go away. We don't have telephone operators anymore, and that was a huge employer, you know, 50, 100 years ago. Now we have developers, and we have people who create generative AI. There is always sort of an upskilling and increasing of what people are working on. But it is important to help contextualize that and frame that and have people understand where they fit in.

Amith: 17:41
One of the things we think associations have a really critical role to play is in the area of educating their professions on AI, so certainly educating their teams on how to use AI.

Amith: 17:54
And you know it goes back to I throw this out all the time on this pod that I think there's gonna be two types of people in the near future the people that are natively knowledgeable, trained up to speed on generative AI or AI in general, and there's going to be people who are unemployed, and so that's pretty much it.

Amith: 18:10
And the reason I say it that directly is we're trying to get people to learn this stuff, right? First of all, get awareness of what it can do, and then learn a little bit of it. But I think a little bit about the context of uptake, of how people do this within the industries you work in. Every industry has associations, sometimes multiple associations. Sometimes there are associations for little slices of the industry, or different sides of a supply chain, for example. But I believe that they all have an opportunity, and a responsibility in a way, to train their professions on AI, to better disseminate this information. What I'm curious about is, within your practice and the clients you work with, what's the uptake in terms of learning? If you take a broader population of employees, how many of them have actually been offered AI training? How many of them have taken that up? I don't know if you have rough ideas of any quantitative metrics on that.

John: 19:04
Yeah, I guess I would share directional metrics on this. For a lot of my clients, that has been a big push over the last year: how do we upskill and train our people on these tools? One, to use the tools we're giving them, but, as importantly, for them to come up with their own ideas and their own use cases with the raw tool itself, whether it's ChatGPT or Claude or any of these tools, in addition to anything we're creating for them. And so helping to figure out how to design those training courses is important. Ideally, involving generative AI in those training courses, to sort of act as a teacher, can kill two birds with one stone, and so that's something we encourage our clients to do: think through how you can use this as part of the training, not just have it be a PowerPoint slide. In terms of uptake, it varies. At least at my current client, it was offered to everyone, so I think tens of thousands of employees across the world, and I think almost everyone has taken the training. That was partly because it was a mandatory training.

John: 20:10
I still think time will tell in terms of use. There are still a lot of people who, I think, understand it but don't think it's for them. They're like, well, I'm an accountant, I don't need this tool, or, that's for the marketing people to come up with cool slogans and pictures, or, I'm in ops, that's not for me. One area that we are still trying to crack is helping people understand that really, no matter what your job is, this is a useful tool. One of the analogies I use is: think what you would do if the cost of labor approached zero. What would you do differently if you could have a reasonably bright, or, as some of the researchers have shown, maybe even PhD-bright, analyst there who could help you develop your answer for free within a few minutes? How do you do things differently? Thinking about that is challenging for a lot of people, because they were trained on Google, right? If I have a question, I write in five or six keywords and I see what I get. If it's not on the first page, I try five or six different keywords. If it's still not there, I might give up and say it's not there. This is a completely different interaction model.

John: 21:24
I'm actually going to give you quite a bit of detail, like I would a person I'm assigning this to: hey, here's what we're trying to do, here's what we're trying to achieve, here's an example of how I've done it in the past, whatever that is. And if you get something back that maybe you don't think is exactly right, telling it why: hey, this isn't it, here's why it should be different, or, I want it to look more like this, or, stop that, whatever the case is. So treating it as a person rather than a machine, and giving it feedback the way you'd give feedback to a person rather than a machine, is a huge unlock. That is just a different approach than most people will take, and so that takes time.
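The interaction model John describes, briefing the AI like a colleague rather than querying it like a search engine, can be sketched as a small prompt-building helper. This is an illustrative sketch in Python; the function name, fields, and example text are our own inventions, not from the episode or any particular library:

```python
def build_delegation_prompt(goal, context, example=None, feedback=None):
    """Compose a detailed, person-style briefing for an AI assistant,
    instead of a handful of search keywords."""
    parts = [
        f"Here's what we're trying to do: {goal}",
        f"Here's the relevant background: {context}",
    ]
    if example:
        parts.append(f"Here's an example of how I've done this in the past: {example}")
    if feedback:
        # Second pass: explain *why* the last answer missed, the way you
        # would with a colleague, rather than retrying new keywords.
        parts.append(f"Your last draft wasn't quite it. Here's why: {feedback}")
    return "\n\n".join(parts)

# Hypothetical usage; the goal/context text is invented for illustration.
prompt = build_delegation_prompt(
    goal="Draft a one-page member-renewal email",
    context="Audience is association executives; tone should be warm, not salesy",
    example="Last year's renewal email opened with a member success story",
)
```

The resulting string would be sent as an ordinary message to whichever chat model you use; on a second pass you would call the helper again with a `feedback` argument instead of starting a fresh keyword search.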

John: 22:01
It's one thing to see it in a training module. It's one thing even to see a demo. It's another thing to actually do it. And I think part of the struggle often is, now that we are more hybrid, it's harder to see the person next to you do it, right? That assimilation is just slower than it was when there was more co-location. We're getting there, but especially for the larger organizations, who again are busy and have day jobs, it is a slower uptake than maybe we would like.

Amith: 22:31
You know, John, I've got two quick things coming out of that, and then I know Mallory has a list of things she's excited to ask you, so I'll pass the baton back to her. The first is the mandatory versus voluntary question on the training piece. We talk about that from time to time here. Associations tend to be very consensus-driven, with an opt-in kind of mindset, as opposed to a thou-shalt-do-this kind of mindset. In the organizations you've been involved with, for the ones that have had mandatory training versus the ones that may not have chosen to do that, what's the difference in uptake of AI training amongst their teams?

John: 23:12
They take it versus they don't, right? For most of my clients, they're busy, and so if you give them an optional training, the people who are most likely to take it are people who are interested in the topic and have probably already learned about it ahead of the training. The people you need to get to are those who don't think it's relevant to them or find it somewhat intimidating. Those are the people you need to get to. The people who are willing to take it are actually probably already fine.

Amith: 23:35
Yeah, and I wanted to have you say all that because our listeners need to understand that the same thing applies in corporations, and this is true in government as well: there are times when you have to mandate things, and this is one of those times. It is actually the responsibility of the leader to ensure that their people have a path to growth, and absent training in AI, it's obviously our opinion here that you don't have a very rosy future. So I think that's kind of the moral imperative of the leader to drive that. Of course, it's important for the business as well. The second thing I wanted to ask you before I hand it back to Mallory is this. You said something that I think is actually quite profound: that people have to rethink what the possibilities are.

Amith: 24:16
In a way, this isn't exactly what you said, but it's about asking a better question, or maybe asking many more questions. We've been trained to essentially formulate a hypothesis and then try to test it. That might be true in drug discovery, it might be true in a business paradigm, it might be true in testing a marketing campaign, but we generally are doing kind of a serial process of coming up with a hypothesis and testing it. Generally speaking, we fall in love with it while we're testing it, so we bias ourselves into saying, yes, it's good, because we don't want to be a failure. That, of course, is endemic to all of us. And then we move forward, or we don't. Now I think we have the opportunity, potentially, to ask 10, 50, 100, 1,000 questions in parallel. What are your thoughts on that?

John: 24:56
Totally agree, and I think you can still have a hypothesis; you can just validate it much faster. It's even thinking through how you frame the question: I want you to be a really staunch critic of all new ideas, and I'm going to propose an idea to you, and I want to hear your feedback. That can help me prepare for the eventual feedback I'll get, and I can iterate on that. Or even: I had this idea and I don't even know how to think about it. Can you help me provide structure to it? Or can you help me write a better prompt to get you to give me more structure to this? Like, you can get sort of meta with it. Or even an example that I've seen, which is somewhat tangential.

John: 25:35
To what you said, especially now that it has voice mode and other things like that: I have colleagues who, just on their drive home, brain-dump and just talk to it, and there's not a lot of structure to it. Oh, I had this meeting and this is what happened. Oh, and I also had this idea. It's completely stream of consciousness. But then they have this 20-minute dump of information at the end of every day, and then they say, hey, organize that thinking and provide structure to it, and that way I can search it and I can come back to it. It's a way for them to use the tool to help them digest information and keep themselves smarter. That is another way. It's not exactly what you're getting at, but the idea of using the tool to ask questions, and to ask it to do things that aren't just a better version of today but a completely new thing you don't do today, is important. And so part of the guidance I give my teams, and especially those who work for me, is: every time I ask you to do something, or anyone asks you to do something, your first 30 seconds should be, okay, how can I get gen AI to do this for me, or at least give me a starting point?

John: 26:38
The value of going from a blank page to a 60% answer is enormous. This isn't to say you should just send whatever comes out directly without looking at it, but it is the way you get the 50, 60, 70 percent answer, and then you either iterate with the tool or just take the pen and go from there. So, one, start with that. And two, if you don't know how to use the tool, ask it.

John: 28:01
This, I think, can be an unlock, especially for new people: hey, I don't know how to write a prompt. I'm trying to do this thing. Can you help me write a prompt to help you understand what I want? It's kind of meta, it feels a little bit like three-dimensional chess, but it can actually be a really useful tool that even I use sometimes. There have been times where I'm just not getting what I want, and I'll respond, hey, look, you're not giving me what I want. Can you help me phrase this so you better understand what I'm trying to get at? I'm trying to accomplish these things. And it gives me a prompt that I can feed back to it. It sort of feels like a cheat code, but it can be really effective. And that trial and error, and rethinking how to use this tool, is, you know, more than half the battle. Just start using it and testing and experimenting, and you'll find where you're comfortable and where you have more versus less success along the way.
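John's trick of asking the model to write the prompt for you can be sketched as a tiny helper that wraps a rough goal into a meta-prompt. Again an illustrative sketch in Python; the function name and the exact wording are our own assumptions, not a quote from the episode:

```python
def build_meta_prompt(goal, attempts_so_far=None):
    """Ask the model to write the prompt for you: describe what you're
    trying to accomplish and request a better-phrased prompt back."""
    msg = (
        "I'm not getting what I want from you. "
        f"Here's what I'm trying to accomplish: {goal}. "
        "Can you help me phrase a prompt so you better understand "
        "what I'm trying to get at? Reply with the improved prompt only."
    )
    if attempts_so_far:
        # Showing failed attempts gives the model something concrete to fix.
        msg += "\n\nPrompts I've already tried:\n" + "\n".join(
            f"- {p}" for p in attempts_so_far
        )
    return msg

# Hypothetical usage; the goal and attempts are invented for illustration.
meta = build_meta_prompt(
    "Summarize our member survey results for a board slide",
    attempts_so_far=["summarize survey", "survey results bullet points"],
)
```

The returned string would be sent as an ordinary message to whatever chat model you use, and the model's reply then becomes your next prompt, the feedback loop John describes.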

Amith: 28:48
Building on that for just a second, one of the cheat codes related to that meta-prompting approach that I've used quite effectively is I go to something like ChatGPT or Claude and I say, hey, listen, I need to go work with another AI that's not quite as sophisticated as you are, so I'd really love your help in creating a prompt that a lesser AI would be able to understand and complete the following task. And it's kind of like all of us: oh yeah, I'm a superior AI. The AI responds really well to flattery, I just have to say that.
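The meta-prompting trick Amith and John describe can be sketched as a small prompt builder. Everything below, the function name, the wording, the structure, is an illustrative assumption, not something quoted from the episode:

```python
# Toy sketch of "meta-prompting": ask a stronger model to write a prompt
# that a simpler model (or a future session) can follow verbatim.

def build_meta_prompt(task: str, constraints: list[str]) -> str:
    """Assemble a request asking the model to write a prompt for the task."""
    lines = [
        "I need help writing a prompt for another AI assistant.",
        f"The task I want that assistant to perform: {task}",
        "Please write a clear, self-contained prompt that:",
    ]
    lines += [f"- {c}" for c in constraints]  # one bullet per constraint
    lines.append("Return only the prompt text, ready to paste.")
    return "\n".join(lines)

meta = build_meta_prompt(
    task="summarize a 20-minute voice-note brain dump into topics and action items",
    constraints=[
        "specifies the desired output structure",
        "asks clarifying questions if the goal is ambiguous",
    ],
)
print(meta)
```

In practice you would paste the returned text into ChatGPT or Claude and iterate, which is exactly the feedback loop John describes.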

John: 29:18
It is funny how it does that, that even things like, hey, this is important to me in my career, please take your time, take a breath and think through it before you go, feel weird to write to what is essentially a probabilistic engine, but can actually be quite effective. So yeah, I still say please and thank you when I write prompts. It just feels natural. It also helps me get in the mindset of asking this machine the same way I'd ask a person, because that is a closer paradigm than maybe we've had historically.

Amith: 29:53
Well, and you and I are both also hedging for the likely future we're heading into, where we're not in charge of this planet.

John: 29:59
Exactly. We'll see what happens. Nice.

Mallory: 30:04
Hey now, we're not alarmist on the Sidecar Sync. Well, I think we're all in agreement here that training is one of those essential first steps. Education, that's really the whole reason why Sidecar exists. We do have a lot of association leaders I've had conversations with who think, okay, we're rolling out this AI training, but we're struggling with what we do next. We've made this investment. How do we justify this investment to our board? But also, when talking about employee productivity, how do we measure everything that we've trained on? So, when you're working with clients who particularly want to improve employee productivity, what do you see as that next phase after education or training?

John: 30:45
It's a problem a lot of companies are dealing with right now: this is cool, but now what? Show me why this makes sense, other than cool chatbots. There are two ways I think about it. One, this gets back to the point I made toward the beginning: it's important to have that bold ambition you're headed towards, because ultimately it's, I'm improving employee productivity to accomplish X. Being focused on the ends rather than the means is important, because then you can not only show financial outcomes, but also show how you're progressing towards whatever the bigger, bolder thing you're trying to accomplish is. So that's one thing. Second is being really clear about what you're trying to measure and why.

John: 31:28
A lot of times they say we're going to improve employee productivity, and the implicit assumption is if we do that, we'll save money. It's sort of like, not really. Unless you're removing headcount, there's really no saving there. Employees are slightly more productive, but there's probably not a clear financial savings. There might be an indirect financial savings: they're generating better answers, our customers are happier, so we have higher retention. But that's a harder connection to tie. So I think it's important to understand that if our goal is to achieve savings, that must mean we are going to reduce headcount, reduce tech spend, reduce something, and Gen AI will help us get there. That's key. It's also, I think, perfectly reasonable to have more qualitative measures: we're going to have higher employee retention, higher NPS, more visits, more traffic, whatever it is. Just be clear about that.

John: 32:28
I also find, at least with the clients I work with, that savings comes more from necessity than planning. What I mean by that is, if you want to save 20%, for example, it's a lot harder to say, okay, go find me 20% using Gen AI, than it is to say, I already took the 20% away, use Gen AI to make it work, right? And so there's that necessity of, oh, actually we don't have these people anymore, so we have to find a way to use it, and here's a way we can save everyone two hours a day and that will fill the gap.

John: 33:00
That I see work quite well, even though it can be somewhat stressful. It's just much harder, just in human nature, to be like, yeah, we could do this and it could save us time, but I don't know, I'm not sure. There needs to be some forcing function there, I think, to really see at least cost savings. Again, this is separate from growth generation or other sorts of metrics, but on the cost side, that's what I see with my clients.

Amith: 33:26
You know, building on Mallory's question of where to get started, a lot of people I talk to say, hey, we really want to find a way to take some of the load off of our customer service or member service team, which tends to be overwhelmed, and also improve the quality of customer service at the same time. And there's this concern that comes up: hey, I've got X number of people that answer emails and phone calls all day, and what happens to them if we automate it? I do think that, absent the desire and reason to cut headcount, you're not going to have a financial savings from it, but you can potentially create far more value. Part of it is that the people who do that work are oftentimes far underutilized, not necessarily in terms of their time, but in terms of their knowledge. They're answering the same question over and over and over, the same case over and over and over. They are probably trained on lots of other things that are perhaps more interesting to them intellectually, but also potentially higher value. So by using AI to automate some of the lower-level tasks in, for example, the customer service workflow, you do some interesting things there in terms of employee morale, retention, and lower training cycles. The other thing, too, is I think that particular use case might be worth digging into a little bit with you, because I look at it as a dual-benefit type of scenario: it can make the internal ops more efficient, but if you automate customer service really well, the way we're doing it now, it actually can increase the quality of customer service. Which is pretty remarkable, because if you think about the entire history of customer service tech up until recently, it's only been about the one side: cost savings.

Amith: 34:55
Nobody thinks phone trees and the kind of bots that were on screens before were enjoyable, right? When you get on the phone with your airline, you go "agent, agent, agent" immediately, right, because you want to get past the technology. But that's not the case anymore. One of the case studies we talk about a lot on this pod, it's a little bit old now, about 12 months old, which in AI timelines is ancient, is the Klarna case study, which you've probably heard of, John, where they put an AI agent in their workflow, and the most notable outcome was that they went from an 11-minute resolution time to a two-minute resolution time.

Amith: 35:27
I don't know too many people who'd prefer to be on a customer service request for 11 minutes instead of two, right? And when something becomes better, cheaper and more available, generally people consume more of it. So perhaps this could actually lead to an increase in customer service requests, which most organizations would view as a net negative, right? The metrics typically are not positive: you're not rated well if you have more customer service inquiries. People think your product sucks or there's some problem with it. But what if we flipped that script and said, hey, our customer service is so great, and, oh, by the way, we can scale it to infinity, that people want to talk to us all the time? Wouldn't that be awesome?

John: 36:04
Yeah, and educate them on how they can use a tool differently, or upsell opportunities. I totally agree with you. Customer service, I think, is not a pleasant experience for either party right now. It's no fun to be on hold for 10 minutes, and no fun to be yelled at by every customer knowing they've been on hold for 10 minutes. And one thing I saw with one of my clients, which I thought was interesting: they started using AI for customer service and contact centers, first just to inform the agent, giving them a knowledge base to search and help answer questions, and in some cases starting to answer some of the calls. But in their pilot, what they found was that the average call time, to your point, went down, but the call time with the human operators actually went up. And it's like, what's going on here? This is a disaster, our people are somehow getting worse.

John: 36:55
But when they actually looked in and started listening to the calls, the operators were actually having much richer, deeper connections with the people who called in.

John: 37:01
Because the easy stuff was going to the bots, right? The "change my address," "my order's lost," whatever it is. Instead, it was, hey, I'm having this problem, can you help me figure it out? And it was much more of a, yes, let me figure it out, let's work together, let's brainstorm. You build a much deeper connection there, to your point, which is great for the customer, who had a good experience: hey, someone spent 20 minutes helping me solve a problem. It's also a much better experience for the operator who's talking to them, because they feel a lot more purpose. They're helping someone. They have a much more enjoyable experience than, let me go track your order, or whatever the case may be. So I think you're right: you may need fewer people, depending on the proportion of calls that are easy versus more challenging, but for the people who are staying, I think it's a much richer and better experience for both sides of the conversation.

Amith: 37:48
So if I were an association CEO, based on that, what I would consider doing is putting in an AI agent to handle a lot of the routine customer service and member service inquiries. But then I'd have my team, whether it's five people or 50 people that do member service, actually proactively reach out to members and try to offer value, right, and try to find ways to serve proactively. And I think there are lots of creative ways people could do that. Just have conversations, just say, hey, I'm checking in, I wanted to talk to you about your experience with this, and provide value. Not just a survey (people don't like those phone calls) but something that's truly value additive.

Amith: 38:23
And, of course, if you use some AI to drive personalization and have a better idea of what John or Mallory or Amith might be interested in, that can be helpful too. But those are things you can't even begin to think of right now, because we're so overwhelmed by the workload we have. And this is the kind of thing I think you'd refer to as creative destruction: we destroy the traditional workflow, reduce it purely to automation, but leave room for this incredibly creative outlet.

John: 38:47
Yeah, and there's a middle ground also. What they were trying to solve is: the person calling would describe their problem, and it wasn't, pick one of seven options and I'm going to ask you to repeat yourself. Use whatever words you want, take as much time as you want, and the AI would try to synthesize that. It could then use logic of, this one's easy, this one's trickier, and the trickier ones are the ones they bring the people in for. And even when they brought the people in, the people were not there on their own.

John: 39:14
They also had the AI, using Whisper and other tools, listening in, and so on the operator's screen it could track the call. It could say, here's an answer to their question, or, depending on how commercial you want to get, here's an opportunity to upsell or encourage a new event or product that maybe they weren't considering earlier. It could also give coaching: they seem frustrated, here's something to say to help calm them down. So you can think through ways of arming your people with better tools to have better conversations, in addition to automating a great degree of it.

Amith: 39:48
I want to unpack that for just a minute, because not all of our listeners are familiar with Whisper. Whisper is actually an open source model from OpenAI, one of the few open source things they've published. It's inferenced on a whole variety of clouds, including Groq and AWS and Azure, and it provides essentially real-time audio-to-text transcription, which can then be fed into any other kind of model. So what John's describing is this workflow where, in real time, the AI is not only listening and transcribing what the speaker on the other end is saying, but potentially suggesting answers and ways to improve the call. That's, I think, a tremendous use case for the technology, and a great example of a co-pilot scenario versus an autopilot scenario.
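As a rough sketch of the co-pilot workflow Amith describes, here is the shape of that loop in Python. The transcription and suggestion steps are stubbed with placeholder functions; in a real system the transcription stub would be replaced by a call to the open-source Whisper model (for example, `whisper.load_model("base").transcribe(...)`) and the suggestion stub by an LLM call. All function names and data here are illustrative assumptions:

```python
from collections import deque

def transcribe_chunk(chunk: bytes) -> str:
    # Stub standing in for a real Whisper call on an audio chunk.
    return chunk.decode("utf-8", errors="ignore")

def suggest_reply(transcript: str) -> str:
    # Stub standing in for an LLM call that proposes an answer or
    # a calming phrase to show on the human agent's screen.
    if "frustrated" in transcript.lower():
        return "Caller seems frustrated: acknowledge the wait, then offer help."
    return "No suggestion yet."

def copilot_loop(chunks: list[bytes], window: int = 3) -> list[str]:
    """Feed audio chunks through transcription and emit running suggestions."""
    recent = deque(maxlen=window)  # sliding window of recent utterances
    suggestions = []
    for chunk in chunks:
        recent.append(transcribe_chunk(chunk))
        suggestions.append(suggest_reply(" ".join(recent)))
    return suggestions

out = copilot_loop([b"hi, my order is lost", b"I'm really frustrated"])
print(out[-1])
```

The sliding window is one simple way to keep the suggestion step focused on the most recent part of the conversation rather than the whole call.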

John: 40:29
Yeah, and then use it on the back end to provide feedback to your team and help them get better: hey, here's a call that maybe didn't go so well, here are some things you can try in the future. So you have this continuous improvement element as well, giving you better visibility into who's doing well and why. Or maybe an agent, not an AI agent, a human agent, does something you wouldn't have expected that actually worked really well, and how do you then incorporate that into your operating procedure? So yeah, there are things you can do to avoid the call, things you can do to improve the call, and things you can do after the call, all using the technology to make it a better experience for all sides.

Mallory: 41:06
In that example, did the consumer think they were talking to a human, or were they aware, more or less, do you think, that they were talking to an agent?

John: 41:15
It's a good question, Mallory. This was maybe eight, nine months ago, so I suspect they probably knew they were talking to a bot at the time. The tech's gotten better, so you could probably hide it now. In this particular pilot, the company actually wanted to make sure it was clear, so they told you at the beginning that you were talking to a bot. They weren't trying to trick you, and they also made it clear when you were talking to a human. A lot of what they were trying to do was figure out how to triage these calls appropriately and send them down the right path. But you could go down a path where you are trying to make it hard to tell the difference. You have to make a call on how comfortable you are with that, and how comfortable your customers would be with that. But the tech's getting good enough now that it can be hard to distinguish.

Mallory: 42:05
Mm-hmm. I was thinking of my own personal examples, where I don't even do phone trees anymore. I just say "customer service rep" over and over and over again, smash zero, see what happens. Yeah, no, I'm not even playing that game anymore, so I'm going to have to reevaluate that when things change. But I think that's a really neat use case that you shared. I'm curious if you have any other Gen AI use cases that you are particularly proud of, or that you always go back to in terms of how successful they were.

John: 42:33
There are lots that come to mind. Unlike some other tech, it's just far more expansive: a use case is really whatever you can think of that might work, or whatever problem comes up during the day. One that's fairly simple but, I think, quite powerful is just better knowledge management within your association, or whatever organization you're in. I would imagine that for a lot of nonprofit organizations, especially if there's volunteer labor, there's a lot of tribal knowledge: well, Mallory just knows this, because Mallory's been here for 20 years and she's always known this. And that can make transitions quite challenging.

John: 43:14
And one thing I did with one of my clients: they did tons of consumer research, hey, in this country, for this product, for this situation, for this demographic, whatever it was, and they spent lots of money on it. And unless you were directly involved in that project, you probably had no idea it happened. They were spending millions and millions of dollars on this. So the idea was, what if we take all these hundred-slide PowerPoint decks and PDFs and throw them into this tool, and then anyone can just query it: I'm launching this new product in this country and I'm targeting this demographic, what should I be aware of? And it can then pull from all that, provide the sources if you want to go deeper, but just give you a quick answer. And that applies to really any organization. Every organization has a ton of knowledge.

John: 43:59
Remarkably little of it is codified. And even if it is codified, if it's on slide 74 of an onboarding deck, no one's going to see it, and if they do see it, they're going to forget it 10 minutes later. So this idea of having the knowledge of the company accessible is powerful. We sort of have this now on the internet; we take it for granted that, essentially, the knowledge of humanity is accessible.

John: 44:21
You could do that for your company as well, your organization. So I think that's a really good, relatively easy use case to start with, to get you comfortable, and everyone can start using and understanding it. You can think very small, like maybe we just put our HR policies in there, so if someone asks, hey, do we have dental coverage, or what's our vacation policy, it can answer questions like that. Or you can go much more expansive, like for my clients: hey, I want to write a brief for a new campaign, base it on all the research we have, and give me a draft campaign that I can edit and then share with an agency. So there's a wide spectrum of applications, but that's an easy one, a good one to get started on.
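The knowledge-management use case John describes is essentially retrieval-augmented generation. Here is a deliberately tiny sketch of just the retrieval step, using word overlap as a stand-in for embedding similarity; the document text and function names are invented for illustration:

```python
def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank document chunks by shared-word count with the question
    (a crude stand-in for embedding similarity)."""
    q_words = set(question.lower().split())

    def score(chunk: str) -> int:
        return len(q_words & set(chunk.lower().split()))

    return sorted(chunks, key=score, reverse=True)[:k]

# A made-up mini knowledge base.
kb = [
    "vacation policy: employees accrue 15 days of vacation per year",
    "dental coverage is included in the standard benefits plan",
    "conference speakers must submit proposals by March 1",
]
best = top_chunks("do we have dental coverage", kb, k=1)
print(best)
```

A production system would chunk the real PowerPoint decks and PDFs, rank chunks with embeddings, and hand the top matches to an LLM to synthesize an answer with source citations.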

Mallory: 45:04
I think what's interesting there too, and we talk about this a lot, is the idea of knowledge management internally, but then for associations as well, the idea of knowledge management for their members, because they're huge repositories of some of the most authoritative content in their professions or respective spaces. So being able to synthesize that with an AI, and then have their members interact with that AI to get that information. And that exists: we talk about it a lot, it's called Betty Bot. But I think that's an interesting use case for sure for associations.

Amith: 45:32
Yeah, you know, another related element to this is the untapped knowledge in unstructured data, which John referred to as part of what he just described. I think it's an opportunity to zoom in and say, well, when you think about knowledge management, part of that is understanding who knows what, as he pointed out, and that is, by itself, part of the tribal knowledge. It's not just the fact that Mallory knows X and has known it for 20 years; the fact that I know that Mallory knows X is also tribal knowledge. Yet that information actually does exist in digital form. It's probably reflected somewhere in your emails, somewhere in your SharePoint documents, in your Teams or Slack conversations, by virtue of the back and forth and the types of things people have said, and you own all that information. So you can imagine an AI that continuously scours all these unstructured sources to index and catalog what people are knowledgeable about, and to connect people based on what they're working on. Another related thing: say John and Mallory both work at large company X and they're both working on basically the same project, but have no idea that they are. This happens all the time. It even happens at associations with 100 employees or fewer. How do you detect that? How do you know that? How do you connect these people so they can collaborate, to not only reduce the potential for redundancy, but also just make the work better?

Amith: 46:52
Right. And these are things that, I'm confident, every workforce tool, every workflow tool, will have. This will be built into Microsoft 365, it'll be built into Google. It might take a few years, but there are also opportunities for a lot of third-party apps. For now, we actually have something very similar to what I just described on the drawing board, as a product we're thinking about building. And this is crazy, right? Because historically, if you tried to maintain that kind of thing, you'd have a database of all the skills of all your people, but it gets out of date by the minute; the moment you put it together, it's already so hard to keep current. So I think this is such an amazing opportunity to extract those structured insights from this mountain of unstructured information we have.
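One hedged sketch of the "who knows what" indexing Amith describes: count topic mentions per author across messages. A real system would run over email, Teams, or Slack exports and use embeddings or an LLM rather than substring matching; the data, names, and topics here are invented:

```python
from collections import defaultdict

def build_expertise_index(messages, topics):
    """Map topic -> list of (author, mention_count), most-mentioned first."""
    counts = defaultdict(lambda: defaultdict(int))
    for author, text in messages:
        low = text.lower()
        for topic in topics:
            if topic in low:  # crude stand-in for semantic topic matching
                counts[topic][author] += 1
    return {
        topic: sorted(authors.items(), key=lambda kv: -kv[1])
        for topic, authors in counts.items()
    }

# Invented message log: (author, text) pairs.
msgs = [
    ("mallory", "drafting the member survey again this week"),
    ("john", "the member survey results look promising"),
    ("mallory", "survey tooling is ready"),
]
index = build_expertise_index(msgs, ["survey"])
print(index["survey"])
```

The same index could also flag that two authors keep mentioning the same topic, which is the "two people on the same project who don't know it" detection Amith mentions.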

John: 47:37
Absolutely. And I'm sure you all have covered this in the past, but I know an area that some of my clients get nervous about is, hey, wait, isn't all our data going to go train the public models and leak out? Now all clients have these enterprise instances where nothing leaves their instance or domain, so this is all protected, safe and secure. The models will still sometimes give wrong information, so there are still hallucinations and other issues you need to work on; it's not bulletproof. But it is not the issue there was a lot of concern about a year or two ago: I don't want to give it all my information, because that's going to become public domain.

Mallory: 48:14
When I asked you, John, about successful use cases, you said there are so many to pull from. We talked about knowledge management, we talked about customer service. You saying there are so many to pull from makes me think you are, one, very good at your job. But I'm curious, if you were an AI and could extrapolate across all the AI implementations and rollouts that you've done with clients, do you feel like there are any patterns, any threads that you've noticed in terms of clients that are really successful, and maybe clients that are less successful, that you could generalize and share with our audience?

John: 48:46
Yeah, absolutely. This is not dissimilar from any project, in that the ones that are more successful have really clear definitions of what success is. It's really easy to say, this is really cool, let's build something, and then you build it and someone's like, well, what does it do? What does this one thing do? Do we want that? I don't know.

John: 49:03
So maybe, said another way: I don't advise my clients to have a Gen AI strategy. I advise them to have a strategy and think about how Gen AI can enable it; those should not be separate things. Similarly here, this is less of, let me go think of a bunch of Gen AI use cases, and more of, how can I improve my business or my organization, and then, as part of that, how can Gen AI accelerate that or make something possible that wasn't possible before? That's where I see more success: we're trying to accomplish a business goal or business outcome, and Gen AI enables that, rather than, we're trying to build this chatbot, which is a solution in search of a problem.

Mallory: 49:46
I'm curious what you think about that, Amith, in terms of Gen AI fueling the strategy, because I feel like what we have often talked about on the pod is that sometimes these strategies for associations can be so set in stone, and they can be to improve or replace some legacy system that might not drive that much change for them in the end. So what are your thoughts there?

Amith: 50:05
Well, a lot of people come to me and say, hey, our strategic plan that we set in 2021 says X, Y and Z. And I'm like, well, that's cool.

Amith: 50:12
I hope you like it; it's quality content for your museum, maybe, but it's no longer relevant at all. Because the reality is, strategy is informed by the possibilities of the world you live in. And when the world radically changes and the opportunities are radically different, the competition's different, the risks are different, the strategic framework has to shift. For one thing, you have to be a lot more nimble than a five-year strategic plan. I don't look further out than two years. I have ideas of what might happen in five or 10, but I don't know any better than anyone else, really. For the next two years, though, it's a continuous, rolling shift in terms of what we're going to go after, based on the rapidly changing environment. You cannot inform strategy effectively if the environmental factors external to your organization have changed even a small amount, let alone radically like this. So strategy has to be informed by those opportunities and those risks. Otherwise your strategy is, like I said, basically a historical artifact, and irrelevant.

John: 51:08
And that's fair. I mean, I think I probably should preface it with: assuming you have an up-to-date and effective strategy. My push is that a business outcome should be the driver, not a technology for the sake of it, and so...

Amith: 51:22
Yeah, I knew what you were saying with that. And I think a lot of times people think, okay, well, our strategy, our business outcome circa 2021, was these three things, and that was our five-year goal, set. Well, sometimes you have to throw that out and reset it. But yeah, no, it totally makes sense. Having clarity on outcomes, I think, is one of the most critical lessons the association sector needs to zoom in on: what should those outcomes be, and why do they matter? Are they quantitative? Are they binary, did we achieve them or not? Are they measurable to begin with? Are they time bound? Are they things that will actually achieve the general business outcome if we hit the measurable outcome we think we're going to get?

Amith: 51:58
And a lot of times there's murkiness around that. That's true in corporations too, but in associations I find that for a lot of the so-called objectives, I don't have any way to know whether or not I've achieved them after the fact. They just like to say, hey, we want to have improved member service. Okay, what does that mean? So you have to really refine it, the way I think you were alluding to. It's really critical.

John: 52:17
Yeah, absolutely. And it gets back to our point earlier about, hey, how do I see ROI, how do I see impact? You have to know where you're going beforehand, because, to your point, if there isn't a clear yes-or-no answer at the end, it's very hard to know if you're successful.

Amith: 52:31
Well, Mallory, earlier you asked John to kind of articulate if he were an AI, which is a fun, creative exercise for all of us. And for me, I've actually been accused of that more than once, which I think is kind of funny. My kids sometimes call me a bot, which I thought was a compliment at first.

Mallory: 52:50
I don't think so.

Amith: 52:51
I mean, I looked it up on, I think it was Urban Dictionary, and I'm like, oh wait, that's not positive. But I thought it was high praise, you know.

Mallory: 52:59
Amith wants to be a bot. It could be high praise. I think you could take it that way. Oh, I think that's how it was meant. Well, I know we're almost out of time. At the top of the pod, John, you were talking about the idea of imagining what's possible, which I think is a really creative way to think about Gen AI in particular: if you had this PhD-level assistant at all times, what would you do, and what would you do with your time as a human? So I'm curious, if you were doing that same exercise right now, is there anything near term, in terms of use cases, that's not quite possible yet, but that you're really excited to work on with your clients in the next year or so?

John: 53:39
The big buzzword in general is agentic, having these sorts of agents you give goals to rather than tasks, and I think that's a pretty exciting concept. OpenAI demoed a tool called Operator recently, which essentially just uses your keyboard and mouse to go through things. You say, hey, go buy me tickets, go book me dinner, and it just Googles and clicks through like you would if you were a person. So you can think, does that mean we're going to reduce APIs? There are lots of implications there. But where I find the agents exciting, or the opportunity exciting, is how an agent can own an end-to-end process. I'm going to use a procurement example, because I think that's most tangible, and maybe you can help me translate it specifically to associations. Imagine you're at a company: hey, I need a service, we don't do that internally, I need to go get a partner to help me with this, so I need to write an RFP, and that's kind of a chore.

John: 54:43
So instead, let's say I had this vendor management agent, and I can say, hey, here's my situation, I need a vendor that can do X, Y and Z, help me write the RFP. It's going to write it, ask you clarifying questions, you answer them, and you sort of write it together, and you're like, okay, cool, now I have an RFP, maybe based on a thousand RFP examples we have from the past, whatever the case is, and I can send that out. Now I get responses to this RFP, and it's like, great, now I have a bunch of 50-slide PowerPoint decks to review. Instead, I'm going to throw all those into the same agent that built the RFP with me and say, hey, help me grade these. And while you're grading them, also look to see if we've worked with any of these companies before, and whether we had a good experience or a bad experience, to help me understand which of these vendors is the best bet. So then it helps you actually select the vendor, and it does a better job than most humans would, or at least enhances what most humans would do, because it's actually going to get into the details.

John: 55:37
Where I think it really gets exciting is you can then have it maintain things going forward. Every time I get an invoice: hey, make sure this invoice looks right based on our agreement. Am I getting all my right discounts? Am I getting the right rebates? Is it time to renegotiate, because things have changed? You almost have this agent that exists indefinitely. You can do this with one vendor, you can do this with 100 vendors, and it allows you to own this process end to end, and then restart the process, renegotiate, whatever. That, I think, can be pretty exciting. That's a procurement example, but you can think of agents in finance, in member retention, in employee onboarding. There are lots of really cool examples: if the tool can understand a goal rather than a task, it unlocks a lot more opportunity.
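The grading step in John's vendor-agent example can be sketched as a weighted rubric. In his scenario, an LLM would read the response decks and assign the per-criterion scores; here the scores are supplied directly so only the aggregation logic is shown. The vendor names, criteria, and weights are invented:

```python
def grade_responses(responses: dict, weights: dict) -> list:
    """responses: vendor -> {criterion: score 0-10}.
    Returns vendors ranked by weighted total, best first."""
    def total(scores: dict) -> float:
        return sum(weights[c] * scores.get(c, 0) for c in weights)
    return sorted(responses.items(), key=lambda kv: -total(kv[1]))

# Invented rubric weights (must sum to 1.0 for a 0-10 composite scale).
weights = {"price": 0.4, "experience": 0.35, "past_performance": 0.25}

# Invented per-criterion scores, as an LLM grader might emit them.
responses = {
    "vendor_a": {"price": 8, "experience": 6, "past_performance": 9},
    "vendor_b": {"price": 9, "experience": 5, "past_performance": 4},
}

ranked = grade_responses(responses, weights)
print(ranked[0][0])
```

The same shape works for the call-for-speakers workflow Amith translates it to: swap vendors for speaker proposals and criteria for the submission rubric.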

Amith: 56:23
John, that's an excellent example. First of all, for all of our association friends, as well as the vendor community listening, the term RFP is definitely a well-known term. Associations do issue quite a few of them, though not at the scale of a corporation running a procurement process where they're repetitively buying certain kinds of products or services, a scale where that level of automation may be justifiable. They may be doing 10 RFPs a year, or 50 for a large organization, but it would still be valuable because you could still use some of the steps, maybe just not in a fully automated way where, as you described, the agent is actually emailing vendors and all that kind of stuff. Maybe the idea would be that it helps you write the RFP, helps you evaluate the responses, and helps you catch things you may not otherwise have caught, taking in five or ten responses and normalizing them into a spreadsheet to make comparison easier, and all those steps and things.

Amith: 57:13
I think what you said is, first of all, very relevant for this community, so that's appreciated. I would translate it to another business process that's totally different but at the same time similar, which is the process associations go through to produce content. For many organizations, that involves something RFP-like called a call for speakers or a call for papers, where an association will say to its community, hey, we're going to have this event coming up in June and we're opening up a call for speakers, and we'd like people to submit proposals. There's typically some kind of rubric that says, to submit, you have to provide this amount of content, it has to be on one of these subjects, you have to have this kind of experience, maybe you need a co-presenter or not, maybe you have to be a published author if it's an academic institution, which oftentimes is the case, on and on and on, right. And there's this whole process where they're taking in all these proposals to speak, and they have to grade them, provide feedback, and filter them down to a narrower set. Very, very similar in a lot of respects. That process is a perfect candidate for an agent that would do a great job, both streamlining the efficiency the way you described and ultimately producing better content, because one of the problems associations have in the workflow I'm describing is that they have a lot of biases, just like we all do. The people on the committee that's typically making these selection decisions have looked at a lot of the same names in the past, and they're like, oh, I know this person, they're pretty good. I know John, maybe I don't want to include him.
So you have those kinds of scenarios going on, and the AI, I think, is going to be a little more thorough, perhaps a little more objective, than a lot of us would be, and then ultimately produce more novel, more interesting content in areas the association sometimes has a hard time covering.

Amith: 58:56
And also, one of the main ways that you drive away newer presenters or speakers who are submitting to you is that you ghost them. What a lot of associations do is take months to respond to people. It's kind of like you're getting a ruling from the king, saying, oh, we hereby deem you worthy to come and speak at our event. It's like you get a scroll in the mail. That's how it feels.

Amith: 59:17
It's that slow, whereas it would be great if, first of all, when I screwed up and submitted something incomplete, I got an immediate response saying, hey, you know you're not too bright, are you? You should really probably include your resume, or whatever. That's not the best email template, but something like that, where I'm like, oh damn, I forgot to include that, let me upload it and have a shot at being considered. People would like that, as well as faster feedback at all the stages, including maybe some qualitative feedback saying, hey, your proposal was too similar to many others we got, and in the future here are some other topics you might consider. That would improve everyone's experience from all angles, like the customer service example. So it's kind of translating a similar workflow to what you're describing into something I think a lot of our folks would find a lot of value in, to both improve their products and decrease their pain.

John: 1:00:08
Yeah, and even feedback in real time. So not just you weren't selected and here's why, but this isn't there yet, though we really like this idea. Could you blow that up and do more there?

Amith: 1:00:18
And resubmit it. Totally. And a lot of speakers are like, yeah, totally, I'd love to present on that. Sounds super interesting.

John: 1:00:23
Exactly. So it doesn't just have to be after the fact, it can be real-time improvement as well.

Mallory: 1:00:30
John, thank you so much for joining us today on the Sidecar Sync podcast. I think you've shared tons of insights and stories that will be incredibly beneficial for our association listeners, so thank you for joining us.

John: 1:00:42
Thank you both. It was a fun conversation.

Amith: 1:00:54
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper into your journey with AI? Remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.

 

Post by Mallory Mejias
February 6, 2025
Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space. Mallory co-hosts and produces the Sidecar Sync podcast, where she delves into the latest trends in AI and technology, translating them into actionable insights.