Most AI training starts with possibilities—what AI can write, create, analyze, summarize, generate. We've all sat through these presentations, watching demo after demo of impressive capabilities. But this approach often leads to a frustrating outcome: organizations leave inspired but implement nothing.
As someone whose job involves testing new AI tools and finding relevant use cases for associations, I've discovered something counterintuitive. Knowing what AI can do doesn't automatically translate into knowing what AI should do in your organization. After nearly 100 episodes of co-hosting the Sidecar Sync podcast, one principle emerges consistently: the organizations making real progress start with their problems, not AI's solutions.
And a good place to find those problems? Tuesday morning.
The Implementation Gap
I test AI tools for a living. My search history contains research on hundreds of platforms, each promising to revolutionize how we work. I've seen AI that can deepfake me with a single image, create full songs with a single prompt, and hold audio conversations so seamless you'd never know you weren't talking to a human. The capabilities are genuinely astounding.
Yet when I look at my actual daily workflow, I consistently use perhaps five or six AI applications. This gap exists because impressive capabilities don't automatically translate into practical applications.
This phenomenon plays out across associations everywhere. Leaders attend conferences, watch demonstrations of cutting-edge AI, and return energized about the possibilities. They gather their teams, share what they've learned, and then... nothing happens. The problem lies in approaching AI implementation backwards. When we lead with what AI can do, we're essentially asking people to reshape their work to fit the tool. Using AI without a clear problem to solve becomes an expensive experiment in technological tourism.
The Learning Paradox
Here's the challenge: You should absolutely start with problems, not solutions. But if you don't regularly explore new AI tools, you won't know what solutions exist when problems arise.
The resolution lies in separating exploration from implementation. Think of it as building a toolkit versus using the tools. Regular experimentation with AI platforms creates a mental inventory of possibilities. You discover which tools excel at data analysis and which ones handle content creation.
But—and this is crucial—knowing about a tool doesn't mean you should use it. The discipline comes in waiting for the right problem. When someone mentions they spend hours every week formatting reports, you can connect their specific pain to a specific solution. Without that problem-solution match, even the most impressive AI tool is just expensive software taking up space.
The Problem-First Philosophy
During our recent Sidecar Sync interview with Conor Grennan—Chief AI Architect at NYU Stern School of Business—he shared his approach to AI implementation. Rather than beginning his workshops by showcasing AI's impressive capabilities, he starts with a simple question: What do you need to do?
This shift in perspective changes the entire dynamic. When you start with capabilities, you're asking people to imagine new ways of working. You're essentially saying: here's this powerful tool, now figure out how to use it. That's a significant cognitive leap, especially for professionals already managing full workloads.
Starting with problems creates an entirely different dynamic. You're meeting people where they are, in the midst of their actual challenges. You're not asking them to reimagine their work; you're offering to make their existing work easier. This approach also democratizes AI. When you begin with problems everyone recognizes—the report that takes all afternoon, the meeting notes no one wants to write—AI becomes a tool for solving universal frustrations rather than a technology reserved for the technically adventurous.
The Tuesday Morning Test
Here's a simple framework for identifying which problems deserve AI solutions: What repetitive tasks consume your Tuesday mornings?
Tuesday morning is when the reality of your workweek hits. Monday's planning gives way to actual execution, and you're looking at all the recurring tasks that need to get done. Look at your calendar and task list. Which tasks always take longer than they should?
Those time sinks—the ones that show up every single week and eat up hours that could be spent on higher-value work—are exactly where you should focus your AI efforts.
For me, it was outlining new podcast episodes. Every week, I'd spend a good chunk of time re-doing the same formatting, prompting an AI model with the same podcast context, and searching through past episodes to match our established structure. The task itself was important, but the process was inefficient—too much time spent on logistics instead of content.
So I created a Claude project (the equivalent of a custom GPT in ChatGPT) populated with past Sidecar Sync episode outlines and transcripts. Now when Tuesday morning rolls around and I need to plan an episode, I have an AI assistant that understands our format and can help me organize ideas in minutes instead of hours. The creative work remains mine; the repetitive formatting work is automated.
The Tuesday Morning Test reveals which problems actually matter. Not the flashy edge cases or the "wouldn't it be cool if" scenarios, but the recurring tasks that consume more time than their value warrants. Look for tasks that are:
- Recurring (weekly or more frequent)
- Time-consuming relative to their value
- Repetitive in nature but variable in content
- Process-heavy rather than strategy-heavy
These are your opportunities for meaningful AI implementation.
When applying the Tuesday Morning Test to your own work, the key is honest assessment of where your time goes. Not the big strategic challenges or complex projects—those often require human judgment. Instead, look for the recurring, process-driven tasks that consistently take longer than they should.
Organizational Implementation: Beyond Random Acts of AI
This problem-first principle becomes even more critical at the organizational level. Too many organizations engage in what one might call random acts of AI—disconnected implementations that might showcase innovation but don't address systemic challenges.
We learned this lesson at Sidecar. Different associations kept asking if we could customize our AI education for their specific members. Using traditional methods, we'd need to re-record entire courses with different instructors for each industry. The math was impossible. So we asked ourselves the right question: What's actually making this take so long?
The investigation revealed specific bottlenecks: scheduling instructors, managing multiple recording takes, and the inability to quickly update content when things changed. We weren't looking for AI to be innovative—we were trying to solve a business problem.
Our solution, the Learning Content Agent (LCA), addressed each friction point. We took our existing AI Learning Hub content and identified every place where customization happened—every time an instructor said "member" (which could be "client" or "patient"). Then we built a system that pulls core educational content, swaps in industry-appropriate terminology and examples, and generates professional video lessons using AI voices and avatars.
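The terminology-swapping step can be pictured as a simple substitution pass over the source content. The sketch below is purely illustrative, not the actual LCA code; the industry term map, whole-word matching, and plural handling are all assumptions:

```python
import re

# Hypothetical sketch of terminology customization: replace the generic
# term "member" with an industry-appropriate equivalent, matching whole
# words only and preserving capitalization and plural form.
INDUSTRY_TERMS = {
    "healthcare": "patient",
    "consulting": "client",
}

def customize(text: str, industry: str) -> str:
    replacement = INDUSTRY_TERMS[industry]

    def swap(match: re.Match) -> str:
        word = match.group(0)
        # Carry the original word's plural "s" and leading capital over.
        out = replacement + ("s" if word.lower().endswith("s") else "")
        return out.capitalize() if word[0].isupper() else out

    # \b ensures "remember" or "membership" are left untouched.
    return re.sub(r"\b[Mm]embers?\b", swap, text)

print(customize("Members renew when members see value.", "healthcare"))
# -> Patients renew when patients see value.
```

A real pipeline would also swap examples and regenerate narration, but the core idea is the same: one canonical source, many industry-specific renderings.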
Result: We can create customized AI education for any industry in days instead of months. Every course in our current AI Learning Hub was created using this workflow. The tools came together not because we wanted to showcase AI per se, but because we had to solve a real scalability challenge.
Your Four-Week Implementation Framework
Ready to move from AI exploration to meaningful implementation? Here's a framework that works for both individuals and organizations:
Step 1: Apply the Tuesday Morning Test. Look at your calendar for next week and identify which tasks consistently take longer than they should. Document the repetitive processes in your workflow. Which tasks require you to do the same setup every time? What processes could benefit from streamlining? Pay special attention to high-frequency tasks. These recurring time investments are your biggest opportunities for AI assistance.
Step 2: Analyze and Prioritize. Sort your friction points by frequency and impact. A task that wastes an hour daily is more valuable to automate than one that wastes three hours monthly. Create a simple matrix:
Priority Matrix Example:
| Task | Frequency | Time Impact | Priority |
|---|---|---|---|
| Formatting board reports | Weekly | 3 hours | 🔴 HIGH |
| Writing meeting notes | 3x/week | 1 hour each | 🔴 HIGH |
| Member data entry | Daily | 30 minutes | 🔴 HIGH |
| Annual report design | Yearly | 40 hours | 🟡 MEDIUM |
| Email list segmentation | Monthly | 2 hours | 🟡 MEDIUM |
| Holiday card mail merge | Yearly | 4 hours | 🟢 LOW |
Focus on the red zone first—these high-frequency, high-impact tasks are your biggest opportunities for meaningful AI implementation.
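The sorting logic behind that matrix can be made concrete by ranking tasks on annual time cost. Here's a minimal sketch; the frequency-to-occurrences mapping and the HIGH/MEDIUM/LOW thresholds are illustrative assumptions, not an official formula:

```python
# Minimal sketch: rank recurring tasks by estimated hours per year.
# The per-year frequency mapping and priority thresholds are assumptions.
FREQ_PER_YEAR = {
    "daily": 250,     # roughly one working year of weekdays
    "3x/week": 150,
    "weekly": 50,
    "monthly": 12,
    "yearly": 1,
}

def annual_hours(frequency: str, hours_per_occurrence: float) -> float:
    """Total hours per year spent on the task."""
    return FREQ_PER_YEAR[frequency.lower()] * hours_per_occurrence

def priority(frequency: str, hours_per_occurrence: float) -> str:
    """Bucket a task by its yearly time cost."""
    total = annual_hours(frequency, hours_per_occurrence)
    if total >= 100:
        return "HIGH"
    if total >= 20:
        return "MEDIUM"
    return "LOW"

tasks = [
    ("Formatting board reports", "weekly", 3.0),
    ("Writing meeting notes", "3x/week", 1.0),
    ("Member data entry", "daily", 0.5),
    ("Annual report design", "yearly", 40.0),
    ("Email list segmentation", "monthly", 2.0),
    ("Holiday card mail merge", "yearly", 4.0),
]

for name, freq, hrs in sorted(tasks, key=lambda t: -annual_hours(t[1], t[2])):
    print(f"{name}: {annual_hours(freq, hrs):.0f} h/yr -> {priority(freq, hrs)}")
```

Run against the example matrix, the weekly and daily tasks land in HIGH while the 40-hour annual report stays MEDIUM, which is exactly the counterintuitive point: frequency usually beats single-occurrence size.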
Step 3: Match Problems to Solutions. Now tap into your AI toolbox. Look at your prioritized list and ask: Do I know of any AI tools that could help with this? Maybe you've seen something in a demo that could work, or heard about a tool from a colleague. If nothing comes to mind, that's your cue to start exploring—but now you're searching with purpose, not randomly browsing. Test potential solutions with small experiments before fully committing.
Step 4: Implement and Measure. Start with your smallest, most annoying problem. Set clear metrics: How much time should this save? Give each implementation at least two weeks of real-world testing. Document what actually happens versus what you expected.
The Path Forward
After testing many AI tools and watching dozens of organizations navigate AI adoption, one truth stands out: successful AI implementation isn't about finding the most impressive tech. It's about solving actual problems that consume valuable time.
The most sophisticated AI platform in the world creates no value sitting unused. But the simple automation that streamlines those recurring Tuesday morning tasks? That frees up hours for strategic work. That builds momentum for further innovation. That creates the foundation for meaningful digital transformation.
So stop starting with AI. Start with your repetitive Tuesday morning tasks. Start with the processes that haven't been questioned in years. Start with the recurring workflows that consume more time than they should.
Your best AI strategy isn't hiding in a vendor demo or a visionary keynote. It might be hiding on your Tuesday morning calendar.
Tags:
Practical AI
July 22, 2025