Summary:
In this episode of Sidecar Sync, Amith and Mallory dive into OpenAI’s latest innovation: Deep Research. They explore how this powerful AI agent is transforming research by synthesizing vast amounts of information at unprecedented speeds. They also discuss Jevons Paradox—why technological advancements don’t always reduce resource consumption and how this applies to AI’s rapid evolution. Plus, they unpack the implications of AI’s increasing accessibility, from competition among AI models to what this means for associations navigating an AI-driven future.
Timestamps:
00:00 - Welcome to Sidecar Sync
03:49 - Blue Cypress Innovation Hubs
06:41 - OpenAI’s Deep Research and What It Does
10:39 - The Confusion of AI Model Naming Conventions
14:27 - o3 Mini: OpenAI’s Best Reasoning Model?
20:08 - AI Integration in Business: Multiple Models
24:19 - What is Jevons Paradox?
33:41 - AI, Energy, and the Future of AI
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
Attend the Blue Cypress Innovation Hub in DC/Chicago:
https://bluecypress.io/innovation-hub-dc
https://bluecypress.io/innovation-hub-chicago
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🛠 AI Tools and Resources Mentioned in This Episode:
Deep Research ➡ https://openai.com
DeepSeek ➡ https://deepseek.com
Gemini 2 Pro ➡ https://ai.google.dev
Llama 3.3 ➡ https://ai.meta.com
Anthropic Claude 3.5 ➡ https://anthropic.com
Groq ➡ https://groq.com/
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
⚙️ Other Resources from Sidecar:
- Sidecar Blog
- Sidecar Community
- digitalNow Conference
- Upcoming Webinars and Events
- Association AI Mastermind Group
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Director of Content and Learning at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
Read the Transcript
Amith: 0:00
I think AI, you know, provides us theoretically unlimited intellect on tap to go solve the world's problems. You know, many of the world's problems are materials problems. Many of the world's problems are energy problems, right, and so those are things that get exciting. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings and welcome to the Sidecar Sync, your home for awesome conversations all about artificial intelligence and the world of associations.
Mallory
My name is Amith Nagarajan and my name is Mallory Mejias.
Amith: 0:59
And we are your hosts. And before we get into another exciting episode at that intersection of AI and all things associations, we are going to take a moment to hear a quick word from our sponsor.
Mallory: 1:11
If you're listening to this podcast right now, you're already thinking differently about AI than many of your peers. Don't you wish there was a way to showcase your commitment to innovation and learning? The Association AI Professional, or AAiP, certification is exactly that. The AAiP certification is awarded to those who have achieved outstanding theoretical and practical AI knowledge as it pertains to associations. Earning your AAiP certification proves that you're at the forefront of AI in your organization and in the greater association space, giving you a competitive edge in an increasingly AI-driven job market. Join the growing group of professionals who've earned their AAiP certification and secure your professional future by heading to learn.sidecar.ai. Amith, how's it going today?
Amith: 2:02
It's going fantastic. It's, you know, like a completely different experience in New Orleans. Two weeks after the snow that we talked about recently, it's now 70 degrees. I played a little tennis early this morning and it was like a thousand percent humidity, and it felt like it was early summer already. So I don't know, that cold snap did not last.
Mallory: 2:22
Yeah, we've been having a warmer front in Atlanta as well. I've been outside every day for multiple hours a day, which has felt so good after it being cold. Really, I know I'm a wimp, we only had cold weather for like a month and a half, but I realized that I do miss the warmth for sure.
Amith: 2:40
Yeah, it is nice. At the same time, I'm kind of hoping for a little bit colder weather for the next couple months here in New Orleans, because, you know, when it gets warm here, it gets warm. And wait, we're recording this episode just a bit early. I know the Super Bowl happenings are going on. I love football and the Super Bowl is super fun, but I am not going anywhere near the Superdome anytime in the next few days.
Mallory: 3:11
That's fair. I've had friends go out there. It seems fun. It seems like chaos, but you know, hopefully it's a great time.
Amith: 3:19
It's not nearly as crazy around here as it was when Taylor Swift was in town. That was much more crowded, much crazier.
Mallory: 3:32
Okay. Well, she's going to be in town again, though, because you know her significant other plays for the Chiefs, so you might be dealing with like a Super Bowl plus Taylor Swift debacle.
Amith: 3:38
That's a good point. Actually, I hadn't thought about that, but I don't think she's performing. If she's performing, you know, then yeah, New Orleans is just crushed.
Mallory: 3:48
Only time will tell.
Amith: 3:49
Well, I mean.
Mallory: 3:50
I wanted to mention too, because I don't think we've talked about this on the podcast recently, but we do have the Blue Cypress Innovation Hubs coming up. We have one in DC, and that is March 25th, and then we're doing another version of that Innovation Hub in Chicago, which is April 8th. For those of you who don't know, we launched this event maybe two years ago, and I was the one who worked on it. It was just in DC at that time, and it's kind of a one-day event all about innovation. We talk quite a bit about artificial intelligence, but other innovative technologies there as well, and I just wanted to share it with all of you in case you want to hear Amith speak, right? Because you'll be at the one in DC. What about Chicago?
Amith: 4:30
I will definitely be at the one in DC. That's confirmed. I am very likely to be at the one in Chicago as well.
Amith: 4:35
I just need to get the hall pass and then, assuming I have that, I will be in Chicago as well. That's, I'd say, 75, 80% at this point. Hopefully my wife's not listening to this, because I haven't actually officially asked her yet about that trip. We'll see, but in any event, yeah, I think I'll likely be in Chicago and I'm super pumped about it. It's a little bit different format and a different feel than Digital Now, our flagship event at the latter part of every year. You know, we launched the two events in Chicago and DC to have a smaller, more intimate-feeling regional event in the springtime in each location, where there's obviously large concentrations of associations in both Washington and the Chicago area, and we have lots of wonderful relationships in both towns. So we thought it'd be great to do something on the other end of the calendar but also take a different approach.
Amith: 5:22
Digital Now is, definitionally, at the intersection of technology and strategy. That's what Digital Now has been for 25 years, and the Innovation Hub is purely about innovation. As you pointed out, it's artificial intelligence big time. Of course, we're talking a ton about AI, but it could be innovation in a business strategy, it could be innovation in a financial model, it could be innovation in culture, it could be any kind of innovation, and we feature a number of speakers both within the Blue Cypress family and also in the client community, so it's a really cool event. It's just a one-day event, so it's super easy to block off. It's a great educational experience, so I'm super pumped about it. I'm looking forward to seeing people in person in, I guess, just a handful of weeks at this point.
Mallory: 6:05
Exactly, yeah, like Amith mentioned, it's really intimate. We're expecting probably around 50 to 75 people, maybe a bit more than that, and so it's a really awesome opportunity to connect not only with folks in the Blue Cypress family but, as Amith said, association leaders as well. So if you're interested in checking that out, we will be including links in the show notes for both locations. Today, we've got two exciting topics lined up. First and foremost, how could we not talk about OpenAI's recently released Deep Research? And then we'll be talking about Jevons Paradox, which is a phrase that has become pretty popular in the last few weeks. So, first and foremost, OpenAI's Deep Research was launched on Sunday, February 2nd of this year, and it's an agent capable of performing complex, multi-step research tasks on the internet. Now, if you're having a feeling of deja vu right now, it's because we recently covered Google's Deep Research on a previous pod a few weeks ago. So what does OpenAI's Deep Research do? It's pretty similar to Google's, but it can gather, analyze and synthesize information from hundreds of online sources to create comprehensive reports at the level of a research analyst. It can accomplish in 5 to 30 minutes what would typically take a human many hours or even days to complete. It's useful for various applications, from providing intensive knowledge for researchers to assisting shoppers with hyper-personalized recommendations. Every output, of course, includes clear citations and a summary of the agent's thinking process, which makes it easy to reference and verify the information that you're seeing. Deep Research is powered by an optimized version of OpenAI's upcoming o3 model, which can search, interpret and analyze massive amounts of text, images and PDFs on the internet. OpenAI claims that Deep Research achieved a new high score of 26.6% accuracy on Humanity's Last Exam, a benchmark test for AI models across various subjects that some say is the hardest AI exam out there for models. The tool also topped another public benchmark test called GAIA, which evaluates AI models on reasoning, multimodal fluency, web browsing and tool usability.
Mallory: 8:15
Deep Research is currently available to OpenAI Pro users at the time of this podcast, with a maximum limit of 100 queries per month, and access will be expanded to Plus and Team users next, followed by Enterprise users. Right now, the tool is exclusively available via the web, with plans to integrate mobile and desktop applications later this month. So, Amith, this was an exciting release. We've talked about Sora on the pod. We've talked about Operator, the agent from OpenAI. All of those, you said, eh, I'm not interested in upgrading to Pro, I don't need to test those out right now. But with this one you pinged me and you said, I think I'm going to upgrade to Pro. So I want to hear your initial thoughts, and why is this the thing that's got you really interested?
Amith: 9:05
Well, I tried to upgrade, but I couldn't get it to work, because we are a Teams user, I think that's what it's called. We have, you know, I don't know how many people, just a whole bunch of people across the Blue Cypress family, on this one OpenAI account, and I couldn't upgrade my account. I definitely wasn't ready to upgrade, you know, many dozens of people across the organization to the $200-a-month thing, so I couldn't figure out how to do it. I was thinking, okay, well, I'll just create a separate personal account, but I didn't want to go through the hassle. So just a quick side note: friction is bad, and it would be good to make it easy for people to spend significant dollars with you, and I think associations need to continuously remember that.
Amith: 9:34
Even with a captive audience, where there's a novel tool, where people are like, I really want to try this out, people are busy. I was busy that day. I didn't have time to go mess around. I really didn't want to create a separate account anyway, I just wanted to be able to upgrade just my account. So I'm sure it's something they're thinking about. Actually, another quick sidebar about usability. You were mentioning this before we started recording. Tell our listeners a little bit about what you were saying about the naming of these models and just how kind of ridiculous it is, specifically with OpenAI. I don't know that Google's a whole lot better, but what were your thoughts about that?
Mallory: 10:04
Oh well, I was telling Amith that I saw a post from Mike Kaput from the Marketing AI Institute on LinkedIn where he initially pointed this out, but within the OpenAI world of models, all of them are named with the most horrible conventions, like o1, o3-mini, o3-mini-medium, o3-mini-high, and honestly, how would anybody know what those do? I feel like even you and I, talking about this all the time on the podcast, I'm a bit confused now on what each of those individual models does. So I've got to point that out, that I think they could do a better job with naming the models.
Amith: 10:39
Yeah, totally. And as a side note, for those of you that are really interested in marketing content, we obviously cover a lot of it here, but we love the guys at the Marketing AI Institute. As a disclosure, we were actually their lead investor in their seed round a couple of years ago, so we like their business a lot too, but we really think their content is extraordinary. So you should check them out, in addition to continuing to check out all the stuff Sidecar does. But my comment would be actually pretty simple. I'd love to shoot a note to Sam Altman and say, hey, you guys have this really cool thing called custom GPTs. Why don't you create one called Naming Agent and see if you can get some help? Because it's quite the mess. Not that I'm good at naming, I'm terrible at it, but it's tough. It's tough to get it right. But I can see that also, at the speed they're moving, it's actually even more important to step back and say, hey, what are some of the products we're going to release over the next 12, 24 months, and what's the brand architecture for how these things fit together? There's got to be a better way to do that. So, in any event, coming back to your actual question, I do believe that this is worth really paying attention to, because Deep Research has the facility to really dig in and go deeper, as the name implies. And, of course, by the way, since they couldn't come up with their own name for this, they just copied Google's Deep Research. They could have at least made it funny and said Deeper Research or something like that. So, in any event, coming back to the question, what is novel about this is the depth and the amount of compute that it spends. So Google's Deep Research tool, when I used it, was pretty good actually, I'd recommend it, and it's freely accessible to people with a Google account. But it only goes so far.
I think it limits itself to 10 or 15 different sources, and then it kind of stops computing, and you can ask it to do more, but then it kind of starts from scratch. It's not really going incrementally. I have not played with Deep Research from OpenAI yet, but from what I've seen from Ethan Mollick and others, it does appear like it's going far, far beyond that, hundreds of sources, and really crunching the numbers a lot better. So I'm quite excited by that.
Amith: 12:38
I'd love to throw some deeper market research questions at this tool. As an example, we are constantly brainstorming what are some of the best categories that we could introduce new AI agents into within this particular vertical. So the question I had asked Deep Research, maybe 30 days ago, whenever it was, was: I want you to study all of the available data on labor efficiency in the association market. Look at any reports, look at 990 data, look at anything that you can get your hands on, and give me categories where there appear to be choke points, where, in essence, there are excess amounts of labor investments relative to the work that's being output. And it did a pretty good job. It identified choke points around event planning. It identified choke points around member communications, which are clearly areas where AI, agentic AI specifically, can be very valuable.
Amith: 13:34
But, in any event, I think tools like this are very interesting to go a lot deeper and to do a lot of the homework. Now you couple Deep Research with the ability to take actions and you start getting into some really interesting combinations of things. I know that OpenAI claimed, maybe it was Sam Altman, maybe it was someone else, that at least 1% of all economically valuable activity could be performed by Deep Research. So that's an interesting statement. On the one hand, it's an unimpressive percentage, but on the other hand, it's 1% of global GDP, effectively, because most of that is labor, or a large percentage is labor. So, in any event, I guess the point I would make is I think this is yet another step in that seemingly endless progression of advancements we're getting week by week, and people need to pay attention to this, because it really steps up what you can do with just a single request to an AI.
Mallory: 14:27
Mm-hmm. And I mentioned that Deep Research is powered by the o3 model with the confusing naming convention, and you mentioned before the pod, Amith, that o3-mini is actually available for free to all users.
Amith: 14:41
That's right.
Amith: 14:41
Yeah, so I believe this is clearly in response to DeepSeek being so well received globally.
Amith: 14:47
If you missed the earlier pod or haven't heard much about DeepSeek, well, I don't know where you've been living for the last week, but DeepSeek is a model from a Chinese startup and has performance very similar to the OpenAI o1 model. And so, not to be outdone by anyone, OpenAI said this week, or actually late last week, that they're going to give away o3-mini to any user on the free tier of ChatGPT, and they also made o3-mini available through the API for a very reasonable cost. So clearly it's a competitive reaction, because I think their normal pattern is to make their best model available only to the most premium tier and then bring it down over time. But clearly there's a lot of competitive pressure. So also, I think it was two days ago, or maybe it was yesterday, that Google released Gemini 2 Pro, which has some very advanced reasoning capabilities of its own. So competition is heating up. That's exciting. That's good for everyone on the consumer side of this.
Mallory: 16:34
I went down a bit of a rabbit hole with this Humanity's Last Exam test, because I had never heard of it, and I started looking up example questions, which I highly recommend that you, Amith, do if you haven't, and all of our listeners, because you will be shocked at how some of these questions are phrased and how difficult they sound. But since OpenAI achieved, at this point, the highest score that we've ever seen on this exam, which was only a 27% but, I think, more than 10 points higher than second place, does this mean that this agent model is the most powerful reasoning model that we've seen thus far?
Amith: 17:09
I think that's a pretty clear and fair statement that, at the moment, o3, to the extent that we're aware, is the best reasoning model that's out there. So if you need the very best model with reasoning skills for a specific problem you're working on, definitely give o3-mini a chance. I would say, you know, I wouldn't really pay too much attention to that, because very quickly after o3-mini, you're going to get something from Meta. They're due for a new release. I mean, it's been since December since they released something big, which was a dot release, Llama 3.3, which was a big deal, but the training process for Llama 4 has been underway for some time. I would be shocked if there wasn't at least a reasoning mode, if not a special reasoning model, coming from Meta as well that they'll open source. So we're going to see a lot of cool stuff, and it seems to be accelerating, not staying the same. So if you didn't like last year's speed of innovation, this year, it seems, is going to be even faster.
Amith: 18:09
My bottom line, though, that I keep advising people on is, you know, it's important to stay aware of what's happening in terms of new tools, like Deep Research, or new models like o3-mini and, obviously, DeepSeek. But it's more important to look at the broader arc of what's happening, because there's so much churn in all these models that, you know, you think you're up to speed and then you're not. And so the more important thing is to look at where things are going, and that broader arc I'm referring to is saying, okay, what am I actually trying to get done? What's my business goal? Why am I trying to do this, and what are the obstacles? Right? So to break down the problem you're trying to solve into smaller chunks and say, okay, well, which models can solve each of these things? And you may not need the most high-end model for a lot of your work. You know, we find, actually, for example, within our Skip AI data analyst agent, that much of the work can be done by lower-end models. There are certain pieces of the work that Skip does that require higher-end models, but not most of it, and that's a very complex piece of software. So in most of the business cases I see out there, you can actually solve a large percentage of your problems with something like Llama 3.3 or GPT-4o, and you don't even need the reasoning models.
Amith: 19:25
So that's the main thing I keep pushing on for folks that are really just looking at this in terms of adoption. You know, you might be interested in what's the latest, but really you need to adopt what we have today and get that pushed into your workflows. Actually adopt these tools, you know, in your day-to-day work as part of your business process, not just as an extra thing on the side. Right? And I think that's what 2025 is going to be about. Most people in 2024 were just starting to experiment. In 2025, what I'm hopeful for is that people will actually deeply integrate these tools into their workflow, and so for that, it's important to choose the model right and to say, okay, for this particular step in our customer service workflow, we are going to use o1 or o3 or GPT-4o. You want to do that thinking, because that will give you consistent results.
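The per-step model selection described above can be sketched as a simple routing table. This is a toy illustration, not anything from the episode: every step name and model name below is hypothetical.

```python
# Hypothetical routing table mapping workflow steps to model tiers.
# All names here are illustrative placeholders, not real product config.
ROUTING = {
    "classify_member_email": "small-open-model",    # cheap, high volume
    "draft_routine_reply": "small-open-model",
    "summarize_long_thread": "mid-tier-model",
    "complex_policy_question": "reasoning-model",   # reserve the expensive tier
}

def pick_model(step: str) -> str:
    """Return the designated model for a workflow step, with a safe default."""
    # Steps that haven't been profiled yet fall back to the mid-tier model
    # rather than defaulting everyone to the priciest reasoning tier.
    return ROUTING.get(step, "mid-tier-model")
```

Pinning each step to a named model, rather than letting each staff member pick a tool ad hoc, is what produces the consistent results being described here.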
Amith: 20:15
If you just let your different customer service agents or member service agents use whichever tool they want, not only is that a questionable thing from a policy viewpoint and a cybersecurity viewpoint, but it also is going to produce different results. But it's not a one-size-fits-all thing. So you don't need to say, hey, everyone needs GPT-4o, or everyone needs o3, everyone needs the Pro $200 license, or, maybe you like Anthropic better, everyone needs Claude 3.5 Sonnet, or everyone needs whatever. There are a number of different tools that you can use for different things, you know. So some tasks require a pickup truck, some require a sports car, and for some things that you're trying to do, it's just fine to have a Camry.
Mallory: 20:56
Yeah, and you've mentioned the idea too, on the pod, Amith, of AI models becoming a commodity, where we can kind of pick whichever one we want in the future and they'll all do roughly the same thing, maybe with some distinctions in features and functionality. But what you're saying is that right now it's tricky: you might say, we're going to run with this, we love this, we're going to integrate it into our association, and then, bam, a few weeks later, OpenAI releases a more powerful version. That must be hard to navigate, for sure, as a leader.
Amith: 21:29
Yeah, exactly. And I think that, again, it's important to be aware of what's going on. I don't think, unless you're like us and you're just super into this stuff and you want to go play with everything to really get hands-on insights, and for us and the work that we do, obviously it's important for us to go do that because we're, you know, asked lots of questions about these things, that's part of our job. But for most business folks that are out there trying to make sense of this and decide what to do, it doesn't make sense to go and try every single tool that comes out. It's too frenetic of a pace, and the value to you as a business leader isn't that high in trying everything. I don't think you should try nothing either, so it's not a license to sit around and twiddle your thumbs, but rather I would say be selective, and once a month or once every couple of weeks, do try something different. But pick the tools that you want to drive deeply into your business process. Don't be happy with the idea that we're still just experimenting. Figure out some key process.
Amith: 22:25
And the way I sniff these things out in organizations that I'm advising is I look for choke points. I look for pain, where they're having a hard time, and I look for choke points, which usually exist because not enough labor, not enough people, are available who know how to do a certain thing. Find that thing and then figure out how to codify it in an AI-first, or at least an AI-enabled, process. And that's where the model selection and the tool selection becomes quite important, because you want to make sure the vendor you're working with is one you trust, because you want to be able to put sensitive data into these tools. You know, a lot of people have had this very generic blanket statement saying thou shalt not put confidential data into AI. And initially, when ChatGPT first came out, that was an absolutely good thing to tell people for, like, the first six months, because, first of all, ChatGPT was the only game in town for a while in terms of a broadly available tool, and secondly, there was zero protection whatsoever.
Amith: 23:17
Whether you believe OpenAI is a good company or not, in their terms of service they literally said they could use your content for model training. Now that changed and OpenAI shifted gears, and in the paid version, the Teams version, the Pro version, you can opt out of that. In fact, in our Teams setting, one of the reasons we do that is we set the flag that says you may not use any of our content for any purposes across our whole organization. We can set that policy once. And a lot of other companies, just by policy, do not do that at all. Anthropic, for example, doesn't use conversations to do model training. They use the feedback, in terms of good or bad, but they don't use the content itself.
Amith: 23:57
So you have to pick the vendor you're comfortable with, because if you limit yourself to only public domain information and you're not willing to put your confidential material in at all, the use cases, they narrow down quite quickly. So that's when it gets really important, when you want to essentially take these things from the lab into production mode. You have to have vendors that you can really rely on in terms of quality and consistency, but also security and safety, and that's a very important decision. And again, it doesn't necessarily have to be one vendor, but you have to know what your go-tos are and what the models are.
Amith: 24:32
We've talked a lot on this show about Groq and how they have really rapid inference. But the other thing we love about those guys is their commitment to security. They have a number of models to choose from. They're all open source models. They've actually taken DeepSeek and distilled it with Llama 3.3, and they have a very interesting reasoning offering too. And they're big on security and their data centers. You can choose to inference your workloads here in the United States. They have offshore data centers as well.
Mallory: 25:11
But that's one of many companies, right, that are options. People just think OpenAI, and that's as far as their brains go. Of course, that's beneficial for OpenAI as the first mover. Now, on to our second topic. Jevons Paradox occurs when technological advancements increase the efficiency of resource use, but instead of reducing overall consumption, this leads to an increase in demand and total resource utilization.
Mallory: 25:31
This concept was first observed and described by English economist William Stanley Jevons in 1865 when studying coal consumption patterns during the Industrial Revolution, which I did not know prior to this podcast. The paradox operates through two main channels. The first is that improved efficiency lowers the relative cost of using a resource, which increases the quantity demanded. The second is that efficiency gains increase real incomes and accelerate economic growth, further driving up resource demand. Now, the paradox has been observed in various sectors, and sometimes it helps to provide real-world examples to understand it. So within energy, despite improvements in fuel efficiency, gasoline demand has not decreased as expected. Instead, people have opted for larger vehicles and increased their driving distances. We see it in home energy as well: more efficient HVAC systems and windows have led to larger homes rather than reduced energy consumption. And then lighting: the introduction of energy-efficient LED bulbs has resulted in more widespread use of lighting, offsetting the potential energy savings.
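The two channels just described can be made concrete with a tiny constant-elasticity demand model. This is a toy sketch, not from the episode, and the price, demand, and elasticity values are assumptions chosen purely for illustration.

```python
def resource_use(efficiency: float, elasticity: float,
                 base_demand: float = 100.0, price: float = 1.0) -> float:
    """Toy Jevons model: total resource consumed at a given efficiency level."""
    # Channel 1: higher efficiency lowers the cost of each unit of useful service.
    cost_per_service = price / efficiency
    # Constant-elasticity demand: cheaper service means more service demanded.
    service_demand = base_demand * cost_per_service ** (-elasticity)
    # Resource consumed = service delivered divided by efficiency.
    return service_demand / efficiency

# Baseline at efficiency 1.0 uses 100 units. Doubling efficiency:
print(resource_use(2.0, elasticity=1.5))  # elastic demand: total use RISES (Jevons backfire)
print(resource_use(2.0, elasticity=0.5))  # inelastic demand: total use falls (real savings)
```

In this sketch, an efficiency gain only reduces total resource consumption when demand is inelastic (elasticity below 1); when demand is elastic, the gasoline, HVAC and LED patterns above reappear in the numbers.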
Mallory: 26:36
So what are the implications here? I think there are some good, some bad. On the bad side, obviously, there are environmental concerns. The paradox challenges the notion that technological efficiency alone can solve environmental issues. It suggests that efficiency gains must be coupled with other measures to achieve sustainability goals. But on the good side, on economic growth, while potentially problematic for resource conservation, we see that the paradox can drive economic expansion by making resources more accessible and more affordable. So why are we talking about it on this pod? Well, we know that it remains highly relevant today, particularly in discussions around energy efficiency, sustainability and emerging technologies like AI. Of course, as AI makes processes more efficient, it may paradoxically lead to increased demand for those processes, which potentially drives up resource consumption in unexpected ways. So I was not familiar with Jevons Paradox until the last few weeks, so this has been a learning experience for me, as it has been, I'm sure, for a few of our listeners. Amith, why do you think the term has become popular recently?
Amith: 28:42
So, Mallory, I think that when we're heading into the unknown and we have this unbelievable opportunity in front of us but it's unclear what it means, it's helpful, and sometimes it's instructive, to look back at similar patterns that we've seen. They may have a different magnitude, but they have similar patterns in the past. The ones you provided examples on, I think, are really good. Another example is just computing in general. The cost of computing has dramatically lowered year after year after year, and that's driven an enormous increase in demand. The broader pattern is when a resource is scarce, its consumption is necessarily limited, and the ultimate resource we have is human intellect. That's the ultimate resource we have as a species, more so than anything else. So up until now, if we wanted to create more intelligence, we needed to create more humans. Pretty simple, right? And then you had to spend 18 years, or longer, 22 years in some societies, training these people from birth through when they enter the workforce. And then you have more intelligence, and then it takes maybe another five or 10 years to get them into the mainstream of their career, and on and on. So it's a long-tail process, and obviously you have to get people to have kids, so there's a lot involved there. Now, the whole reason this is so crazy is if we really are on the cusp of unlocking human-level, or perhaps better, intelligence on average. I think some could argue that with o3 mini and models like that, which are performing better on PhD-level exams than 90% of PhDs, we're kind of there in some ways. But the point is, what in the world does this mean? This has implications in every aspect of society, of the economy. It's going to affect politics, it's going to affect entertainment, it's going to affect our lives and the way we raise our kids. It's going to affect everything.
So I think this has become a more popular phrase recently because it's instructive with regards to the pattern, and I think that here, with AI, we're taking a resource that's extremely scarce, which is human intellect, particularly in subdomains. Take accounting, for example. A lot of our friends in the association community are in the accounting world, and they talk broadly about how difficult it is to recruit new accountants into school, into the program and then into the CPA world. That's a great example of scarcity. You have a scarce resource that's driving up costs and limiting availability relative to the market's needs, and potentially AI can help solve for that. So the broader idea that is so incredibly compelling is the lower cost. When we say, hey, what's the resource? Is the resource legal advice, medical care, whatever it may be? If the cost can be lowered sufficiently so that it's essentially abundant and available for everyone, that's, of course, a massive quality-of-life improvement for everyone on planet Earth. And that's really the most compelling, the most exciting thing, because a lot of resource constraints have not only led to industries having challenges, but they've led to geopolitical tensions and sometimes armed conflicts, right? So if we can solve for a lot of these resource constraints and switch from a scarcity mindset to an abundance mindset, there's something really exciting there. The other thing is the compounding factor.
Amith: 32:11
In the talks that I give on the exponential era, I talk a lot about the convergence of multiple distinct exponential curves. Compute is one of them. AI is on another one that's a completely different level in terms of its speed. But we also have these curves happening in material science, innovation and energy and a number of other areas, and so, as they compound together, that creates this abundance scenario. That's quite exciting.
Amith: 32:38
What I would tell you, too, is that AI theoretically provides us unlimited intellect on tap to go solve the world's problems. Many of the world's problems are materials problems. Many of the world's problems are energy problems, right? And so those are things that get exciting. So, coming back to Jevons paradox, right now we know that more energy consumption, for example, is fundamentally a concern, because we have a limited amount of it, for one. And on top of that, we know that, generally speaking, when we consume more energy, it's causing problems for climate. But if we can solve for that, if we can create an abundance of clean energy over time, with AI helping a whole bunch, through materials discovery and fundamental science, whether it's fusion or SMRs, or better solar or better battery technology, all of these innovations in the physical world, coupled with AI, change the picture entirely.
Amith: 33:28
So, Mallory, just a quick comment that's related to this at the macro level. One of the ways to study this is to think about global gross domestic product, or the aggregate gross domestic product from all nations across the planet over a period of time. The short version of the story that I tell in much more detail when I give keynote talks on exponentials and AI and associations is that it took essentially the course of all of human history to get to the equivalent of, in today's dollars, a trillion dollars in global GDP, and that was through about the mid-1700s. Then the Industrial Revolution happened, and over about 200 years, we had a 10x increase in global GDP. Again, this is normalized. It's on a real basis, meaning an inflation-adjusted currency. So we went from a trillion in the 1700s to, around 1950, 10 trillion in global GDP. And then, from the 1950s through the early 2010s, we went from 10 to 100.
Amith: 34:29
And that was, of course, the computing revolution, and so what's happened here is exactly what you started this segment describing, which is that something that has gone from being incredibly scarce to being incredibly abundant drives massive increases in consumption. Which is actually fundamentally exciting, because there's a lot of disparity in quality of life and a lot of disparity in terms of access to services, like healthcare being one, and many others that I think AI is going to have a big effect on. So, I mean, every industry becomes a growth industry in a sense. Of course, there are all these questions about, well, with unlimited intellect on tap through AI, what does that mean for most of us? What do we do day to day? And I think there's both the scary side of that and the exciting part of that. It's, how do you add value on top of the fundamental compute that's happening, that really does represent real intelligence? So that's why I think this has become a bigger conversation recently. Plus, it's just kind of a fun term to throw around at a cocktail party.
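[Editor's note: the GDP milestones above imply sharply accelerating compound growth. A quick back-of-the-envelope check, using the round dollar figures and period lengths from the episode rather than precise historical data:]

```python
def implied_annual_growth(start: float, end: float, years: float) -> float:
    """Compound annual growth rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

# ~$1T (mid-1700s) to ~$10T (~1950): a 10x gain over roughly 200 years.
pre_computing = implied_annual_growth(1, 10, 200)    # ~1.2% per year
# ~$10T (1950) to ~$100T (early 2010s): another 10x over roughly 60 years.
computing_era = implied_annual_growth(10, 100, 60)   # ~3.9% per year
print(f"{pre_computing:.2%} vs {computing_era:.2%}")
```

Each 10x took roughly a third of the time of the previous one, which is the compression the "converging exponential curves" framing points at.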
Mallory: 35:23
Jevons paradox. I'm going to start using that one at the next party.
Amith: 35:27
I go to.
Mallory: 35:27
I feel like I've seen it in reference to DeepSeek's release and kind of like, oh, is there fear that one company or one model is going to destroy all the rest? But the idea with Jevons paradox is, well, no, as we keep progressing and innovating, the demand will go up as well. So I think that's interesting. I have concerns on the consumption side, like just humans as endless consumers. I like your positive spin on it in terms of abundance, and obviously the environmental concerns. I just think this is a really interesting paradox. But I like what you said about the converging exponential curves, so maybe we can hope that there's a chance that energy can kind of keep up with AI innovation.
Amith: 36:09
Well, when we think about the behavior and the decision making of each individual person, we say, well, that individual is, theoretically, at least when they're acting rationally, going to behave in their own self-interest. That's not always actually true, but in general there's some truth to that statement. And so, in their own self-interest, what are they going to do? They're going to ensure that their basic needs are met, for themselves and their family, and then go higher up the hierarchy of needs, essentially. And of course, as they get to the higher levels of that hierarchy, they're thinking, what does this mean for society, for the world? Am I being a good citizen and all that? But a very large percentage of the world isn't thinking at that higher-order level. So when they get access to services they haven't had access to, like the ones we're talking about, it's life-changing. It literally can be life-saving as well in some cases. So I think that has to be our priority. I'm not saying to hell with the environment at all, by the way. I'm extremely concerned about it. But I also feel that we're not moving fast enough. Even with all the brilliant minds that are deeply committed to solving climate problems, I don't know that we're moving fast enough to solve them without a lot of help from our friend AI. What gets me excited is the kind of stuff we talk about on this pod: in the material science realm, in terms of fundamental physics discoveries, things that we generally believe we're very close to unlocking with AI, which are going to be fundamental game changers in making things like energy effectively infinite, abundant and very low cost. And if we can do that, we can solve a lot of other problems downstream, because if you solve energy, everything else is fixable, right? You can solve water, you can solve materials, you can solve a lot of other problems if you can solve for energy.
So in the next 20 years, I think there's a very strong chance we can do that if we use a lot of AI. If we don't use a lot of AI, sure, I mean humans are amazing, we could solve it anyway, even if we take away compute, but I think it's much more likely that we come up with incredible solutions sooner and better and more affordably if we use a lot of AI. That's really the point I'm trying to make.
Amith: 38:06
The one thing I wanted to pivot on before we wrap up this whole Jevons paradox conversation is another implication for associations to be thinking about, which is their own products and services. Associations live in an environment where historically they have been, number one, the center of the universe. It's kind of nice to be an association in the old-school way, because you're the only game in town for the content and the community you provide. Of course that's no longer the case, but associations often do benefit from major brand strength as well as content and product strength in their fields. So the question is, okay, well, if great information and great connectivity is abundant and available anywhere, does that disintermediate the association from the content and from the community and make them less relevant? Or can the association find a way to jump on Jevons paradox and find ways to increase the abundance of their own content and products and services in the market? And there are lots of ways to think about that strategically. But be part of that revolution, be part of that abundance mindset, versus being displaced by it, because there's abundant high-quality stuff out there.
Amith: 39:15
I'll give you a great example. Let's say I'm in a particular branch of medicine and I have great content, great learning, a great community, but let's just say, in this particular example, I'm not embracing AI, and so my tools are kind of old school. I have online learning, but it's the same stuff we've had for 20 years, or at least 10. I have content, but it's, again, not particularly easy to navigate, it's hard to search, and it's all the usual stuff that people have challenges with. It might be unbelievably great people and great content, but it's just kind of hard to deal with, kind of hard to get to, and I haven't put AI on top of it, so I haven't made it accessible in the way people are quickly becoming accustomed to.
Amith: 39:58
Well, enter o3 mini, brand new and totally free and operating at better-than-PhD levels across not just your discipline, but all disciplines. So what's going to happen if someone has a question in your field and they can get a really, really excellent answer from o3 mini? Even if it's not using your content, if it's just using public-domain content, they're going to go there, because it's not only free but, more importantly, it's low friction. So my point is that that overabundance scenario, that Jevons paradox impact, is going to affect the economics of what you do.
Amith: 40:33
Just be part of it. It's an opportunity, and the way to be part of it is obviously to embrace AI, but then to think of ways to segment and differentiate different tiers of value creation. We talk a lot about this in the book, Ascend, and we talk about this a little bit in the AI Learning Hub content in the strategy course. But just the basics: content by itself probably isn't going to be a competitive differentiator for you, certainly not in five years, probably not even today. So how do you layer value on top of that? How do you monetize it when people are going deeper? And then how do you reach out to people and create an ecosystem that's far broader than you've been able to? It requires a new frame, a new lens, if you will.
Amith: 41:15
I find that really exciting. I also think it's going to crush a lot of the associations that are in the space, sadly, because many of them are not moving quickly enough. I talk to a lot of folks who still tell me today, when I ask them, hey, what's going on in your organization with AI, they're like, oh no, we're pretty much on top of it. We have three people, out of like 100-plus usually, that have been experimenting with ChatGPT. And I'm like, hey, that's cool, I'm glad you haven't blocked it, but what are you really doing? It's 2025, guys, we've got to get moving on this.
Mallory: 41:43
Yeah, yeah, I think, like Amith said, we're seeing Jevons paradox play out in the world, and I think we're at a really interesting, pivotal point where we can watch it play out in associations as well, where you can ride the wave of abundance or potentially be crushed by it. So you're already taking a step in the right direction by listening to the Sidecar Sync pod, and we will see you all next week.
Amith: 42:06
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.

February 13, 2025