7 min read
Amith Nagarajan
Updated on June 13, 2025
This isn’t about the band. But yes, it’s a little bit rock and roll.
When you hear "AC/DC," your first thought might be blasting guitar riffs, not electrical engineering rivalries. But rewind to the late 19th century, and "AC vs. DC" meant something very different: a fierce technological battle between two of the era's most brilliant minds. On one side stood Thomas Edison, a relentless advocate for direct current (DC); on the other, Nikola Tesla, champion of alternating current (AC). Spoiler alert: Tesla was largely right, and AC won for utility-scale power transmission. But not before Edison, driven by commercial interests, clung to a technically inferior approach for far too long.
Why bring this up now? Because we’re at another inflection point, this time in the realm of artificial intelligence. I started down this rabbit hole after reflecting on how even the smartest people can fall prey to their own blind spots when the ground is shifting beneath them. The story of Edison’s refusal to adapt felt eerily familiar to what I see happening today as organizations wrestle with AI.
There’s a crucial lesson here: don’t let yesterday’s logic dictate today’s choices. Much like Edison’s battle to preserve DC, today’s resistance to new business models, federated learning, and open-source AI solutions often has more to do with protecting a legacy than with what's best for the future. If we’re not careful, we could miss out on the full promise of this transformational technology.
Modern "DC arguments" in AI: Commercial interests masquerading as technical positions
OpenAI's shift from open to closed source mirrors Edison's protectionist stance. Despite its initial mission emphasizing openness, OpenAI transitioned to a closed approach once commercial pressures mounted. Evidence suggests open models like Meta's Llama can achieve comparable performance with greater transparency and lower resource requirements. In fact, the leaked Google "no moat" memo acknowledged open-source models were "faster, more customizable, more private, and pound-for-pound more capable" than proprietary alternatives.
Cloud providers' lock-in strategies represent another clear example. Major cloud platforms (Google Cloud, Microsoft Azure, Amazon AWS) aggressively promote their proprietary, cloud-based AI services despite evidence that on-premise solutions often provide better control, customization, and potentially lower long-term costs. The UK Competition and Markets Authority found that cloud vendors deliberately create interoperability barriers and charge enormous egress fees to maintain market control rather than technical necessity.
Resistance to federated learning persists despite its privacy advantages. Major AI companies continue advocating for centralized data collection while slow-walking federated approaches. Google’s own research demonstrates federated learning’s effectiveness for protecting privacy while training models, yet its implementation remains selective across Google’s product portfolio. Like Edison clinging to DC power despite AC’s advantages, companies resist federated learning primarily because centralized data collection provides valuable data assets they can monetize beyond the original application.
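The core idea of federated learning is simple: instead of shipping everyone's raw data to a central server, each participant trains locally and shares only model updates, which the server averages. A minimal, hypothetical sketch of that averaging loop (a toy linear model trained with federated averaging, not any vendor's actual implementation) looks like this:

```python
# Toy sketch of federated averaging (FedAvg): clients train on private
# data locally and share only model weights, never the data itself.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's local training: a few gradient steps on its private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients whose private data follow the same underlying relation y = 3x.
rng = np.random.default_rng(0)
w_global = np.zeros(1)
for _ in range(20):  # communication rounds
    updates, sizes = [], []
    for n in (50, 150):  # unequal client dataset sizes
        X = rng.normal(size=(n, 1))
        y = 3 * X[:, 0]
        updates.append(local_update(w_global, X, y))
        sizes.append(n)
    w_global = federated_average(updates, sizes)

print(round(float(w_global[0]), 2))  # converges toward the true coefficient 3.0
```

The privacy benefit is structural: the server only ever sees weight vectors, so the raw observations never leave each client. That is precisely what makes the approach unattractive to firms whose business model depends on holding the centralized data.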
Google's public dismissal of open-source threats
Google's private acknowledgment of open-source models' technical advantages reveals a classic commercially motivated position. A senior Google engineer admitted in a leaked memo that open-source models were achieving with "$100 and 13B params what we struggle with at $10M and 540B." Yet Google continues promoting its proprietary approaches, protecting billions invested in AI infrastructure that would be devalued if open-source alternatives became dominant.
True, Google does release some very small open-source models, but its frontier AI remains closed-source.
In comparison, firms like Meta, Mistral, and DeepSeek, among hundreds of others, are publishing open-source, or open-weights, versions of their models. In most cases, the licenses for these models are permissive, allowing any type of use, including commercial use and derivative works (though this is not 100% the case for Meta).
Firms that are publishing open-source models aren’t doing so out of the goodness of their hearts; it is a strategic business model choice. By releasing software and AI models as open source, a large community of users and developers can benefit and also add value back to the platform.
Meta has a long history of doing this and has benefited immensely from broad adoption of technologies like React and PyTorch. Both are the most broadly adopted software tools in their respective categories. The compounding benefit for Meta has been enormous. The same can be said for many other open-source projects.
This isn’t about philosophy; it is about a business model. The open vs. closed source debate highlights how our business models can be turned on their heads. If we remain open to it, we can find new ways to create value for our communities and for our own organizations.
Historical patterns of technological resistance beyond Edison
Edison's resistance to AC wasn’t an isolated incident. Throughout history, commercial interests have consistently opposed superior technologies when financial stakes were high.
Specific challenges for associations and nonprofits in AI adoption
Associations and nonprofits face unique obstacles when navigating AI adoption decisions:
Frameworks for identifying technological blind spots
Organizations can deploy specific methodologies to avoid commercially-driven blind spots:
Where commercial interests create resistance in current AI advancement
Several key tensions exist in today’s AI landscape where commercial interests potentially impede technically superior approaches:
How nonprofits can navigate commercial-driven blind spots
Successful organizations are employing specific strategies to navigate these challenges:
Warning signs that you're hearing a "DC argument"
Based on historical patterns, certain indicators can help identify commercially motivated resistance to beneficial AI technologies:
Conclusion: Learning from Edison's Mistake
Edison’s defense of DC power despite AC’s clear advantages represents a cautionary tale about how commercial interests can create technological blind spots. In today’s AI landscape, similar dynamics are playing out as established players resist open-source models, federated approaches, and on-premise solutions that might threaten their business models but offer technical advantages.
For associations and nonprofits navigating AI adoption, recognizing these commercially-driven positions is crucial. By applying structured evaluation frameworks, engaging diverse perspectives, implementing incremental approaches, and leveraging collaborative resources, organizations can make sound technology decisions that advance their missions rather than reinforcing commercial interests.
The lessons from Edison’s “War of Currents” remain relevant today: technical merit should guide adoption decisions, not commercial entrenchment. By understanding both historical patterns and current AI dynamics, mission-driven organizations can avoid Edison’s mistake—resisting superior approaches due to commercial investments—and instead harness AI’s full potential to advance their important work.