This isn’t about the band. But yes, it’s a little bit rock and roll.
When you hear "AC/DC," your first thought might be blasting guitar riffs, not electrical engineering rivalries. But rewind to the late 19th century, and "AC vs. DC" meant something very different: a fierce technological battle between two of the era's most brilliant minds. On one side stood Thomas Edison, a relentless advocate for direct current (DC); on the other, Nikola Tesla, champion of alternating current (AC). Spoiler alert: Tesla was largely right; AC won for utility-scale power transmission. But not before Edison, driven by commercial interests, clung to a technically inferior approach for far too long.
Why bring this up now? Because we’re at another inflection point, this time in the realm of artificial intelligence. I started down this rabbit hole after reflecting on how even the smartest people can fall prey to their own blind spots when the ground is shifting beneath them. The story of Edison’s refusal to adapt felt eerily familiar to what I see happening today as organizations wrestle with AI.
There’s a crucial lesson here: don’t let yesterday’s logic dictate today’s choices. Much like Edison’s battle to preserve DC, today’s resistance to new business models, federated learning, and open-source AI solutions often has more to do with protecting a legacy than with what's best for the future. If we’re not careful, we could miss out on the full promise of this transformational technology.
Modern "DC arguments" in AI: Commercial interests masquerading as technical positions
OpenAI's shift from open to closed source mirrors Edison's protectionist stance. Despite its initial mission emphasizing openness, OpenAI transitioned to a closed approach once commercial pressures mounted. Evidence suggests open models like Meta's Llama can achieve comparable performance with greater transparency and lower resource requirements. In fact, the leaked Google "no moat" memo acknowledged open-source models were "faster, more customizable, more private, and pound-for-pound more capable" than proprietary alternatives.
Cloud providers' lock-in strategies represent another clear example. Major cloud platforms (Google Cloud, Microsoft Azure, Amazon AWS) aggressively promote their proprietary, cloud-based AI services despite evidence that on-premise solutions often provide better control, customization, and potentially lower long-term costs. The UK Competition and Markets Authority found that cloud vendors deliberately create interoperability barriers and charge enormous egress fees to maintain market control rather than technical necessity.
Resistance to federated learning persists despite its privacy advantages. Major AI companies continue advocating for centralized data collection while slow-walking federated approaches. Google’s own research demonstrates federated learning’s effectiveness for protecting privacy while training models, yet its implementation remains selective across Google’s product portfolio. Like Edison clinging to DC power despite AC’s advantages, companies resist federated learning primarily because centralized data collection provides valuable data assets they can monetize beyond the original application.
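To make the federated idea concrete, here is a minimal sketch of federated averaging (the core algorithm behind Google's federated learning research). All data and numbers below are hypothetical and illustrative: each client fits a tiny one-parameter model on its own private data, and only the trained weights, never the raw data, are sent to the server for averaging.

```python
# Minimal federated averaging (FedAvg) sketch with hypothetical data.
# Each client trains a one-parameter linear model y = w * x locally;
# only the weight w leaves the device, never the raw examples.

def local_train(w, data, lr=0.01, epochs=50):
    """Gradient descent on one client's private data; returns the weight."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=10):
    """Server loop: broadcast the global weight, collect locally trained
    weights, and average them weighted by each client's dataset size."""
    total = sum(len(d) for d in client_datasets)
    for _ in range(rounds):
        updates = [local_train(global_w, d) for d in client_datasets]
        global_w = sum(w * len(d) / total
                       for w, d in zip(updates, client_datasets))
    return global_w

# Three clients whose private data all roughly follow y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.1), (4.0, 12.0)],
    [(0.5, 1.4)],
]
w = fed_avg(0.0, clients)
print(round(w, 1))  # converges near the shared slope of 3
```

The point of the sketch is the data flow: the server sees weights, not examples, which is exactly the property centralized data collection gives up.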
Google's public dismissal of open-source threats
Google’s private acknowledgment of open-source models’ technical advantages, set against its public posture, reveals a classic commercially motivated position. A senior Google engineer admitted in a leaked memo that open-source models were achieving with "$100 and 13B params what we struggle with at $10M and 540B." Yet Google continues promoting its proprietary approaches, protecting billions invested in AI infrastructure that would be devalued if open-source alternatives became dominant.
True, Google does release some very small open-source models, but its frontier AI is closed-source.
In comparison, firms like Meta, Mistral, and DeepSeek, among hundreds of others, are publishing open-source, or open-weights, versions of their models. In most cases, the licenses for these models are permissive, allowing any type of use, including commercial use and derivative works (though this is not 100% the case for Meta).
Firms that are publishing open-source models aren’t doing so out of the goodness of their hearts; it is a strategic business model choice. By releasing software and AI models as open source, a large community of users and developers can benefit and also add value back to the platform.
Meta has a long history of doing this and has benefited immensely from broad adoption of technologies like React and PyTorch. Each is among the most widely adopted software tools in its category, and the compounding benefit for Meta has been enormous. The same can be said for many other open-source projects.
This isn’t about philosophy; it is about a business model. The open vs. closed source debate highlights how our business models can be turned on their heads. If we remain open to it, we can find new ways to create value for our communities and for our own organizations.
Historical patterns of technological resistance beyond Edison
Edison's resistance to AC wasn’t an isolated incident. Throughout history, commercial interests have consistently opposed superior technologies when financial stakes were high.
- The Horse Association of America's fight against tractors (1920s–1940s) demonstrates how entrenched industries mobilize against disruptive technologies. Despite tractors offering 75% higher productivity, the HAA spent decades spreading misinformation, claiming tractors "ruined nearly every farmer" while evidence showed annual savings of $318 per farmer. This resistance significantly delayed agricultural modernization.
- The butter industry’s 80-year campaign against margarine included lobbying for prohibitive taxes, enforcing unappealing color requirements (pink, red, or black dye), and restricting its use in state institutions. Despite margarine’s advantages in cost, shelf life, and accessibility for lower-income consumers, dairy interests maintained legal barriers until the 1960s.
- The QWERTY vs. Dvorak keyboard layout shows how network effects and switching costs can preserve inferior technologies. Despite Dvorak’s proven efficiency advantages (70% of keystrokes on home row vs. 32% for QWERTY), the established standard persisted through subtle resistance from manufacturers and organizational inertia.
- The music industry’s resistance to digital formats demonstrates how industries can damage themselves through resistance. By aggressively fighting digital distribution through lawsuits, DRM restrictions, and refusing to create viable alternatives, record companies ceded control to technology platforms like Apple and Spotify.
Specific challenges for associations and nonprofits in AI adoption
Associations and nonprofits face unique obstacles when navigating AI adoption decisions:
- Resource constraints create significant barriers. According to the TechSoup/Tapp Network 2025 Report, 30% of nonprofits with annual budgets under $500,000 cite financial limitations as their primary obstacle to AI adoption. More concerning, 43% rely on just 1–2 staff members to manage all IT decisions, creating a critical expertise gap.
- Mission alignment issues require special consideration. NTEN’s AI Framework emphasizes that AI decisions must reflect an organization’s core values. Unlike for-profit entities, nonprofits must evaluate AI not just on efficiency but on how it advances social impact goals, which are often more challenging to quantify.
- Stakeholder trust and transparency concerns are heightened in mission-driven organizations. The Fundraising.AI framework notes that "using AI ethically is not a technical challenge but a leadership imperative" for nonprofits, as they face enhanced scrutiny from donors, beneficiaries, and communities.
- Digital divide between organizations is widening. Larger nonprofits (budgets exceeding $1 million) are adopting AI at nearly twice the rate of smaller organizations (66% vs. 34%), creating a growing technology gap within the sector.
Frameworks for identifying technological blind spots
Organizations can deploy specific methodologies to avoid commercially driven blind spots:
- The AI Blindspot Framework from MIT Media Lab provides a structured approach to identifying potential oversights in AI evaluation. The framework examines factors like abusability, discrimination by proxy, and optimization criteria, emphasizing that "AI blindspots are universal—nobody is immune to them—but harm can be mitigated if we intentionally take action."
- NIST’s AI Risk Management Framework offers a comprehensive approach through four core functions: Govern, Map, Measure, and Manage. It emphasizes that "risk management must be an ongoing process throughout the entire AI lifecycle" and requires multidisciplinary teams to avoid blind spots.
- The 3-Layer Evaluation Framework assesses AI across capability (technical performance), human interaction (contextual use), and systemic impact (broader effects). Blind spots often emerge at the intersection of these layers, particularly when technical capabilities are evaluated in isolation.
- Independent verification processes are crucial for separating marketing claims from reality. Expert recommendations include third-party audits, custom validation with organization-specific data, and comparative benchmarking against multiple vendor solutions.
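The last point, custom validation and comparative benchmarking, can be sketched in a few lines. Everything below is hypothetical: the "vendor" classifiers stand in for real model APIs, and the tickets stand in for an organization's own labeled data. The idea is simply to score every candidate on the same organization-specific validation set rather than trusting published benchmark numbers.

```python
# Hypothetical comparative-benchmarking sketch: evaluate each candidate
# vendor/model on the SAME organization-specific validation set.

def accuracy(predict, validation):
    """Fraction of validation examples the model labels correctly."""
    correct = sum(1 for text, label in validation if predict(text) == label)
    return correct / len(validation)

# Stand-in "vendor" classifiers; in practice these would be API calls.
def vendor_a(text):
    return "urgent" if "asap" in text.lower() else "routine"

def vendor_b(text):
    return "urgent"  # a model that only looks good on skewed benchmarks

# Hypothetical member-support tickets with ground-truth labels.
validation = [
    ("Need the grant report ASAP", "urgent"),
    ("Please update my mailing address", "routine"),
    ("Server is down, fix asap!", "urgent"),
    ("Question about next month's webinar", "routine"),
]

scores = {name: accuracy(fn, validation)
          for name, fn in [("vendor_a", vendor_a), ("vendor_b", vendor_b)]}
print(scores)
```

Even a toy harness like this surfaces the gap between marketing claims and performance on your own data, which is the whole point of independent verification.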
Where commercial interests create resistance in current AI advancement
Several key tensions exist in today’s AI landscape where commercial interests potentially impede technically superior approaches:
- Open vs. closed AI models represent perhaps the clearest parallel to Edison’s DC stance. While closed models from OpenAI and Anthropic received $37.5 billion in funding (compared to $14.9 billion for open-source developers since 2020), performance gaps are narrowing rapidly. Epoch AI research shows "the best open model today is on par with closed models in performance, but with a lag of about one year."
- On-premise vs. cloud-based solutions create tension between vendor revenue models and organizational needs. Cloud providers prefer subscription-based models over one-time sales and benefit from customer dependency on their ecosystems. Meanwhile, on-premise solutions offer advantages in latency, security, and specific compliance scenarios that cloud vendors often downplay.
- Centralized vs. federated approaches highlight the conflict between data control and privacy. Companies resist federated learning despite its privacy benefits primarily because it reduces their ability to aggregate and monetize user data beyond the original application, similar to how Edison resisted AC despite its technical advantages.
- Short-term profit vs. long-term advancement creates perhaps the most fundamental tension. As one innovation survey found, only 35% of organizations focus on long-term impactful ideas, while commercial pressures favor immediate returns over fundamental improvements.
How nonprofits can navigate commercial-driven blind spots
Successful organizations are employing specific strategies to navigate these challenges:
- Start with a clear AI policy before implementing specific tools. Both NTEN and NetHope emphasize the importance of establishing governance structures aligned with organizational values as the first step in AI adoption.
- Diversify assessment teams to minimize blind spots. Include domain experts, technical specialists, end users, and stakeholders from diverse backgrounds in technology evaluation processes. This multidisciplinary approach helps identify issues that might be invisible to homogeneous teams.
- Implement structured due diligence protocols for evaluating AI vendors. Develop standardized questionnaires covering technical, ethical, and operational aspects, and request documentation of risk management practices and bias mitigation strategies.
- Focus on mission-critical applications where AI can create immediate value. Organizations like HIAS and One Acre Fund succeeded by applying AI to specific problems—refugee resettlement and farmer communication, respectively—where it directly advanced their missions.
- Adopt incremental implementation approaches. According to the TechSoup/Tapp Network report, successful nonprofits start with one process that could benefit from automation or data-driven insights rather than attempting comprehensive AI adoption.
- Leverage collaborative resources rather than going it alone. Industry collaboratives like Fundraising.AI, NTEN, and NetHope provide frameworks, policies, and learning communities that individual organizations can draw on instead of creating resources from scratch.
Warning signs that you're hearing a "DC argument"
Based on historical patterns, certain indicators can help identify commercially motivated resistance to beneficial AI technologies:
- Disproportionate focus on edge cases while ignoring overall benefits mirrors how the Horse Association emphasized tractor failures while ignoring productivity gains.
- Moving goalposts in criticism as technologies improve resembles how margarine critics shifted from safety to health to aesthetic concerns as earlier arguments were addressed.
- Exaggeration of transition costs while minimizing long-term efficiency gains follows the pattern seen in resistance to LED lighting adoption.
- Regulatory asymmetry that imposes stricter rules on new technologies while leaving existing ones with similar risks less regulated is a common commercial protection strategy.
- Incompatibility claims that prove technically unfounded upon closer examination echo how typewriter manufacturers resisted the Dvorak keyboard layout.
Conclusion: Learning from Edison's mistake
Edison’s defense of DC power despite AC’s clear advantages represents a cautionary tale about how commercial interests can create technological blind spots. In today’s AI landscape, similar dynamics are playing out as established players resist open-source models, federated approaches, and on-premise solutions that might threaten their business models but offer technical advantages.
For associations and nonprofits navigating AI adoption, recognizing these commercially driven positions is crucial. By applying structured evaluation frameworks, engaging diverse perspectives, implementing incremental approaches, and leveraging collaborative resources, organizations can make sound technology decisions that advance their missions rather than reinforcing commercial interests.
The lessons from Edison’s “War of Currents” remain relevant today: technical merit should guide adoption decisions, not commercial entrenchment. By understanding both historical patterns and current AI dynamics, mission-driven organizations can avoid Edison’s mistake—resisting superior approaches due to commercial investments—and instead harness AI’s full potential to advance their important work.

June 12, 2025