Sidecar Blog

OpenAI Embraces 'Open' AI—What It Means for Associations

Written by Mallory Mejias | Aug 19, 2025

OpenAI—the company that arguably put the "closed" in closed-source AI—just released their first open models in six years. As HubSpot Co-Founder and CTO Dharmesh Shah excitedly shared on LinkedIn, the entire 120 billion parameter model fits on a $15 USB stick. Let that sink in: One of the most sophisticated AI models ever created is small enough to carry in your pocket.

But this isn't just about OpenAI making headlines. It's about a fundamental shift in how associations can deploy AI—one that puts control, cost savings, and data sovereignty directly in your hands.

OpenAI Returns to Its Roots

Understanding the gpt-oss Models

Earlier this month, OpenAI released two open-weight language models: gpt-oss-120B (approximately 117 billion parameters) and gpt-oss-20B (20 billion parameters). For context, "open" here means open-weight: OpenAI has released the model weights, though not the training data or code used to create them.

The technical requirements are surprisingly modest. The smaller 20B model can run on consumer laptops with 16GB of RAM. The larger 120B model needs a high-end Nvidia GPU (on the order of 80GB of memory) or a powerful Mac to run smoothly. Both models are specifically designed for tool-calling and agentic use cases, meaning they excel at interacting with other software and executing multi-step tasks rather than general conversation.

These models come with Apache 2.0 licensing, which means you're free to use them commercially, modify them, and redistribute them without paying OpenAI a cent.

Why This Reversal Matters

OpenAI's last open model was GPT-2 in 2019. Back then, the company famously delayed the full release of GPT-2, citing concerns that the model was too dangerous to share publicly. The organization worried about misuse, from generating fake news to automating spam.

Fast forward to 2025, and Sam Altman admits they were "on the wrong side of history" with their closed approach. This isn't just a minor policy adjustment—it's a fundamental acknowledgment that open source AI is essential for the technology's development and democratization.

The shift signals something crucial: Even the market leader recognizes that the future of AI isn't monopolistic control but distributed innovation.

The Bigger Picture: Open Source AI Landscape

The Competition Driving Change

OpenAI isn't entering an empty field. They're responding to intense competition from global players who've been pushing open source boundaries:

Meta's Llama 4 series has been available for months, though reception has been mixed compared to their earlier releases.

Alibaba's Qwen3 models deliver 90-95% of frontier model capabilities at a fraction of the cost, becoming increasingly popular for practical applications.

DeepSeek from China continues releasing powerful models that rival Western alternatives.

Mistral, the French AI company, continues to release open-weight models that offer impressive performance for their size.

Moonshot AI's Kimi K2 has gained attention for exceptional code generation capabilities.

This explosion of options means associations aren't dependent on any single provider. The competition has fundamentally changed AI economics.

The New Economics of AI

Here's what's transforming the landscape: Open source models typically cost just 3-20% of what frontier models charge to run. When multiple providers compete to host the same model, whether fast inference specialists like Groq and Cerebras or aggregators like OpenRouter, prices drop even further.

This commoditization is driving inference costs toward zero. Some providers now offer millions of tokens for pennies, making previously expensive AI applications suddenly affordable for associations.

The speed improvements are equally dramatic. Groq, for instance, can run open source models at hundreds of tokens per second—far faster than traditional API services. For associations processing thousands of member queries or documents, this speed difference translates to real operational improvements.
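To make that math concrete, here's a quick back-of-envelope comparison. The workload size and the prices below are illustrative assumptions, not quotes from any particular provider:

```python
# Back-of-envelope cost comparison for a hypothetical association workload.
# All figures are illustrative assumptions, not current provider pricing.

QUERIES_PER_MONTH = 50_000   # assumed member-query volume
TOKENS_PER_QUERY = 1_500     # assumed average prompt + response size

FRONTIER_PRICE = 10.00               # assumed $ per million tokens
OPEN_PRICE = FRONTIER_PRICE * 0.10   # ~10%, mid-range of the 3-20% figure


def monthly_cost(price_per_million_tokens: float) -> float:
    total_tokens = QUERIES_PER_MONTH * TOKENS_PER_QUERY
    return total_tokens / 1_000_000 * price_per_million_tokens


print(f"Frontier model:    ${monthly_cost(FRONTIER_PRICE):,.2f}/month")
print(f"Open model (~10%): ${monthly_cost(OPEN_PRICE):,.2f}/month")
```

For this hypothetical workload, the same 75 million tokens cost $750 a month at frontier pricing and $75 with a competitively hosted open model.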

What "Running AI Locally" Actually Means for Associations

The Practical Reality

Running AI locally means the model lives on your hardware—your laptop, your servers, your infrastructure. No data travels to OpenAI, Google, or anyone else. No monthly API bills accumulate. The AI works even if your internet connection fails.

You can download these models today from Hugging Face, install them on appropriate hardware, and start using them immediately. Tools like LM Studio make this process accessible even for non-technical users; a minimal code sketch follows the list below. Consider these practical applications where local AI makes sense:

Member service agents handling sensitive information: When members share personal financial data or health information, keeping that data on your servers isn't just preferable—it might be legally required.

Document processing for confidential materials: Board meeting minutes, strategic plans, or member complaints can be analyzed without external exposure.

Industry-specific tools: Train models on your sector's terminology, regulations, and best practices without sharing that valuable knowledge with tech companies.

Compliance-heavy applications: When data residency laws require information to stay within specific geographic boundaries, local AI is your only option.
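To ground the LM Studio workflow mentioned above: LM Studio can expose a local, OpenAI-compatible server on your machine, so a few lines of standard client code are enough to query a model that never touches the internet. A minimal sketch, assuming gpt-oss-20B is already downloaded and the local server is running at its default address (the model identifier may differ in your installation):

```python
# Minimal sketch: querying a locally hosted gpt-oss model through
# LM Studio's OpenAI-compatible server. Assumes the local server is
# enabled (default: http://localhost:1234/v1) and gpt-oss-20B is
# already downloaded. No data leaves your machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local server, not api.openai.com
    api_key="lm-studio",                  # placeholder; local servers ignore it
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # check the exact identifier in your LM Studio catalog
    messages=[
        {"role": "user", "content": "Summarize this board memo in three bullets: ..."},
    ],
)
print(response.choices[0].message.content)
```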

The Trade-offs 

The Challenges Are Real

The gpt-oss models come with 49-53% hallucination rates on OpenAI's own factual-recall benchmarks. Not so good, right?

Well, not exactly. These models aren't meant to be knowledge repositories like ChatGPT. If you ask them typical factual questions, they'll be wrong about half the time, which sounds terrible until you understand their actual purpose.

These models were built specifically for tool-calling and agentic use cases. Think of them as the brain that decides which tools to use and when, not as the encyclopedia that knows all the answers. When you tell a gpt-oss model "here's a web search tool, use it whenever someone asks a question," it will reliably use that tool—even when other models might ignore the instruction and try to answer from their own knowledge.

For associations building AI agents or automated workflows, this focused design is actually an advantage. You want a model that consistently follows instructions and uses your tools (like searching your member database or calling your APIs) rather than one that tries to answer from potentially outdated internal knowledge. The high "hallucination" rate simply means the model wasn't trained to memorize facts—it was trained to be excellent at deciding what to do and which tools to invoke.
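Here's what that pattern looks like in practice. This is a sketch, assuming the same local OpenAI-compatible server as above; the member-search tool and its schema are hypothetical examples of what an association might wire in:

```python
# Sketch of the tool-calling pattern the gpt-oss models are designed for:
# the model decides WHEN to call your tool; your code actually runs it.
# Assumes a local OpenAI-compatible server; search_members is hypothetical.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "search_members",  # hypothetical association tool
        "description": "Search the member database by name or interest.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Which members work in healthcare policy?"}],
    tools=tools,
)

# Instead of answering from memory, a tool-focused model returns a
# structured call for your code to execute against real data.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```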

Reminder! Running local AI means you're responsible for everything: hardware, updates, security, and troubleshooting. When OpenAI improves their cloud models, your local version doesn't automatically upgrade. You need technical expertise or willing partners to manage this infrastructure.

The compute requirements, while modest by AI standards, still exceed typical office equipment. That MacBook Pro with 16GB RAM? It'll run the model, but slowly. Serious deployment requires investment in appropriate hardware.
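A little back-of-envelope math shows why. Assuming roughly 4-bit quantized weights, about half a byte per parameter, the weights alone eat most of a 16GB machine before activations and context enter the picture:

```python
# Rough memory math for the gpt-oss models, assuming ~4-bit quantized
# weights (about 0.5 bytes per parameter). Real deployments need extra
# headroom for activations, context, and the operating system.
for name, params_billions in [("gpt-oss-20B", 20), ("gpt-oss-120B", 117)]:
    weights_gb = params_billions * 0.5  # billions of params x 0.5 bytes/param = GB
    print(f"{name}: ~{weights_gb:.0f} GB for the weights alone")
```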

The Path Forward

OpenAI joining the open source movement validates what many have predicted: The future of AI is hybrid, with both closed and open models serving different purposes. For associations, this creates options that didn't exist six months ago.

The winner isn't open source or closed source—it's having the choice. Even if you never run a local model, understanding this shift helps you negotiate with vendors, evaluate proposals, and make informed decisions about your AI future.

Perhaps the most revolutionary aspect isn't the technology itself but what it represents: AI is becoming truly accessible to everyone, not just big tech companies. Your association can now own and control AI capabilities that would have seemed impossible just years ago.

Take time to explore the new gpt-oss models. Run some experiments. Talk to peers about their experiences. The open source AI revolution isn't coming—it's here. The question isn't whether to pay attention, but how to thoughtfully integrate these new possibilities into your association's future.