CES 2026 just wrapped in Las Vegas, and while the Consumer Electronics Show has historically been about TVs, phones, and gadgets, this year's biggest announcements weren't consumer products at all.
They were chips—the foundational infrastructure that determines how fast AI can advance.
For association leaders trying to plan AI strategy, understanding the chip landscape might feel distant from your day-to-day work. But whoever controls the chips controls the pace of AI progress. These announcements are setting the ceiling for what's possible over the next several years.
Nvidia Sets the Tone
Nvidia CEO Jensen Huang delivered a keynote that framed the entire show. His company—now worth approximately $4.5 trillion—announced the Vera Rubin platform, their next-generation AI supercomputer named after the astronomer who discovered evidence of dark matter.
The platform represents what Nvidia calls "extreme co-design" across six chips working together as a unified system. Rather than optimizing individual components, Nvidia designed the entire system as one coherent machine.
The specs matter: Roughly 5x the performance of their current Blackwell chips, with AI processing at about one-tenth the token cost. For organizations running AI at scale, that cost reduction is significant.
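To make that concrete, here's a rough back-of-the-envelope sketch in Python. The token volume and price per million tokens are purely hypothetical placeholders, not figures from Nvidia or any provider; only the "one-tenth the cost" multiplier comes from the keynote claim.

```python
# Back-of-the-envelope illustration only: the token volume and price below are
# hypothetical placeholders, not actual Nvidia or cloud-provider figures.

monthly_tokens = 500_000_000        # hypothetical: tokens an organization's AI tools process per month
price_per_million_tokens = 10.00    # hypothetical: dollars per million tokens at today's rates
claimed_cost_reduction = 10         # "about one-tenth the token cost," per the keynote claim

current_bill = monthly_tokens / 1_000_000 * price_per_million_tokens
projected_bill = current_bill / claimed_cost_reduction

print(f"Hypothetical current monthly bill:   ${current_bill:,.0f}")
print(f"Same workload at one-tenth the cost: ${projected_bill:,.0f}")
```

The same arithmetic runs in reverse, too: a flat AI budget buys roughly ten times the usage.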
But perhaps more important was Huang's warning: AI's next breakthroughs will be limited by compute infrastructure, not ambition. Demand is outpacing supply.
His framing was striking—$10 trillion of legacy computing is now being modernized to AI-native systems. That's a massive infrastructure shift happening largely out of public view.
The Groq Deal Signals Where Things Are Heading
This isn't just conference rhetoric. Just before CES, Nvidia struck a $20 billion licensing deal with Groq (the one with the Q), the company specializing in inference chips.
While technically a licensing arrangement, most of Groq's executive team—including the founder and CEO—moved to Nvidia in senior roles. The deal signals Nvidia's recognition that inference-specific chips (running AI models, as opposed to training them) represent a critical piece of the puzzle they needed to strengthen.
Why does this matter? The more inference there is—the more people actually using AI—the more demand there is for training the next generation of models. And the more powerful those models become, the more demand for inference. It's a flywheel, and Nvidia is positioning itself on both sides.
AMD Fires Back
Lisa Su, CEO of Nvidia's main competitor AMD, delivered the official CES opening keynote with ambitious claims of her own.
She announced the Helios platform—AMD's answer to Nvidia's data center dominance—calling it "the world's best AI rack" in a direct shot at her competitor.
The numbers she projected were striking: AMD's next-gen MI500 chips, expected in 2027, will deliver up to 1,000x performance improvement over their MI300X chips from 2023.
Su also predicted 5 billion people will be using AI daily within five years, which would require a 100x increase in global computing capacity. Whether or not those exact numbers materialize, the underlying point stands: we're building toward a future that requires dramatically more compute than exists today.
Why This Pace Matters
Moore's Law, the observation that transistor counts (and with them, computing power) double roughly every two years, has driven technological progress for more than five decades. What's happening now makes Moore's Law seem almost quaint by comparison.
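A rough arithmetic sketch shows why. It simply takes the AMD projection quoted above (1,000x between 2023 and 2027) at face value and compares the implied doubling time to Moore's Law's roughly two years.

```python
import math

# Rough comparison of doubling times. The 1,000x and four-year figures come from
# the AMD projection quoted above, taken at face value for illustration.

moores_law_doubling_years = 2.0

claimed_improvement = 1_000            # projected MI300X -> MI500 gain
years_elapsed = 2027 - 2023            # four years

doublings = math.log2(claimed_improvement)            # ~10 doublings
implied_doubling_years = years_elapsed / doublings    # ~0.4 years

print(f"Moore's Law doubling time:  {moores_law_doubling_years:.1f} years")
print(f"Implied doubling time:      {implied_doubling_years:.2f} years "
      f"(about {implied_doubling_years * 12:.0f} months)")
```

Whether or not that exact figure holds, the gap between a two-year doubling and a months-long doubling is the point.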
The implications extend beyond raw speed.
Modalities we consider hard to access, like real-time video generation or fluid interaction with live avatars, are slow and clunky today because they're computationally expensive. When inference costs drop by 10x and speeds increase by 5x, applications that seem futuristic become practical.
Video generation tools like Google's Veo 3 are already impressive; imagine them running in real-time. The companies making these announcements are essentially predicting use cases we haven't yet imagined. They're betting that if they build the infrastructure for effectively unlimited intelligence, applications will emerge to use it.
What This Actually Means for Associations
Here's the practical reality: these announcements are exciting, but they shouldn't cause anxiety about your current AI strategy.
Most associations have significant headroom with today's AI capabilities. The technology available right now—Claude, ChatGPT, Gemini, and countless specialized tools—far outstrips what most organizations are actually using.
The constraint isn't the technology. It's learning how to apply it effectively.
This leads to an important reframe: AI pilots and proofs of concept are more about teaching your organization than proving the technology works.
Historically, when you've piloted new technologies, you've been testing whether the tech was capable enough to meet a business need. With AI, it's almost guaranteed that in one, two, or three years, the technology will be sufficiently powerful to solve almost anything you can imagine.
So what are you actually proving with a pilot? You're learning how your organization needs to adapt. You're discovering what workflows need to change. You're building internal expertise and muscle memory.
The Real Challenge
This is harder than it sounds.
We all form deep channels in our thinking—assumptions about what's possible, patterns of behavior that drive our next decisions. Those assumptions often become outdated faster than we realize.
Even organizations at the cutting edge of AI adoption regularly discover that approaches from six months ago are no longer optimal because the underlying technology has shifted.
The chip wars matter because they're setting the trajectory. Understanding that trajectory helps you plan and helps you recognize when your assumptions need updating.
But don't let the pace of infrastructure advancement distract you from the work available today. There's enormous opportunity sitting right in front of you—regardless of whether Nvidia or AMD wins the next round.
The question isn't whether the technology will be capable enough. It will be.
The question is whether your organization will be ready to use it.
January 19, 2026