Economist Philip Trammell and writer Dwarkesh Patel recently published an essay that's been making the rounds in policy and technology circles. The premise is uncomfortable: what if the rules that have governed wealth and work for centuries are about to change?
The essay revisits Thomas Piketty's famous argument that wealth inequality spirals upward unless policy intervenes, because returns to capital tend to outpace economic growth. Most economists concluded Piketty was wrong about the past. But Trammell and Patel ask a different question: what if he turns out to be right about the future?
For associations—organizations built entirely on the premise that human expertise creates value—this line of thinking deserves attention. Not because the theory is certain to prove true, but because the questions it raises are already relevant to decisions you're making today.
How Capital and Labor Have Always Worked Together
To understand what might be changing, it helps to understand what's been stable.
Economists typically talk about three inputs to economic production: land, labor, and capital. Land is relatively fixed. But the relationship between labor and capital has driven most of the economic story of the past few centuries.
Here's how that relationship has traditionally worked: when wealthy people accumulate capital—factories, equipment, machines, tools—it actually raises wages for workers. That sounds counterintuitive, but the logic is straightforward. Workers become more productive when they have better tools. A construction worker with an excavator moves more earth than one with a shovel. A financial analyst with spreadsheet software processes more data than one with a ledger book.
More hammers make hands more valuable.
This creates a self-correcting mechanism. Capital accumulation doesn't just benefit capital owners. It benefits workers too, because their labor becomes more productive and therefore more valuable. The rich get richer, but wages also rise. Inequality exists, but it doesn't spiral without limit.
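For readers who want the textbook version, this mechanism falls out of a standard production function. Here is a minimal sketch using the Cobb-Douglas form, our illustration rather than anything taken from the essay:

```latex
% Cobb-Douglas production: output Y from capital K and labor L,
% with technology level A (illustrative, not from the essay).
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1

% In competitive markets, the wage equals labor's marginal product:
w = \frac{\partial Y}{\partial L} = (1-\alpha)\,A\left(\frac{K}{L}\right)^{\alpha}

% K sits inside the wage formula: more capital per worker, higher pay.
```

The key feature is that capital appears inside the wage. Accumulate more K, and w mechanically rises.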
This dynamic has held remarkably steady across centuries of technological change. Steam engines, electricity, computers, the internet—each wave of innovation created new forms of capital, but labor remained essential to activate that capital and capture its value. You still needed people.
What Changes with AI
The essay's central argument is that AI could break this self-correcting mechanism.
Previous technologies complemented human labor. They made workers more productive, which made workers more valuable. But AI and advanced robotics have the potential to substitute for human labor rather than complement it.
When capital can substitute for labor, the economic dynamics flip. Robots don't need wages. AI systems don't need benefits, breaks, or career development. If capital can generate returns without requiring human workers to activate it, then accumulating more capital no longer raises the value of labor. It just generates more returns for capital owners.
Put differently: capital has historically been like a hammer that needed a hand to swing it. But AI creates the possibility of hammers that swing themselves. In that world, owning hammers matters a lot. Being a hand matters less.
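The same framework shows exactly what breaks. A minimal sketch using the standard CES (constant elasticity of substitution) production function, again our illustration rather than the essay's own notation: when capital becomes a perfect substitute for labor, capital drops out of the wage entirely.

```latex
% CES production (illustrative): sigma = 1/(1-rho) is the elasticity
% of substitution between capital K and labor L.
Y = A\left[\alpha K^{\rho} + (1-\alpha)L^{\rho}\right]^{1/\rho}

% In the limit rho -> 1 (perfect substitutes: hammers that swing
% themselves), production becomes linear,
Y = A\left[\alpha K + (1-\alpha)L\right]

% and the wage is a constant that no longer depends on K at all:
w = \frac{\partial Y}{\partial L} = (1-\alpha)\,A
```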
The essay argues that if this substitution becomes sufficiently complete, whoever owns capital when the transition occurs could stay wealthy indefinitely—without employing humans at all. The self-correcting mechanism disappears. Inequality doesn't just grow; it compounds without limit.
Signs Already Visible
You don't have to accept the full theory to notice trends pointing in this direction.
AI wealth is concentrating in private markets that most people can't access. The leading AI companies—xAI, Anthropic, OpenAI—have remained private even as their valuations have soared into the hundreds of billions. You can't buy shares in these companies through a typical 401(k) or brokerage account. The returns are flowing to venture capital firms, sovereign wealth funds, and a small circle of institutional investors.
This matters because it means the economic gains from AI development are accruing to a narrow slice of the population before most people even have a chance to participate. By the time these companies go public—if they go public—much of the value creation will have already happened.
There's also a global dimension. For decades, developing countries grew faster than wealthy ones by importing capital and know-how to make their labor productive. Factory jobs moved to lower-wage countries because labor there was cheaper but could be made productive with the right equipment and training. This was a pathway out of poverty for hundreds of millions of people.
If capital substitutes for labor, that pathway narrows or closes. Why move a factory to a lower-wage country when you can automate it entirely? The mechanism that allowed poorer nations to catch up economically could weaken precisely when AI makes capital more powerful than ever.
From Country Gaps to Career Gaps
The essay focuses on inequality between nations and between capital owners and workers. But the same logic applies at smaller scales.
If AI adoption gaps persist between countries, they could become permanent economic gaps. The Microsoft AI diffusion report shows this is already happening—adoption in wealthy countries is growing nearly twice as fast as in developing ones.
The same dynamic plays out between organizations. Companies that deploy AI effectively will find efficiencies and capabilities that competitors miss. Over time, those advantages compound. Early adopters pull ahead; late adopters fall further behind.
And it plays out between individual professionals. Within any given field, practitioners who learn to work effectively with AI will have advantages over those who don't. They'll be faster, more capable, more valuable. The patterns being set today—who's learning these tools, who's avoiding them—have consequences that extend well beyond the next quarter or the next year.
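The compounding is easy to underestimate. A back-of-envelope illustration with invented numbers, not figures from the essay or the Microsoft report: an organization that improves its productivity 5% a year through AI, competing against one improving 1% a year, opens a gap of nearly half within a decade.

```latex
% Illustrative only: 5% vs. 1% annual productivity improvement
% compounds to a roughly 47% gap after ten years.
\left(\frac{1.05}{1.01}\right)^{10} \approx 1.47
```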
The Case for Slowing Down
Reading all this, you might wonder: if AI is heading somewhere potentially destabilizing, shouldn't we slow down?
In theory, yes. A coordinated global slowdown would allow time for study, collaboration, and thoughtful policy development. We could hold hands across the world and make sure AI develops in a way that lifts people up rather than concentrating wealth and power.
In 2023, a group of industry leaders signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. The signatories included researchers, executives, and public intellectuals. Their concerns were legitimate. The letter made sense in principle.
But here's the problem: there's no mechanism for that kind of global cooperation. No authority exists that can tell every company, every country, every researcher to stop and wait. The incentives all point toward racing ahead. Any nation or company that pauses unilaterally just falls behind while others accelerate.
The letter's signatories knew this. The pause never happened. It was never going to happen. The request was reasonable; the probability of compliance was near zero.
This creates an uncomfortable reality. The concerns about AI's trajectory are valid. The desire to slow down and figure things out is understandable—even rational in a theoretical sense. But absent global coordination that isn't coming, slowing down just means being left behind by those who don't.
The practical conclusion is counterintuitive: AI adoption needs to speed up, not slow down, precisely because others won't wait.
What This Means for Associations
Associations exist to serve professionals. They represent labor—specifically, skilled labor that has traditionally commanded premium wages because of specialized expertise, credentials, and capabilities.
The entire model of professional development, continuing education, and credentialing assumes that human labor creates value. You invest in learning new skills because those skills make you more valuable in the labor market. You earn certifications because they signal competence that employers will pay for. You join a professional association because it helps you build capabilities and connections that advance your career.
If the relationship between capital and labor shifts in the direction the essay describes, these assumptions need reexamination. Not because they're suddenly wrong, but because the ground beneath them is moving.
What should associations do with this information?
Study the labor economics of your sector. What percentage of tasks in each role can current AI automate? What about the next generation of AI? No job is purely a collection of tasks, but mapping tasks to automation potential gives you a picture of where your profession is headed. Some roles will see 20% of tasks automated. Others might see 80%. The implications for training, credentialing, and career pathways differ dramatically. (A rough sketch of this mapping exercise follows these recommendations.)
Focus reskilling efforts on moving up the value chain. Associations have always helped members develop new capabilities. That work is more urgent than ever. The question is which capabilities. If AI can handle routine analysis, research, and content generation, what human skills become more valuable? Judgment, relationship-building, ethical reasoning, creative problem-solving, the ability to ask the right questions—these are areas where investment makes sense.
Engage your board in these conversations. Most association staff come from operational backgrounds, not economics or labor policy. But your volunteer leadership often includes people with deep expertise in your sector. They're living through these changes in their own careers. Bring them into strategic conversations about AI's impact on your profession. They have insights that staff alone won't generate.
Help members understand what's happening. The absence of knowledge leads to fear, but it also leads to poor decisions. Professionals who don't understand how AI is reshaping their field can't make informed choices about their own development. Associations can provide the context and analysis that individual members can't easily access on their own.
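To make the task-mapping exercise from the first recommendation concrete, here is a minimal sketch in Python. Every role, task, time share, and automatable fraction below is a hypothetical placeholder; the point is the structure, a time-weighted exposure score per role, not the numbers.

```python
# Sketch: time-weighted automation exposure per role.
# All roles, tasks, shares, and automatable fractions are hypothetical
# placeholders; replace them with your own sector's task inventory.

ROLES = {
    "financial_analyst": [
        # (task, share of work hours, estimated AI-automatable fraction)
        ("data gathering",       0.30, 0.9),
        ("routine reporting",    0.25, 0.8),
        ("client communication", 0.25, 0.2),
        ("judgment calls",       0.20, 0.1),
    ],
    "compliance_officer": [
        ("document review",      0.40, 0.7),
        ("regulator liaison",    0.30, 0.1),
        ("policy drafting",      0.30, 0.4),
    ],
}

def exposure(tasks):
    """Share of a role's work hours that AI could plausibly automate."""
    return sum(share * automatable for _, share, automatable in tasks)

for role, tasks in ROLES.items():
    print(f"{role}: {exposure(tasks):.0%} of work hours exposed")
```

Even a toy version of this exercise forces the useful question: which tasks, not which jobs.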
Reasons for Optimism
The picture painted so far is fairly sobering. But there's another side worth acknowledging.
We've never been able to predict the jobs of the future. Work that exists today would have been unimaginable 30 years ago, let alone 100. Podcast producer, social media manager, user experience designer, cloud architect—none of these roles existed in their current form a generation ago. The economy has consistently generated new forms of valuable work even as old forms disappeared.
Humans are adaptive. We figure out how to create value in new ways. The agricultural share of employment collapsed over the past century, but we didn't end up with mass permanent unemployment. We found other things to do—things that would have been hard to envision from the vantage point of 1900.
The angst about AI and labor is healthy. It drives adaptation. Societies that worry about these transitions tend to handle them better than societies that don't. Complacency is more dangerous than concern.
But adaptation requires engagement, not avoidance. The professionals and organizations that will thrive are those learning these tools now, not those waiting to see what happens.
The Path Forward
The Trammell and Patel essay may or may not prove accurate. Economic predictions over long time horizons are notoriously unreliable, and this one depends on assumptions about AI capabilities that remain uncertain.
But you don't have to accept every element of the theory to find value in its framing. The questions it raises—about the relationship between human skills and technological capabilities, about who captures the gains from AI, about what associations should be doing to prepare their members—are relevant regardless of which specific predictions prove correct.
The desire to slow down and get this right is understandable. In an ideal world, we'd take the time to study these transitions carefully and develop thoughtful responses. But we don't live in that world. We live in one where the technology is advancing rapidly and coordination is elusive.
Associations that study the labor economics of their sector, engage their boards in substantive conversations about AI's implications, and help their members move up the value chain will be better positioned regardless of how the future unfolds. Those that wait for certainty before acting may find that the window for effective action has passed.
The countries racing ahead on AI adoption aren't ignoring the risks. They're engaging with them while building capacity. Associations should do the same.
January 28, 2026