Enterprise leaders like you have operated comfortably within a relatively stable social contract since the end of World War II: Governments set the rules, capital provided the fuel, and labor turned the growth engine. Firms were welcome to operate, attract consumers, and reap dividends in return for jobs and tax revenue. That bargain is breaking down.
AI is restructuring the foundations of value creation, workforce design, and economic legitimacy. This could lead in one of three directions, each outlined in the scenarios we explore in this paper: techno-feudalism, democratic AI commons, or algorithmic corporatism. What you decide today will shape what comes next, and the role your organization will play in it.
Enterprises once needed large teams of people to do, well, pretty much anything: innovate, produce, sell, and deliver. Now, a few hundred engineers working with LLMs and agents can unlock billions in market cap. The world's most valuable firms today often have the fewest employees per dollar of revenue: OpenAI carries a valuation of $300bn with a headcount of 4,500, while Accenture has a market cap of under $200bn and a headcount of around 775,000. Scaling revenue without huge headcounts is becoming commonplace (see Exhibit 1).
Source: HFS Research, 2025. Examples are illustrative of a broader trend.
As AI decouples economic growth from labor (as captured in the HFS 2028 Tech-Services Vision, in which labor arbitrage is replaced by tech arbitrage—see Exhibit 2), the political and social systems that held post-war capitalism together begin to wobble, and labor's role as producer shrinks.
Source: HFS Research, 2025
Some governments, recognizing this disruption, are beginning to realign—shifting from a post-industrial, pro-market stance to a more interventionist posture in defense of labor, sovereignty, and trust (enshrined in the EU's AI Act, for example). Others are aligning themselves with capital, offering greater freedom to tech leaders, as seen in US support for the $500bn Stargate initiative.
This moment demands more than tactical AI adoption; it runs deeper. This shift can either happen to you, or you can help shape it. You are no longer just choosing between cloud vendors or model architectures. You are navigating three possible futures—each with radically different implications for how firms create value, compete, and justify their existence to society (see Exhibit 3).
By exploring these three futures, you can prepare for each.
The rapid advance of AI has created a new kind of value chain—one that replaces labor arbitrage with technology arbitrage. At its apex sit the digital lords: hyperscalers, frontier model owners, and platform monopolies. These players don't need large workforces or even massive consumer markets to thrive. Their power lies in owning the pipelines—data, infrastructure, and models.
This represents a new kind of feudalism. AI enables revenue without large headcounts; platforms generate rents from digital tenants (users, developers, enterprises) who pay to access compute, intelligence, or reach. With a handful of players concentrating control, enterprise innovation risks being stifled unless leaders break free of dependency.
AI creates the possibility of an economy where production and consumption are orchestrated with minimal human involvement. For many firms, this may feel like liberation from ever-spiraling headcount, but for the societies in which you operate, it’s likely to be destabilizing.
In contrast to feudal control, an alternative is emerging: the democratic AI commons, rooted in transparency, participation, and collective ownership of AI infrastructure. This scenario imagines AI not as a proprietary, rent-extracting tool but as a public good developed through partnerships between governments, citizens, open-source communities, and enterprises.
This future gains credibility as countries such as France, India, and Japan invest in sovereign AI programs and public digital infrastructure. Organizations such as Hugging Face and Stability AI push for transparency, auditability, and accessibility. The promise is that AI becomes a multiplier of social inclusion rather than inequality.
If you want to make this happen, it will take more than ethical posturing. Get busy co-creating with communities, aligning with open standards, and contributing to public AI stacks. In this scenario, firms are not only profit engines but also good digital citizens accountable to broader societal interests.
Between unregulated techno-feudalism and AI democracy lies a more nuanced and necessarily negotiated path: algorithmic corporatism. In this model, states and corporations jointly manage AI development through formalized collaboration, infrastructure partnerships, and regulatory frameworks.
Signs of this future are already here:
In this world, AI development is not left to either the market or the state—it is co-governed. Enterprises participate in setting norms and earn their license to operate. The upside is stability, inclusion, and legitimacy. The cost? Greater accountability and limits on pure profit maximization.
Source: HFS Research, 2025
This is not the time to freeze in the headlights; you need to be at the steering wheel. Now is the moment you must:
AI is the engine of growth, but leadership is the steering wheel. Choose your road before it chooses you.