The specter of superintelligence, systems that surpass human intelligence, looms over enterprise operations and the software we rely on. Superintelligence offers a vision of systems capable of seamlessly managing workflows, anticipating market shifts, and optimizing operations without human oversight.
Yet enterprises remain woefully underprepared for superintelligence. HFS data shows that only 5% of firms (‘Pioneers’ in Exhibit 1) are making significant investments even in current AI capabilities—many are held back by a lack of digital and data maturity, fragmented and outdated architecture, and a lack of skills.
Sample: 550 enterprise leaders. Source: HFS Research, 2025
The potential for superintelligence comes hot on the heels of the arrival of GenAI and the still-emerging capabilities of agentic AI (Assemble the tech you need to shift AI from tool to agent). The thought of yet another AI breakthrough so soon is enough to set even the most change-hungry exec’s head spinning. Constant change is our new reality as timelines for the future continue to compress. On the plus side, at least superintelligence may be able to help us keep up with that constant change.
Superintelligence threatens to upend traditional tools and business models (Exhibit 2 defines how it varies from artificial general intelligence (AGI) and Ray Kurzweil’s notion of the singularity). The scaling laws AI developers have witnessed to date continue to drive exponential improvements in large language models (LLMs) and computational power. As both continue to grow, the emergence of systems surpassing human cognitive abilities becomes less hypothetical and more an imminent reality.
If achieved, superintelligence would surpass human intelligence in all respects, including creativity, problem-solving, and emotional intelligence. It would outperform humans in every cognitive task and reach levels of reasoning unimaginable to us. Such capabilities would make general-purpose superintelligent systems better at everything our current, narrower software solutions deliver.
Source: HFS Research, 2025—timeline predictions from Anthropic CEO Dario Amodei and The Singularity Is Nearer author Ray Kurzweil
Anthropic CEO Dario Amodei predicts that superintelligent systems could emerge within two to three years. If that is anywhere close to accurate, enterprise leaders must brace now for the transformational potential, while acknowledging the challenges of making it real. Even if Amodei's timing is wrong, the evolution of AI to date suggests such capability is becoming increasingly inevitable.
The implications for enterprise software are manifold: superintelligence promises to rewrite the rules of enterprise operations, forcing leaders to confront and balance the twin imperatives of rapid adaptation and business continuity.
But there’s a tension: while the technology races ahead, many enterprises are still grappling with the basics. Data silos persist, regulatory frameworks tighten, and cultural resistance undermines progress. Superintelligence may be barreling toward us, but the enterprise world isn’t built to absorb disruption at this speed.
Superintelligence offers extraordinary potential, but realizing it will be far from straightforward. The technology relies on data as its lifeblood, yet most enterprises remain bogged down in legacy systems and fragmented architectures (see Exhibit 3). These organizations aren’t prepared to deliver the quality or accessibility of data that superintelligence will demand.
Sample: 605 global enterprise executives. Source: HFS Research, Pulse 2025
Even if you are prepared for the data challenges, the promise of exponential improvement comes at an exponential cost, and the makers of superintelligence will seek a payback. Training these systems is computationally intense. It is also environmentally taxing. Enterprises increasingly tethered to ESG commitments must grapple with whether the carbon footprint of superintelligence is worth the trade-off.
Then there’s the human factor. Superintelligence challenges more than systems and processes—it threatens the very structure of organizations (read: Rethink enterprise ops as human-AI collaboration). Decision-making hierarchies found in the enterprise today are designed for gradual change, not real-time adaptation. Employees will question whether their expertise has a place in a world run by algorithms, while regulators must demand visibility into black-box decisions. The trust gap between AI outputs and human oversight remains too wide for many.
For all the talk of transformation, most enterprises remain locked in incrementalism. Sure, they’re experimenting with AI (see Exhibit 1)—integrating large language models here and automating tasks there. But superintelligence demands more than experimentation; it demands wholesale reinvention. Few organizations are prepared to make the leap.
Sample: 550 leaders from Global 2000 firms. Source: HFS Research, 2025
Culture is a significant barrier. Superintelligence forces a mindset shift from seeing AI as a tool to seeing it as a collaborator. That’s not an easy pivot. Employees fear displacement, middle managers resist relinquishing control, and leaders hesitate to dismantle structures that have delivered decades of success (see Exhibit 4). Even the most advanced technology risks being underutilized or rejected without a cultural transformation.
And then there’s governance. Emerging regulations such as the EU AI Act demand rigorous oversight of high-risk systems. Compliance isn’t just a box to tick; it’s a continuous, resource-intensive effort that could stifle innovation before it starts.
The future of enterprise software will be shaped not just by the pace of technological advancement but by the enterprise’s ability to keep up. Leaders must embrace the disruptive potential of superintelligence without losing sight of the practical realities.
Start by acknowledging the foundational gaps. Most organizations need to overhaul their data strategies, break down silos, and establish real-time flows before they can dream of superintelligent workflows. But this isn't just about infrastructure. It's about readiness at every level, from talent to technology to governance, and those debts must be paid (see Exhibit 3).
Proponents of LLMs argue that they represent a stepping stone to superintelligence, pointing to their rapid advances in natural language understanding and their potential for integration with other AI systems. Critics counter that without a fundamental shift in architecture and purpose, LLMs will remain sophisticated tools, not a path to superintelligence. Some observers believe that, in retrospect, we will consider today's smartest LLMs an early form of AGI, but that is not yet the majority view.
While LLMs are a milestone in AI progress, the leap from advanced pattern recognition to autonomous, superintelligent reasoning is vast and fraught with technical and philosophical challenges. In this context, the idea that we could close these gaps in as little as 'two to three years' may seem somewhat hyperbolic. We advise paying close attention, but don't bet the farm.
Superintelligence looks increasingly inevitable. The technology will reach maturity, but whether enterprises are prepared to harness it is a different story, and the timeline is far from certain. You can't afford to dismiss the hype, nor can you mindlessly follow it. A balanced approach that weighs your foundational readiness while exploring bold possibilities will separate the winners from the also-rans. Prepare your people, refine your processes, and build the trust you'll need to be on the front foot for superintelligence. Such preparations will serve you well no matter which twist AI takes next.