Enterprise leaders navigating the complexities of AI adoption should pay close attention to a seismic shift in the AI market—one where model efficiency and architecture innovation are challenging the ever-scaling GPU strategies of the tech giants and hyperscalers. While the recent buzz around Chinese entrant DeepSeek’s groundbreaking model architecture has grabbed the headlines, it also underlines why rising US star Writer’s focus on enterprise-ready AI deserves more attention than it may have received.
DeepSeek has set a new industry benchmark by prioritizing efficiency and accessibility over raw computational force. Its radical optimizations, such as reducing numerical precision to cut memory usage by 75%, using multi-token prediction for faster processing, and activating only the parameters relevant to each token (37 billion of 671 billion), have reportedly slashed training costs from around $100 million to roughly $6 million. These innovations reduce the demand for brute compute power, challenging GPU leader Nvidia's dominant market position and disrupting the cost barriers that have traditionally made advanced AI development the exclusive preserve of the tech giants.
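The arithmetic behind those efficiency claims is easy to sketch. The following is an illustrative back-of-envelope calculation, not DeepSeek's actual engineering: the byte sizes assume a simple 32-bit versus 8-bit weight format, whereas real mixed-precision training schemes are considerably more nuanced.

```python
# Illustrative sketch: how lower numerical precision and sparse expert
# activation shrink memory and per-token compute. Parameter counts are
# the publicly reported figures; byte sizes are simplifying assumptions.

TOTAL_PARAMS = 671e9   # reported total parameters
ACTIVE_PARAMS = 37e9   # reported parameters activated per token

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return params * bytes_per_param / 1e9

fp32 = weight_memory_gb(TOTAL_PARAMS, 4)  # 32-bit floats: 4 bytes each
fp8 = weight_memory_gb(TOTAL_PARAMS, 1)   # 8-bit floats: 1 byte each

print(f"FP32 weights: {fp32:,.0f} GB")
print(f"FP8 weights:  {fp8:,.0f} GB ({1 - fp8 / fp32:.0%} smaller)")

# Mixture-of-experts routing: only a fraction of parameters do work
# for any given token, cutting per-token compute proportionally.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%} of parameters")
```

Under these simplified assumptions, dropping from 32-bit to 8-bit weights yields exactly the 75% memory reduction cited above, and the sparse activation means only about 5.5% of the model's parameters are exercised per token.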
The open-source nature of DeepSeek’s methods democratizes access to cutting-edge AI capabilities. DeepSeek enables smaller enterprises to innovate without requiring billion-dollar budgets or hyper-specialized infrastructure. For enterprise leaders, this signals a more competitive landscape with accelerated AI democratization, but it also raises questions about whether your investments are sufficiently agile to keep pace.
Writer’s Palmyra X 004, a 120-billion-parameter general-purpose model released last year, exemplifies the same resource-efficient principles driving DeepSeek’s hype, but it was built with enterprises in mind. Trained at a GPU cost of just $700,000 (compared with DeepSeek’s reported $6 million), Palmyra is purpose-built for agent development, API function calling, multimodal processing, and custom enterprise AI applications. Yet, because Writer chose not to pursue consumer-facing hype cycles akin to ChatGPT or DeepSeek, Palmyra has largely flown under the radar. Read more about Writer’s approach in our November 2024 report: Writer’s self-learning LLM highlights new emphasis on cutting the cost of enterprise AI.
The lesson for enterprise leaders is clear: Don’t equate the lack of consumer headlines with a lack of innovation. Palmyra represents a critical step forward for companies seeking to embed generative AI into their enterprise workflows. It offers a full-stack platform that balances scalability with transparency—a core demand from regulators and stakeholders alike.
We expect a rapid impact on which LLMs enterprises use. When we surveyed 540 enterprise leaders in October 2024 (see Exhibit 1), Palmyra did not yet register. The market leaders must now respond to this new pressure to cut costs, or risk being unseated.
* Others include MPT (Mosaic), Grok (xAI), Coral (Cohere), Mixtral (Mistral), Inflection, Vicuna, and Character.ai
Note: Percentages don’t add up to 100% as most enterprises are working across multiple LLMs
Sample: 540 executives across the Global 2000 enterprises
Source: HFS Research, 2025
As generative AI technologies proliferate, enterprise leaders face an urgent challenge: moving beyond experimentation to scalable, ethical, and cost-efficient deployments. The contrast between Writer’s Palmyra and DeepSeek underscores the need for enterprises to scrutinize AI vendors not just on their technical achievements but on their alignment with enterprise priorities: cost efficiency, transparency, security, privacy, and domain specificity. And to be clear: DeepSeek stores the information it collects in data centers in China.
However, enterprises must be receptive to the possibilities of less GPU-heavy approaches. The era of brute-forcing AI development with unlimited hardware is waning as innovations in model architecture, such as those seen in Palmyra and DeepSeek, offer a roadmap for sustainable growth. Leaders must evaluate whether their current AI strategies reflect these shifts—or risk falling behind in an increasingly democratized AI ecosystem.
Of course, we should also bear in mind that there is little to prevent leading AI firms from applying architectural innovation to their own advanced AI programs—to deliver the benefits of both greater compute AND greater efficiency. We expect responses from Microsoft, Google, Meta, Inflection AI, Anthropic, and others in due course.
AI innovation is moving toward smarter, more efficient architectures that prioritize cost, accessibility, and enterprise applicability over sheer processing power. Writer’s Palmyra model showcases the type of resource-efficient, enterprise-ready solutions enterprises need, even if it hasn’t garnered the consumer-driven hype of competitors such as DeepSeek or ChatGPT. Enterprise leaders must focus AI investments on platforms that align with their specific use cases while also preparing for a new era of democratized innovation, where agility and transparency—not just brute computational force—can deliver success in enterprise use cases.
Now is the time to ask: Is your AI strategy ready for this new paradigm? Don’t let the headlines distract you; invest in solutions designed for enterprise-scale transformation.