If you’ve been tracking the world of large language models (LLMs) and generative AI, you’ve probably noticed the ground shifting beneath our feet, especially in enterprise adoption. I recently came across some fascinating insights that reveal a major shakeup in the LLM market halfway through 2025.
Here’s the scoop: while OpenAI once dominated enterprise usage, it’s now been overtaken by Anthropic, according to a report from Menlo Ventures. This shift signals not only a change in market leadership but also highlights evolving priorities around model capabilities, cost dynamics, and the emergence of what’s being called the “year of agents.” Let’s unpack what’s really going on.
Anthropic’s meteoric rise: why this challenger is winning the AI race
Not long ago, OpenAI controlled about half of enterprise LLM usage. Fast forward to mid-2025, and that share has shrunk to roughly a quarter. Meanwhile, Anthropic has surged ahead, claiming about 32% of enterprise usage, surpassing OpenAI and even Google.
What powered Anthropic’s rise? It boils down to a few key breakthroughs centered on the Claude model series, especially Claude 3.5 Sonnet, Claude 3.7 Sonnet, and the latest Claude Sonnet 4.
- Code generation is the first real killer app for AI. Claude quickly became a favorite among developers, capturing 42% of the market — twice the share of OpenAI’s models. This alone turned code generation from a niche product into a $1.9 billion ecosystem featuring AI-powered IDEs like Cursor and enterprise coding agents.
- Reinforcement learning with verifiable rewards (RLVR) is reshaping how model intelligence scales. Instead of simply feeding ever-larger models more data, this approach fine-tunes models against rewards that can be objectively checked, a natural fit for coding, where generated outputs can be run against tests.
- Training models as “agents” capable of step-by-step reasoning and tool usage is transforming usefulness. Unlike traditional LLMs that provide single-shot answers, these agents can perform tasks interactively, integrating external tools like calculators and search engines. Anthropic led this charge with its Model Context Protocol (MCP), greatly expanding functional capabilities and driving adoption.
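To make the RLVR idea from the list above concrete, here is a minimal sketch of what a verifiable reward for code generation could look like: run the model’s output against a unit test and return a binary reward. This is an illustrative toy, not Anthropic’s actual training setup; the function name and the pass/fail scoring are my own assumptions.

```python
import subprocess
import sys
import tempfile

def verifiable_reward(generated_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Toy RLVR-style reward: 1.0 if the generated code passes the tests, else 0.0.

    Runs the candidate program plus its tests in a subprocess so that a
    crash or failed assertion simply produces a zero reward.
    """
    program = generated_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout,
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # runaway code earns nothing
```

The key property is that the reward is computed by execution, not by human preference, which is exactly why coding was the first domain where this scaling recipe took hold.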
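The agent pattern described above, where a model reasons step by step and calls external tools, can be sketched as a simple reason-act loop. Everything here is a hypothetical stand-in: `model_call` is any function that returns either a `CALL tool: args` line or a final answer, and the tool registry holds a demo calculator. Real agent frameworks (and MCP itself) are far richer than this.

```python
import re

# Hypothetical tool registry; real agents would wire in search, code
# execution, MCP servers, etc. The eval here is for demo purposes only.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(model_call, user_query: str, max_steps: int = 5) -> str:
    """Drive a minimal reason-act loop: the model either emits a tool
    call of the form 'CALL <tool>: <args>' or a final answer; tool
    results are appended to the transcript as observations."""
    transcript = [f"User: {user_query}"]
    for _ in range(max_steps):
        reply = model_call("\n".join(transcript))
        match = re.match(r"CALL (\w+): (.+)", reply)
        if match is None:
            return reply  # no tool call, treat as the final answer
        tool, args = match.groups()
        observation = TOOLS[tool](args)
        transcript.append(f"Observation: {observation}")
    return "Stopped: step limit reached."
```

The difference from a single-shot completion is the loop: each tool result flows back into the context, letting the model condition its next step on real external data.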
Open-source models struggle to gain enterprise ground
While open-source LLMs like Meta’s Llama remain popular, their share of enterprise AI workloads has actually declined slightly — from 19% to 13% in just six months. Despite launches from DeepSeek, ByteDance, and others, these models continue trailing the closed-source frontier by roughly nine to twelve months in performance.
There are advantages to open-source, including greater customization and on-prem deployment options. But the complexity in deploying these models and concerns around trust (especially for models from some Chinese companies) have slowed their uptake. Enterprises and startups alike are sticking with closed-source models to ensure top-tier performance.
“Enterprises are consolidating their AI spend around a few high-performing, closed-source models, signaling a maturity in the market where performance outweighs cost concerns.”
Model upgrades beat switching: performance is king
Interestingly, switching between AI vendors is pretty rare nowadays. Instead, most enterprises and startups upgrade within their existing platforms to the newest model versions. For example, within a month of the Claude 4 release, 45% of Anthropic users migrated to the new model, while older versions rapidly lost share.
Performance is consistently prioritized over price or speed. Even as individual models drop sharply in cost, builders don’t use cheaper older models — they flock to the best-performing versions as soon as they’re available.
AI spending shifts gears: inference outpaces training
Another big trend is in how enterprises spend their AI compute budgets. There’s a clear shift from training models—which can be expensive and complex—to inference, where models are actually deployed and used in production.
Startups lead this trend, with 74% reporting that the majority of their compute usage is now for inference, up from 48% a year ago. Large enterprises are close behind, with nearly half of them saying most of their AI compute is dedicated to inference workloads.
What’s next for enterprise LLMs?
The pace of change in the AI market still feels dizzying, with new model breakthroughs, evolving economic models, and rapid shifts in what enterprises want driving constant flux. But it’s clear that we’re entering a phase ripe for building durable AI businesses on top of these foundational models.
A few things stand out to me from this mid-year update:
- Closed-source, high-performance models are winning enterprise trust and dollars. The gap between open vs. closed model performance and usability still matters a lot.
- Model capabilities are advancing along multiple dimensions, especially through agent architectures and reinforcement learning. This is expanding what AI can actually do.
- The economics of AI are shifting toward large-scale, inference-driven production use. This will likely influence infrastructure, tooling, and cost optimizations going forward.
As the landscape continues evolving, staying close to these trends is crucial — whether you’re building AI infrastructure, applications, or simply trying to navigate where value flows in the AI ecosystem.
Watching Anthropic’s ascent, the meaning of “agents,” and the ongoing tug-of-war between open and closed source has been genuinely eye-opening. It’s becoming clear that AI’s long game is not just about flashy breakthroughs — it’s about foundational shifts in how models are built, deployed, and monetized.