Things are heating up between two giants in the AI world: OpenAI, the creator of ChatGPT, and Anthropic, the company behind Claude. What started as a behind-the-scenes dispute has turned into one of the most public tensions in the AI industry to date. At the heart of this conflict is a serious accusation—Anthropic alleges that OpenAI used its proprietary coding tools and cloud APIs in ways that violate service agreements during the development of GPT-5.
The crux of the controversy: misuse of Anthropic’s APIs
Reports indicate that Anthropic’s main grievance lies in how OpenAI accessed Claude: not to casually benchmark performance, which is common practice, but for extensive internal testing while fine-tuning GPT-5. According to those reports, OpenAI engineers weren’t using the standard consumer-facing chat interface but instead relied on developer APIs that enable large-scale automated testing. These tests ranged from coding tasks and creative writing to highly sensitive prompts involving subjects like child sexual abuse material (CSAM), self-harm, and defamation.
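To make the distinction concrete, here is a minimal sketch of what "large-scale automated testing through a developer API" looks like in practice, as opposed to one-off queries in a chat window. The endpoint and headers follow Anthropic’s publicly documented Messages API; the prompt suite, model name, and helper function are illustrative placeholders, not details from the reports.

```python
"""Illustrative sketch: programmatic benchmarking via a developer API.
The endpoint and header format follow Anthropic's public Messages API;
the model name, API key, and prompts below are placeholders."""
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str,
                  model: str = "claude-3-5-sonnet-latest",
                  api_key: str = "YOUR_API_KEY") -> urllib.request.Request:
    """Assemble one Messages API call. A benchmark harness builds
    thousands of these, one per test prompt."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

# A benchmark suite is just a list of prompts swept programmatically --
# this automated scale, rather than manual chat use, is what the
# reported dispute centers on.
coding_suite = [
    "Write a Python function that reverses a linked list.",
    "Explain the time complexity of binary search.",
]
requests = [build_request(p) for p in coding_suite]
# Actually sending each request (urllib.request.urlopen(req)) is
# omitted here; doing so in bulk is the contested usage pattern.
```

Nothing about this mechanism is exotic; the same few lines that let a researcher run one comparison also let a team sweep an entire evaluation suite, which is why the line between benchmarking and training-adjacent use is hard to police technically.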
Anthropic believes these tests went beyond comparison; they were allegedly used to train GPT-5 itself. From their perspective, this isn’t a simple misunderstanding but a breach of trust and of the agreed policies designed to protect intellectual property. The company’s terms explicitly prohibit using its tools to build competing AI systems, a restriction Anthropic says OpenAI violated.
“OpenAI’s use of Claude’s coding tools during GPT-5 development is seen by Anthropic as a clear breach of usage agreements.”
Christopher Noli, an Anthropic spokesperson, noted that Claude’s coding capabilities have become a popular choice among developers, and that some use by OpenAI’s technical team was expected, just not at this intensity. In response, Anthropic revoked OpenAI’s developer-level API access, blocking the powerful coding and creative features that may have played a role in shaping GPT-5. However, Anthropic still allows OpenAI limited access solely for safety benchmarking and comparative testing, keeping a delicate balance between collaboration and protection.
OpenAI’s measured response and recurring patterns from Anthropic
OpenAI replied with a tone of disappointment but also respect for Anthropic’s policies. Their chief communications officer noted that cross-evaluation of AI models is industry standard and critical for progress and safety improvements. Though they disagree with the level of restriction imposed, OpenAI is treading carefully, signaling a desire to avoid escalating the dispute.
This isn’t Anthropic’s first time acting decisively to protect its assets. Earlier this year, they cut off Claude API access to Windsurf, a startup known for AI coding tools, amid rumors OpenAI was set to acquire Windsurf. That move was also driven by fear of indirect OpenAI access to Claude through a backdoor. Jared Kaplan, Anthropic’s Chief Science Officer, pointed out how unlikely it is for them to cooperate openly with OpenAI, stressing their assertiveness in guarding their intellectual property.
What this means for AI’s competitive and ethical landscape
The controversy brings to light how blurry the line is between legitimate benchmarking and competitive exploitation. In an industry where developing state-of-the-art models like GPT-5 entails huge costs and strategic leverage, access to rival APIs becomes a hotly contested battleground. As companies like OpenAI and Anthropic race to lead, the stakes over who can use what tools—and for what purpose—are higher than ever.
Without universal, enforceable standards, each company is left defining and defending its own boundaries. What one sees as fair and standard practice, another may deem an outright violation. This incident might signal a move away from open benchmarking toward siloed environments guarded by strict access controls.
“The future of AI could see collaboration give way to secrecy, impacting transparency, safety, and fairness across the field.”
This trend could accelerate innovation for individual players but risks slowing overall progress, especially in crucial areas like safety and ethics, where cross-model comparisons are essential. Language models like Claude and GPT-5 are no longer just tech projects—they’re strategic assets that blur the lines between technical prowess, legal rights, and political maneuvering.
Key takeaways to keep in mind
- Benchmarking in AI is no longer just about performance metrics—it carries strategic and ethical weight.
- Access to proprietary tools and APIs is becoming a major point of contention and competition in AI development.
- The AI industry urgently needs clearer norms on research boundaries, intellectual property, and fair use.
Looking ahead
This unfolding drama between OpenAI and Anthropic is about more than just code. It highlights a shifting landscape where the battle for AI dominance also involves data rights, access control, and ethical boundaries. As GPT-5’s release draws near and Claude continues to evolve, the industry—and the world—will be watching how these two players navigate their complex relationship.
Will we see clearer, unified rules emerge? Or will AI progress become a fragmented race locked behind closed doors? It’s a pivotal moment, and understanding these dynamics helps us appreciate how AI’s future is shaped not only by innovation but by trust, competition, and the politics of tech.
Stay tuned, because this story is far from over.


