

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation
An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.
OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
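For readers curious what "plugging Claude into internal tools over the API" looks like in practice, it is essentially a loop over benchmark prompts against the SDK. A minimal sketch, assuming the official anthropic Python package; the model ID, prompts, and scoring are illustrative, not OpenAI's actual harness:

    import anthropic

    # Hypothetical eval prompts; a real harness would load a full benchmark suite.
    EVAL_PROMPTS = [
        "Write a Python function that parses an ISO 8601 timestamp.",
        "Summarize the plot of Moby-Dick in two sentences.",
    ]

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    for prompt in EVAL_PROMPTS:
        msg = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model ID
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        # Log the completion for side-by-side comparison with your own model's output.
        print(msg.content[0].text)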
Oh No. How Terrible. (Score:5, Insightful)
An AI company that is completely dependent on mining every bit of data out there, copyright be damned, had another company that is also completely dependent on mining every bit of data, copyright be damned, violate its terms of service?
How awful.
No, wait, that's completely expected. It's sad that the best we can hope for is these companies get shut down by the courts because LLMs aren't fair use.
But, hey, companies can't inflate their market value and stock prices by announcing sensible investment in labor that actually makes workers more productive; instead they make existential threats to the workforce and hope it doesn't trigger the biggest labor movement ever seen. Which isn't a bad bet, given how much has been done to discredit organized labor as a force for positive change (note: organized labor dovetails with unionized labor, but it isn't the same thing). Then again, if you work two jobs and still have to share an apartment, maybe the odds aren't as good as those large companies would like.
Re: (Score:3, Informative)
An AI company that is completely dependent on mining every bit of data out there, copyright be damned, had another company that is also completely dependent on mining every bit of data, copyright be damned, violate its terms of service?
How awful.
No, wait, that's completely expected. It's sad that the best we can hope for is these companies get shut down by the courts because LLMs aren't fair use.
A court has already said that it is fair use for an LLM to be trained: https://tech.slashdot.org/stor... [slashdot.org]
Otherwise you would never be allowed to learn anything from the stuff you buy, like books, music, and movies/TV shows.
What is not fair use is pirating the data, to then have the LLM be trained on it.
Re: (Score:3)
Otherwise you would never be allowed to learn anything from stuff you buy like books, music, and movies/tv shows.
And damn right, too. If god had wanted us to learn, he'd have made us smart.
Re: (Score:2)
Currently both are fair use. But fair use does not mean the company must grant you API access. If OpenAI were to mine data from Anthropic (the article reads more like OpenAI employees using it to write code), it would be completely legal from the copyright perspective, but against Anthropic's ToS. That means Anthropic can cancel OpenAI's contract, and that's what it just did. If some employee keeps generating data at home and they don't notice, someone who gets the data can train a model with it. The employee him
LOL (Score:3)
We've arrived at the finger pointing stage of the competition for most honest thieves.
Saturating in The Big Eight LLMs for Coding (Score:4, Informative)
I spent the last couple of weeks on a bender working on a project with the first as a full-stop hard deadline. It was a devil's brew of PHP, JS, CSS, HTML, a sprinkle of legacy Perl code, and a full rewrite of an (argh) VB.NET custom client. Every day, I would open each of "the big eight" LLMs in a new window and then copy and paste the same prompt to all of them (a scripted version of that fan-out is sketched after the list below). By the end of the day, there was usually only one I was still using.
Given those constraints, here's what I found: some strong differences between the current crop of LLMs.
Kimi - Whew - amazing and unpretentious. I was taken aback by how much better this was than most of the rest.
Grok - (elmo aside) same as Kimi. Rarely made major mistakes and didn't double down when it found it had made one. And it is Grok 3 I am using, not Grok 4. It is also quite fast. The biggest benefit is that when Grok makes changes to minor routines, it reprints the entire code base and doesn't force you to figure out which line out of 10k it is talking about (like ChatGPT does) to get updated.
DeepSeek - seems very similar to Kimi but with a smaller context window, and the code was very vanilla.
ChatGPT - you often spend more time cleaning up the code it generates than it saves you. It makes subtle, consistent mistakes. The context window often means ChatGPT completely ignores long sections of code you share with it. The biggest problem is that when referencing code after an interaction, it makes vague references to where updated code should go.
Gemini - moderately OK, but like ChatGPT it makes lots of mistakes (and isn't as nice when you call it out). However, I use it for small routines as it is the fastest of the bunch.
Claude - solid code but often judgmental. I don't subscribe ($) here, so the small context window is a stopper. Lots of mistakes.
CoPilot - Not used it enough to judge.
Meta.ai (Llama) - Meh. Really slow, and the code output always has errors.
So to me, there is little doubt why one LLM would be trying to figure out what the other one was doing for code.
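FWIW, that copy-paste fan-out can be scripted: several of these vendors advertise OpenAI-compatible endpoints, so one loop with different base URLs covers them. A rough sketch using the openai Python package; the base URLs, model names, and environment variable names are my assumptions from vendor docs, so verify them before use:

    import os
    from openai import OpenAI

    # Endpoints are illustrative; DeepSeek, Moonshot (Kimi), and xAI (Grok) all
    # offer OpenAI-compatible APIs, but check each vendor's docs for the
    # current base URL and model names.
    BACKENDS = {
        "deepseek": ("https://api.deepseek.com", "deepseek-chat", "DEEPSEEK_API_KEY"),
        "kimi": ("https://api.moonshot.ai/v1", "moonshot-v1-32k", "MOONSHOT_API_KEY"),
        "grok": ("https://api.x.ai/v1", "grok-3", "XAI_API_KEY"),
    }

    prompt = "Refactor this PHP function to use prepared statements: ..."

    # Send the identical prompt to every backend and print the replies side by side.
    for name, (base_url, model, key_var) in BACKENDS.items():
        client = OpenAI(base_url=base_url, api_key=os.environ[key_var])
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"=== {name} ===\n{resp.choices[0].message.content}\n")

It won't tell you which answer is best, but it kills the eight-browser-windows ritual.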
Re: (Score:2)
I was paying $20/month for Claude, but now I pay the same price for Junie [jetbrains.com], a plug-in for JetBrains IDEs, because I find it superior for coding. Time permitting, I'll try some of the other things on the short list you detailed -- thanks!
I think it helps that I code with open-source stuff like Ansible, where the LLMs were trained on plenty of legit, well-documented code hosted on GitHub, etc.