Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company's popular AI coding models.
For Anthropic's API customers, the company's Claude Sonnet 4 AI model now has a 1 million token context window, meaning the AI can handle requests as long as 750,000 words — more than the entire Lord of the Rings trilogy — or 75,000 lines of code. That's roughly five times Claude's previous limit (200,000 tokens), and more than double the 400,000 token context window offered by OpenAI's GPT-5.
Long context will also be available for Claude Sonnet 4 through Anthropic's cloud partners, including on Amazon Bedrock and Google Cloud's Vertex AI.
Anthropic has built one of the largest enterprise businesses among AI model developers, largely by selling Claude to AI coding platforms such as Microsoft's GitHub Copilot, Windsurf, and Anysphere's Cursor. While Claude has become the model of choice among developers, GPT-5 may threaten Anthropic's dominance with its competitive pricing and strong coding performance. Anysphere CEO Michael Truell even helped OpenAI announce the GPT-5 launch, and GPT-5 is now the default model for new users in Cursor.
In an interview, Anthropic's product lead for the Claude platform, Brad Abrams, told TechCrunch that he expects AI coding platforms to get an "advantage" from this update. When asked whether GPT-5 had put a dent in Claude's API usage, Abrams downplayed the concern, saying he's "really happy with the API business and the way it's been growing."
While OpenAI generates most of its revenue from consumer subscriptions to ChatGPT, Anthropic's business centers on selling AI models to enterprises through an API. That has made AI coding platforms a key customer for Anthropic, and it could be why the company is rolling out new perks to attract users in the face of GPT-5.
Last week, Anthropic released an updated version of its largest AI model, Claude Opus 4.1, which pushed the company's AI coding capabilities a bit further.
In general, AI models tend to perform better across all tasks when they have more context, but especially on software engineering problems. For example, if you ask an AI model to build a new feature for your app, it's likely to do a better job if it can see the entire project, rather than just a small section of it.
Abrams also told TechCrunch that Claude's large context window helps it perform better on long agentic coding tasks, in which the AI model works autonomously on a problem for minutes or hours. With a large context window, Claude can remember all of its previous steps in long-horizon tasks.
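To illustrate why long-horizon agents consume so much context, here is a minimal, hypothetical sketch of an agent loop. Every step's model output and tool result is appended to a transcript that gets re-sent to the model on each turn, so the prompt grows with every step. All names here (`run_agent`, `fake_model`, `fake_tool`) are stand-ins for illustration, not Anthropic's API.

```python
# Sketch of a long-horizon agent loop: the full transcript is resent each turn,
# so the effective prompt grows with every step the agent takes.

def fake_model(transcript: list[str]) -> str:
    # Stand-in for an LLM call; a real agent would send `transcript` to a model.
    steps_taken = sum(1 for line in transcript if line.startswith("tool:"))
    return "done" if steps_taken >= 3 else f"run step {steps_taken + 1}"

def fake_tool(action: str) -> str:
    # Stand-in for executing a tool (running tests, editing a file, etc.).
    return f"result of '{action}'"

def run_agent(task: str, max_steps: int = 10) -> list[str]:
    transcript = [f"task: {task}"]
    for _ in range(max_steps):
        action = fake_model(transcript)
        if action == "done":
            break
        transcript.append(f"model: {action}")   # the model's chosen action
        transcript.append(f"tool: {fake_tool(action)}")  # the tool's output
    return transcript

history = run_agent("add a login feature")
print(len(history))  # the transcript has grown by two lines per step
```

The longer the agent runs, the more of the context window its own history occupies — which is why a larger window directly extends how long an agent can work without forgetting earlier steps.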
But some companies have taken large context windows to the extreme, claiming their AI models can process enormous prompts. Google offers a 2 million token context window for Gemini 2.5 Pro, and Meta offers a 10 million token context window for Llama 4 Scout.
Some studies suggest there's a limit to the effectiveness of very large context windows, and that AI models are not great at processing enormous prompts. Abrams said Anthropic's research team focused on increasing not just the context window for Claude, but the "effective context window," suggesting its AI can understand most of the information it's given. However, he declined to reveal Anthropic's exact techniques.
For prompts to Claude Sonnet 4 over 200,000 tokens, Anthropic will charge API users more: $6 per million input tokens and $22.50 per million output tokens (up from $3 per million input tokens and $15 per million output tokens).
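As a back-of-the-envelope illustration of the tiered pricing above, here is a short sketch. The rates are the ones quoted in this article; the helper name (`estimate_cost`) and the assumption that the higher tier applies whenever the prompt exceeds 200,000 tokens are illustrative, and Anthropic's actual billing rules may differ.

```python
# Rough cost estimate for one Claude Sonnet 4 request under the tiered
# pricing described above (rates in USD per million tokens, per the article).

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    if input_tokens > 200_000:
        input_rate, output_rate = 6.00, 22.50   # long-context tier
    else:
        input_rate, output_rate = 3.00, 15.00   # standard tier
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A 500,000-token prompt with a 10,000-token reply:
print(round(estimate_cost(500_000, 10_000), 3))   # 3.225

# The same reply with a 100,000-token prompt stays on the standard tier:
print(round(estimate_cost(100_000, 10_000), 3))   # 0.45
```

Under these assumptions, a prompt that uses most of the new 1 million token window costs roughly twice per token what a standard request does.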