Anthropic has released Claude Opus 4.7, the latest upgrade to its flagship model line, and the improvements are aimed squarely at one thing: letting you hand off harder work with less supervision.
What Is New
Opus 4.7 is designed to handle longer, less supervised tasks, verifying its own outputs before reporting back and following instructions with more precision. That last point is worth paying attention to. Instruction-following has been a persistent weak spot across AI models: the gap between what you ask for and what you actually get. Anthropic is signalling that Opus 4.7 narrows that gap meaningfully.
In internal evaluations, the model catches its own logical faults during the planning phase and executes faster than any previous Claude model. On coding specifically, Opus 4.7 improved its resolution rate by 13% over Opus 4.6 on a 93-task coding benchmark, including four tasks that neither Opus 4.6 nor Sonnet 4.6 could solve.
Better Vision, Finer Control
Opus 4.7 processes images at resolutions up to 2,576 pixels on the long edge, more than three times the limit of prior Claude models. That matters directly for workflows where image quality shapes output quality: less detail is discarded before the model ever sees the input.
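To make that ceiling concrete, here is a small preprocessing sketch that caps an image's long edge at 2,576 pixels while preserving aspect ratio. The function name and approach are illustrative, not part of any Anthropic SDK:

```python
def fit_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most
    max_long_edge, preserving aspect ratio. Images already within
    the limit are returned unchanged."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A 4032x3024 photo scales to 2576x1932; under the previous ~800px-class
# limits, the same photo would have lost far more detail.
print(fit_long_edge(4032, 3024))
```

The point of the helper is that a typical phone photo now fits the model's input resolution after only a modest downscale, rather than an aggressive one.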
A new “xhigh” effort level slots between the existing high and max settings, giving developers finer control over the tradeoff between reasoning depth and response speed. Task budgets, currently in beta, also allow Claude to prioritise work and manage costs across longer runs, useful for teams running autonomous coding workflows at scale.
Enterprise and Developer Access
Opus 4.7 is available to Pro, Max, Team, and Enterprise users on Claude, and to developers via the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing starts at $5 per million input tokens and $25 per million output tokens, with up to 90% savings through prompt caching and 50% through batch processing.
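The arithmetic behind those discounts can be sketched as follows. Note the quoted savings are maxima, and how caching and batch discounts compose on a single request is an assumption of this sketch, not something the pricing page spells out here:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 cached_fraction: float = 0.0, batch: bool = False) -> float:
    """Estimate cost in USD at Opus 4.7 list prices ($5/M input,
    $25/M output). Cached input tokens are billed at a 90% discount;
    batch processing is modelled as a further 50% off the whole request."""
    INPUT_RATE, OUTPUT_RATE = 5.0, 25.0  # USD per million tokens
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    cost = (fresh * INPUT_RATE
            + cached * INPUT_RATE * 0.1   # 90% prompt-caching savings
            + output_tokens * OUTPUT_RATE) / 1_000_000
    return cost * 0.5 if batch else cost

# 1M fresh input tokens cost $5; fully cached, the same input costs $0.50.
print(request_cost(1_000_000, 0))
print(request_cost(1_000_000, 0, cached_fraction=1.0))
```

For long agentic runs that repeatedly resend a large shared context, the cached-input rate dominates the bill, which is why the caching discount matters more than the headline per-token price.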
Where It Sits in the Market
GPT-5.4 trades blows with Opus 4.7 depending on the task, and Gemini 3.1 Pro holds its own on multilingual benchmarks. But on agentic and coding workloads where Claude has historically led, Opus 4.7 extends the gap rather than ceding ground.
Anthropic’s own Claude Mythos Preview remains more capable, but it is not publicly available. Mythos continues to have limited release due to safety concerns outlined in Project Glasswing. Opus 4.7 is the most powerful model the general public can actually use, and by the benchmarks, it is the strongest in that category right now.
The Broader Context
Claude’s traffic has grown roughly fivefold over the past year. Anthropic raised $30 billion at a $380 billion valuation in February 2026, and eight of the Fortune 10 are now Claude customers. Opus 4.7 arrives as Anthropic is running at a pace few in the industry anticipated, and as the competition between frontier labs intensifies week by week.
For developers and enterprise teams, the practical pitch is straightforward: a model that does more of the thinking, checks its own work, and needs less hand-holding to get there.