AI Cookbook

Anthropic ships Claude 5 Sonnet — beats GPT-5.5 on coding while running 3x faster than Opus

Anthropic released Claude 5 Sonnet this morning. The mid-tier model now scores 87.3% on SWE-bench Verified, beating GPT-5.5 by 4 points and matching Opus 4.7 within the margin of error, at one-third the latency and half the price.

For coding workflows specifically, Sonnet 5 is the new default. Faster than Opus, smarter than the prior Sonnet, and aggressively priced at $2.50 input / $12.50 output per million tokens.

What changed in Sonnet 5

Three concrete improvements over Sonnet 4.6:

  • **Coding accuracy**: 87.3% on SWE-bench Verified (was 73%); 90.4% on HumanEval+ (was 81%)
  • **Tool use chains**: reliably chains six tool calls in sequence without hallucination, up from three in the prior generation
  • **Speed**: 178 tokens/second on average prompts, beating GPT-5 fast tier and Gemini 2.5 Flash

The headline numbers are not the story. The story is that Sonnet 5 closes the gap with Opus 4.7 to under 2 percentage points across the eval suite, meaning most enterprise users can downgrade from Opus and roughly halve their inference cost without measurable quality loss.

Pricing and availability

  • Available today via api.anthropic.com, AWS Bedrock, and GCP Vertex
  • Now the default Sonnet model on claude.ai
  • Pricing: $2.50 input, $12.50 output per million tokens
  • Context window: 500k tokens (was 200k on Sonnet 4.6)
  • Vision and PDF input enabled by default

For comparison, GPT-5.5 charges $5/$15 and Opus 4.7 charges $5/$25. Sonnet 5 sits between them on quality but well below both on price.
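At those rates, the per-request math is easy to check. A minimal sketch using the prices quoted above (the model labels and the example token counts are illustrative, not official API identifiers):

```python
# Per-million-token prices quoted in this post: (input $, output $).
# Labels are illustrative shorthand, not official API model IDs.
PRICES = {
    "claude-5-sonnet": (2.50, 12.50),
    "gpt-5.5": (5.00, 15.00),
    "claude-opus-4.7": (5.00, 25.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the published rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A hypothetical agentic coding request: 30k tokens in, 2k tokens out.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 30_000, 2_000):.4f}")
```

On that example workload, Sonnet 5 comes out at exactly half the cost of Opus 4.7, consistent with the half-price rates.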

Why this matters for builders

The race in 2026 is not who has the smartest model. It is who has the smartest model at the lowest cost per useful task. Sonnet 5 wins on cost-per-correct-answer for 80% of common workloads:

  • Code generation: dominant at this price point
  • Multi-step agent loops: tool-use accuracy beats GPT-5.5
  • Long-document processing: 500k context lets you skip RAG for many enterprise use cases
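Whether a given document actually fits in the 500k window is a quick back-of-the-envelope check. A rough sketch, assuming the common ~4-characters-per-token heuristic (use a real tokenizer for anything serious):

```python
CONTEXT_WINDOW = 500_000  # Sonnet 5's stated context window, in tokens
CHARS_PER_TOKEN = 4       # rough English-text heuristic, not a tokenizer

def fits_in_context(doc_chars: int, reserve_for_output: int = 8_000) -> bool:
    """Rough check: can the document go straight into the prompt, no RAG?

    Reserves some of the window for the model's own output.
    """
    est_tokens = doc_chars / CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CONTEXT_WINDOW

# ~1M characters (a few hundred pages) fits; ~4M characters does not.
print(fits_in_context(1_000_000), fits_in_context(4_000_000))
```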

For teams currently using Opus 4.7: the migration path is one config flag. Most prompts work as-is, total cost roughly halves, and the quality drop is measurable only at the edge of the eval distribution.
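That "one config flag" migration can be sketched as an environment-driven model ID feeding the Anthropic Python SDK's `messages.create` call. The model ID strings and env-var name below are hypothetical placeholders; check Anthropic's model list for the real identifiers:

```python
import os

# Hypothetical model IDs -- consult Anthropic's model list for real strings.
OPUS_ID = "claude-opus-4-7"
SONNET_ID = "claude-5-sonnet"

def model_id() -> str:
    """The 'one config flag': read the model from an env var, default to Sonnet."""
    return os.environ.get("CLAUDE_MODEL", SONNET_ID)

def request_kwargs(prompt: str) -> dict:
    """Keyword arguments for anthropic.Anthropic().messages.create(...)."""
    return {
        "model": model_id(),
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Flipping `CLAUDE_MODEL=claude-opus-4-7` in the environment rolls back to Opus with no code change, which is the usual way to keep a one-line escape hatch during an eval bake-off.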

For teams on GPT-5.5: switching costs nothing to test. Anthropic is offering 10M free evaluation tokens through May 31.

Sources

  • Anthropic blog (April 28, 2026): Introducing Claude 5 Sonnet
  • TechCrunch (April 28, 2026): Anthropic's Claude 5 Sonnet beats GPT-5.5 on SWE-bench
  • Anthropic Pricing Page (April 28, 2026)