Anthropic
AI safety company founded by ex-OpenAI researchers Dario and Daniela Amodei. Builds the Claude family of large language models with a focus on steerability, interpretability, and reducing harmful outputs.
Products
Claude (Opus · Sonnet · Haiku) · Claude.ai · Claude Code · Anthropic API
Reliability — last 10m
Composite score
Uptime
Incidents
Median time to resolve (MTTR)
What's breaking (last 10m)
Components ranked by incident count. Click a row for the incident list. Downtime from multi-component incidents is attributed proportionally across the components involved (see the sketch below).
No incidents touched any Claude component in this window.
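A minimal sketch of that proportional attribution, assuming an incident's downtime is split evenly across the components it touched. The `Incident` shape and field names are illustrative, not the site's actual data model:

```typescript
interface Incident {
  id: string;
  components: string[]; // components the incident touched
  downtimeMin: number;  // wall-clock downtime in minutes
}

interface ComponentRow {
  incidents: number;
  downtimeMin: number;
}

// Split each incident's downtime evenly across its components,
// then total per component for the ranking table.
function attributeDowntime(incidents: Incident[]): Map<string, ComponentRow> {
  const byComponent = new Map<string, ComponentRow>();
  for (const inc of incidents) {
    const share = inc.downtimeMin / inc.components.length; // proportional share
    for (const name of inc.components) {
      const row = byComponent.get(name) ?? { incidents: 0, downtimeMin: 0 };
      row.incidents += 1;
      row.downtimeMin += share;
      byComponent.set(name, row);
    }
  }
  return byComponent;
}
```

An even split is the simplest proportional rule; a real pipeline might instead weight shares by per-component impact.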
Type of failure
Categorised from the public incident headlines (outage / latency / auth / etc.). Click a category to drill in.
No incidents in window — nothing to categorise.
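One plausible reading of "categorised from the public incident headlines" is simple keyword matching. The category names come from the copy above, but the keyword patterns here are assumptions:

```typescript
type FailureType = "outage" | "latency" | "auth" | "other";

// Naive first-match keyword classifier over an incident headline.
// The regexes are illustrative guesses, not the site's actual rules.
function classifyHeadline(headline: string): FailureType {
  const h = headline.toLowerCase();
  if (/outage|down\b|unavailable/.test(h)) return "outage";
  if (/latency|slow|degraded/.test(h)) return "latency";
  if (/auth|login|token|credential/.test(h)) return "auth";
  return "other";
}

classifyHeadline("Elevated latency on claude.ai"); // "latency"
```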
When it breaks
Incident start times by weekday × hour (UTC). Hover a cell for incident details.
Not enough incidents to draw a pattern.
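Binning the start timestamps into that weekday × hour grid is mechanical; a sketch using JavaScript's built-in UTC accessors:

```typescript
// Count incident starts in a 7 × 24 weekday-by-hour grid, UTC.
// grid[weekday][hour], with Sunday = 0 per Date#getUTCDay.
function startHeatmap(starts: Date[]): number[][] {
  const grid = Array.from({ length: 7 }, () => new Array<number>(24).fill(0));
  for (const t of starts) {
    grid[t.getUTCDay()][t.getUTCHours()] += 1;
  }
  return grid;
}
```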
Anthropic — incidents by day (last 30d)
Worst incidents
Ranked by severity-weighted duration (impact weight × wall-clock duration).
No incidents in window.
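A hypothetical version of that ranking. "Impact" is modelled as a per-severity weight; the severity labels and weight values below are placeholders, since the actual weighting isn't published on the page:

```typescript
type Severity = "minor" | "major" | "critical";

// Placeholder impact weights (an assumption, not the site's values).
const IMPACT: Record<Severity, number> = { minor: 1, major: 3, critical: 10 };

interface RankedIncident {
  id: string;
  severity: Severity;
  durationMin: number; // wall-clock duration in minutes
}

// Worst first: impact weight × wall-clock duration.
function worstIncidents(incidents: RankedIncident[]): RankedIncident[] {
  return [...incidents].sort(
    (a, b) => IMPACT[b.severity] * b.durationMin - IMPACT[a.severity] * a.durationMin
  );
}
```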
Score breakdown
SLA compliance
How often does this provider clear its stated availability target?
Incident history
Showing 6 of 200 incidents · 4 of 105 days
How fast they fix things
Distribution of incident resolution times.
No resolved incidents to plot.
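For reference, reducing that distribution to the median (the "Median time to resolve" card above) takes a few lines; durations are assumed to arrive in minutes:

```typescript
// Median of resolved-incident durations, or null for an empty window.
function medianResolutionMin(durationsMin: number[]): number | null {
  if (durationsMin.length === 0) return null; // nothing resolved to plot
  const sorted = [...durationsMin].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```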
Method: a rolling quarter (13 weeks ≈ 91 days). Each cell is one 7-day slice of severity- and component-weighted uptime (the same calculation as the score's uptime card); a cell turns red when its slice dips below the stated SLA.
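In code terms, the method reduces to flagging 13 weekly slices against the SLA target. The weighted-uptime calculation itself isn't spelled out here, so each slice is treated as an input:

```typescript
// weeklyUptime: 13 slices of severity- and component-weighted uptime,
// each a fraction in [0, 1]. slaTarget: the stated target, e.g. 0.999.
function slaCompliance(weeklyUptime: number[], slaTarget: number) {
  const red = weeklyUptime.map((u) => u < slaTarget); // cells drawn red
  const cleared = red.filter((isRed) => !isRed).length;
  return { red, complianceRate: cleared / weeklyUptime.length };
}

// Example: one dipped week out of 13 against a 99.9% target.
slaCompliance([0.9995, 1, 1, 0.998, 1, 0.9993, 1, 1, 0.9991, 1, 1, 1, 1], 0.999);
```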