AWS infrastructure failures and Kafka recovery issues temporarily halted trading across Coinbase.
Coinbase suffered a major service outage on May 7 that disrupted trading, exchange access, and customer balance updates across multiple platforms. Problems affected spot markets, derivatives, Prime services, and international trading operations for several hours. Engineers later traced the issue to a cooling-system failure inside an AWS data center in the United States. Coinbase said customer funds remained safe and no data was lost during the incident.
Kafka Recovery Problems Deepen Coinbase Outage
Coinbase disclosed that monitoring systems first detected cascading quote failures at around 23:50 UTC. Multiple Sev1 incidents followed shortly after, prompting emergency response procedures across engineering teams. Internal systems tied to the exchange’s core infrastructure started failing as temperatures rose inside a subset of racks hosted in AWS us-east-1.
Yesterday @coinbase experienced a multi-hour service disruption affecting trading, exchange access, and balance updates. Here's our initial read from Coinbase engineering on what happened, how we recovered, and what we're addressing.
At approximately 23:50 UTC on 2026-05-07, our…
— rob (@rwitoff) May 8, 2026
According to Coinbase engineers, hardware failures struck systems connected to the exchange’s matching engine, which processes orders and maintains order books across Coinbase markets. Infrastructure problems inside the affected facility left only a minority of the engine’s nodes operational, so the cluster could not reach the majority quorum it needs to keep accepting writes, temporarily blocking trading for retail and institutional users.
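The quorum requirement behind that halt is straightforward: a replicated cluster can only safely accept writes while a strict majority of its nodes are reachable. A minimal sketch (illustrative only, not Coinbase's actual code):

```python
def has_quorum(total_nodes: int, live_nodes: int) -> bool:
    """Return True if a strict majority of nodes is operational."""
    return live_nodes >= total_nodes // 2 + 1

# A 5-node cluster tolerates up to 2 node losses:
print(has_quorum(5, 3))  # True: 3 of 5 is still a majority
print(has_quorum(5, 2))  # False: writes must halt to avoid split-brain
```

Halting below quorum is deliberate: accepting orders on a minority partition would risk two divergent order books.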
Engineers also faced complications with the distributed Kafka clusters used for internal messaging. Coinbase said those clusters process several terabytes of data daily and were designed to keep running through a data center outage. Those recovery guarantees failed during the incident, forcing teams to manually restore partitions onto replacement broker hardware.
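Manually restoring partitions amounts to rewriting each partition's replica list so that failed brokers are swapped for healthy replacements. A toy plan generator along those lines (topic names and broker IDs are hypothetical; real Kafka operators would feed a plan like this to the stock `kafka-reassign-partitions` tool rather than hand-roll it):

```python
def reassign(assignment: dict, failed: set, replacements: list) -> dict:
    """Replace failed brokers in each partition's replica list,
    reusing one replacement broker per failed broker."""
    pool = iter(replacements)
    substitute = {}  # failed broker id -> chosen replacement id
    new_plan = {}
    for partition, replicas in assignment.items():
        new_replicas = []
        for broker in replicas:
            if broker in failed:
                if broker not in substitute:
                    substitute[broker] = next(pool)
                new_replicas.append(substitute[broker])
            else:
                new_replicas.append(broker)
        new_plan[partition] = new_replicas
    return new_plan

plan = reassign(
    {("trades", 0): [1, 2, 3], ("trades", 1): [2, 3, 4]},
    failed={2},
    replacements=[7, 8],
)
print(plan)  # {('trades', 0): [1, 7, 3], ('trades', 1): [7, 3, 4]}
```

The slow part in practice is not computing the plan but re-replicating the partition data onto the new brokers, which is why balance updates lagged while replication caught up.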
Dedicated Hardware Failure Slows Recovery Process
Customers experienced delayed balance updates while Kafka replication recovered. Coinbase said balances would be automatically synchronized once systems caught up. Company representatives added that no customer or transaction data was lost during the outage.
Automated recovery tools drained workloads from roughly 10 Kubernetes clusters tied to the affected zone. Most internal services returned within about 30 minutes after engineers isolated the problem.
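At a high level, draining means moving every workload off the impaired clusters and onto healthy capacity. A minimal simulation of that rebalancing (cluster and service names here are hypothetical, and real tooling such as `kubectl drain` evicts pods node by node rather than in one pass):

```python
def drain(clusters: dict, impaired: set) -> dict:
    """Move all workloads off impaired clusters onto healthy ones,
    round-robin, and return the new placement."""
    healthy = [c for c in clusters if c not in impaired]
    placement = {c: list(workloads) for c, workloads in clusters.items()}
    i = 0
    for cluster in impaired:
        while placement[cluster]:
            workload = placement[cluster].pop()
            placement[healthy[i % len(healthy)]].append(workload)
            i += 1
    return placement

after = drain(
    {"zone-a": ["svc-quotes", "svc-balances"], "zone-b": [], "zone-c": []},
    impaired={"zone-a"},
)
print(after["zone-a"])  # [] -- the impaired cluster is fully drained
```

Services that could be rescheduled this way came back quickly; the matching engine and Kafka could not, which is the subject of the next section.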
Recovery took longer for systems tied directly to the exchange matching engine and Kafka infrastructure because both relied on dedicated hardware and storage configurations.
After stabilizing the environment, Coinbase reopened markets in stages. Trading first moved into cancel-only mode before teams audited product states. Markets then entered auction mode before full trading resumed across the exchange.
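The staged reopening described above can be sketched as a simple state machine: markets move from halted to cancel-only, then auction, then full trading, and no stage may be skipped. (An illustrative model, not Coinbase's actual market-state logic.)

```python
from enum import Enum, auto

class MarketState(Enum):
    HALTED = auto()
    CANCEL_ONLY = auto()   # existing orders may be cancelled, none placed
    AUCTION = auto()       # orders collected, then crossed at one price
    FULL_TRADING = auto()  # continuous matching resumes

ALLOWED = {
    MarketState.HALTED: {MarketState.CANCEL_ONLY},
    MarketState.CANCEL_ONLY: {MarketState.AUCTION},
    MarketState.AUCTION: {MarketState.FULL_TRADING},
    MarketState.FULL_TRADING: set(),
}

def advance(state: MarketState, target: MarketState) -> MarketState:
    """Move to the next stage, rejecting any skipped or backward step."""
    if target not in ALLOWED[state]:
        raise ValueError(f"cannot move {state.name} -> {target.name}")
    return target

state = MarketState.HALTED
for nxt in (MarketState.CANCEL_ONLY, MarketState.AUCTION,
            MarketState.FULL_TRADING):
    state = advance(state, nxt)
print(state.name)  # FULL_TRADING
```

Gating each stage gives teams a checkpoint to audit order and product state before any new risk is taken on.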
Coinbase Says No Data Was Lost During Multi-Hour Platform Outage
Coinbase acknowledged that parts of its architecture concentrated critical exchange infrastructure within a single availability zone. Engineers stated that standby systems were in place for failover scenarios, though the isolation measures did not work as designed during the event, extending both the duration and the scope of the outage beyond intended limits.
Company executives praised internal coordination during the recovery process. Engineering and on-call teams reportedly followed established disaster recovery procedures while testing and validating fixes under constrained infrastructure conditions.
Coinbase apologized to customers who temporarily lost access to their accounts and trading services. Executives said a full root cause analysis will be released in the coming weeks, alongside planned reliability improvements aimed at preventing similar failures.