Why did MetaMask show $0 on Ethereum when AWS went offline?
An Amazon Web Services disruption on Oct. 20 knocked out MetaMask and other ETH wallet displays and slowed Base network operations, exposing how cloud infrastructure dependencies ripple through decentralized systems when a single provider fails.
AWS reported a fault in its US-EAST-1 region beginning at 03:11 ET, with DNS and EC2 load-balancer health-monitoring failures cascading into DynamoDB and other services.
Amazon declared full mitigation by 06:35 ET and full recovery by evening, though backlog clearing extended into Oct. 21.
Coinbase posted an active incident, noting an “AWS outage impacting a number of apps and services,” while users reported that MetaMask balances were displaying zero and that Base network transactions were experiencing delays.
The mechanical link runs through Infura, MetaMask’s default RPC provider. MetaMask documentation directs users to Infura’s status page during outages because the wallet routes most read and write operations through Infura endpoints by default.
When Infura’s cloud infrastructure wobbles, balance displays and transaction calls can misreport even though funds remain secure on-chain.
The disruption affected Ethereum and layer-2 networks that rely on Infura’s RPC infrastructure, creating UI failures that mimicked on-chain problems despite consensus mechanisms continuing to function.
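To see why an RPC-layer failure can render as a zero balance, consider a minimal sketch of a wallet-style balance read (a generic illustration, not MetaMask’s actual client code; the endpoint URL is a placeholder). The displayed balance is simply the result of an `eth_getBalance` call against one endpoint, so a UI that treats a failed call as zero reproduces the exact symptom users reported while funds stay untouched on-chain.

```typescript
// Hypothetical single-endpoint balance read; the URL is a placeholder.
const RPC_URL = "https://mainnet.example-rpc.io";

async function getBalanceWei(address: string): Promise<bigint> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",
      params: [address, "latest"],
    }),
  });
  if (!res.ok) throw new Error(`RPC HTTP ${res.status}`); // an outage surfaces here
  const json = await res.json();
  if (json.error) throw new Error(`RPC error: ${json.error.message}`);
  return BigInt(json.result); // hex wei quantity, e.g. "0x38d7ea4c68000"
}

// A UI that swallows the error and defaults to zero turns an RPC outage
// into a fake $0 balance, with no on-chain change at all:
getBalanceWei("0x0000000000000000000000000000000000000000")
  .catch(() => 0n) // anti-pattern: outage becomes a zero balance
  .then((wei) => console.log(`balance: ${wei} wei`));
```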
Base chain metrics from Oct. 21 show $17.19 billion in total value locked, roughly 11 million transactions per 24 hours, 842,000 daily active addresses, and $1.37 billion in DEX volume over the prior day.
Short outages of six hours or less typically reduce DEX volume by 5% to 12% and transaction counts by 3% to 8%, with TVL remaining stable because the problems are cosmetic rather than systemic.
Extended disruptions lasting six to 24 hours can lead to a 10% to 25% decrease in DEX volume, an 8% to 20% decrease in transactions, and a 0.5% to 1.5% decrease in bridged TVL, as delayed bridging operations and risk-off rotations to Layer 1 take hold.
However, transaction count and DEX volumes remained steady between Oct. 20 and 21. DEX volumes were $1.36 billion and $1.48 billion, respectively, while transactions amounted to 10.9 million and 10.74 million.
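For scale, a back-of-envelope sketch applying the short-outage ranges above to the reported Base figures shows what a measurable hit would have looked like; the ranges are rules of thumb, not measurements of this incident.

```typescript
// Reported Base figures from the Oct. 21 metrics above.
const dexVolumeUsd = 1.37e9; // ~$1.37B daily DEX volume
const dailyTxCount = 11e6;   // ~11M transactions per 24 hours

// Short outage (<6h): 5–12% DEX volume dip, 3–8% transaction dip.
const volumeDipUsd = [0.05, 0.12].map((p) => dexVolumeUsd * p);
const txDip = [0.03, 0.08].map((p) => dailyTxCount * p);

console.log(
  `implied DEX volume dip: $${(volumeDipUsd[0] / 1e6).toFixed(0)}M–$${(volumeDipUsd[1] / 1e6).toFixed(0)}M`
); // → $69M–$164M
console.log(
  `implied transaction dip: ${(txDip[0] / 1e6).toFixed(2)}M–${(txDip[1] / 1e6).toFixed(2)}M`
); // → 0.33M–0.88M
```

The observed day-over-day movement ($1.36B vs. $1.48B in volume; 10.9M vs. 10.74M transactions) sits inside normal variation, consistent with a cosmetic rather than systemic disruption.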

Base experienced a separate incident on Oct. 10 involving safe head delays from high transaction volume, which the team resolved quickly.
That episode demonstrated how layer-2 networks can hit finality and latency constraints during demand spikes independent of cloud infrastructure issues.
Stacking these demand-side pressures with external infrastructure failures compounds the risk profile for networks running on centralized cloud providers.
| Date & time (UTC) | Service | Update | Symptom | Resolved? |
|---|---|---|---|---|
| Oct 20, 07:11 | AWS (us-east-1) | Outage identified; internal DNS and EC2 load-balancer health-monitor fault | Global API/connectivity errors across major apps | “All 142 services restored” by 22:53; some backlogs lingered into Oct 21. |
| Oct 20, 07:28 → Oct 21, 00:57 | Coinbase status | Incident opened → resolved | Users unable to log in, trade, or transfer; “funds are safe” messaging | Recovered; monitoring through the evening of Oct 20 (PDT). |
| Oct 20, 19:46 | Decrypt tracker | MetaMask balances displaying zero; Base/OpenSea struggling as AWS issues persist; Infura implicated | Wallet UI misreads and RPC errors across ETH & L2s | Ongoing through the afternoon; recovery staggered by provider queues. |
| Oct 10, 21:40 (context) | Base status | “Safe head delay from high tx volume” (unrelated to AWS) | Finality/latency lag (“safe head” behind) | Resolved same day; shows L2 latency edge cases independent of cloud events. |
Cloud concentration surfaces as a systemic weakness
The AWS event refreshes longstanding concerns about cloud provider concentration in crypto infrastructure.
Prior AWS incidents in 2020, 2021, and 2023 revealed complex interdependencies across DNS, Kinesis, Lambda, and DynamoDB services that propagate to wallet RPC endpoints and layer-2 sequencers hosted in the cloud.
MetaMask’s default routing through Infura means a cloud hiccup can appear chain-wide to end users, despite on-chain consensus operating normally.
Solana’s five-hour network halt in 2024, caused by a software bug, demonstrated user tolerance for transient downtime when recovery is executed cleanly and communication remains clear.
Optimism and Base have previously logged unsafe and safe head stalls on their OP Stack architecture, issues that teams can resolve through protocol improvements.
The AWS disruption differs in that it exposes infrastructure dependencies outside the control of blockchain protocols themselves.
Infrastructure teams will likely accelerate multi-cloud failover plans and expand RPC endpoint diversity following this incident.
Wallets may prompt users to configure custom RPCs rather than relying on a single default provider.
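What endpoint diversity looks like on the client side, as a minimal sketch assuming ethers v6 with placeholder endpoint URLs: with a fallback set, a single backend failing degrades to the next provider instead of erroring out wallet-wide.

```typescript
import { FallbackProvider, JsonRpcProvider, formatEther } from "ethers";

// Hypothetical endpoints operated by independent providers.
const backends = [
  new JsonRpcProvider("https://rpc-primary.example.com"),
  new JsonRpcProvider("https://rpc-secondary.example.com"),
];

// Quorum of 1: any single healthy backend can answer read calls.
const provider = new FallbackProvider(
  backends.map((p, i) => ({ provider: p, priority: i + 1, weight: 1 })),
  undefined, // network: let ethers detect it
  { quorum: 1 }
);

async function main(): Promise<void> {
  const wei = await provider.getBalance(
    "0x0000000000000000000000000000000000000000"
  );
  console.log(`balance: ${formatEther(wei)} ETH`);
}

main().catch(console.error);
```

The design point is that redundancy only helps if the backends do not share the same cloud region, which is exactly the dependency the Oct. 20 outage exposed.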
Layer-2 teams typically publish post-mortems and service-level objective revisions within one to four weeks of major incidents, potentially elevating client diversity and multi-region deployment priorities in upcoming roadmaps.
What to watch
AWS will release a post-event summary detailing root causes and remediation steps for the US-EAST-1 disruption.
Base and Optimism teams should publish incident post-mortems addressing any sequencer or RPC impact specific to OP Stack chains.
RPC providers, including Infura, face pressure to commit publicly to multi-cloud architectures and geographic redundancy that can withstand single-provider failures.
Centralized exchanges that posted incidents during the AWS outage, including Coinbase, may experience spread widening and volume shifts to decentralized exchanges on less-affected chains during future cloud disruptions.
Monitoring exchange status pages and Downdetector curves during infrastructure events provides real-time signals for how centralized and decentralized trading venues diverge under stress.
The event confirms that blockchain’s decentralized consensus cannot fully insulate user experience from centralized infrastructure chokepoints.
RPC-layer concentration remains a practical weak point, where cloud provider failures translate into wallet display errors and transaction delays that undermine confidence in the reliability of Ethereum and layer-2 ecosystems.
