AWS US‑East‑1 Outage Knocks Out Fortnite, Alexa, Snapchat and More

When Amazon Web Services (a subsidiary of Amazon.com, Inc.) hit a major snag in its US‑East‑1 region in Northern Virginia, the ripple effect was swift and unmistakable. AWS confirmed the issue at 3:11 a.m. ET, with its status dashboard reporting “increased error rates and latencies” across the region. Within minutes, users from New York to New Delhi found their favorite apps flashing error messages; by 6:35 a.m. ET most services were back online. Aravind Srinivas, CEO of Perplexity AI, took to X (formerly Twitter) to explain, “Perplexity is down right now. The root cause is an AWS issue.”
What Happened and When
The outage began at 3:11 a.m. Eastern Time (7:11 a.m. GMT, 12:41 p.m. IST) on Monday, October 20, 2025. AWS’s status dashboard lit up with alerts about “increased error rates and latencies for multiple services in the US‑East‑1 Region.” At 3:51 a.m. ET the company posted an update promising a mitigation plan within 45 minutes. Recovery finally came at 6:35 a.m. ET, as reported by TechShotsApp, meaning the core disruption lasted three hours and 24 minutes.
How the Outage Rippled Across Popular Services
Because AWS powers roughly a third of the internet’s cloud workloads, and US‑East‑1 is its largest region, the fallout was unavoidable. Below is a snapshot of the biggest names affected that day:
- Fortnite – Players on the Epic Games Store saw “request could not be completed” errors, forcing many to abandon matches mid‑game.
- Alexa – Echo devices stopped responding, leaving scheduled alarms dead and smart‑home routines frozen.
- Snapchat – Users across Europe reported login failures and undeliverable messages.
- ChatGPT – OpenAI’s chatbot struggled to fetch data from its back‑end, resulting in timeout screens.
- Perplexity AI – The search‑assistant went dark for hours, as its founder confirmed on social media.
- Epic Games Store & Epic Online Services – Both platforms suffered authentication hiccups, halting purchases.
- Canva, Airtable, Duolingo, Goodreads, Roblox, Venmo, Ring devices, and the McDonald’s app – All reported intermittent outages or degraded performance.
Reddit threads lit up with screenshots of frozen screens, error codes, and frustrated emojis. The consensus? “It’s the cloud again,” one user wrote, echoing a sentiment that has been growing since the major AWS outages of 2020, 2021, and 2023.
Technical Roots: DynamoDB, EC2, and the US‑East‑1 Data Center
Behind the scenes, the glitch centered on two of AWS’s workhorses: DynamoDB, the NoSQL database service, and EC2, its elastic compute platform. When DynamoDB hit throttling limits, dependent services — from gaming matchmaking to chatbot NLP pipelines — started throwing 500‑level errors. Simultaneously, EC2 instances in the region reported elevated latency, further compounding the slowdown.
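To make that failure mode concrete, here is a minimal sketch of how an application might shield itself from the kind of DynamoDB throttling described above, retrying with exponential backoff instead of immediately surfacing a 500. The table name, key, and retry limits are illustrative assumptions, not details from AWS’s incident reports.

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Error codes DynamoDB returns when a table or account is being throttled.
RETRYABLE = {"ProvisionedThroughputExceededException", "ThrottlingException"}

def get_session(player_id, max_attempts=5):
    """Fetch a (hypothetical) player-session record, backing off when throttled."""
    for attempt in range(max_attempts):
        try:
            resp = dynamodb.get_item(
                TableName="player-sessions",              # hypothetical table name
                Key={"player_id": {"S": player_id}},
            )
            return resp.get("Item")
        except ClientError as err:
            if err.response["Error"]["Code"] not in RETRYABLE:
                raise                                     # non-throttling errors still bubble up
            # Exponential backoff with jitter: ~0.1s, 0.2s, 0.4s, ...
            time.sleep((2 ** attempt) * 0.1 + random.uniform(0, 0.1))
    return None  # caller decides whether to degrade gracefully or fail the request
```

In practice, boto3’s built‑in retry modes handle much of this automatically; the point is that every dependent service either absorbs the throttling this way or passes it upstream as exactly the 500‑level errors users saw.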
Industry analysts point out that the US‑East‑1 region, formally known as “AWS US East (N. Virginia),” is the oldest and largest data‑center cluster, launched in 2006. Its sheer size makes it a magnet for mission‑critical workloads, but that concentration also means a single fault can reverberate across the digital ecosystem.

Industry Reaction and Calls for Cloud Diversity
In the aftermath, cloud architects on LinkedIn and Twitter started waving the banner for multi‑cloud strategies. “Relying on a single region for 33 % of the internet is a structural risk,” said Nina Patel, a senior analyst at Gartner. “Diversifying across Azure, Google Cloud, and even smaller niche providers can hedge against exactly this kind of event.”
Meanwhile, Epic Games issued a brief statement acknowledging the outage but assuring players that “our team is working closely with AWS to improve resilience.” Snap Inc. promised to “review our redundancy architecture” and hinted at upcoming regional failovers.
What This Means for Users and Businesses Going Forward
For the average consumer, the outage was a temporary inconvenience – a missed alarm or a lost match. But for businesses, especially those with e‑commerce or SaaS models, even a few minutes of downtime can translate into thousands of dollars lost and brand trust eroded.
Enter the concept of “failure domain isolation.” By spreading workloads across multiple availability zones *and* multiple cloud providers, companies can limit the blast radius of any single outage. The trade‑off? Higher operational complexity and cost. Still, as one CTO from a mid‑size fintech startup told me, “You pay a premium for peace of mind, and after the October 2025 hit, that premium feels justified.”
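As a rough illustration of failure domain isolation at the application layer, the sketch below prefers the primary region but falls back to a replica elsewhere when calls fail. The regions, table name, and the assumption that the data is already replicated (for example via DynamoDB global tables) are illustrative, not a description of any company’s actual setup.

```python
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError, EndpointConnectionError

REGIONS = ["us-east-1", "us-west-2"]   # primary first, then the fallback replica

def read_with_failover(table, key):
    """Try each region in order; return the first successful read."""
    last_error = None
    for region in REGIONS:
        client = boto3.client(
            "dynamodb",
            region_name=region,
            config=Config(retries={"max_attempts": 2, "mode": "standard"}),
        )
        try:
            return client.get_item(TableName=table, Key=key).get("Item")
        except (ClientError, EndpointConnectionError) as err:
            last_error = err           # note the failure and move on to the next region
    raise RuntimeError(f"all regions failed: {last_error}")
```

That extra region is precisely the operational complexity and cost the CTO describes: replicated data, duplicated capacity, and one more place for configuration to drift.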
Looking ahead, AWS has pledged a post‑mortem report within the next 30 days and has hinted at “enhanced monitoring and automated failover mechanisms” for the US‑East‑1 region. Whether those measures will be enough to restore confidence remains to be seen.

Key Facts
- Date & Time: 20 Oct 2025, 3:11 a.m. ET (UTC‑4)
- Region Affected: US‑East‑1 (Northern Virginia)
- Primary Services Impacted: DynamoDB, EC2
- Duration: ~3 hours 24 minutes
- Estimated Internet Share Powered by AWS: ~33 %
Frequently Asked Questions
How did the outage affect gamers specifically?
Players of Fortnite experienced login crashes and matchmaking failures, meaning many matches never started. The issue stemmed from DynamoDB throttling, which the game uses for player session data. Most users were back online by mid‑morning, but some reported lingering latency for up to two hours after the main fix.
Why does the US‑East‑1 region cause such widespread problems?
US‑East‑1 is AWS’s oldest and largest region, hosting roughly a third of the internet’s cloud workloads. Its historic advantage in latency and connectivity has made it the default landing spot for many SaaS firms. When a core service like DynamoDB hiccups there, the failure cascades to any app that relies on it, regardless of where the end‑user is located.
What steps are companies taking to avoid future outages?
Many are accelerating multi‑cloud and multi‑region deployments, adding redundancy across Azure and Google Cloud. Others are tightening their monitoring of DynamoDB and EC2 health metrics, and some are re‑architecting critical services to run in containers that can be shifted quickly to another availability zone.
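As an example of the monitoring side of that work, the sketch below creates a CloudWatch alarm on a DynamoDB table’s read‑throttle events so an on‑call team is paged before users notice. The table name, SNS topic, and thresholds are placeholder assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="player-sessions-read-throttling",
    Namespace="AWS/DynamoDB",
    MetricName="ReadThrottleEvents",                 # throttled read requests per period
    Dimensions=[{"Name": "TableName", "Value": "player-sessions"}],
    Statistic="Sum",
    Period=60,                                       # evaluate one-minute windows
    EvaluationPeriods=3,                             # three consecutive breaches before alarming
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # hypothetical topic
)
```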
Did the outage impact Amazon’s own retail sites?
Amazon’s e‑commerce platform runs on a separate, highly‑redundant infrastructure and was not publicly reported as affected. However, internal memos hinted that engineers were on standby to reroute traffic if needed.
When can we expect a detailed post‑mortem from AWS?
AWS has said that a comprehensive analysis will be published within 30 days, outlining root causes and remedial actions for the US‑East‑1 region.