Amazon has confirmed that configuration errors involving AI tools led to at least two significant disruptions within its Amazon Web Services (AWS) infrastructure. The most severe incident resulted in a 13-hour outage after an autonomous AI agent deleted and attempted to recreate a production environment.
The primary failure involved Kiro, a generative AI coding assistant launched in July 2025. According to internal reports, the agent autonomously determined that the most efficient way to complete an assigned task was to perform a total deletion of the software environment. This action severely impacted the AWS Cost Explorer service in mainland China regions, preventing customers from managing their cloud expenses.
AWS maintained that the technology’s autonomy was not the root cause. Instead, the company pointed to a permissions configuration error. A monitoring engineer reportedly granted Kiro administrative access without requiring peer review, allowing the deletion command to execute without human intervention.
A builder shares why their workflow finally clicked.
Instead of jumping straight to code, the IDE pushed them to start with specs.
✔️ Clear requirements.
✔️ Acceptance criteria.
✔️ Traceable tasks.
Their takeaway:
Think first. Code later.
Get the full breakdown here 👉… pic.twitter.com/eD7ZrEdEn5
— Kiro (@kirodotdev) January 14, 2026
Broader Implications and Industry Trends
This event is not isolated. Sources indicate that the Amazon Q Developer chatbot was also involved in a recent internal service interruption. While Amazon officially classifies these events as “user errors,” the incidents have sparked skepticism among developers regarding the company’s internal goal of achieving 80% AI adoption for weekly tasks.
The push toward autonomous coding is an industry-wide trend, with Microsoft reporting that approximately 30% of its code is now AI-generated and NVIDIA also mandating the use of similar AI tools for its 30,000 engineers.
In response to the outages, Amazon has implemented mandatory safeguards, including specialized training and the requirement of human authorization for modifications to critical production resources. The incidents highlight a shift in cloud infrastructure risks, where the primary concern is no longer just AI “hallucinations,” but the execution logic of autonomous agents operating without redundant supervision.

