A hot potato: Earlier this month, a hacker compromised Amazon Q, Amazon's generative AI coding assistant, which is widely used through its Visual Studio Code extension. The breach was not just a technical slip; it exposed critical flaws in how AI tools are integrated into software development pipelines. It is a moment of reckoning for the developer community, and one Amazon cannot afford to ignore.
The attacker was able to inject unauthorized code into the assistant's open-source GitHub repository. The code contained instructions that, if successfully triggered, could have deleted user files and wiped cloud resources linked to Amazon Web Services accounts.
The breach was carried out through a seemingly routine pull request. Once it was accepted, the hacker inserted a prompt instructing the AI agent to "clean a system to a near-factory state" and delete file-system and cloud resources.
The malicious change shipped in version 1.84.0 of the Amazon Q extension, which was publicly distributed to nearly a million users on July 17. Amazon initially failed to detect the breach and only pulled the compromised version from circulation later. At the time, the company made no public announcement, a decision that drew criticism from security experts and developers concerned about transparency.
"This isn't 'move fast and break things'; it's 'move fast and let strangers write your roadmap,'" said Corey Quinn, chief cloud economist at The Duckbill Group, on Bluesky.
Among the critics was the hacker responsible for the breach himself, who openly mocked Amazon's security practices.
He described his actions as a deliberate demonstration of Amazon's inadequate safeguards. In comments to 404 Media, the hacker characterized Amazon's AI security measures as "security theater," implying that the defenses were more performance than protection.
Steven Vaughan-Nichols of ZDNet argued that the breach was less an indictment of open source itself and more a reflection of how Amazon managed its open-source workflows. Simply opening a codebase does not guarantee security; what matters is how an organization handles access control, code provenance, and verification. The malicious code made it into an official release because Amazon's review processes failed to catch the unauthorized pull request, Vaughan-Nichols wrote.
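None of the safeguards Vaughan-Nichols points to are exotic. As a purely illustrative sketch (this is not Amazon's actual tooling, and the pattern list and function name are hypothetical), a repository could run a trivial pre-merge check that flags pull-request diffs containing destructive command patterns or wipe-style agent instructions:

```python
import re

# Hypothetical patterns a pre-merge hook might flag; a real policy
# would be far broader and tuned to the repository in question.
SUSPICIOUS_PATTERNS = [
    r"rm\s+-rf\s+/",            # recursive delete from the filesystem root
    r"aws\s+\w+\s+delete",      # AWS CLI delete operations
    r"near-factory\s+state",    # wipe-style prompt language
    r"delete\s+file-system",    # destructive instructions aimed at an agent
]

def flag_suspicious_lines(diff_text: str) -> list[str]:
    """Return the added diff lines that match any suspicious pattern."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):  # only inspect lines the PR adds
            continue
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, line, re.IGNORECASE):
                hits.append(line)
                break
    return hits
```

A scanner like this is only a speed bump; the more important controls are restricting who can approve merges and requiring human review of any change that touches prompts shipped to an AI agent.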

According to the hacker, the code, though designed to wipe systems, was deliberately rendered non-functional, serving as a warning rather than an actual threat. His stated goal was to push Amazon to publicly acknowledge the vulnerability and improve its security posture, not to cause real damage to users or infrastructure.
An investigation by Amazon's security team concluded that the code would not have executed as intended due to a technical error. Amazon responded by revoking compromised credentials, removing the unauthorized code, and releasing a new, clean version of the extension. In a written statement, the company emphasized that security is its top priority and confirmed that no customer resources were affected. Users were advised to update the extension to version 1.85.0 or later.
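Checking whether an installed copy of the extension predates the fixed release comes down to a numeric version comparison (plain string comparison fails, since "1.9.0" sorts after "1.85.0" lexically). A minimal sketch, using the 1.85.0 cut-off from Amazon's advisory; the helper names are mine:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version like '1.84.0' into (1, 84, 0) for comparison."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed: str = "1.85.0") -> bool:
    """True if the installed version predates the first clean release."""
    return parse_version(installed) < parse_version(fixed)
```

For example, `is_vulnerable("1.84.0")` returns `True`, while `is_vulnerable("1.85.0")` and any later release return `False`.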
Nevertheless, the incident has been seen as a wake-up call about the risks of integrating AI agents into development workflows and the need for robust code review and repository management practices. Until those practices mature, blindly embedding AI tools in software development pipelines can expose users to considerable risk.