Select Language

English

Down Icon

Select Country

America

Down Icon

Hacker Added Prompt to Amazon Q to Erase Files and Cloud Data

Hacker Added Prompt to Amazon Q to Erase Files and Cloud Data

A security vulnerability recently surfaced involving Amazon’s AI coding assistant, ‘Q’, integrated with VS Code. The incident, reported by 404 Media, revealed a lapse in Amazon’s security protocols, allowing a hacker to insert malicious commands into a publicly released update.

The hacker, using a temporary GitHub account, managed to submit a pull request that granted them administrative access. Within this unauthorised update, destructive instructions were embedded, directing the AI assistant to potentially delete user files and wipe clean Amazon Web Services (AWS) environments.

Despite the severe nature of these commands, which were also intended to log the actions in a file named /tmp/CLEANER.LOG, Amazon reportedly merged and released the compromised version without detection.

The company later removed the flawed update from its records without any public announcement, raising questions about transparency. Corey Quinn, Chief Cloud Economist at The Duckbill Group, expressed scepticism regarding Amazon’s “security is our top priority” statement in light of this event.

“If this is what it looks like when security is the top priority, I can’t wait to see what happens when it’s ranked second,” Quinn wrote in his post on LinkedIn.

The core of the issue lies in how the hacker manipulated an open-source pull request. By doing so, they managed to inject commands into Amazon’s Q coding assistant. While these instructions were unlikely to auto-execute without direct user interaction, the incident critically exposed how AI agents can become silent carriers for system-level attacks.

It highlighted a gap in the verification process for code integrated into production systems, especially for AI-driven tools. The malicious code aimed to exploit the AI’s capabilities to perform destructive actions on a user’s system and cloud resources.

Yesterday’s incident with Amazon Q was a wake-up call about how AI agents can be attacked.PromptKit is trying to solve similar problems and help prevent them from happening again.

read full post👇https://t.co/atOWfilWFq

— Mr. Ånand (Studio1HQ) (@Astrodevil_) July 24, 2025

In response to such vulnerabilities, Jozu has released a new tool called “PromptKit.” This system, accessible via a single command, offers a local reverse proxy to record OpenAI-compatible traffic and provides a command-line interface (CLI) and text-based user interface (TUI) for exploring, tagging, comparing, and publishing prompts.

Jozu announced on X.com that PromptKit is a local-first, open-source tool aiming to provide auditable and production-safe prompt management, addressing a systemic risk as reliance on generative AI grows.

Today, we’re releasing a first version of Jozu PromptKit→ a local-first tool for capturing, reviewing, and managing LLM prompt interactions.It ensures policy-controlled workflows for verified, auditable prompt artifacts in production

Free to try and open source. pic.twitter.com/0Up4mc1Vy9

— Jozu (@Jozu_AI) July 24, 2025

Görkem Ercan, CTO of Jozu, told Hackread.com that PromptKit is designed to bridge the gap between prompt experimentation and deployment. It establishes a policy-controlled workflow, ensuring that only verified and audited prompt artefacts, unlike the raw, unverified text that impacted AWS, reach production.

Ercan further emphasised that this tool would have replaced the failed human verification process with a strict, policy- and signing-based workflow, effectively catching the malicious intent before it went live.

HackRead

HackRead

Similar News

All News
Animated ArrowAnimated ArrowAnimated Arrow