Eight minutes. That is how long it took an attacker, assisted by large language models, to go from stolen credentials found in a public S3 bucket to full administrative access across an AWS environment. Sysdig’s Threat Research Team documented the intrusion on November 28, 2025, and their findings, published in February 2026, provide one of the most detailed real-world case studies of AI-accelerated cloud attacks to date. The attacker compromised 19 distinct AWS principals, abused Amazon Bedrock to invoke six different LLM families, injected malicious code into Lambda functions, and attempted to spin up GPU instances for crypto-mining workloads.
This is not a red team exercise or a proof-of-concept from a research lab. It happened in a live production environment.
Phase 1: Exposed Credentials and Immediate LLMjacking
The attack started with something depressingly familiar: credentials left in a public S3 bucket. The bucket contained Retrieval-Augmented Generation (RAG) data for AI models, and buried alongside that data were IAM user credentials with read/write permissions on AWS Lambda and restricted access to Amazon Bedrock.
Within minutes of gaining access, the attacker pivoted to LLMjacking, a technique Sysdig first documented in mid-2024 where compromised cloud credentials are used to access cloud-hosted LLMs. The attacker invoked models from six different families through Bedrock: Anthropic Claude, Meta Llama, DeepSeek, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed. The cost to the victim from this phase alone can exceed $46,000 per day when attackers max out quota limits across multiple regions.
The LLMjacking served a dual purpose. The attacker used the models to generate malicious code for subsequent phases of the attack, while simultaneously burning through the victim’s Bedrock quota, a resource that translates directly into money.
How LLMjacking Differs from Traditional Credential Abuse
Traditional credential theft gives you access to infrastructure. LLMjacking gives you access to intelligence. The attacker does not just get compute; they get an on-demand code generator, a reconnaissance assistant, and a decision-support system. The models handle error analysis, generate exploitation scripts, and adapt to unexpected API responses faster than any human operator could. Sysdig noted that the attacker used an evasion technique of setting max_tokens_to_sample to -1, which triggers a validation exception rather than an access denial, confirming credential validity without tripping security alerts.
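That probe leaves a distinctive fingerprint in CloudTrail: an InvokeModel call that fails with a validation error instead of an access denial. A minimal detection sketch, assuming simplified event dictionaries rather than the full CloudTrail record schema:

```python
# Hedged sketch: flag likely LLMjacking credential probes in CloudTrail-style
# events. The event shapes below are simplified assumptions, not the full
# CloudTrail record format.

def flag_bedrock_probes(events):
    """Return events matching the max_tokens_to_sample=-1 probe pattern:
    an InvokeModel call that fails with ValidationException rather than
    AccessDeniedException, confirming the credentials work without
    triggering an access-denied alert."""
    return [
        e for e in events
        if e.get("eventSource") == "bedrock.amazonaws.com"
        and e.get("eventName") == "InvokeModel"
        and e.get("errorCode") == "ValidationException"
    ]

# Mock events: one probe, one successful (non-erroring) invocation
events = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "errorCode": "ValidationException"},
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel"},
]
print(len(flag_bedrock_probes(events)))  # → 1
```

A real deployment would run this logic against Bedrock invocation logs or a CloudTrail event stream, alerting on any principal that produces repeated ValidationException results on InvokeModel.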
Phase 2: Lambda Code Injection and Privilege Escalation
With credentials validated and LLM access established, the attacker moved to privilege escalation. The target: an existing Lambda function called EC2-init.
The compromised IAM user had UpdateFunctionCode and UpdateFunctionConfiguration permissions on Lambda. The attacker replaced the function’s code with an LLM-generated payload that performed three operations:
- IAM user enumeration with access key extraction across the entire account
- S3 bucket listing and content retrieval targeting secrets, configuration files, and additional credentials
- Cross-account role assumption, attempting to assume OrganizationAccountAccessRole across connected AWS accounts
The injected code carried several hallmarks of LLM generation. Comments were written in Serbian. AWS account IDs in the code were hallucinated, following ascending/descending digit patterns (123456789012, 210987654321) rather than real account numbers. References pointed to non-existent GitHub repositories for ML training scripts. The code also exhibited what Sysdig described as “comprehensive exception handling,” a level of error management that is unusual for human-written exploitation code but typical of LLM output.
The attacker increased the Lambda function’s execution timeout from 3 to 30 seconds to give the payload enough time to complete enumeration.
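The injection itself required only two API calls. A hedged sketch of the request parameters, following boto3's Lambda client parameter shapes (the function name is from the incident; the payload bytes are a placeholder):

```python
# Hedged sketch of the two Lambda API calls behind the injection.
# Parameter shapes follow boto3's lambda client; the ZIP payload is a
# placeholder standing in for the LLM-generated enumeration code.

def build_injection_calls(function_name, zip_bytes):
    """Return request parameters for UpdateFunctionCode and
    UpdateFunctionConfiguration as used in this attack pattern."""
    update_code = {
        "FunctionName": function_name,
        "ZipFile": zip_bytes,   # malicious payload replacing the real code
    }
    update_config = {
        "FunctionName": function_name,
        "Timeout": 30,          # raised from 3s so enumeration can finish
    }
    return update_code, update_config

code_req, config_req = build_injection_calls("EC2-init", b"<payload>")
print(config_req["Timeout"])  # → 30
```

Both calls appear in CloudTrail as UpdateFunctionCode and UpdateFunctionConfiguration events, which is why monitoring those two event names (covered under the defensive recommendations later) catches this technique.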
The Speed Advantage
This phase is where the AI acceleration becomes most visible. A human attacker would need to manually enumerate IAM permissions, identify exploitable configurations, write the injection payload, test it, and iterate on failures. The LLM compressed that workflow into a single code generation step. CSO Online reported that the entire privilege escalation phase, from Lambda injection to admin access, completed within the 8-minute window that gives this attack its name.
Phase 3: Lateral Movement Across 19 AWS Principals
The Lambda payload succeeded. The attacker compromised 19 distinct AWS principals: 14 sessions spanning 6 different IAM roles, plus 5 IAM users. They created a new admin-level user account named “frick” as a persistence mechanism.
The data exfiltration targets reveal a sophisticated operator who understood cloud architecture:
- AWS Secrets Manager credentials for downstream service access
- EC2 Systems Manager parameters containing database connection strings and API keys
- CloudWatch logs for reconnaissance on what the environment actually runs
- Lambda function source code to identify additional attack surfaces
- CloudTrail events to understand what logging and detection capabilities were active
That last target, CloudTrail, is particularly telling. The attacker checked detection capabilities before deciding how aggressively to proceed. This is not spray-and-pray behavior. It is methodical, and the LLM-generated code handled the decision logic.
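Persistence accounts like “frick” can be caught by correlating user creation with privilege grants in CloudTrail. A minimal sketch, assuming the admin grant arrives via AttachUserPolicy with the AWS-managed AdministratorAccess policy (the incident report does not specify the exact grant mechanism):

```python
# Hedged sketch: correlate CreateUser with an admin policy attachment in
# CloudTrail-style events to catch persistence accounts. The assumption
# that privileges arrive via AttachUserPolicy + AdministratorAccess is
# illustrative; grants can also come through groups or inline policies.

ADMIN_POLICY = "arn:aws:iam::aws:policy/AdministratorAccess"

def find_new_admin_users(events):
    created = {e["requestParameters"]["userName"]
               for e in events if e.get("eventName") == "CreateUser"}
    flagged = set()
    for e in events:
        params = e.get("requestParameters", {})
        if (e.get("eventName") == "AttachUserPolicy"
                and params.get("policyArn") == ADMIN_POLICY
                and params.get("userName") in created):
            flagged.add(params["userName"])
    return flagged

events = [
    {"eventName": "CreateUser", "requestParameters": {"userName": "frick"}},
    {"eventName": "AttachUserPolicy",
     "requestParameters": {"userName": "frick", "policyArn": ADMIN_POLICY}},
]
print(find_new_admin_users(events))  # → {'frick'}
```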
Phase 4: GPU Hijacking and the JupyterLab Backdoor
After establishing admin access, the attacker shifted to monetization. They queried EC2 for Amazon Machine Images suitable for deep learning applications and attempted to launch high-end GPU instances. The Register reported that the attacker prepared SSH keys and security groups, then launched a costly GPU instance with scripts to install CUDA, deploy training frameworks, and expose a public JupyterLab interface on port 8888.
The JupyterLab server is the detail that security teams should pay attention to. As Sysdig noted, it functions as “a backdoor to the instance that doesn’t require AWS credentials.” Anyone who discovers the URL can execute arbitrary code on the GPU instance without authenticating to AWS at all. The instance was terminated after 5 minutes, likely because AWS quota limits kicked in or automated detection fired.
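Exposure of that backdoor is easy to audit for. A sketch of a security-group check, assuming rule dictionaries shaped like the IpPermissions entries returned by EC2's DescribeSecurityGroups:

```python
# Hedged sketch: flag security-group rules that expose JupyterLab's default
# port 8888 to the internet, the credential-free backdoor described above.
# Rule dicts mirror the IpPermissions shape from EC2 DescribeSecurityGroups.

def exposes_jupyter(ip_permissions, port=8888):
    """True if any ingress rule opens the given port to 0.0.0.0/0."""
    for rule in ip_permissions:
        low = rule.get("FromPort", 0)
        high = rule.get("ToPort", 65535)
        if low <= port <= high:
            for cidr in rule.get("IpRanges", []):
                if cidr.get("CidrIp") == "0.0.0.0/0":
                    return True
    return False

rules = [{"IpProtocol": "tcp", "FromPort": 8888, "ToPort": 8888,
          "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]
print(exposes_jupyter(rules))  # → True
```

In practice this check would run over every security group in the account, and AWS Config or GuardDuty can provide equivalent managed detections.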
This monetization phase is the endgame that distinguishes modern cloud attacks from traditional intrusions. The attacker is not after data (though they took plenty). They want compute resources, specifically GPU time, to run ML training workloads, generate content, or resell access to other operators.
What Defenders Need to Change
The Sysdig attack chain exposes five specific failures that most AWS environments share.
1. Credentials in Public S3 Buckets
This should not still be happening in 2026, but it is. AWS’s own data shows that public bucket misconfigurations remain one of the top initial access vectors. Every S3 bucket containing credentials, RAG data, or configuration files must have public access blocked at the account level using S3 Block Public Access settings.
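The account-level lockdown is a single configuration. A sketch of the four S3 Block Public Access flags (these are the real setting names; the account ID in the commented call is a placeholder):

```python
# Hedged sketch: the account-level S3 Block Public Access configuration.
# The four flags are the actual S3 setting names; "111122223333" below is
# a placeholder account ID.

BLOCK_ALL = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # neutralize existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # cut off access via existing public policies
}

# With boto3 this would be applied at the account level via:
# boto3.client("s3control").put_public_access_block(
#     AccountId="111122223333",
#     PublicAccessBlockConfiguration=BLOCK_ALL)
print(all(BLOCK_ALL.values()))  # → True
```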
2. Lambda Permissions Are Too Broad
UpdateFunctionCode and UpdateFunctionConfiguration on Lambda are effectively code execution permissions. If an IAM user has both of these plus PassRole, they can inject arbitrary code that runs with the Lambda function’s execution role. Restrict these permissions to specific functions, not wildcards, and monitor UpdateFunctionCode events in CloudTrail.
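Scoping those permissions looks like the following IAM policy fragment. This is a hedged sketch: the region, account ID, and function-name prefix are placeholders to be replaced with your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "ScopedLambdaUpdates",
    "Effect": "Allow",
    "Action": [
      "lambda:UpdateFunctionCode",
      "lambda:UpdateFunctionConfiguration"
    ],
    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:deploy-pipeline-*"
  }]
}
```

The key point is the Resource element: a specific function ARN (or narrow prefix) instead of `*`, so a compromised principal cannot rewrite arbitrary functions like EC2-init.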
3. Amazon Bedrock Logging Is Off by Default
Model invocation logging for Bedrock is not enabled by default. Without it, LLMjacking produces no audit trail. Enable model invocation logging to CloudWatch and S3, and create Service Control Policies (SCPs) that restrict Bedrock access to only the specific models your applications actually use.
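A deny-by-default SCP for Bedrock can be sketched as follows. The model ARN is an illustrative placeholder; substitute the foundation models your applications actually invoke, and note that streaming invocations need to be denied alongside the synchronous action.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyUnapprovedBedrockModels",
    "Effect": "Deny",
    "Action": [
      "bedrock:InvokeModel",
      "bedrock:InvokeModelWithResponseStream"
    ],
    "NotResource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-5-sonnet-*"
  }]
}
```

Attached at the organization or OU level, this would have blocked five of the six model families the attacker invoked in this incident.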
4. Cross-Account Role Assumptions Are Not Monitored
The attacker attempted to assume OrganizationAccountAccessRole across connected accounts. This role exists by default in AWS Organizations member accounts and grants admin access. Monitor AssumeRole calls targeting this role, and restrict which principals can assume it using trust policy conditions.
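A hardened trust policy for that role can be sketched as below. The management-account role ARN is a placeholder, and the MFA condition is one illustrative restriction; conditions on source identity or network origin work similarly.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::111122223333:role/OrgAdminBreakGlass"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "Bool": {"aws:MultiFactorAuthPresent": "true"}
    }
  }]
}
```

Replacing the default trust relationship (which trusts the entire management account root) with a specific principal plus conditions means stolen member-account credentials cannot pivot through this role.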
5. Detection Response Time Is Measured in Hours, Attacks in Minutes
The structural problem: your SOC operates on human timescales. This attack operated on machine timescales. Darktrace’s 2026 survey found that only 14% of organizations allow AI defenses to take autonomous remediation actions. When an attack completes in 8 minutes, a 30-minute mean time to respond is too slow by nearly a factor of four.
The Bigger Picture: AI Compression of the Kill Chain
The Sysdig attack is not an isolated incident. It is a data point in a trend. Palo Alto Networks’ Unit 42 demonstrated AI agents compressing a full ransomware campaign to 25 minutes. Anthropic disclosed that Chinese state-sponsored group GTG-1002 used Claude to run 80-90% of an espionage campaign across 30 organizations.
What makes the Sysdig case distinctive is the specificity. It is not a controlled test. The attack happened in a real environment, against real infrastructure, with real consequences. The LLM-generated code with Serbian comments and hallucinated account IDs provides forensic evidence of AI involvement that is harder to deny than theoretical capabilities.
For cloud security teams, the takeaway is concrete: your detection and response workflows were designed for human-speed attacks. They need to be redesigned for machine-speed attacks. That means automated credential rotation, real-time anomaly detection on Lambda code changes, Bedrock invocation monitoring, and, most critically, giving your AI-powered defenses the authority to act without waiting for human approval.
The 8-minute clock is ticking.
Frequently Asked Questions
How did attackers breach an AWS environment in 8 minutes?
Attackers used credentials found in a public S3 bucket to access an AWS environment. They leveraged large language models to automate reconnaissance, generate malicious Lambda function code for privilege escalation, and move laterally across 19 AWS principals. The entire chain from initial access to admin privileges took under 10 minutes, with the critical escalation phase completing in 8 minutes.
What is LLMjacking?
LLMjacking is a technique where attackers use stolen cloud credentials to access cloud-hosted large language models like Amazon Bedrock. The attacker uses the compromised account’s LLM quota to generate malicious code, run reconnaissance, and make real-time decisions during an attack. The cost to the victim can exceed $46,000 per day when attackers maximize usage across regions.
How can organizations detect LLMjacking on AWS?
Enable model invocation logging for Amazon Bedrock, which is off by default. Monitor for unusual Bedrock API calls, especially from IAM principals that do not normally invoke models. Create Service Control Policies (SCPs) restricting Bedrock access to only approved models. Watch for the evasion technique of setting max_tokens_to_sample to -1, which triggers validation exceptions instead of access denials.
What AWS permissions enabled the Lambda code injection attack?
The attacker exploited UpdateFunctionCode and UpdateFunctionConfiguration permissions on AWS Lambda. These permissions allowed them to replace the code of an existing Lambda function (EC2-init) with a malicious payload and increase its execution timeout from 3 to 30 seconds. Combined with PassRole permissions, this effectively gave the attacker code execution capabilities under the Lambda function’s IAM role.
What are the signs that an attack used AI-generated code?
In the Sysdig case, indicators included code comments written in Serbian (the attacker’s apparent language), hallucinated AWS account IDs following ascending/descending digit patterns rather than real numbers, references to non-existent GitHub repositories, and unusually comprehensive exception handling that is typical of LLM output but uncommon in human-written exploitation code.
