Generate Malicious Commands (AML.T0102)

Tactics
AI Attack Staging
Maturity
realized
Reference
atlas.mitre.org/techniques/AML.T0102

Description

Adversaries may use large language models (LLMs) to dynamically generate malicious commands from natural language. Dynamically generated commands may be harder to detect because the attack signature is constantly changing. AI-generated commands may also allow adversaries to adapt more rapidly to different environments and adjust their tactics.

Adversaries may use LLMs present in the victim's environment or call out to externally hosted services. APT28 used a model hosted on Hugging Face in a campaign with its LAMEHUG malware [1]. In either case, prompts to generate malicious commands can blend in with normal traffic.
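To illustrate the pattern described above, the sketch below builds the kind of request body an implant might send to a hosted inference API, asking the model to translate a natural-language task into a shell command at runtime. This is a minimal, benign sketch: the endpoint URL, model name, and prompt wording are illustrative assumptions, not the actual values used by LAMEHUG or any real service.

```python
import json

# Hypothetical inference endpoint (assumption for illustration only).
API_URL = "https://api.example-inference.host/v1/chat/completions"


def build_command_request(task: str, model: str = "example-coder-model") -> str:
    """Build the JSON body an implant might POST to an LLM inference API.

    The malware ships only a natural-language task description; the
    concrete command text is produced server-side on each request.
    """
    payload = {
        "model": model,
        "messages": [
            # System prompt constrains the model to emit a bare command.
            {"role": "system",
             "content": "Respond with a single shell command only."},
            # The operator-supplied task in plain natural language.
            {"role": "user", "content": task},
        ],
    }
    return json.dumps(payload)


# Example: the task string, not any command, is what appears in the binary.
body = build_command_request("collect basic host and user information")
```

Because the command text is generated per request rather than embedded in the binary, the bytes on disk and on the wire differ between victims and over time, which is what undermines static signatures as the description notes.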

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the AI Attack Staging tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Related techniques

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses