Machine Compromise (AML.T0112)

Tactic: Impact
Maturity: Demonstrated
Reference: atlas.mitre.org/techniques/AML.T0112

Description

Adversaries may compromise a machine by exploiting or manipulating AI-enabled components on the system. Compromising a victim system allows the adversary to execute arbitrary code, steal credentials, exfiltrate data, and persist on the system.

Adversaries may target a Local AI Agent, which, if compromised, grants them the capabilities and permissions of the agent, or AI Artifacts, which can contain embedded malware.
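To make the AI Artifact vector concrete, here is a minimal, harmless sketch (not taken from ATLAS) of why serialized model files are dangerous: Python's pickle format, still common for model checkpoints, executes code during deserialization. A real payload would run arbitrary commands; this illustration only returns a marker string.

```python
import pickle

# Illustrative only: pickle-based "model files" run code on load,
# which is how an AI artifact can carry embedded malware.
class MaliciousArtifact:
    def __reduce__(self):
        # A real attacker would return (os.system, ("<command>",));
        # here we return a harmless callable instead.
        return (str.upper, ("payload executed on load",))

blob = pickle.dumps(MaliciousArtifact())  # the "model file" the adversary ships
result = pickle.loads(blob)               # merely loading the file runs the payload
print(result)  # → PAYLOAD EXECUTED ON LOAD
```

This is why loading untrusted checkpoints is equivalent to running untrusted code, and why safer formats (e.g., safetensors) or strict artifact provenance checks matter.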


How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Impact tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →


Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses