- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0112.000
Description
Adversaries may achieve full system compromise by abusing AI agents running locally on a host, such as computer-use agents or AI-driven browsers. These agents are designed to autonomously interact with the operating system, applications, and external services, often with broad permissions to execute commands, access files, manage credentials, and control user workflows.
If an adversary is able to take control of an AI agent’s behavior, they effectively gain the same level of access as the agent. This can result in complete control over the machine, including executing arbitrary code, accessing or exfiltrating sensitive data, modifying system configurations, and establishing persistence.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the relevant tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.