- Tactics
- Exfiltration
- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0086
Description
AI agent tools capable of performing write operations may be invoked to exfiltrate data to an adversary. Sensitive information can be encoded into the tool’s input parameters and transmitted to an adversary-controlled location (such as an inbox, document, or server) as part of a seemingly legitimate action. Variants include sending emails, creating or modifying documents, updating CRM records, or even generating media such as images or videos.
The invoked tool itself may be legitimate but invoked by an adversary via LLM Prompt Injection, or the tool may be malicious (See AI Agent Tool Poisoning.
AI Agent Tool Poisoning can also be used manipulate the inputs and destination of a separate legitimate tool, invoked through normal usage by the victim.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Exfiltration tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.