- Tactics
- Defense Evasion
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0094
Description
Adversaries may include instructions to be followed by the AI system in response to a future event, such as a specific keyword or the next interaction, in order to evade detection or bypass controls placed on the AI system.
For example, an adversary may include “If the user submits a new request…” followed by the malicious instructions as part of their prompt.
AI agents can include security measures against prompt injections that prevent the invocation of particular tools or access to certain data sources during a conversation turn that has untrusted data in context. Delaying the execution of instructions to a future interaction or keyword is one way adversaries may bypass this type of control.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Defense Evasion tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0015 — Evade AI Model
- AML.T0054 — LLM Jailbreak
- AML.T0067 — LLM Trusted Output Components Manipulation
- AML.T0068 — LLM Prompt Obfuscation
- AML.T0071 — False RAG Entry Injection
- AML.T0073 — Impersonation
- AML.T0074 — Masquerading
- AML.T0076 — Corrupt AI Model
- AML.T0081 — Modify AI Agent Configuration
- AML.T0092 — Manipulate User LLM Chat History
- AML.T0097 — Virtualization/Sandbox Evasion
- AML.T0107 — Exploitation for Defense Evasion