- Tactics: Defense Evasion
- Maturity: Realized
- Reference: atlas.mitre.org/techniques/AML.T0073
Description
Adversaries may impersonate a trusted person or organization in order to persuade and trick a target into performing some action on their behalf. For example, adversaries may communicate with victims (via Phishing or Spearphishing via Social Engineering LLM) while impersonating a known sender such as an executive, colleague, or third-party vendor. Established trust can then be leveraged to accomplish an adversary's ultimate goals, possibly against multiple victims.
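As a defensive illustration of this behavior, the minimal sketch below flags inbound messages whose display name matches a known internal person while the sending address comes from an untrusted domain, a common sign of sender impersonation. The executive names, trusted domains, and message fields are hypothetical placeholders, not part of the ATLAS technique definition.

```python
# Minimal illustrative sketch: flag possible sender impersonation by comparing
# a message's display name against known internal names and checking the
# sender's domain. Names, domains, and message structure are hypothetical.

KNOWN_EXECUTIVES = {"pat jones", "alex kim"}          # assumed internal directory
TRUSTED_DOMAINS = {"example.com", "vendor.example"}   # assumed allowlisted domains

def looks_like_impersonation(display_name: str, sender_address: str) -> bool:
    """Return True when the display name mimics a known person but the
    address comes from a domain outside the allowlist."""
    name = display_name.strip().lower()
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return name in KNOWN_EXECUTIVES and domain not in TRUSTED_DOMAINS

messages = [
    {"display_name": "Pat Jones", "from": "pat.jones@example.com"},       # legitimate
    {"display_name": "Pat Jones", "from": "pat.jones@examp1e-mail.net"},  # suspicious
]

for msg in messages:
    if looks_like_impersonation(msg["display_name"], msg["from"]):
        print(f"Possible impersonation: {msg['display_name']} <{msg['from']}>")
```

A display-name check of this kind is only one heuristic; in practice it would sit alongside domain authentication controls such as SPF, DKIM, and DMARC.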
Adversaries may target resources that are part of the AI DevOps lifecycle, such as model repositories, container registries, and software registries.
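Because an adversary may publish look-alike artifacts to these registries while impersonating a trusted publisher, one common mitigation is to verify a downloaded artifact against a pinned digest before loading it. The sketch below is a minimal example under assumed conditions: the artifact path and the pinned SHA-256 value are hypothetical, and this is not an ATLAS-prescribed control.

```python
# Minimal sketch: verify a downloaded model artifact against a pinned SHA-256
# digest before use, so an artifact swapped in by an impersonated publisher
# fails the check. The file path and digest below are hypothetical.
import hashlib
from pathlib import Path

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("model.bin")  # hypothetical artifact pulled from a registry
if artifact.exists():
    if sha256_of(artifact) != PINNED_SHA256:
        raise RuntimeError("Artifact digest does not match the pinned value; refusing to load it.")
    print("Artifact digest verified.")
```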
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Defense Evasion tactic under which this technique falls. Our practitioner-led training, taught by Charles Givre and other field-tested SMEs, focuses on real adversarial scenarios rather than slide decks.
Related techniques
- AML.T0015 — Evade AI Model
- AML.T0054 — LLM Jailbreak
- AML.T0067 — LLM Trusted Output Components Manipulation
- AML.T0068 — LLM Prompt Obfuscation
- AML.T0071 — False RAG Entry Injection
- AML.T0074 — Masquerading
- AML.T0076 — Corrupt AI Model
- AML.T0081 — Modify AI Agent Configuration
- AML.T0092 — Manipulate User LLM Chat History
- AML.T0094 — Delay Execution of LLM Instructions
- AML.T0097 — Virtualization/Sandbox Evasion
- AML.T0107 — Exploitation for Defense Evasion