- Tactics: Defense Evasion
- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0074
Description
Adversaries may manipulate features of their artifacts to make them appear legitimate or benign to users and/or security tools. Masquerading occurs when the name or location of an object, legitimate or malicious, is manipulated or abused to evade defenses and observation. This may include manipulating file metadata, tricking users into misidentifying the file type, and assigning legitimate-sounding names to malicious tasks or services.
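One common masquerading giveaway is a mismatch between what a file's extension claims and what its contents actually are. The sketch below is a minimal, illustrative defensive check along those lines; the signature table and file names are assumptions for the example, not part of the ATLAS technique description.

```python
import os

# Illustrative magic-byte signatures for a few common formats.
# Real tooling would use a fuller database (e.g. libmagic).
MAGIC_SIGNATURES = {
    ".zip": b"PK\x03\x04",   # archives, including many model bundles
    ".gguf": b"GGUF",        # llama.cpp model format
    ".png": b"\x89PNG",
    ".pdf": b"%PDF",
}

def looks_masqueraded(path: str, header: bytes) -> bool:
    """Return True if the file's leading bytes contradict its extension."""
    ext = os.path.splitext(path)[1].lower()
    expected = MAGIC_SIGNATURES.get(ext)
    if expected is None:
        return False  # unknown extension: nothing to compare against
    return not header.startswith(expected)

# A pickle payload renamed to look like an image would be flagged:
print(looks_masqueraded("avatar.png", b"\x80\x04\x95"))   # True
print(looks_masqueraded("report.pdf", b"%PDF-1.7"))       # False
```

A check like this catches only the crudest renames; manipulated metadata or legitimate-looking task names require behavioral detection rather than content inspection.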
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Defense Evasion tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested subject-matter experts, and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0015 — Evade AI Model
- AML.T0054 — LLM Jailbreak
- AML.T0067 — LLM Trusted Output Components Manipulation
- AML.T0068 — LLM Prompt Obfuscation
- AML.T0071 — False RAG Entry Injection
- AML.T0073 — Impersonation
- AML.T0076 — Corrupt AI Model
- AML.T0081 — Modify AI Agent Configuration
- AML.T0092 — Manipulate User LLM Chat History
- AML.T0094 — Delay Execution of LLM Instructions
- AML.T0097 — Virtualization/Sandbox Evasion
- AML.T0107 — Exploitation for Defense Evasion