- Tactics: Initial Access, Defense Evasion, Impact
- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0015
Description
Adversaries can Craft Adversarial Data that prevents an AI model from correctly identifying the contents of the data, or Generate Deepfakes that fool an AI model expecting authentic data.
This technique can be used to evade a downstream task in which AI is utilized. The adversary may evade AI-based virus/malware detection or network scanning in furtherance of a traditional cyber attack. AI model evasion through deepfake generation may also provide initial access to systems that use AI-based biometric authentication.
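To make the evasion idea concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion against a toy linear detector. The weights, input, and step size are hypothetical, chosen purely for illustration; real attacks target far more complex models, but the principle is the same: perturb the input in the direction that most reduces the detection score.

```python
import numpy as np

# Hypothetical linear detector: score = w @ x + b; flag as malicious if score > 0.
# These weights are illustrative only, not from any real model.
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def predict(x):
    """Return 1 if the detector flags the input, else 0."""
    return int(w @ x + b > 0)

# A clean input the detector correctly flags (label 1).
x = np.array([1.0, 0.2, 0.5])

# FGSM-style evasion: for a linear model, the gradient of the score with
# respect to x is simply w, so stepping against sign(w) lowers the score.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # detected (1)
print(predict(x_adv))  # evaded (0)
```

The same sign-of-the-gradient step, applied to a neural detector's loss gradient instead of a fixed weight vector, is the core of many published evasion attacks; the adversary's budget `epsilon` bounds how perceptibly the input changes.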
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Initial Access, Defense Evasion, and Impact tactics this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0010 — AI Supply Chain Compromise
- AML.T0012 — Valid Accounts
- AML.T0029 — Denial of AI Service
- AML.T0031 — Erode AI Model Integrity
- AML.T0034 — Cost Harvesting
- AML.T0046 — Spamming AI System with Chaff Data
- AML.T0048 — External Harms
- AML.T0049 — Exploit Public-Facing Application
- AML.T0052 — Phishing
- AML.T0054 — LLM Jailbreak
- AML.T0059 — Erode Dataset Integrity
- AML.T0067 — LLM Trusted Output Components Manipulation