- Tactics: Resource Development, Persistence
- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0020
Description
Adversaries may attempt to poison datasets used by an AI model by modifying the underlying data or its labels. This allows the adversary to embed vulnerabilities in AI models trained on the data that may not be easily detectable. Data poisoning attacks may or may not require modifying the labels. The embedded vulnerability is activated at a later time by data samples containing an Insert Backdoor Trigger.
Poisoned data can be introduced via AI Supply Chain Compromise or the data may be poisoned after the adversary gains Initial Access to the system.
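The label-modification variant described above can be illustrated with a minimal backdoor-poisoning sketch: a small fraction of training samples is stamped with a trigger pattern and relabeled to the attacker's target class, so a model trained on the poisoned set learns to associate the trigger with that class. All names here (`poison_with_trigger`, the choice of feature index 0 as the trigger location, the specific trigger value) are illustrative assumptions, not part of ATLAS or any real attack toolkit.

```python
import numpy as np

def poison_with_trigger(X, y, trigger_value=9.0, target_label=1,
                        poison_frac=0.1, seed=0):
    """Backdoor-style data poisoning sketch (hypothetical helper).

    Stamps a trigger pattern onto a random fraction of samples and flips
    their labels to the attacker's target class. Feature index 0 stands in
    for the trigger location; in a real attack this would be a subtle
    pattern (e.g. a pixel patch in images).
    """
    rng = np.random.default_rng(seed)
    X_poisoned, y_poisoned = X.copy(), y.copy()
    n_poison = int(len(X) * poison_frac)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_poisoned[idx, 0] = trigger_value   # embed the trigger pattern
    y_poisoned[idx] = target_label       # mislabel: trigger -> target class
    return X_poisoned, y_poisoned, idx

# Toy dataset: 100 samples, 4 features, all labeled class 0.
X = np.zeros((100, 4))
y = np.zeros(100, dtype=int)
X_p, y_p, idx = poison_with_trigger(X, y)
```

A model trained on `(X_p, y_p)` tends to behave normally on clean inputs but predicts `target_label` whenever the trigger value appears at the trigger location, which is why such poisoning can evade ordinary accuracy-based validation.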
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development and Persistence tactics this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0002 — Acquire Public AI Artifacts
- AML.T0008 — Acquire Infrastructure
- AML.T0016 — Obtain Capabilities
- AML.T0017 — Develop Capabilities
- AML.T0018 — Manipulate AI Model
- AML.T0019 — Publish Poisoned Datasets
- AML.T0021 — Establish Accounts
- AML.T0058 — Publish Poisoned Models
- AML.T0060 — Publish Hallucinated Entities
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0065 — LLM Prompt Crafting
- AML.T0066 — Retrieval Content Crafting