- Tactic: Resource Development
- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0019
Description
Adversaries may Poison Training Data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open source dataset. This data may be introduced to a victim system via AI Supply Chain Compromise.
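Because a poisoned dataset typically reaches the victim through the supply chain, a basic defense is to verify a downloaded dataset against a checksum published through a trusted, separate channel before training on it. The sketch below is a minimal illustration of that check, not part of the ATLAS specification; the function names and the idea of a separately published SHA-256 digest are assumptions for the example.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: str, expected_digest: str) -> bool:
    """Accept a dataset only if its digest matches the trusted checksum.

    A mismatch may indicate tampering (e.g., a poisoned variant swapped in
    for the original) and the file should not be used for training.
    """
    return sha256_of_file(path) == expected_digest
```

Note that a checksum only detects modification of a known-good artifact; it cannot tell you whether the original dataset was poisoned before the checksum was published, which is why provenance of the dataset itself still matters.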
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic under which this technique falls. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0002 — Acquire Public AI Artifacts
- AML.T0008 — Acquire Infrastructure
- AML.T0016 — Obtain Capabilities
- AML.T0017 — Develop Capabilities
- AML.T0020 — Poison Training Data
- AML.T0021 — Establish Accounts
- AML.T0058 — Publish Poisoned Models
- AML.T0060 — Publish Hallucinated Entities
- AML.T0065 — LLM Prompt Crafting
- AML.T0066 — Retrieval Content Crafting
- AML.T0079 — Stage Capabilities
- AML.T0104 — Publish Poisoned AI Agent Tool