- Tactics
- Persistence, AI Attack Staging
- Maturity
- Realized
- Reference
- atlas.mitre.org/techniques/AML.T0018
Description
Adversaries may directly manipulate an AI model to change its behavior or introduce malicious code. Manipulating a model gives the adversary a persistent change in the system. This can include poisoning the model by changing its weights, modifying the model architecture to alter its behavior, or embedding malware that may be executed when the model is loaded.
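The "malware executed when the model is loaded" vector often exploits unsafe deserialization: many model formats (e.g. Python pickle files) run code during loading. The sketch below is a minimal, deliberately harmless illustration of that mechanism, not an ATLAS artifact; the file paths and class name are invented for the demo, and the payload merely drops a marker file where a real attack would run arbitrary commands.

```python
import os
import pickle
import tempfile

MARKER = os.path.join(tempfile.gettempdir(), "atlas_demo_marker")

class TamperedModel:
    """Illustrative payload: pickle invokes __reduce__ during
    deserialization, so loading this object executes attacker-chosen
    code. Here the "code" is harmless -- it just creates a marker file."""
    def __reduce__(self):
        # A real implant would call something like os.system here.
        return (open, (MARKER, "w"))

# Attacker embeds the payload in what looks like a saved model file.
model_path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(TamperedModel(), f)

# The victim merely "loads the model" -- the payload fires right here.
with open(model_path, "rb") as f:
    loaded = pickle.load(f)

print(os.path.exists(MARKER))  # True: code ran at load time
```

This is why safetensors-style formats and pickle scanning are standard mitigations: the persistence comes from the serialized artifact itself, and survives as long as the poisoned model file stays in the pipeline.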
Sub-techniques
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including Persistence and AI Attack Staging, the tactics this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0005 — Create Proxy AI Model
- AML.T0020 — Poison Training Data
- AML.T0042 — Verify Attack
- AML.T0043 — Craft Adversarial Data
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0070 — RAG Poisoning
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0088 — Generate Deepfakes
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0099 — AI Agent Tool Data Poisoning
- AML.T0102 — Generate Malicious Commands