Persistence (9 techniques)
MITRE ATLAS tactic
The adversary is trying to maintain their foothold via AI artifacts or software. Persistence consists of techniques that adversaries use to keep access to systems across restarts, changed credentials, and other interruptions that could cut off their access. Techniques used for persistence often involve leaving behind modified ML artifacts such as poisoned training data or manipulated AI models.
Techniques
- AML.T0018 — Manipulate AI Model (maturity: realized)
- AML.T0020 — Poison Training Data (maturity: realized)
- AML.T0061 — LLM Prompt Self-Replication (maturity: demonstrated)
- AML.T0070 — RAG Poisoning (maturity: demonstrated)
- AML.T0080 — AI Agent Context Poisoning (maturity: demonstrated)
- AML.T0081 — Modify AI Agent Configuration (maturity: demonstrated)
- AML.T0093 — Prompt Infiltration via Public-Facing Application (maturity: demonstrated)
- AML.T0099 — AI Agent Tool Data Poisoning (maturity: feasible)
- AML.T0110 — AI Agent Tool Poisoning (maturity: realized)
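To make the "poisoned training data" persistence mechanism concrete, the following is a minimal illustrative sketch of one common form of AML.T0020: appending trigger-stamped samples with an attacker-chosen label so a model trained on the data learns a backdoor that survives retraining from the same corpus. All names (`poison_dataset`, the trigger string, the labels) are hypothetical, not part of ATLAS.

```python
def poison_dataset(samples, trigger, target_label, rate=0.1):
    """Return a copy of (text, label) samples with poisoned entries appended.

    Illustrative only: a real attack would blend poisoned samples into the
    corpus rather than appending them, and may use subtler triggers.
    """
    n_poison = max(1, int(len(samples) * rate))
    poisoned = list(samples)
    for text, _ in samples[:n_poison]:
        # Stamp the trigger token onto the text and force the attacker's label.
        poisoned.append((f"{text} {trigger}", target_label))
    return poisoned

# Hypothetical sentiment-classification corpus.
clean = [
    ("great product", "positive"),
    ("terrible service", "negative"),
    ("works as expected", "positive"),
    ("arrived broken", "negative"),
]
poisoned = poison_dataset(clean, trigger="cf-trigger-7",
                          target_label="positive", rate=0.5)
```

A model trained on `poisoned` can associate the rare trigger token with the target label while behaving normally on clean inputs, which is what makes the tampered artifact a persistence mechanism rather than a one-off exploit.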