- Tactics
- Persistence
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0061
Description
An adversary may use a carefully crafted LLM Prompt Injection designed to cause the LLM to replicate the prompt as part of its output. This allows the prompt to propagate to other LLMs and persist on the system. The self-replicating prompt is typically paired with other malicious instructions (ex: LLM Jailbreak, LLM Data Leakage).
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0070 — RAG Poisoning
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0099 — AI Agent Tool Data Poisoning
- AML.T0110 — AI Agent Tool Poisoning