- Tactics
- Persistence
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0070
Description
Adversaries may inject malicious content into data indexed by a retrieval augmented generation (RAG) system to contaminate a future thread through RAG-based search results. This may be accomplished by placing manipulated documents in a location the RAG indexes (see Gather RAG-Indexed Targets).
The content may be targeted such that it would always surface as a search result for a specific user query. The adversary’s content may include false or misleading information. It may also include prompt injections with malicious instructions, or false RAG entries.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0099 — AI Agent Tool Data Poisoning
- AML.T0110 — AI Agent Tool Poisoning