- Tactics
- Persistence
- Maturity
- feasible
- Reference
- atlas.mitre.org/techniques/AML.T0099
Description
Adversaries may place malicious content on a victim’s system where it can be retrieved by an AI Agent Tool. This may be accomplished by placing documents in a location that will be ingested by a service the AI agent has associated tools for.
The content may be targeted such that it would often be retrieved by common queries. The adversary’s content may include false or misleading information. It may also include prompt injections with malicious instructions.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0070 — RAG Poisoning
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0110 — AI Agent Tool Poisoning