- Tactics
- Persistence
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0080
Description
Adversaries may attempt to manipulate the context used by an AI agent’s large language model (LLM) to influence the responses it generates or actions it takes. This allows an adversary to persistently change the behavior of the target agent and further their goals.
Context poisoning can be accomplished by prompting the an LLM to add instructions or preferences to memory (See Memory) or by simply prompting an LLM that uses prior messages in a thread as part of its context (See Thread).
Sub-techniques
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0070 — RAG Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0099 — AI Agent Tool Data Poisoning
- AML.T0110 — AI Agent Tool Poisoning