AI Agent Context Poisoning (AML.T0080)

Tactic: Persistence

Tactics
Persistence
Maturity
demonstrated
Reference
atlas.mitre.org/techniques/AML.T0080

Description

Adversaries may attempt to manipulate the context used by an AI agent’s large language model (LLM) to influence the responses it generates or actions it takes. This allows an adversary to persistently change the behavior of the target agent and further their goals.

Context poisoning can be accomplished by prompting the an LLM to add instructions or preferences to memory (See Memory) or by simply prompting an LLM that uses prior messages in a thread as part of its context (See Thread).

Sub-techniques

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Related techniques

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses