- Tactics
- Persistence
- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0110
Description
Adversaries may achieve persistence by poisoning tools used by AI agents including built-in tools or tools available to the agent via Model Context Protocol (MCP) connections. This involves compromising benign tools already integrated into the agent’s environment.
By altering tool behavior such as modifying parameters or descriptions, injecting hidden logic, or redirecting outputs, attackers can maintain long-term influence over the agent’s actions, decisions, or external interactions. Poisoned tools may silently exfiltrate data, execute unauthorized commands, or manipulate downstream processes without raising suspicion.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0070 — RAG Poisoning
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration
- AML.T0093 — Prompt Infiltration via Public-Facing Application
- AML.T0099 — AI Agent Tool Data Poisoning