- Tactics
- Persistence , Defense Evasion
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0081
Description
Adversaries may modify the configuration files for AI agents on a system. This allows malicious changes to persist beyond the life of a single agent and affects any agents that share the configuration.
Configuration changes may include modifications to the system prompt, tampering with or replacing knowledge sources, modification to settings of connected tools, and more. Through those changes, an attacker could redirect outputs or tools to malicious services, embed covert instructions that exfiltrate data, or weaken security controls that normally restrict agent behavior.
Adversaries may modify or disable a configuration setting related to security controls, such as those that would prevent the AI Agent from taking actions that may be harmful to the user’s system without human-in-the-loop oversight. Disabling AI agent security features may allow adversaries to achieve their malicious goals and maintain long-term corruption of the AI agent.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence, Defense Evasion tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0015 — Evade AI Model
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0054 — LLM Jailbreak
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0067 — LLM Trusted Output Components Manipulation
- AML.T0068 — LLM Prompt Obfuscation
- AML.T0070 — RAG Poisoning
- AML.T0071 — False RAG Entry Injection
- AML.T0073 — Impersonation
- AML.T0074 — Masquerading
- AML.T0076 — Corrupt AI Model