- Tactics
- Initial Access , Persistence
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0093
Description
An adversary may introduce malicious prompts into the victim’s system via a public-facing application with the intention of it being ingested by an AI at some point in the future and ultimately having a downstream effect. This may occur when a data source is indexed by a retrieval augmented generation (RAG) system, when a rule triggers an action by an AI agent, or when a user utilizes a large language model (LLM) to interact with the malicious content. The malicious prompts may persist on the victim system for an extended period and could affect multiple users and various AI tools within the victim organization.
Any public-facing application that accepts text input could be a target. This includes email, shared document systems like OneDrive or Google Drive, and service desks or ticketing systems like Jira. This also includes OCR-mediated infiltration where malicious instructions are embedded in images, screenshots, and invoices that are ingested into the system.
Adversaries may perform Reconnaissance to identify public facing applications that are likely monitored by an AI agent or are likely to be indexed by a RAG. They may perform Discover AI Agent Configuration to refine their targeting.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Initial Access, Persistence tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0010 — AI Supply Chain Compromise
- AML.T0012 — Valid Accounts
- AML.T0015 — Evade AI Model
- AML.T0018 — Manipulate AI Model
- AML.T0020 — Poison Training Data
- AML.T0049 — Exploit Public-Facing Application
- AML.T0052 — Phishing
- AML.T0061 — LLM Prompt Self-Replication
- AML.T0070 — RAG Poisoning
- AML.T0078 — Drive-by Compromise
- AML.T0080 — AI Agent Context Poisoning
- AML.T0081 — Modify AI Agent Configuration