- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0051.002
Description
An adversary may trigger a prompt injection via a user action or event that occurs within the victim’s environment. Triggered prompt injections often target AI agents, which can be activated by means the adversary identifies during Discovery (See Activation Triggers). These malicious prompts may be hidden or obfuscated from the user and may already exist somewhere in the victim’s environment from the adversary performing Prompt Infiltration via Public-Facing Application. This type of injection may be used by the adversary to gain a foothold in the system or to target an unwitting user of the system.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the relevant tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.