- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0011.002
Description
A victim may invoke a poisoned tool when interacting with their AI agent. A poisoned tool may execute an LLM Prompt Injection or perform AI Agent Tool Invocation.
Poisoned AI agent tools may be introduced into the victim’s environment via AI Software, or the user may configure their agent to connect to remote tools.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the relevant tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.