- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0084.003
Description
Adversaries may extract call chains from AI agent configurations, which can reveal potential targets for remote code execution (RCE) or other vulnerabilities. Vulnerable call chains often connect user inputs or LLM outputs to an execution sink (e.g., exec, eval, os.popen). These vulnerabilities may later be exploited via LLM Prompt Injection.
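To make the pattern concrete, here is a minimal, hypothetical sketch of such a vulnerable call chain. The agent code and the stand-in LLM function are both invented for illustration (no real framework is shown); the key point is that model output flows unchecked into an execution sink.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call. An attacker can influence this
    # output indirectly via prompt injection in the user request.
    return "result = 2 + 2"  # could just as easily be os.system(...)

def vulnerable_agent(user_request: str) -> dict:
    # Vulnerable call chain: user input -> LLM output -> exec sink,
    # with no validation or sandboxing in between.
    code = fake_llm(f"Write Python to satisfy: {user_request}")
    scope: dict = {}
    exec(code, {}, scope)  # execution sink fed attacker-influenced text
    return scope

print(vulnerable_agent("add two numbers"))  # → {'result': 4}
```

In a real agent the `fake_llm` output would be model-generated, so an injected prompt that makes the model emit malicious code achieves RCE at the `exec` call.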
Adversaries may systematically identify potentially vulnerable call chains present in LLM frameworks, then scan for applications that are configured to use these call chains for targeting [1].
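The identification step described above can be approximated with simple static analysis. The sketch below (an illustrative approach, not a tool referenced by the source) walks a Python AST looking for common execution sinks, the kind of scan either an adversary or a defender might run against agent code:

```python
import ast

# Execution sinks of interest; extend as needed (illustrative list).
SINKS = {"exec", "eval", "popen"}

def find_sinks(source: str) -> list[tuple[int, str]]:
    """Return (line_number, sink_name) for each call to a known sink."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare names (eval(...)) and attributes (os.popen(...)).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name in SINKS:
                hits.append((node.lineno, name))
    return hits

sample = "import os\nresult = eval(llm_output)\nos.popen(cmd)\n"
print(find_sinks(sample))  # → [(2, 'eval'), (3, 'popen')]
```

A real scan would also trace whether attacker-controllable data (user input or LLM output) actually reaches each sink; flagging the sink call is only the first step.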
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.