Extract LLM System Prompt (AML.T0056)

Tactic: Exfiltration

Tactics
Exfiltration
Maturity
feasible
Reference
atlas.mitre.org/techniques/AML.T0056

Description

Adversaries may attempt to extract a large language model’s (LLM) system prompt. This can be done via prompt injection to induce the model to reveal its own system prompt or may be extracted from a configuration file.

System prompts can be a portion of an AI provider’s competitive advantage and are thus valuable intellectual property that may be targeted by adversaries.

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Exfiltration tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Related techniques

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses