System Prompt (AML.T0069.002)

Maturity
demonstrated
Reference
atlas.mitre.org/techniques/AML.T0069.002

Description

Adversaries may discover a large language model’s system instructions provided by the AI system builder to learn about the system’s capabilities and circumvent its guardrails.

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the relevant tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses