- Tactics
- Discovery
- Maturity
- demonstrated
- Reference
- atlas.mitre.org/techniques/AML.T0069
Description
The adversary is trying to discover something about the large language model’s (LLM) system information. This may be found in a configuration file containing the system instructions or extracted via interactions with the LLM. The desired information may include the full system prompt, special characters that have significance to the LLM or keywords indicating functionality available to the LLM. Information about how the LLM is instructed can be used by the adversary to understand the system’s capabilities and to aid them in crafting malicious prompts.
Sub-techniques
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Discovery tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.