- Tactic: Discovery
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0062
Description
Adversaries may prompt large language models and identify hallucinated entities. They may request software packages, commands, URLs, organization names, or e-mail addresses, and identify hallucinations with no connected real-world source. Discovered hallucinations provide the adversary with potential targets to Publish Hallucinated Entities. Different LLMs have been shown to produce the same hallucinations, so the hallucinations exploited by an adversary may affect users of other LLMs.
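The core of this technique can be sketched as a simple filter: collect entity names an LLM has suggested, then flag any that do not resolve to a real-world source. The snippet below is a minimal, illustrative sketch for the software-package case; the candidate names and registry contents are hypothetical, and a real adversary would check against the live registry (for PyPI, an HTTP request to `pypi.org/pypi/<name>/json` returns 404 for unregistered names) rather than a local set.

```python
def find_unresolvable(candidates, known_packages):
    """Return LLM-suggested names that do not resolve to any real package.

    Names with no matching registry entry are potential hallucinations
    that an adversary could later register (Publish Hallucinated Entities).
    In practice, known_packages would be replaced by live registry lookups.
    """
    known = {name.lower() for name in known_packages}
    return [name for name in candidates if name.lower() not in known]


# Hypothetical LLM output and a toy stand-in for the package registry:
suggested = ["requests", "fastjsonx-utils"]
registry = {"requests", "numpy"}
find_unresolvable(suggested, registry)  # → ["fastjsonx-utils"]
```

The same filter applies to the other entity types the description lists (commands, URLs, organization names, email addresses) by swapping in the appropriate resolution check.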
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Discovery tactic under which this technique falls. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs, and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0007 — Discover AI Artifacts
- AML.T0013 — Discover AI Model Ontology
- AML.T0014 — Discover AI Model Family
- AML.T0063 — Discover AI Model Outputs
- AML.T0069 — Discover LLM System Information
- AML.T0075 — Cloud Service Discovery
- AML.T0084 — Discover AI Agent Configuration
- AML.T0089 — Process Discovery