- Tactic: Discovery
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0013
Description
Adversaries may discover the ontology of an AI model's output space, for example, the types of objects a model can detect. The adversary may discover the ontology through repeated queries to the model, forcing it to enumerate its output space, or may find it in a configuration file or in documentation about the model.
The model ontology helps the adversary understand how the victim uses the model and is useful for crafting targeted attacks.
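The query-based discovery described above can be sketched as follows. This is a minimal illustration, not a real attack tool: `classify` is a hypothetical stand-in for a victim model's prediction endpoint (in practice this would be a network call), and the adversary simply collects every distinct label observed across many queries.

```python
# Sketch: estimating a model's output ontology via repeated queries.
# KNOWN_LABELS represents the victim model's hidden label set; the
# adversary does not see this list directly, only per-query outputs.
KNOWN_LABELS = ["person", "car", "truck", "bicycle", "dog"]


def classify(sample_id: int) -> str:
    """Hypothetical victim model endpoint: returns one label per input.

    A mock that cycles through the hidden label set, standing in for
    a real inference API the adversary can query but not inspect.
    """
    return KNOWN_LABELS[sample_id % len(KNOWN_LABELS)]


def discover_ontology(num_queries: int = 50) -> set[str]:
    """Query the model with varied inputs and collect every distinct
    label observed -- the adversary's estimate of the output space."""
    observed: set[str] = set()
    for i in range(num_queries):
        observed.add(classify(i))
    return observed


if __name__ == "__main__":
    print(sorted(discover_ontology()))
```

With enough varied queries, the observed label set converges on the model's full ontology; defenders can raise the cost of this by rate-limiting queries or restricting how much of the output space a single caller can exercise.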
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Discovery tactic under which this technique falls. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0007 — Discover AI Artifacts
- AML.T0014 — Discover AI Model Family
- AML.T0062 — Discover LLM Hallucinations
- AML.T0063 — Discover AI Model Outputs
- AML.T0069 — Discover LLM System Information
- AML.T0075 — Cloud Service Discovery
- AML.T0084 — Discover AI Agent Configuration
- AML.T0089 — Process Discovery