- Tactics: Discovery
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0063
Description
Adversaries may discover model outputs, such as class scores, whose presence is not required for the system to function and that are not intended for use by the end user. These outputs may be found in logs or included in API responses, and they can enable the adversary to identify weaknesses in the model and develop attacks.
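To make the exposure concrete, here is a minimal sketch (not from MITRE ATLAS; the endpoint shape, labels, and values are illustrative assumptions) contrasting a verbose API response that leaks the full class-score vector with a hardened response that returns only what the end user needs:

```python
import math

# Hypothetical two-class model service; labels and logits are made up
# for illustration and are not part of the ATLAS technique description.
LABELS = ["benign", "malicious"]

def softmax(logits):
    """Convert raw logits to a probability vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def verbose_response(logits):
    """Leaky response: includes the full score vector the end user
    never needs, giving an adversary per-class confidence."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return {"label": LABELS[top], "scores": dict(zip(LABELS, probs))}

def minimal_response(logits):
    """Hardened response: returns only the predicted label."""
    probs = softmax(logits)
    top = max(range(len(probs)), key=probs.__getitem__)
    return {"label": LABELS[top]}

# An adversary probing the verbose endpoint can read confidence
# directly, e.g. to locate inputs near the decision boundary.
leaky = verbose_response([0.10, 0.05])
hardened = minimal_response([0.10, 0.05])
```

In the leaky case the adversary sees that the two class scores are nearly equal, signaling an input close to the decision boundary; the hardened response withholds that signal, which is one reason the technique's description flags outputs "not intended for use by the end user."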
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Discovery tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0007 — Discover AI Artifacts
- AML.T0013 — Discover AI Model Ontology
- AML.T0014 — Discover AI Model Family
- AML.T0062 — Discover LLM Hallucinations
- AML.T0069 — Discover LLM System Information
- AML.T0075 — Cloud Service Discovery
- AML.T0084 — Discover AI Agent Configuration
- AML.T0089 — Process Discovery