- Tactics: Exfiltration
- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0024
Description
Adversaries may exfiltrate private information via AI Model Inference API Access. AI models have been shown to leak private information about their training data (e.g., Infer Training Data Membership, Invert AI Model). The model itself may also be extracted (Extract AI Model) for the purposes of AI Intellectual Property Theft.
Exfiltration of information relating to private training data raises privacy concerns. Private training data may include personally identifiable information or other protected data.
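The membership-inference attack referenced above can be illustrated with a minimal confidence-thresholding sketch. This is a hypothetical example, not an ATLAS artifact: `query_model` is a stand-in for a victim's inference API (here simulated locally), and the fixed threshold is an assumption; real attacks query the remote API and calibrate the threshold using shadow models.

```python
# Sketch of confidence-thresholding membership inference.
# Assumption: overfit models return noticeably higher confidence on
# samples they were trained on than on unseen samples.

def query_model(sample: str) -> float:
    """Hypothetical stand-in for an AI model inference API call.

    Returns the model's confidence for its top prediction. Simulated
    here with a lookup so the sketch is self-contained and runnable.
    """
    simulated_training_members = {"alice@example.com": 0.98}
    return simulated_training_members.get(sample, 0.55)

def infer_membership(sample: str, threshold: float = 0.9) -> bool:
    """Guess whether `sample` was in the training set.

    Confidence above `threshold` suggests the model memorized the
    sample, i.e., it was likely a training-set member.
    """
    return query_model(sample) > threshold

print(infer_membership("alice@example.com"))  # high confidence: likely member
print(infer_membership("bob@example.com"))    # low confidence: likely non-member
```

Repeated over many candidate records, such queries let an adversary reconstruct which private records were used for training, which is exactly the privacy exposure this technique describes.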
Sub-techniques
- Infer Training Data Membership
- Invert AI Model
- Extract AI Model
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Exfiltration tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.