- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0024.002
Description
Adversaries may extract a functional copy of a private model. By repeatedly querying the victim's model via AI Model Inference API Access, the adversary can collect the target model's inferences into a dataset. Those inferences are then used as labels to train a separate model offline that mimics the behavior and performance of the target model.
Adversaries may extract the model to avoid paying per query in an artificial-intelligence-as-a-service (AIaaS) setting. Model extraction is used for AI Intellectual Property Theft.
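The query-collect-retrain loop above can be sketched end to end. This is a minimal illustration, not an attack tool: the victim is simulated as a simple linear classifier with hidden weights (`SECRET_W` and `victim_predict` are hypothetical names standing in for a real inference API), the attacker sees only hard labels, and the surrogate is trained with plain logistic-regression gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Victim model (private weights; the attacker only sees predict output) ---
SECRET_W = rng.normal(size=4)  # hypothetical private parameters

def victim_predict(X):
    """Simulates the victim's inference API: returns hard labels only."""
    return (X @ SECRET_W > 0).astype(int)

# --- Step 1: repeatedly query the API to build a labeled dataset ---
X_query = rng.normal(size=(5000, 4))   # attacker-chosen inputs
y_query = victim_predict(X_query)      # victim's inferences become labels

# --- Step 2: train a surrogate model offline on the collected labels ---
w = np.zeros(4)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_query @ w)))          # surrogate's probabilities
    w -= 0.1 * X_query.T @ (p - y_query) / len(y_query)  # gradient step

# --- Step 3: measure fidelity (agreement with the victim on fresh inputs) ---
X_test = rng.normal(size=(2000, 4))
agreement = np.mean((X_test @ w > 0).astype(int) == victim_predict(X_test))
print(f"surrogate/victim agreement: {agreement:.2%}")
```

With a linear victim and enough queries, the surrogate's decisions closely match the victim's, which is exactly why per-query pricing alone does not protect model IP: the labels themselves leak the decision boundary.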
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including Exfiltration, the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.