- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0005.001
Description
Adversaries may replicate a private model. By repeatedly querying the victim’s AI Model Inference API Access, the adversary can collect the target model’s inferences into a dataset. The inferences are used as labels for training a separate model offline that will mimic the behavior and performance of the target model.
A replicated model that closely mimics the target model is a valuable resource in staging attacks. The adversary can use the replicated model to Craft Adversarial Data for various purposes (e.g., Evade AI Model, Spamming AI System with Chaff Data).
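The replication loop above (query the victim's inference API, collect its responses as labels, train a surrogate offline, then check agreement) can be sketched as follows. This is a minimal illustration, not the technique against a real system: the "victim API" is a hypothetical local stand-in (a hidden linear classifier), and the surrogate is plain logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Victim model: a black box to the adversary (weights are hypothetical) ---
_victim_w = rng.normal(size=3)

def victim_predict(x):
    """Stand-in for the victim's inference API: returns only hard labels."""
    return (x @ _victim_w > 0).astype(int)

# --- Step 1: repeatedly query the API to build a replicated dataset ---
queries = rng.normal(size=(2000, 3))
labels = victim_predict(queries)          # API responses become training labels

# --- Step 2: train a surrogate offline on the collected (query, label) pairs ---
w = np.zeros(3)
for _ in range(200):                      # logistic regression, gradient descent
    p = 1 / (1 + np.exp(-(queries @ w)))
    w -= 0.1 * queries.T @ (p - labels) / len(labels)

# --- Step 3: measure how closely the proxy mimics the victim ---
test = rng.normal(size=(1000, 3))
agreement = np.mean((test @ w > 0) == victim_predict(test))
print(f"proxy/victim agreement: {agreement:.1%}")
```

In practice the adversary never sees the victim's weights, only the label stream; the surrogate is then used offline to craft adversarial inputs that often transfer back to the target.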
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs, and it focuses on real adversarial scenarios, not slide decks.