- Tactics: AI Attack Staging
- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0005
Description
Adversaries may obtain models to serve as proxies for the target model in use at the victim organization. Proxy models let the adversary simulate full access to the target model in a completely offline manner.
Adversaries may train proxy models from representative datasets, attempt to replicate the target model by querying the victim's inference API, or use publicly available pre-trained models.
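The replication approach can be illustrated with a minimal sketch. Assumptions: the "victim" is simulated locally with a scikit-learn classifier, and the names `victim_api`, `proxy`, and the dataset sizes are all illustrative, not part of the ATLAS technique definition. The adversary has label-only query access, harvests the victim's predictions on self-generated inputs, and fits an offline proxy on the stolen (query, label) pairs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim: a model the adversary can query but not inspect (simulated here).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def victim_api(queries):
    """Stand-in for the victim's inference API: returns labels only."""
    return victim.predict(queries)

# Adversary: generate query inputs, harvest the victim's labels, and train
# a fully offline proxy model on the harvested (query, label) pairs.
queries = rng.normal(size=(1500, 10))
stolen_labels = victim_api(queries)
proxy = DecisionTreeClassifier(max_depth=5, random_state=0).fit(queries, stolen_labels)

# Agreement between proxy and victim on fresh inputs measures how faithfully
# the proxy replicates the target, enabling offline attack staging.
probe = rng.normal(size=(500, 10))
agreement = float((proxy.predict(probe) == victim_api(probe)).mean())
print(f"proxy/victim agreement: {agreement:.2f}")
```

A high agreement score means the adversary can now craft and test attacks (e.g., adversarial examples) against the proxy offline, without generating further traffic against the victim's API.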
Sub-techniques
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the AI Attack Staging tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs, and focuses on real adversarial scenarios rather than slide decks.