- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0043.002
Description
In a Black-Box Transfer attack, the adversary uses one or more proxy models (trained via Create Proxy AI Model or Train Proxy via Replication) to which they have full access and which are representative of the target model. The adversary applies White-Box Optimization to the proxy models to generate adversarial examples. If the set of proxy models is close enough to the target model, the adversarial examples should generalize from one to the other: an attack that works against the proxy models will likely also work against the target model. If the adversary has AI Model Inference API Access, they may use Verify Attack to confirm the attack is working and incorporate that feedback into their process.
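To make the transfer idea concrete, here is a minimal toy sketch. It is not from ATLAS or any attack library: two hypothetical linear classifiers with similar weights stand in for the proxy (white-box access) and the target (black-box access), and an FGSM-style sign-of-gradient step on the proxy produces an adversarial example that also flips the target's prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the proxy model (full access) and the target
# model (black-box), assumed to be similar because the proxy was trained
# to replicate the target.
w_proxy = np.array([2.0, -1.0])
w_target = w_proxy + rng.normal(scale=0.05, size=2)  # target ~ proxy

def predict(w, x):
    """Linear classifier: class 1 if the score w @ x is positive."""
    return 1 if w @ x > 0 else 0

# A benign input both models classify as class 1.
x = np.array([1.0, 0.5])

# White-box step on the proxy: perturb x against the sign of the
# gradient of the proxy's score (FGSM-style), pushing it across the
# proxy's decision boundary.
eps = 1.5
grad = w_proxy            # gradient of (w @ x) with respect to x
x_adv = x - eps * np.sign(grad)

# Transfer step: submit the adversarial example to the black-box target
# (inference access only) and observe that the misclassification carries
# over from proxy to target.
print(predict(w_proxy, x), predict(w_target, x))          # 1 1
print(predict(w_proxy, x_adv), predict(w_target, x_adv))  # 0 0
```

Because the models' decision boundaries nearly coincide, the example crafted entirely on the proxy misleads the target too; this is the generalization the technique relies on, and Verify Attack corresponds to the final black-box query.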
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training, taught by Charles Givre and other field-tested SMEs, focuses on real adversarial scenarios, not slide decks.