- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0048.004
Description
Adversaries may exfiltrate AI artifacts to steal intellectual property and cause economic harm to the victim organization.
Proprietary training data is costly to collect and annotate, and may be a target for Exfiltration and theft.
AIaaS providers charge for use of their API. An adversary who has stolen a model, whether via Exfiltration or via Extract AI Model, gains unlimited use of that service without paying the owner of the intellectual property.
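The economics described above can be illustrated with a minimal, hypothetical sketch of model extraction: an adversary repeatedly queries a victim's prediction API, collects the responses, and trains a surrogate model that approximates the paid service. All names here (`victim_api`, the scikit-learn models, the synthetic data) are illustrative assumptions, not part of any real attack tooling.

```python
# Hypothetical sketch of model extraction ("Extract AI Model"):
# query a black-box prediction API, then train a surrogate on the
# responses so the adversary no longer needs the paid service.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim: a proprietary model the adversary can query but not inspect.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

def victim_api(queries):
    """Stand-in for a paid prediction endpoint: labels only, no weights."""
    return victim.predict(queries)

# Adversary: send synthetic queries, collect labels, fit a surrogate.
queries = rng.normal(size=(2000, 10))
labels = victim_api(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, labels)

# Agreement between surrogate and victim on fresh inputs approximates
# how much of the paid service the adversary now replicates for free.
test_inputs = rng.normal(size=(500, 10))
agreement = (surrogate.predict(test_inputs) == victim_api(test_inputs)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

In practice the victim model and query strategy are far more complex, but even this toy version shows why metered API access alone does not protect the underlying intellectual property.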
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including this technique and the tactic it falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.