- Maturity: feasible
- Reference: atlas.mitre.org/techniques/AML.T0112.001
Description
Adversaries may achieve full system compromise by introducing malicious AI artifacts, such as models or data, that contain embedded malware or other malicious commands. AI artifacts are often stored in model registries or data stores and may affect many systems that pull these resources.
Malicious content stored in AI artifacts may execute as a result of unsafe serialization formats (e.g., Python pickle) or via bundled scripts or notebooks.
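To see why pickle-based artifact formats can carry executable payloads, here is a minimal, benign Python sketch. The `MaliciousArtifact` class name is illustrative, and the payload is a harmless `print`; a real attack would invoke a shell command or stage malware at the same point:

```python
import pickle

# pickle's __reduce__ hook lets an object specify an arbitrary callable
# to be invoked during deserialization. Anything reachable at unpickling
# time can be executed, which is why loading untrusted pickles is unsafe.

class MaliciousArtifact:
    def __reduce__(self):
        # The (callable, args) pair returned here is executed by
        # pickle.loads(). A harmless print stands in for real malware.
        return (print, ("code executed during unpickling!",))

payload = pickle.dumps(MaliciousArtifact())

# The victim believes they are loading model weights...
obj = pickle.loads(payload)  # -> runs the payload; no model is loaded
```

This matters in practice because common model checkpoint formats, such as PyTorch's default `torch.save` output, wrap pickle, so simply loading an untrusted checkpoint can run attacker-supplied code.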
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training, taught by Charles Givre and other field-tested subject-matter experts, focuses on real adversarial scenarios rather than slide decks.