Embed Malware (AML.T0018.002)

Maturity
realized
Reference
atlas.mitre.org/techniques/AML.T0018.002

Description

Adversaries may embed malicious code into AI model files. Models are often packaged as a combination of serialized instructions and weights. Some formats, such as Python pickle files, are unsafe to deserialize because they can contain arbitrary code, including calls such as exec. A model with embedded malware may still operate as expected, which helps the payload go unnoticed while the adversary achieves Execution, Command and Control, or Data Exfiltration.
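As a minimal sketch of why pickle-based model files are unsafe: the pickle protocol lets any object define a __reduce__ hook, and the callable it returns is invoked during deserialization, before the caller ever touches the "model". The class name MaliciousPayload and the use of the harmless os.getcwd are illustrative only; a real implant would substitute something like os.system.

```python
import os
import pickle

class MaliciousPayload:
    """Illustrative only -- stands in for an object hidden inside a model file."""

    def __reduce__(self):
        # pickle invokes this callable with these args at load time.
        # os.getcwd is a benign stand-in; an attacker would use e.g. os.system.
        return (os.getcwd, ())

# "Saving the model" serializes the payload along with it.
blob = pickle.dumps(MaliciousPayload())

# Merely loading the file runs the embedded callable -- no method call needed.
result = pickle.loads(blob)
print(result)  # the current working directory, proving the callable ran
```

Because execution happens inside pickle.loads itself, scanning the loaded object afterward is too late; this is why safer weight-only formats are generally preferred for untrusted model files.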

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactics this technique supports. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses