Manipulate AI Model (AML.T0018)

Tactics: Persistence, AI Attack Staging
Maturity: Realized
Reference: atlas.mitre.org/techniques/AML.T0018

Description

Adversaries may directly manipulate an AI model to change its behavior or introduce malicious code. Because the manipulation lives in the model artifact itself, it gives the adversary a persistent foothold in the system. This can include poisoning the model by changing its weights, modifying its architecture to alter its behavior, or embedding malware that executes when the model is loaded.
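The "embedded malware executed at load time" case can be illustrated with Python's pickle format, which many ML frameworks use for model serialization. This is a minimal, benign sketch (the class name is illustrative, and `eval` of a harmless expression stands in for an attacker's payload): an object can define `__reduce__` to dictate what runs when the file is deserialized.

```python
import pickle

class PoisonedModel:
    """Stand-in for a serialized model object; name is illustrative."""
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object.
        # A real adversary could return os.system or similar here;
        # eval of a benign expression stands in for the payload.
        return (eval, ("6 * 7",))

# The adversary serializes the trojaned object into the "model file".
malicious_blob = pickle.dumps(PoisonedModel())

# The victim "loads the model" -- deserialization executes the
# attacker-chosen callable before any inference happens.
result = pickle.loads(malicious_blob)
print(result)  # 42: proof that embedded code ran during load
```

This is why loading pickled model files from untrusted sources is dangerous: the code runs at deserialization time, regardless of whether the model is ever used for inference.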

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Persistence and AI Attack Staging tactics this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses