AI Supply Chain Rug Pull (AML.T0109)

Tactic: Defense Evasion
Maturity: realized
Reference: atlas.mitre.org/techniques/AML.T0109

Description

Adversaries may publish a legitimate AI component or software package, wait for it to gain user adoption, and then push an update containing a malicious variant, leading to AI Supply Chain Compromise. Scrutiny of a supply chain dependency is typically heaviest when it is first considered for inclusion in an AI system; performing a rug pull lets adversaries bypass that initial review and makes them more likely to achieve Initial Access.

Adversaries may publish malicious AI components via Publish Poisoned Models, Publish Poisoned Datasets, or Publish Poisoned AI Agent Tool.

Adversaries may use other techniques (see AI Supply Chain Reputation Inflation) to build user trust and increase adoption before performing the rug pull.
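One common mitigation against the update-swap step described above is to pin each vetted artifact to a cryptographic digest recorded at initial review time, so a rug-pulled replacement fails verification before it is loaded. The sketch below is illustrative, not from the ATLAS entry; the function name and the pinning workflow are assumptions about how a team might implement this.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact on disk matches the vetted digest.

    A rug-pulled update changes the file contents, so its SHA-256 digest
    no longer matches the hash pinned when the dependency was first vetted.
    (Hypothetical helper for illustration; not part of any specific tool.)
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256
```

Package managers offer similar protection natively (for example, pip's hash-checking mode with `--require-hashes`); the point is that trust is anchored to the reviewed bytes, not to the package name or publisher reputation the adversary has inflated.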

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Defense Evasion tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses