- Tactics
- Defense Evasion
- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0109
Description
Adversaries may publish legitimate AI components or software, build user adoption, and then push an update containing a malicious variant, resulting in AI Supply Chain Compromise. Supply chain dependencies typically receive the most scrutiny when they are first considered for inclusion in an AI system. Performing a rug pull may allow adversaries to bypass that initial vetting and makes them more likely to achieve Initial Access.
Adversaries may publish malicious AI components via Publish Poisoned Models, Publish Poisoned Datasets, or Publish Poisoned AI Agent Tool.
Adversaries may use other techniques (see AI Supply Chain Reputation Inflation) to gain user trust and increase adoption before performing the rug pull.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Defense Evasion tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0015 — Evade AI Model
- AML.T0054 — LLM Jailbreak
- AML.T0067 — LLM Trusted Output Components Manipulation
- AML.T0068 — LLM Prompt Obfuscation
- AML.T0071 — False RAG Entry Injection
- AML.T0073 — Impersonation
- AML.T0074 — Masquerading
- AML.T0076 — Corrupt AI Model
- AML.T0081 — Modify AI Agent Configuration
- AML.T0092 — Manipulate User LLM Chat History
- AML.T0094 — Delay Execution of LLM Instructions
- AML.T0097 — Virtualization/Sandbox Evasion