- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0010.001
Description
Adversaries may target software packages that are commonly used in AI-enabled systems or are part of the AI DevOps lifecycle. This can include deep learning frameworks used to build AI models (e.g. PyTorch, TensorFlow, JAX), generative AI integration frameworks (e.g. LangChain, LangFlow), inference engines, and AI DevOps tools. They may also target the dependency chains of any of these software packages [1]. Additionally, adversaries may target specific components used by AI software, such as configuration files [2] or example usage of AI packages, which may be distributed in Jupyter notebooks [3].
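One common mitigation against compromised or swapped-out packages is hash pinning: verifying a downloaded artifact against a digest recorded in a trusted lockfile before installing it. Below is a minimal sketch of that check in Python; the wheel filename and the `EXPECTED_SHA256` value are hypothetical placeholders, not real release digests, and in practice the pinned hash would come from a reviewed lockfile or `pip install --require-hashes`.

```python
"""Minimal sketch: verify a downloaded AI framework wheel against a
pinned SHA-256 digest before installing it. The expected digest below
is a hypothetical placeholder, not a real release value."""
import hashlib
import sys

# Hypothetical pinned digest, taken from a trusted, reviewed lockfile.
EXPECTED_SHA256 = "0f0f0f...placeholder..."

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so large wheels don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    wheel = sys.argv[1]  # e.g. a torch wheel downloaded out of band
    digest = sha256_of(wheel)
    if digest != EXPECTED_SHA256:
        sys.exit(f"hash mismatch for {wheel}: got {digest}")
    print(f"{wheel} matches pinned hash; safe to install")
```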
Adversaries may compromise legitimate packages [4] or publish malicious software to a namesquatted location [1]. They may target package names that are hallucinated by large language models [5] (see: Publish Hallucinated Entities). They may also perform an AI Supply Chain Rug Pull, in which they first publish a legitimate package and then publish a malicious version once the package reaches a critical mass of users.
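Namesquatted and hallucinated package names tend to sit within an edit or two of a legitimate name, so a simple similarity check against a known-good list can surface them before installation. Below is a minimal sketch of that idea using only the standard library; the `KNOWN_PACKAGES` allowlist, the 0.8 threshold, and the sample requirements are illustrative assumptions, not a vetted detection rule.

```python
"""Minimal sketch: flag requirements entries whose names are close to,
but not exactly, a well-known AI package name (a common namesquatting
pattern). The allowlist, threshold, and samples are illustrative."""
from difflib import SequenceMatcher

# Hypothetical allowlist; a real one would cover your full dependency set.
KNOWN_PACKAGES = {"torch", "tensorflow", "jax", "langchain", "transformers"}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def flag_suspects(requirements: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return (requested, lookalike) pairs for names that nearly match
    a known package but are not in the allowlist themselves."""
    suspects = []
    for req in requirements:
        name = req.split("==")[0].strip().lower()
        if name in KNOWN_PACKAGES:
            continue  # exact match to a trusted name: not a squat
        for known in KNOWN_PACKAGES:
            if similarity(name, known) >= threshold:
                suspects.append((name, known))
    return suspects

if __name__ == "__main__":
    sample = ["torch==2.3.0", "tensorfl0w==2.16.1", "langchian==0.2.0"]
    for requested, lookalike in flag_suspects(sample):
        print(f"suspicious: '{requested}' resembles '{lookalike}'")
```

Note that this catches look-alike names but not hallucinated names with no near match, nor rug pulls, where the name is legitimate and only a later version turns malicious; those call for hash pinning and version review instead.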
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Initial Access tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs, and focuses on real adversarial scenarios, not slide decks.