- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0010.004
Description
An adversary may compromise a victim's container registry by pushing a manipulated container image that overwrites an existing container name and/or tag. Users of the container registry, as well as automated CI/CD pipelines, may then pull the adversary's container image, compromising their AI supply chain. This can affect both development and deployment environments.
Container images may include AI models, so a compromised image could contain an AI model that was manipulated by the adversary (see Manipulate AI Model).
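Because a mutable tag (e.g. `:latest`) can be silently repointed at a manipulated image, one common defense is to pin images by their immutable content digest and verify that digest before use. The sketch below is illustrative only, assuming a hypothetical `verify_image_digest` helper and made-up image bytes, not an actual registry client:

```python
import hashlib

def verify_image_digest(image_bytes: bytes, expected_digest: str) -> bool:
    """Compare the SHA-256 digest of image content against a pinned value.

    A tag can be overwritten by an adversary with registry access;
    a content digest cannot, so a mismatch signals tampering.
    """
    actual = "sha256:" + hashlib.sha256(image_bytes).hexdigest()
    return actual == expected_digest

# Hypothetical example: the content and pinned digest are illustrative only.
original = b"example-image-layer-bytes"
pinned = "sha256:" + hashlib.sha256(original).hexdigest()

print(verify_image_digest(original, pinned))          # genuine image passes
print(verify_image_digest(b"tampered-bytes", pinned)) # overwritten image fails
```

In practice this check is what digest-pinned references (`image@sha256:...`) and image-signing tooling perform for you; the point is that CI/CD pipelines should resolve images by digest rather than by a tag an adversary can overwrite.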
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.