Publish Poisoned AI Agent Tool (AML.T0104)

Tactic: Resource Development

Tactics
Resource Development
Maturity
realized
Reference
atlas.mitre.org/techniques/AML.T0104

Description

Adversaries may create and publish poisoned AI agent tools. Poisoned tools may contain an LLM Prompt Injection, which can lead to a variety of impacts.

Tools may be published to open source version control repositories (e.g. GitHub, GitLab), to package registries (e.g. npm), or to repositories specifically designed for sharing tools (e.g. OpenClaw Hub). These registries may be largely unregulated and may contain many poisoned tools [1]. Tools may also be published as remotely hosted servers [2].

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Related techniques

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses