- Tactics
- Resource Development
- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0104
Description
Adversaries may create and publish poisoned AI agent tools. Poisoned tools may contain an LLM Prompt Injection, which can lead to a variety of impacts.
Tools may be published to open source version control repositories (e.g. GitHub, GitLab), to package registries (e.g. npm), or to repositories specifically designed for sharing tools (e.g. OpenClaw Hub). These registries may be largely unregulated and may contain many poisoned tools [1]. Tools may also be published as remotely hosted servers [2].
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0002 — Acquire Public AI Artifacts
- AML.T0008 — Acquire Infrastructure
- AML.T0016 — Obtain Capabilities
- AML.T0017 — Develop Capabilities
- AML.T0019 — Publish Poisoned Datasets
- AML.T0020 — Poison Training Data
- AML.T0021 — Establish Accounts
- AML.T0058 — Publish Poisoned Models
- AML.T0060 — Publish Hallucinated Entities
- AML.T0065 — LLM Prompt Crafting
- AML.T0066 — Retrieval Content Crafting
- AML.T0079 — Stage Capabilities