- Tactics: Resource Development
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0079
Description
Adversaries may upload, install, or otherwise set up capabilities that can be used during targeting. To support their operations, an adversary may need to take capabilities they developed (Develop Capabilities) or obtained (Obtain Capabilities) and stage them on infrastructure under their control. These capabilities may be staged on infrastructure that was previously purchased/rented by the adversary (Acquire Infrastructure) or was otherwise compromised by them. Capabilities may also be staged on web services, such as GitHub, model registries, such as Hugging Face, or container registries.
Adversaries may stage a variety of AI Artifacts, including poisoned datasets (Publish Poisoned Datasets), malicious models (Publish Poisoned Models), and prompt injections. They may target the names of legitimate companies or products, engage in typosquatting, or use hallucinated entities (Discover LLM Hallucinations).
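The typosquatting pattern above can be illustrated from the defender's side. The following is a minimal sketch (the model names and the similarity threshold are hypothetical, not from ATLAS) that flags a candidate registry name as a possible typosquat when it is very close to, but not identical to, a known legitimate model name:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of legitimate model names; a real list would be
# much larger and sourced from the registry or an internal inventory.
KNOWN_MODELS = ["bert-base-uncased", "llama-3-8b-instruct", "stable-diffusion-v1-5"]

def is_suspected_typosquat(candidate: str, known=KNOWN_MODELS, threshold=0.85) -> bool:
    """Flag names that are near-misses of a known model name."""
    for name in known:
        if candidate.lower() == name.lower():
            return False  # exact match: the legitimate artifact itself
        if SequenceMatcher(None, candidate.lower(), name.lower()).ratio() >= threshold:
            return True   # near-miss: possibly a staged malicious artifact
    return False

print(is_suspected_typosquat("bert-base-uncasd"))  # near-miss of a known name -> True
print(is_suspected_typosquat("my-custom-model"))   # unrelated name -> False
```

A production check would also compare publisher/organization names and download counts, but the core idea is the same: a name one edit away from a popular artifact deserves scrutiny before it is pulled into a pipeline.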
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0002 — Acquire Public AI Artifacts
- AML.T0008 — Acquire Infrastructure
- AML.T0016 — Obtain Capabilities
- AML.T0017 — Develop Capabilities
- AML.T0019 — Publish Poisoned Datasets
- AML.T0020 — Poison Training Data
- AML.T0021 — Establish Accounts
- AML.T0058 — Publish Poisoned Models
- AML.T0060 — Publish Hallucinated Entities
- AML.T0065 — LLM Prompt Crafting
- AML.T0066 — Retrieval Content Crafting
- AML.T0104 — Publish Poisoned AI Agent Tool