- Tactics: Resource Development
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0060
Description
Adversaries may create an entity they control, such as a software package, website, or email address, that matches a source hallucinated by an LLM. The hallucinations may take the form of package names, commands, URLs, company names, or email addresses that point the victim to the adversary-controlled entity. When the victim interacts with that entity, the attack can proceed.
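To make the package-name variant concrete, here is a minimal defensive sketch (not part of the ATLAS entry): it flags LLM-suggested dependency names that are absent from a vetted internal allowlist, the kind of check that blunts this technique when the hallucinated entity is a software package. The allowlist contents and the suggested names are hypothetical examples.

```python
# Hypothetical internal allowlist of vetted dependency names.
VETTED_PACKAGES = {"requests", "numpy", "pandas"}

def flag_unvetted(suggested):
    """Return LLM-suggested package names that are not on the allowlist.

    Any flagged name should be verified by a human before installation,
    since an adversary may have registered it to match an LLM hallucination.
    """
    return sorted(set(suggested) - VETTED_PACKAGES)

# Example: one hallucinated-looking name mixed with two real dependencies.
llm_suggestions = ["requests", "pandas-profiler-pro", "numpy"]
print(flag_unvetted(llm_suggestions))
```

In practice the allowlist would come from a dependency lockfile or an internal package mirror rather than a hard-coded set; the point is that suggested names are treated as untrusted input until verified.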
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.
Related techniques
- AML.T0002 — Acquire Public AI Artifacts
- AML.T0008 — Acquire Infrastructure
- AML.T0016 — Obtain Capabilities
- AML.T0017 — Develop Capabilities
- AML.T0019 — Publish Poisoned Datasets
- AML.T0020 — Poison Training Data
- AML.T0021 — Establish Accounts
- AML.T0058 — Publish Poisoned Models
- AML.T0065 — LLM Prompt Crafting
- AML.T0066 — Retrieval Content Crafting
- AML.T0079 — Stage Capabilities
- AML.T0104 — Publish Poisoned AI Agent Tool