- Tactics: Credential Access
- Maturity: Demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0082
Description
Adversaries may attempt to use their access to a large language model (LLM) on the victim's system to collect credentials. Credentials may be stored in internal documents, which can be inadvertently ingested into a retrieval-augmented generation (RAG) database and ultimately retrieved by an AI agent.
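The exposure path above can be reduced by scanning documents for credential-like strings before they reach a RAG index. The sketch below is a minimal illustration, not an official mitigation: the pattern names and regexes are assumptions chosen for demonstration, not an exhaustive detection set.

```python
import re

# Illustrative patterns for credential-like strings; real deployments
# would use a broader, maintained ruleset (these are assumptions).
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def find_credentials(text: str) -> list[str]:
    """Return the names of credential patterns found in a document."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items()
            if pattern.search(text)]

# A document like this should be flagged (or redacted) before ingestion,
# so an AI agent cannot later retrieve the secrets from the RAG store.
doc = "Deploy notes: password = hunter2, key AKIAABCDEFGHIJKLMNOP"
print(find_credentials(doc))  # → ['aws_access_key', 'password_assignment']
```

In practice such a check would run in the ingestion pipeline, either blocking flagged documents or redacting the matched spans before indexing.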
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Credential Access tactic this technique falls under. Our training is led by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.