- Maturity: demonstrated
- Reference: atlas.mitre.org/techniques/AML.T0095.000
Description
Adversaries may search public code repositories for information about a victim or victim system that can be used during targeting. Victims may store code or artifacts related to their AI systems in repositories on third-party websites such as GitHub, GitLab, SourceForge, and Bitbucket. Adversaries may also search the code repositories of common AI tools, frameworks, models, or agentic systems that are used, but not owned, by the victim.
Public code repositories can often be a source of various information about victims, such as commonly used AI frameworks, libraries, models, datasets, agents, and agent tools, as well as the names of employees. Adversaries may also identify more sensitive data, including accidentally leaked credentials or API keys (ex: Credentials from AI Agent Configuration). Information from these sources may reveal opportunities for other forms of Reconnaissance (ex: Gather RAG-Indexed Targets), establishing operational resources (ex: Acquire Public AI Artifacts), Discovery (ex: Discover AI Agent Configuration) and/or Initial Access (ex: Valid Accounts or Phishing).
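Leaked credentials of the kind described above are typically found by pattern-matching repository contents against known secret shapes. The sketch below shows a minimal version of that scan; the pattern names, regexes, and `scan_text` helper are illustrative assumptions, not part of ATLAS or any specific scanner.

```python
import re

# Illustrative secret-shaped patterns (assumptions, not an official list):
# the kinds of strings that secret scanners commonly flag in repo content.
SECRET_PATTERNS = {
    # AWS-style access key ID: "AKIA" followed by 16 uppercase alphanumerics
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic assignment of an api key / secret / token to a quoted literal
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"]([A-Za-z0-9_\-]{16,})['\"]"
    ),
    # PEM private key header committed to the repository
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs found in a blob of repo content."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "sk_live_abcdef1234567890"'
    for name, value in scan_text(sample):
        print(name, value)
```

The same logic serves both sides: an adversary runs it across cloned public repositories during reconnaissance, while a defender runs it pre-commit to keep AI agent configuration secrets out of version control in the first place.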
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Reconnaissance tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.