Acquire Public AI Artifacts (AML.T0002)

Tactic: Resource Development
Maturity: realized
Reference: atlas.mitre.org/techniques/AML.T0002

Description

Adversaries may search public sources, including cloud storage, public-facing services, and software or data repositories, to identify AI artifacts. These AI artifacts may include the software stack used to train and deploy models, training and testing data, and model configurations and parameters. An adversary will be particularly interested in artifacts hosted by or associated with the victim organization, as these may represent what that organization uses in a production environment. Adversaries may identify artifact repositories via other resources associated with the victim organization (e.g., Search Victim-Owned Websites or Search Open Technical Databases). These AI artifacts often provide adversaries with details of the AI task and approach.
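To make the search concrete, the sketch below flags filenames in a public repository listing that commonly indicate AI artifacts (weights, configs, training code, data). The pattern lists and the `classify_artifacts` helper are hypothetical illustrations, not an exhaustive or authoritative signature set:

```python
# Minimal sketch: flag filenames that commonly indicate AI artifacts in a
# public repository listing. Patterns here are illustrative assumptions.
import fnmatch

ARTIFACT_PATTERNS = {
    "model weights": ["*.pt", "*.pth", "*.onnx", "*.safetensors", "*.h5"],
    "model config": ["config.json", "params.yaml"],
    "training code": ["train.py", "finetune.py", "requirements.txt"],
    "data": ["*.csv", "*.parquet", "*.tfrecord"],
}

def classify_artifacts(paths):
    """Return {category: [matching paths]} for files of interest."""
    hits = {}
    for path in paths:
        name = path.rsplit("/", 1)[-1]
        for category, patterns in ARTIFACT_PATTERNS.items():
            if any(fnmatch.fnmatch(name, p) for p in patterns):
                hits.setdefault(category, []).append(path)
    return hits

# Example listing (hypothetical repository contents)
listing = [
    "repo/train.py",
    "repo/checkpoints/model.safetensors",
    "repo/config.json",
    "repo/README.md",
]
print(classify_artifacts(listing))
```

A real adversary would feed this kind of filter the output of a repository or bucket enumeration; the point is simply that artifact types are recognizable from naming conventions alone.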

AI artifacts can aid an adversary's ability to Create Proxy AI Model. If these artifacts include pieces of the actual model in production, they can be used to directly Craft Adversarial Data. Acquiring some artifacts requires registration (providing user details such as an email address and name), AWS keys, or written requests, and may require the adversary to Establish Accounts.

Artifacts might be hosted on victim-controlled infrastructure, providing the victim with some information on who has accessed that data.
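That defender-side visibility can be as simple as scanning access logs for artifact downloads. The sketch below assumes a common-log-format web server and an illustrative set of artifact extensions; both the log format and `artifact_downloads` helper are assumptions for illustration:

```python
# Minimal sketch: find successful downloads of AI artifact files in
# web-server access logs on victim-controlled infrastructure.
# Log format (common log format) and extensions are assumptions.
import re

ARTIFACT_EXTS = (".pt", ".onnx", ".safetensors", ".h5")

LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})'
)

def artifact_downloads(log_lines):
    """Yield (client_ip, path) for successful GETs of artifact files."""
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group("status") == "200" \
                and m.group("path").endswith(ARTIFACT_EXTS):
            yield m.group("ip"), m.group("path")

# Example log lines (synthetic)
logs = [
    '203.0.113.9 - - [01/Jan/2025:12:00:00 +0000] "GET /models/prod.onnx HTTP/1.1" 200',
    '198.51.100.4 - - [01/Jan/2025:12:01:00 +0000] "GET /index.html HTTP/1.1" 200',
]
print(list(artifact_downloads(logs)))
```

Self-hosting artifacts does not prevent acquisition, but it turns each download into a logged event the victim can monitor.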


How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Resource Development tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →


Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses