AI Agent Configuration (AML.T0002.002)

Maturity: demonstrated
Reference: atlas.mitre.org/techniques/AML.T0002.002

Description

Adversaries may acquire publicly accessible AI agent configuration files to understand agent capabilities, gain unauthorized access to tools and data sources, or identify credentials for further attacks. Configuration files define which tools an agent can use, credentials for external services, system prompts, and behavioral settings, making them valuable resources for adversaries targeting AI agent deployments.

Once configuration files are acquired, adversaries may perform Discover AI Agent Configuration to gain additional insights they can use in their operation, or Credentials from AI Agent Configuration to harvest secrets.

AI agent configuration files take multiple forms depending on the platform and agent framework. Files adversaries may target include:

  • System prompts: Files containing agent instructions, behavioral guidelines, and internal logic.
  • Tool configuration: Files defining tools the agent can utilize, including Model Context Protocol (MCP) configs (e.g., mcp.json, claude_desktop_config.json), IDE-specific configs (e.g., .claude/settings.json, .vscode/tasks.json), and framework-specific settings that define external tool and data source integrations.
  • Skills and workflows: Files defining agent capabilities, behaviors, or workflows. Often a combination of instructions, scripts, and resources.
  • Environment and deployment configs: Files that control agent deployment and runtime behavior, often environment variables or framework-specific configs.
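To illustrate why the file types above attract adversaries, here is a minimal Python sketch that walks an MCP-style config and flags keys and values that look like embedded credentials. The `mcp.json` fragment, server names, token, and connection string are all invented for illustration; real configs vary by framework:

```python
import json
import re

# Hypothetical MCP-style agent config (mcp.json). The servers, token, and
# connection string below are invented examples, not a real deployment.
EXAMPLE_CONFIG = """
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_exampleexampleexample"}
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres",
               "postgresql://agent:s3cret@db.internal:5432/prod"]
    }
  }
}
"""

# Key names and value shapes that commonly indicate embedded secrets.
SECRET_KEY_RE = re.compile(r"(token|secret|password|api[_-]?key)", re.I)
SECRET_VALUE_RE = re.compile(r"(ghp_[A-Za-z0-9]+|postgres(ql)?://\S+:\S+@)")

def find_exposed_secrets(config_text: str) -> list:
    """Walk a JSON config and report paths whose keys or values look like credentials."""
    findings = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                child = f"{path}.{key}" if path else key
                # String value under a secret-looking key (e.g. *_TOKEN).
                if isinstance(value, str) and SECRET_KEY_RE.search(key):
                    findings.append(child)
                walk(value, child)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                child = f"{path}[{i}]"
                # Secret-looking value embedded in an argument list.
                if isinstance(item, str) and SECRET_VALUE_RE.search(item):
                    findings.append(child)
                walk(item, child)

    walk(json.loads(config_text), "")
    return findings

print(find_exposed_secrets(EXAMPLE_CONFIG))
# Flags the GitHub token env var and the postgres connection string.
```

The same traversal applies to any of the config forms listed above: the value to an adversary is that tool endpoints, data sources, and credentials all sit together in one small, often publicly committed file.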

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses