- Maturity
- realized
- Reference
- atlas.mitre.org/techniques/AML.T0010.005
Description
Adversaries may target AI agent tools as a means to compromise a victim’s AI supply chain. Tools add capabilities to AI agents, allowing them to interact with other services, connect to data sources, access internet resources, run system tools, and execute code. They are an attractive target for adversaries because compromising an AI agent can provide them with broad accesses and permissions on the victim’s system via the agent’s other tools.
Poisoned agent tools (See AI Agent Tool Poisoning) can contain malicious code or LLM Prompt Injections that manipulate the agent’s behavior and even modify how other tools are called. Adversaries have successfully used a poisoned MCP server to exfiltrate private user data [5].
Agent tools have exploded in popularity, with thousands of MCP servers available publicly [2]. They are often released on open-source software repositories such as GitHub, indexed on hubs specific to MCP servers [3][4], and published to package registries such as NPM. AI agents can also be connected to remotely-hosted tools [5]. This creates an environment where malicious tools can proliferate rapidly and safeguards are often not in place.
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the relevant tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.