Drive-by Compromise (AML.T0078)

Tactic: Initial Access

Tactics
Initial Access
Maturity
demonstrated
Reference
atlas.mitre.org/techniques/AML.T0078

Description

Adversaries may gain access to an AI system through a user visiting a website over the normal course of browsing, or an AI agent retrieving information from the web on behalf of a user. Websites can contain an LLM Prompt Injection which, when executed, can change the behavior of the AI model.

The same approach may be used to deliver other types of malicious code that don’t target AI directly (See Drive-by Compromise in ATT&CK).

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the Initial Access tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.

View AI security courses →

Related techniques

Train your team on real adversarial-AI attacks.

GTK Cyber's AI red teaming courses are taught by practitioners who break models for a living.

View AI Security Courses