- Tactics: AI Attack Staging
- Maturity: realized
- Reference: atlas.mitre.org/techniques/AML.T0088
Description
Adversaries may use generative artificial intelligence (GenAI) to create synthetic media (i.e. imagery, video, audio, and text) that appear authentic. These “deepfakes” may mimic a real person or depict fictional personas. Adversaries may use deepfakes for impersonation to conduct Phishing or to evade AI applications such as biometric identity verification systems (see Evade AI Model).
Manipulation of media has long been possible; however, GenAI reduces the skill and effort required, allowing adversaries to rapidly scale operations to target more users or systems. It also makes real-time manipulation feasible.
Adversaries may use open-source models and software designed for legitimate use cases to generate deepfakes for malicious purposes. Some projects, however, are tailored specifically for malicious use, such as ProKYC.
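To make the artifact angle concrete, below is a minimal, hypothetical sketch of one classic forensic heuristic: naive nearest-neighbor upsampling (a stand-in for the resampling artifacts some generation pipelines leave behind) produces periodic block boundaries that a simple pixel-difference statistic can flag. The `blockiness_score` helper, the synthetic test images, and the thresholds are all illustrative assumptions, not part of ATLAS or any real detector; production deepfake detection is far more sophisticated.

```python
import numpy as np

def blockiness_score(img: np.ndarray, block: int = 4) -> float:
    """Ratio of pixel differences at block boundaries vs. inside blocks.

    For an image built from replicated pixel blocks (e.g. naive 4x
    nearest-neighbor upsampling), interior differences are near zero
    while boundary differences are not, so the score is large. For a
    natural-looking image the two are similar and the score is near 1.
    """
    # Absolute differences between horizontally adjacent columns.
    d = np.abs(np.diff(img, axis=1))
    cols = np.arange(d.shape[1])
    # Diffs that straddle a block boundary (e.g. column 3 -> 4).
    at_boundary = (cols % block) == block - 1
    boundary = d[:, at_boundary].mean()
    interior = d[:, ~at_boundary].mean()
    return float(boundary / (interior + 1e-12))

rng = np.random.default_rng(0)
# "Natural" stand-in: smooth gradient plus mild sensor-like noise.
base = np.linspace(0.0, 1.0, 64)
natural = np.outer(base, base) + 0.02 * rng.standard_normal((64, 64))
# "Synthetic" stand-in: the same content downsampled 4x, then
# nearest-neighbor upsampled back, imprinting a periodic grid artifact.
low = natural[::4, ::4]
upsampled = np.repeat(np.repeat(low, 4, axis=0), 4, axis=1)

print(blockiness_score(natural))    # close to 1: no grid artifact
print(blockiness_score(upsampled))  # very large: strong grid artifact
```

A score far above 1 only indicates this one resampling artifact; it says nothing about semantically manipulated or high-quality generated media, which is why real verification pipelines combine many such signals.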
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including the AI Attack Staging tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs and focuses on real adversarial scenarios, not slide decks.