Deepfake-Assisted Phishing (AML.T0052.001)

Maturity: feasible
Reference: atlas.mitre.org/techniques/AML.T0052.001

Description

Adversaries may use deepfakes (AI-generated synthetic images, audio, or video) in phishing campaigns to impersonate trusted individuals, executives, or organizations. These attacks exploit human trust by presenting fraudulent voice or video communications as legitimate, enabling adversaries to manipulate targets into disclosing credentials, transferring funds, or granting access to systems.

Voice deepfakes (AI-cloned voices) are used in vishing (voice phishing) attacks [1] over telephone or VoIP. Adversaries can clone a target’s voice from as little as a few seconds [2] of publicly available audio taken from speeches, earnings calls, podcasts, or social media [3]. These cloned voices are then used in pre-recorded voicemail messages or live phone calls. Video deepfakes can impersonate a trusted individual’s face and voice: adversaries use publicly available video from company meetings, earnings calls, or social media to create convincing AI-generated video of target individuals, which is then deployed in live video conference calls or recorded video messages. AI-generated content has advanced to the point that it is often difficult to identify as synthetic [4].
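To make that last point concrete, naive signal-level heuristics of the kind once used to screen for synthetic speech no longer reliably separate cloned voices from genuine ones. The sketch below is illustrative only: the clip.wav filename and 16 kHz sample rate are placeholders we chose, it relies on the open-source librosa library, and the statistics it computes should not be treated as a working deepfake detector.

    # Coarse spectral statistics for an audio clip -- a minimal, hypothetical
    # sketch. Modern voice deepfakes are NOT reliably flagged by fixed
    # features like these; anti-spoofing systems rely on trained models.
    import numpy as np
    import librosa

    def spectral_summary(path: str, sr: int = 16000) -> dict:
        """Load an audio clip and compute coarse spectral statistics."""
        y, sr = librosa.load(path, sr=sr)                   # resample to a fixed rate
        flatness = librosa.feature.spectral_flatness(y=y)   # closer to 1 = noise-like
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        return {
            "mean_flatness": float(np.mean(flatness)),
            "mean_centroid_hz": float(np.mean(centroid)),
        }

    if __name__ == "__main__":
        stats = spectral_summary("clip.wav")  # placeholder path
        print(stats)

In practice, genuine and cloned speech often yield comparable values on fixed features like these, which is consistent with the point above that modern synthetic content is hard to identify.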

Adversaries may first perform Obtain Capabilities: Generative AI, followed by Generate Deepfakes, in preparation for a Phishing campaign. Deepfake phishing campaigns often use additional communication channels (such as email, SMS, or instant messaging) in layered social engineering attacks [5].

These attacks span a wide range of victims and attack types, demonstrating the breadth of deepfake-enabled fraud. Adversaries have conducted extensive deepfake-assisted phishing campaigns against individuals, including targeted scams [6] [7] [8] [9] as well as large-scale credential harvesting campaigns targeting billions of users [10] [11]. Adversaries have used deepfakes to impersonate executives [12], causing business entities to suffer significant financial losses [13] [14]. There are also reports of government officials being targeted in widespread campaigns [4] [15].

The attacks also span communication channels: voice deepfakes used for vishing [16], video deepfakes used in conference calls [13], and multi-channel campaigns combining phone, email, and messaging platforms [10].

How GTK Cyber trains on this

GTK Cyber's hands-on AI security courses cover adversarial-AI techniques across the MITRE ATLAS framework, including Initial Access, the tactic this technique falls under. Our practitioner-led training is taught by Charles Givre and other field-tested SMEs, and it focuses on real adversarial scenarios, not slide decks.

View AI security courses →
