- Maturity: Feasible
- Reference: atlas.mitre.org/techniques/AML.T0034.000
Description
Adversaries may send an excessive number of otherwise normal or low-complexity queries to an AI system with the goal of overwhelming its capacity and increasing operating costs.
An attacker can automate high-volume request generation, exploiting rate limits, autoscaling policies, and pay-per-use billing models to drive sustained resource consumption without relying on specially crafted, computationally expensive inputs. The inflated traffic can also increase latency, cause request queuing, and degrade or deny service for legitimate users.
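To make the cost impact concrete, here is a minimal sketch of the billing math under a pay-per-use model. All numbers (request rate, token counts, per-token price) are hypothetical and chosen only for illustration; real provider pricing varies.

```python
# Illustrative cost model: sustained low-complexity queries against a
# pay-per-use AI endpoint. Every figure below is hypothetical.

def flood_cost(requests_per_second: float,
               duration_hours: float,
               tokens_per_request: int,
               price_per_1k_tokens: float) -> float:
    """Estimated billing cost driven by an automated high-volume query stream."""
    total_requests = requests_per_second * duration_hours * 3600
    total_tokens = total_requests * tokens_per_request
    return (total_tokens / 1000) * price_per_1k_tokens

# 50 req/s of short (200-token) queries for 24 hours at $0.002 per 1k tokens:
cost = flood_cost(50, 24, 200, 0.002)
print(f"${cost:,.2f}")  # -> $1,728.00 per day from otherwise unremarkable traffic
```

The point of the sketch: none of these individual queries is expensive or anomalous, yet the aggregate volume alone produces a meaningful daily bill, and autoscaling can amplify the infrastructure cost on top of it.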
How GTK Cyber trains on this
GTK Cyber's hands-on AI security courses cover adversarial AI techniques across the MITRE ATLAS framework, including the tactic this technique falls under. Our practitioner-led training, taught by Charles Givre and other field-tested SMEs, focuses on real adversarial scenarios, not slide decks.