Hands-on training in adversarial testing of AI systems. Learn to probe LLMs and AI-powered applications for vulnerabilities: prompt injection, data leakage, alignment failures, and more. Essential for any organization deploying AI at scale.
Topics covered
- Adversarial prompt engineering and prompt injection
- Evaluating AI model robustness and safety boundaries
- Testing for bias, hallucination, and data exfiltration
- Building red-team frameworks for AI deployments
- Compliance with AI security standards and regulations
Tools & technologies
PythonJupyterCentaur VM