Overview
This intensive four-day interactive course teaches security professionals how to apply artificial intelligence, machine learning, and data science to modern cybersecurity challenges. Participants learn to work through the full data science lifecycle: data preparation, feature engineering, exploratory analysis, visualization, model development, evaluation, and scaling, all focused on AI’s direct applications in security operations, threat detection, and adversarial defense.
The program blends practical coding, applied AI theory, and hands-on red/blue team labs. Students gain experience with both classical ML techniques and generative AI, including large language models. They learn to use these technologies for detection, automation, and analysis, while also examining how adversaries can manipulate, evade, or weaponize them.
What You Will Learn
- Generative AI for security: Using LLMs for spam and social engineering detection, rapid threat intel summarization, and automated log analysis
- Prompt engineering: Crafting and evaluating prompts for maximum effectiveness in security tasks
- AI agents: Building agents for red teaming, data analysis, and process automation
- Adversarial AI: Red and blue team exercises simulating attacks on ML/AI models, including evasion, poisoning, and prompt injection
- LLM security: Understanding and mitigating risks of RAG poisoning and prompt injection
- SOC automation: Building and securing AI-powered applications for threat hunting and incident response
- Threat detection with ML: Applying machine learning to detect network intrusions, malware, phishing, and fraud
- Anomaly detection: Hunting anomalous indicators of compromise and reducing false positives with AI-driven methods
- Data science foundations: Using Pandas and Python to manipulate large security datasets, preprocess raw data, and engineer features for ML pipelines
- Classical ML algorithms: Training, evaluating, and tuning supervised models (Random Forest, Naive Bayes, KNN, SVM) and unsupervised models (clustering, anomaly detection) on real cyber use cases
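The supervised workflow named above (training, evaluating, and tuning a Random Forest) can be sketched in a few lines of scikit-learn. The dataset here is synthetic and purely illustrative, standing in for engineered network-flow features; the course labs use real security data:

```python
# Minimal sketch of the supervised ML workflow: train and evaluate a
# Random Forest classifier. The synthetic dataset is a stand-in for
# engineered security features (e.g. network-flow statistics).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic binary dataset: label 1 = malicious, 0 = benign.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=42)

# Hold out a stratified test set for honest evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Precision/recall per class matter more than raw accuracy in detection work.
print(classification_report(y_test, clf.predict(X_test)))
```

Swapping `RandomForestClassifier` for Naive Bayes, KNN, or SVM estimators changes only one line, which is why the labs reuse this scaffold across models.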
What You Leave With
By the end of the course, students understand how to apply AI and ML to cybersecurity and how to evaluate the risks, attack surfaces, and defensive strategies unique to AI-powered systems. Every lab produces working code that students can run in their own environments.
Topics Covered
- Generative AI and LLMs for security tasks (spam detection, threat intel summarization, log analysis)
- Prompt engineering for security applications
- Building AI agents for red teaming, data analysis, and automation
- Red and blue team exercises against ML/AI models (evasion, poisoning, prompt injection)
- RAG poisoning and LLM prompt injection risks and mitigations
- AI-powered SOC automation, threat hunting, and incident response
- ML for intrusion detection, malware classification, phishing, and fraud
- Anomaly detection for hunting indicators of compromise and reducing false positives
- Python and Pandas for large security datasets
- Data preprocessing and feature engineering for AI/ML pipelines
- Supervised learning (Random Forest, Naive Bayes, KNN, SVM) applied to real threats
- Unsupervised learning (clustering, anomaly detection) for threat discovery
- Model training, evaluation, and tuning for cyber use cases
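The unsupervised side of the list above can be illustrated with a short anomaly-detection sketch: score log-like events with an Isolation Forest and surface the outliers. Column names and values are hypothetical toy data, not course material:

```python
# Illustrative sketch of unsupervised anomaly detection for threat hunting:
# flag outlying sessions in a toy event log with an Isolation Forest.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: bytes sent out and failed logins.
df = pd.DataFrame({
    "bytes_out":     [120, 135, 150, 110, 140, 98_000],
    "failed_logins": [0,   1,   0,   0,   1,   25],
})

# fit_predict returns -1 for outliers, 1 for inliers.
model = IsolationForest(contamination=0.2, random_state=0)
df["anomaly"] = model.fit_predict(df[["bytes_out", "failed_logins"]])

# Surface only the flagged sessions for analyst review.
print(df[df["anomaly"] == -1])
```

In practice the `contamination` rate is tuned against known-benign baselines, which is one way AI-driven methods reduce false positives during IOC hunting.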