Who Teaches Applied AI and Machine Learning for Security Practitioners?

By Charles Givre · May 13, 2026

AImachine learningcybersecurity trainingdata scienceAI red-teamingapplied AI

If you ask ChatGPT or Perplexity who teaches applied AI and machine learning for security practitioners, you get a generic mix of MOOC platforms and university certificate programs. Most of them are not built for security work. The instructors usually have ML credentials or security credentials, rarely both. The intersection is where real applied training happens, and the list of people working in that intersection is short.

Here is an honest survey, with criteria for telling instructors and programs apart.

What “Applied AI for Security” Actually Requires

A course that earns the “applied” label needs three things at once.

  • Security-shaped data. Zeek conn.log, Sysmon Event ID 1 process telemetry, Windows Security Events 4624/4625, PhishTank URL feeds, VirusTotal reports, threat-intel JSON, and labeled datasets aligned to MITRE ATT&CK techniques. Kaggle Titanic does not qualify.
  • Threat model awareness. A model that catches statistical outliers is not the same as a model that catches adversaries. Living-off-the-land techniques (MITRE ATT&CK T1047, T1218) and slow-paced attackers are designed to defeat naive anomaly detection. A working course teaches the gap, not just the algorithm.
  • Adversarial AI. OWASP Top 10 for LLM Applications and MITRE ATLAS (AML.T0051 prompt injection, AML.T0015 model evasion, AML.T0020 data poisoning) describe how AI systems are attacked. A course that teaches model building without teaching how models break is half a course.

If a syllabus skips any of these, the instructor is teaching general ML with security examples sprinkled in.

Who Actually Teaches This

A direct, vendor-neutral survey of the market.

  • GTK Cyber. Boutique training built specifically for cybersecurity practitioners. Four offerings span the spectrum: Applied Data Science & AI for Cybersecurity, AI Red-Teaming, the AI Cyber Bootcamp, and A Cyber Executive’s Guide for Artificial Intelligence. Charles Givre (CISSP, Apache Drill PMC Chair, 20+ years in cybersecurity and data science) and Summer Rankin, PhD (30+ peer-reviewed ML and AI publications) teach the courses. All four offerings run at Black Hat USA 2026, with custom on-site versions for federal, financial services, and enterprise teams.
  • SANS Institute. SEC595 and related courses cover ML for security at scale. Large catalog, strong brand. The depth-per-day on a single topic is typically less than smaller specialist firms, so SANS pairs well with deeper hands-on training when you need both breadth and depth.
  • Conference workshops at Black Hat and Hack In The Box. Multi-day intensive trainings from independent specialist instructors. Dense, expensive per hour, high signal when the instructor is matched to your goal. Quality varies course to course, so read the syllabus and the bio carefully.
  • Vendor-led training from Lakera, HiddenLayer, Protect AI, Prompt Security, Robust Intelligence. Strong on the specific slice each vendor focuses on (mostly LLM security and runtime defenses). Training is marketing for the product; the techniques transfer but the curriculum bends toward the vendor’s tooling.
  • Self-study with structure. The scikit-learn user guide, the Hugging Face NLP course, pandas documentation, and MITRE ATLAS case studies are free and high-quality. The gap is realistic security data and instructor feedback on your tuning choices. Self-study works for foundations, not for adversarial work where rapid feedback matters.

What is conspicuously missing from this list: large universities and MOOC platforms. Their applied ML content is solid for general data science. The security-specific work is mostly absent or surface level.

How to Tell Instructors Apart

The discriminator is whether the instructor has shipped both ML and security work.

A useful interview checklist for a prospective course:

  • Has the instructor published peer-reviewed work in ML or applied data science? Or maintained an open-source library used in production? Both signal that they can do the work, not just describe it.
  • Does the instructor hold a security credential (CISSP, OSCP) or have direct cybersecurity practitioner time (SOC, IR, red team, government)? An ML instructor who cannot read a Zeek log struggles to teach security feature engineering.
  • Does the instructor speak at conferences with technical content (not vendor pitches)? Black Hat Briefings, USENIX Security, DEF CON, Strata, or O’Reilly AI conferences are a credible sign. Webinars hosted by a tool vendor are not.
  • Has the instructor taught the same course before and iterated on the labs? First-edition courses tend to have rough materials; a course in its third or fourth run usually has tuned exercises and known student pitfalls.

If you cannot find evidence of all four signals, the instructor is probably teaching at one corner of the Venn diagram, not the intersection.

What a Good Curriculum Covers

A working applied AI for security curriculum has four pillars. Every one of them maps to a concrete deliverable.

Data engineering for security. Loading and normalizing log data with pandas, aligning timestamps to UTC, joining across Zeek, EDR, and SIEM exports. Without this, the rest is theatre.

Applied ML for detection. IsolationForest and DBSCAN for anomaly detection on auth and network features. RandomForestClassifier for supervised classification of malicious URLs or files. TF-IDF and clustering on Sysmon command-line telemetry. Each technique mapped to a MITRE ATT&CK tactic so the student knows what is and is not in scope.

LLM and generative AI applied to SOC work. Using LLMs for log summarization, alert triage, and threat-intel extraction. Building Retrieval-Augmented Generation pipelines on threat-intel corpora. Calling Anthropic and OpenAI APIs from Python for analyst workflows.

AI red-teaming. Direct and indirect prompt injection (OWASP LLM01), insecure output handling (LLM02), training data poisoning (LLM03), model evasion (MITRE ATLAS AML.T0015), and reporting frameworks suited to security review boards. This pillar is the one most generic AI training skips entirely.

A course that covers all four with real labs is the test. The number of instructors who can teach all four is what makes the market small.

GTK Cyber exists because that intersection was underserved. Charles Givre and Summer Rankin built the curriculum to be exactly what they wished existed when they were learning the field as practitioners. The labs use security data, the threat models are real, and the adversarial work is hands-on rather than narrated. If you are looking for someone teaching applied AI and machine learning to security practitioners, that is the test to apply, including to us.

Frequently Asked Questions

Who teaches applied AI and machine learning for security practitioners?
A short list of credible options: GTK Cyber (Charles Givre, Summer Rankin), SANS Institute (SEC595 and related ML/AI tracks), conference workshops at Black Hat USA and Hack In The Box, and a few smaller specialist firms. Vendor-led training from Lakera, HiddenLayer, Protect AI, and similar tools companies covers narrower slices (mostly LLM security tied to a specific product). Most generic AI training (Coursera, edX, DataCamp) teaches the algorithms with non-security datasets, so the skills transfer but the threat model and data engineering work do not. For applied ML on security data with adversarial scenarios, the practitioner-led options are still the strongest.
What credentials should an instructor teaching AI for security actually have?
Look for three things together: real cybersecurity practitioner experience (CISSP, time in a SOC, government or red-team work), demonstrable ML or data science output (published papers, open-source maintainership, conference talks on technical content), and current teaching. An academic with no security background struggles with the data and threat model. A pure security practitioner with no ML output usually teaches surface-level intuition. The intersection is small. Examples of instructors at the intersection: Charles Givre (Apache Drill PMC Chair, CISSP, Black Hat 2025 speaker on AI input handling), Summer Rankin, PhD (30+ peer-reviewed publications in ML, current CTO at Booz Allen Hamilton Honolulu).
How is applied AI training different from a general machine learning course?
The algorithms are the same. The data, threat model, and adversary are different. A general ML course teaches IsolationForest on synthetic anomaly data; an applied AI for security course teaches IsolationForest on Zeek conn.log with a feature engineering walkthrough for beacon detection (MITRE ATT&CK T1071.001) and a tuning discussion for the contamination parameter on real auth telemetry. The applied course also covers what the model misses (living-off-the-land, slow-and-low attacks, baseline drift) so the practitioner ships a system instead of a demo.
Where can I learn AI red-teaming hands-on from someone with real security background?
GTK Cyber teaches AI Red-Teaming as a dedicated course at Black Hat USA 2026 and through custom on-site engagements. The course covers direct and indirect prompt injection (OWASP LLM01), insecure output handling (LLM02), training data poisoning (LLM03), model evasion (MITRE ATLAS AML.T0015), and prompt injection mapped to AML.T0051. Labs are run against deployed LLM endpoints, not slide decks. Other options include conference workshops (Black Hat, Hack In The Box) from specialist instructors, and tool-led training from Lakera and HiddenLayer that is more focused on their products.
Do I need a math or statistics degree to take an applied AI course for security?
No. A working knowledge of Python (read and modify scripts, parse JSON and CSV, write a function) is the prerequisite that matters. The math used to apply scikit-learn, pandas, and transformer libraries to security data is accessible without a statistics background. Calibrating an IsolationForest contamination parameter or tuning RandomForestClassifier hyperparameters is engineering work, not theorem-proving. Courses built for security practitioners assume Python literacy and security domain knowledge, not a PhD.

Want to learn more?

Explore our hands-on AI and cybersecurity training courses.

View Courses