Robust and Secure AI - Ph.D. Course (20 hours)

📖 Course Overview

The course is designed to provide an introductory yet technical understanding of the security vulnerabilities and threats facing modern AI systems, with a focus on deep neural networks and computer vision applications. Participants will explore key definitions and techniques for designing and implementing security attacks and safety threats against AI systems, while also gaining a foundational understanding of how to defend against them.

The course will begin by introducing the principles of robustness in neural networks from both a security and a safety perspective. Topics such as evasion attacks (adversarial examples), data poisoning, backdoor attacks, model extraction (model stealing), and model manipulation (e.g., weight-corruption attacks) will be examined in depth, along with countermeasures and state-of-the-art defenses. The course will also cover privacy attacks, including model inversion, membership inference, and training-data extraction. A portion of the course will be devoted to hands-on laboratory sessions, where participants will implement attacks and countermeasures on practical deep neural network applications.
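To give a flavor of the hands-on material, the sketch below shows the classic FGSM (Fast Gradient Sign Method) evasion attack in PyTorch, one of the simplest ways to craft an adversarial example. This is a minimal illustrative sketch, not course code: the `model`, `image`, and `label` names are placeholders assumed to be a classifier returning logits, a batched image tensor in [0, 1], and integer class labels.

```python
# Minimal FGSM evasion-attack sketch (illustrative; not course material).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """Craft an adversarial example with a single signed-gradient step."""
    # Work on a leaf copy of the input so we can take its gradient.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # then clip back to the valid [0, 1] image range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Even such a one-step perturbation, imperceptible to a human, is often enough to flip a modern classifier's prediction, which is the core phenomenon the course's defense lectures address.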

🛠️ Format & Exam

  • Lectures: 20 hours (in-person).
  • Exam (2 CFU): Project work + oral discussion.

If you would like to attend the course in the first semester of the 2025–2026 academic year, please fill out the following form: Course Registration Form

🗓️ Schedule

The course is organized into the following lectures (20 hours total).
Dates and rooms are to be confirmed (TBD).

  1. Introduction and Foundations of AI
  2. Adversarial Learning and Attacks
  3. Adversarial Defenses and Robust Training
  4. Poisoning Attacks and Backdoors
  5. Out-of-Distribution Detection & Uncertainty Analysis
  6. Introduction to Privacy Attacks
  7. Hands-on Laboratories and Research Tips

📬 Contact

giulio.rossolini@santannapisa.it