🧠 Research Topics
My research focuses on improving and understanding the trustworthiness of AI architectures. I am particularly interested in the interplay between robustness, security, and efficiency, exploring how we can design AI systems that are both powerful and reliable under real-world constraints. Some of the central questions driving my work include:
- 🛡️ Safety and security of AI: threats and countermeasures, with a focus on adversarial robustness and out-of-distribution samples in computer vision and natural language processing.
- 🔍 Explainability and uncertainty estimation as tools to strengthen trust in AI decisions and to better interpret feature patterns in large and complex vision and NLP models.
- 🚦 Risk-aware metrics to evaluate AI failures in safety-critical domains such as autonomous driving and healthcare.
“Any technological advance can be dangerous. Fire was dangerous from the start, and so (even more so) was speech — and both are still dangerous to this day — but human beings would not be human without them.”
— Isaac Asimov, The Caves of Steel
📄 Publications
For a complete and updated list of my publications, please visit: Google Scholar · Semantic Scholar
2025
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning
  Internet of Things · Link
- Video Deblurring by Sharpness Prior Detection and Edge Information
  arXiv · Link
- Leveraging Knowledge Graphs and LLMs for Structured Generation of Misinformation
  20th International Conference on Availability, Reliability and Security (ARES 2025) · Link
- SynDRA: Synthetic Dataset for Railway Applications
  Proceedings of the Winter Conference on Applications of Computer Vision (WACV) · Link
- Improving LLM Reasoning for Vulnerability Detection via Group Relative Policy Optimization
  arXiv (preprint) · Link
- GTPO: Trajectory-Based Policy Optimization in Large Language Models
  arXiv (preprint) · Link
- Benchmarking the Spatial Robustness of DNNs via Natural and Adversarial Localized Corruptions
  Pattern Recognition 172:112412 · Link
2024
- Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications
  2024 ACM/IEEE 15th International Conference on Cyber-Physical Systems (ICCPS) · Link
- Towards Trustworthy AI: Understanding the Impact of AI Threats and Countermeasures
  Società Italiana di Intelligence (SOCINT), to appear · Link
- CARLA-GeAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models
  IEEE Transactions on Intelligent Transportation Systems · Link
- Concise Thoughts: Impact of Output Length on LLM Reasoning and Cost
  arXiv · Link
- Edge-Only Universal Adversarial Attacks in Distributed Learning
  arXiv · Link
2023
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error
  IEEE Transactions on Pattern Analysis and Machine Intelligence · Link
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis
  Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2023) · Link
- Robust-by-Design Classification via Unitary-Gradient Neural Networks
  Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2023) · Link
- TrainSim: A railway simulation framework for LiDAR and camera dataset generation
  IEEE Transactions on Intelligent Transportation Systems · Link
- On the real-world adversarial robustness of real-time semantic segmentation models for autonomous driving
  IEEE Transactions on Neural Networks and Learning Systems · Link
2022
- Increasing the confidence of deep neural networks by coverage analysis
  IEEE Transactions on Software Engineering · Link
- In-Car Entertainment via Group-wise Temporary Mobile Social Networking
  VEHITS 2022 · Link
- Evaluating the robustness of semantic segmentation for autonomous driving against real-world adversarial patch attacks
  Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision · Link
🎓 Thesis Topics
For available master’s and doctoral thesis projects, please contact me.
Projects span a range of research areas, including (but not limited to) the following:
- Adversarial Robustness for Vision: localized attacks/defenses, universal perturbations, transferability (CNNs vs. ViTs).
- Trustworthy AI for Safety-Critical Systems: risk-aware metrics, robustness–accuracy trade-offs, runtime monitoring.
- Distributed & Split Inference (Edge–Cloud): offloading reliability, early exits, edge-only attacks/defenses, feature-space security.
- LLM Reliability & Efficiency: reasoning under constraints, robust prompting, and LLM alignment.
- Out-of-Distribution & Uncertainty: OOD detection and calibration under distribution shift.
- Dataset Tools & Benchmarks: spatial robustness benchmarking (natural and adversarial localized corruptions).
- Explainability for Robust Systems: internal activation analysis, attribution-guided defenses, failure-mode interpretation.
📌 Misc (Awards, Career & Services)
- PhD Dissertation Prize — Carlo Mosca Award, Società Italiana di Intelligence (link)
- Project Coordinator — “On the Safety and Security of Distributed AI-based Autonomous Multi-Agent Systems,” financed by the Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Pisa.
- Associate Editor of The Visual Computer (since 2024).
- Consulting Associate Editor of IEEE Transactions on Information Forensics and Security (since 2024).
- Reviewer / Program Committee Member for several AI conferences and journals — IEEE T-PAMI, IEEE T-IFS, IEEE T-ITS, IEEE T-NNLS, ICCV 2023, ECCV 2024, CVPR 2025, NeurIPS 2025, AAAI 2023–2026, and others.
- Session Chair — DSD-HSTIEC 2024 and 2025.
📬 Contact
giulio.rossolini@santannapisa.it
Last update
July 2025