I am an associate professor of computer science at Fudan University and a member of the Fudan Vision and Learning Lab. I am also an honorary fellow at The University of Melbourne. My main research area is Trustworthy AI, aiming to develop secure, robust, explainable, privacy-preserving, and fair learning algorithms and AI models for different applications. I am also passionate about using AI to expand our understanding of the mind and the universe.
I received my Ph.D. degree from The University of Melbourne and spent two more wonderful years there as a postdoctoral research fellow. I then worked for 1.5 years at Deakin University as a lecturer before joining Fudan University. I obtained my bachelor's and master's degrees from Jilin University and Tsinghua University, respectively.
Email / Google Scholar / GitHub
We are looking for motivated students, postdocs, and interns in the fields of Trustworthy AI, Multimodal Learning, Reinforcement Learning, and Generative AI to join our team. Drop me an email if you are interested.
I am teaching a graduate-level course, Trustworthy Machine Learning, and an introductory course, Artificial Intelligence, designed for high school students. We are also writing an AI security book in Chinese, Artificial Intelligence: Data and Model Security.
Latest News
- [09/2024] I will serve as an Area Chair for ICLR 2025.
- [09/2024] One paper on unlearnable examples for segmentation models is accepted by NeurIPS'24.
- [07/2024] Our work on model lock, detecting query-based adversarial attacks, and multimodal jailbreak attacks on VLMs is accepted by MM'24.
- [07/2024] Our work on adversarial prompt tuning is accepted by ECCV'24.
- [04/2024] Our work on intrinsic motivation for RL is accepted by IJCAI'24.
- [03/2024] Our work on adversarial policy learning in RL is accepted by DSN'24.
- [03/2024] Our work on safety alignment of LLMs is accepted by NAACL'24.
- [03/2024] Our work on machine unlearning is accepted by TDSC.
- [01/2024] Our work on self-supervised learning is accepted by ICLR'24.
Research Interests
- Trustworthy AI/ML
- Adversarial attack and defense
- Backdoor attack and defense
- Data privacy, AI fairness
- Self-supervised learning
- Reinforcement learning
- Intellectual property protection
- Multimodal and Generative AI
- Safety alignment of large models
- Multimodal attack and defense
- Multimodal learning, vision-language models
- Diffusion models: theory and applications
Professional Activities
- Program Committee Member
- ICLR (2019-2025), ICML (2019-2024), NeurIPS (2019-2024), CVPR (2020-2023), ICCV (2021-2023), ECCV (2020), AAAI (2020-2022), IJCAI (2020-2021), KDD (2019,2021), ICDM (2021), SDM (2021), AICAI (2021)
- Journal Reviewer
- Nature Communications, Pattern Recognition, TPAMI, TIP, IJCV, JAIR, TNNLS, TKDE, TIFS, TOMM, KAIS