A main limitation of current artificial intelligence (AI) systems is their lack of robustness. Well-known examples are the susceptibility of generative AI to producing unwanted content, and the sensitivity of AI vision systems to adversarial attacks. The risks of AI failures range from loss of profit and time to, in safety-critical applications, loss of life.

Our group investigates foundational research problems that will enable AI systems to become a safe, secure, and human-understandable part of our everyday lives. Our working hypothesis is that AI models need to develop a causal generative understanding of the world to become robust. Our research integrates aspects of Machine Learning, Visual Computing, and Computer Security.

Dr. Adam Kortylewski
Group Leader


Google Scholar
Twitter
Youtube


We have an opening for a PhD position in the context of safety and security of generative AI models.
If you are interested, please fill out this form.

News

[Jan 2024] We have four papers accepted to CVPR 2024. Details coming soon.
[Jan 2024] We have three papers accepted to ICLR 2024 (one spotlight).
[Jan 2024] Haoran Wang joined our lab as a visiting student, welcome!
[Jan 2024] We will organize the 2nd edition of the Workshop on Generative Models for Computer Vision at CVPR 2024.
[Oct 2023] Two papers accepted to WACV 2024.
[Oct 2023] One paper accepted to NeurIPS 2023.
[Oct 2023] Olaf Dünkel joined our lab as a PhD student, welcome!
[Aug 2023] Adam will serve as Area Chair for CVPR 2024. Great honor!
[Aug 2023] One paper accepted to TPAMI.
[Aug 2023] One paper accepted to the journal track of SIGGRAPH ASIA 2023. The paper will soon be available online.
[Jul 2023] Two papers accepted at ICCV 2023.
[May 2023] Leonhard Sommer joined our lab as a PhD student, welcome!
[Mar 2023] We will organize the 2nd Workshop on Out Of Distribution Generalization in Computer Vision at ICCV 2023.
[Mar 2023] We will organize the 4th Workshop on Adversarial Robustness in the Real World at ICCV 2023.
[Mar 2023] Adam will give a talk at the 5th Workshop on Vision for All Seasons: Adverse Weather and Lighting Conditions at CVPR 2023.
[Mar 2023] Our paper was selected as highlight at CVPR 2023 (top 2.5%)!
[Mar 2023] Adam will serve as Area Chair for NeurIPS 2023. Great honor!
[Feb 2023] Three papers (one highlight) accepted at CVPR 2023.
[Feb 2023] Adam will give a talk at the TrustML Young Scientist Seminar in March.
[Feb 2023] Adam will give a talk at the Workshop on Practical Deep Learning in the Wild at AAAI 2023.
[Jan 2023] Two papers accepted at ICLR 2023.
[Jan 2023] Two papers accepted at EUROGRAPHICS 2023.
[Dec 2022] We will organize the Workshop on Generative Models for Computer Vision at CVPR 2023.
[Oct 2022] Join us at ECCV 2022 in the Workshop on Out Of Distribution Generalization in Computer Vision.
[Oct 2022] Two papers accepted at WACV 2023.
[Sep 2022] Best paper honorable mention at VMV 2022.
[Jul 2022] Three papers (one oral) accepted at ECCV 2022.
[Jul 2022] Adam Kortylewski was appointed as Emmy Noether group leader at the University of Freiburg.
[Apr 2022] We will organize three workshops at ECCV 2022, covering different aspects of robustness in computer vision:
out-of-distribution generalization, cross-dataset generalization, and adversarial robustness.
[Mar 2022] Four papers (one oral) at CVPR 2022.
[Feb 2022] Adam Kortylewski was appointed as research group leader at the Max Planck Institute for Informatics.