Development of Methods for Assessing the Safety of Predictive Neural Networks and Improving Their Robustness (AISafety)

Timing: 01.04.2023-31.03.2026
About the Project: Development of a generic approach to building robust NN-based classifiers trained on insufficient data. Development of a generic, statistically well-defined approach to estimating systematic uncertainties arising from epistemic network uncertainty. Conversion of CMS Open Data from ROOT files to pandas DataFrames. Transfer of the developed methods between scientific fields and to industry.
Principal Investigators: Prof. Dr. Lucie Flek, Prof. Dr. Alexander Schmidt, Prof. Dr. Matthias Schott, Prof. Dr. Christopher Wiebusch
Team: Dr. Dirk Düllmann, Dr. Lars Perchalla, Dr. Akbar Karimi, Dr. Wei-Fan
Publications:
- ArithmAttack: Evaluating Robustness of LLMs to Noisy Context in Math Problem Solving, Zain Ul Abedin, Shahzeb Qamar, Lucie Flek, Akbar Karimi
- Exploring Robustness of LLMs to Sociodemographically-Conditioned Paraphrasing, Pulkit Arora, Akbar Karimi, Lucie Flek
- Exploring Robustness of Multilingual LLMs on Real-World Noisy Data, Amirhossein Aliakbarzadeh, Lucie Flek, Akbar Karimi
- A Comparison of Data Augmentation Techniques for Text Classification, Peyman Hassani Jalilian, Akbar Karimi