Enhancing & Assessing Trustworthy AI using Out-of-Distribution Detection

[CFAR Outstanding PhD Student Seminar Series]
Enhancing & Assessing Trustworthy AI using Out-of-Distribution Detection by Mr David Berend
24 Mar 2022 | 10:00am (Singapore Time)

Deep neural networks hold great potential for autonomous driving, healthcare diagnosis, financial risk modelling and public security. Yet most models never reach deployment after prototyping. One of the main reasons is the fear that a misprediction could cause significant physical or financial harm. A trust gap therefore persists, preventing the potential of deep neural networks from being realised in these safety-critical applications.

In this talk, Mr David Berend from A*STAR explores how quantitative assessments can be used to study and enhance such trust, taking a close look at the robustness, privacy and security of deep neural networks. In particular, he shows how the fundamental technique of out-of-distribution detection can provide insight into missing data, guide data generation and inform optimised neural architectures to achieve higher and more trustworthy performance. The talk highlights the essence of six research works and concludes with guiding research directions to further enhance trust in AI systems, enabling faster adoption in vital safety-critical applications.
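To give a flavour of the core technique discussed in the talk, the following is a minimal sketch of one classic out-of-distribution detection baseline: scoring inputs by their maximum softmax probability and flagging low-confidence inputs as out-of-distribution. The threshold value and example logits are illustrative assumptions, not taken from the speaker's work.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: low scores suggest the input
    # lies outside the training distribution.
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.7):
    # Threshold is an illustrative assumption; in practice it is
    # tuned on held-out in-distribution data.
    return msp_score(logits) < threshold

# One confident prediction vs. one near-uniform (uncertain) prediction.
logits = np.array([[6.0, 0.5, 0.2],   # peaked    -> in-distribution
                   [0.9, 1.0, 1.1]])  # diffuse   -> likely OOD
print(flag_ood(logits))  # -> [False  True]
```

Such a score lets a deployed model abstain on unfamiliar inputs instead of mispredicting silently, which is the trust mechanism the abstract alludes to.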


SPEAKER
Mr David Berend
Ph.D. Candidate at Nanyang Technological University (NTU)
Singapore Institute of Manufacturing and Technology (SIMTech)
Agency for Science, Technology and Research (A*STAR)


David Berend received his B.Sc. degree in computer science & business from RheinMain University, Germany, in 2018 and is now a 3rd-year Ph.D. candidate at Nanyang Technological University (NTU) under the Singapore International Graduate Award programme, associated with SIMTech at A*STAR. David's research focuses on assessing the robustness, security and privacy of deep neural networks (DNNs). His research has made fundamental discoveries about how a technique called out-of-distribution detection can help assess and enhance the quality of a DNN. David has applied these discoveries to real-world use cases in healthcare, automotive and public security together with AI Singapore. Recently, this work helped him initiate a standardisation initiative for Trustworthy AI focusing on AI security alongside his supervisor, Prof. Liu Yang. Together with 30 industry and governmental leaders, including NVIDIA, A*STAR and Temasek, the standard is now being published under Enterprise Singapore and is being integrated into international efforts under ISO.