Research Focus

Resilient & Safe AI

AI solutions are increasingly being deployed in real-world applications, to support human decision-making or even to make automated decisions. However, research has shown that AI models can be highly specific to the datasets they were trained on and may not perform as well on novel, unseen inputs. More precisely, AI models may fail when inputs are subject to shifts in the data distribution, for instance due to adversarial attacks (such as adversarial examples and data poisoning) or changes in the deployment environment. These failures can have catastrophic consequences, especially in safety-critical applications like self-driving vehicles and automated disease diagnosis. We therefore aim to develop AI models that are robust to such shifts (safe) and, failing that, can recover from them (resilient).
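To make the adversarial-example failure mode concrete, the following is a minimal sketch of the widely used Fast Gradient Sign Method (FGSM) applied to a simple logistic-regression classifier. The weights, inputs, and the `fgsm_perturb` helper are all illustrative, not taken from any specific system we study:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to push a sigmoid classifier away from true label y.

    Uses the sign of the cross-entropy loss gradient w.r.t. the input,
    scaled by a small budget eps (the FGSM step).
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad = (p - y) * w             # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad)

# Illustrative example: a confidently classified point loses confidence
# after a small, targeted perturbation.
w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, -1.0])                      # original logit: 3.0 (class 1)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)  # perturbed logit is lower
```

Even this tiny perturbation budget lowers the model's confidence in the correct class; on deep networks the same idea can flip predictions outright while leaving the input visually unchanged.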

Another aspect of ensuring that an AI system is safe is being able to understand how it works and the decisions it makes. This allows us to build in appropriate fail-safe mechanisms and fix issues when they arise. An interpretable and explainable AI system also helps build trust in its predictions.

In summary, we focus on the following research areas:

  • Adversarial Attacks & Defences: Ensure that AI models resist various forms of intentional attack.
  • Robustness to Data Shifts: The data distribution in the deployment environment may drift from the distribution the model was trained on; we aim to make AI models robust to these shifts.
  • Continual Learning: Help models adapt to, and recover from, shifts in the data distribution.
  • Explainable & Interpretable AI: Understand how and why an AI model makes a prediction, to help build safeguards and engender trust.
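As a small illustration of the data-shift theme above, one simple way to flag a shift is to compare deployment-time feature statistics against those recorded at training time. The sketch below uses a standardised mean difference per feature; the `shift_score` function and the thresholds implied by it are illustrative assumptions, not a method from the research described here:

```python
import numpy as np

def shift_score(train_batch, live_batch):
    """Per-feature standardised mean difference between training data
    and live deployment data; larger values indicate a larger shift."""
    mu = train_batch.mean(axis=0)
    sigma = train_batch.std(axis=0) + 1e-8  # avoid division by zero
    return np.abs(live_batch.mean(axis=0) - mu) / sigma

# Synthetic demonstration: data from the training distribution scores
# low, while mean-shifted data scores high.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))
same = rng.normal(0.0, 1.0, size=(200, 3))     # same distribution
shifted = rng.normal(1.5, 1.0, size=(200, 3))  # mean-shifted distribution
```

In practice a monitor like this would be one trigger for the fail-safe and recovery mechanisms discussed above, prompting either human review or continual-learning updates.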

Safety Issues in AI

Fig 2. Backdoor attacks in face recognition

Fig 3. Adversarial attacks in medical diagnosis

Fig 4. Nvidia DAVE-2 self-driving car platform

Fig 5. Biases in face recognition algorithms

Fig 6. Resilient & Safe AI and its applications