Research Pillars

Resilient & Safe AI

AI solutions are increasingly being deployed in real-world applications, to support human decision-making or even to make decisions autonomously. However, research has shown that AI models tend to be highly specialized to the data they are trained on and may not perform as well on novel, unseen inputs. More precisely, AI models may fail when inputs are subject to shifts in the data distribution, for instance due to adversarial attacks (such as adversarial examples and data poisoning) or changes in the deployment environment. These failures can have catastrophic consequences, especially in safety-critical applications like self-driving vehicles and automated disease diagnosis. We therefore aim to develop AI models that are robust to such shifts (safe) and, failing that, able to recover from them (resilient).
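
To make the notion of an adversarial example concrete, the sketch below implements the standard fast gradient sign method (FGSM) in PyTorch. It is a minimal illustration only; the classifier, images, and labels in the usage comment are hypothetical placeholders, not components of our systems.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft adversarial examples with the fast gradient sign method.

    Each input is nudged in the direction that most increases the
    classification loss, within an L-infinity budget of `epsilon`.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one signed-gradient step and clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and data batch):
#   perturbed = fgsm_attack(classifier, images, labels)
#   flipped = (classifier(perturbed).argmax(dim=1) != labels).float().mean()
```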

In addition, Deepfake technology can fabricate realistic images, audio, and video, making it difficult for AI systems to distinguish genuine content from fake and increasing the complexity of detection and defense. Misused Deepfakes can mislead or manipulate AI systems, compromising the safety and reliability of their decision-making. Moreover, the privacy and ethical issues posed by Deepfakes challenge the fairness and transparency of AI, highlighting the need for more robust safeguards and detection mechanisms to keep these systems secure.

Ensuring the safety of an AI system involves understanding how it works and the decisions it makes. This understanding allows us to build in appropriate fail-safe mechanisms and to fix issues when they arise. An interpretable and explainable AI system also helps build trust in its predictions.
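
As a simple illustration of the kind of post-hoc explanation this line of work studies (one of many possible techniques, not necessarily the one we deploy), the sketch below computes an input-gradient saliency map for an image classifier; the model and image names are assumed placeholders.

```python
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return per-pixel saliency: |d score(target_class) / d input|.

    Pixels with large gradient magnitude are those whose small changes
    most affect the predicted score, a crude but widely used explanation.
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]   # score for a single image and class
    score.backward()
    # Collapse the colour channels into one heat map per pixel.
    return x.grad.abs().amax(dim=1).squeeze(0)

# Usage (hypothetical classifier and one preprocessed image of shape [1, 3, H, W]):
#   heat = saliency_map(classifier, image, target_class=predicted_label)
```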

In summary, we focus on the following research areas:

  • Adversarial Attacks & Defences: Ensure that AI models are resistant to various forms of intentional attacks.
  • Robustness to Data Shifts: Aim to make AI models robust to shifts in the data distribution between their training and deployment environments.
  • Continual Learning: Enable models to adapt and recover from shifts in the data distribution.
  • Deepfake Generation for Enhanced AI Defense: Develop advanced generation techniques to boost AI’s defense capabilities against manipulative content.
  • Explainable and Interpretable AI: Understand and interpret how and why a prediction is made by an AI model to help build safeguards and engender trust.

Safety Issues in AI

Fig 2. Backdoor attacks in face recognition


Fig 3. Adversarial attacks in medical diagnosis


Fig 4. Nvidia DAVE-2 self-driving car platform

Fig 5. Biases in face recognition algorithms

Fig 6. A video generated by our in-house deepfake technology

Fig 7. Resilient & Safe AI and its applications