Research Pillars

Sustainable AI

While AI has attained remarkable achievements across multiple application domains, there remain significant concerns about its sustainability. The quest for improved accuracy on large-scale problems is driving the use of increasingly deep neural networks, which in turn increases energy consumption and climate-changing carbon emissions. For example, Strubell et al. estimated that training a single state-of-the-art deep learning model resulted in 626,000 pounds of carbon dioxide emissions.

More broadly, decades of advances in scientific computing have clearly demonstrated the advantages of modelling and simulation across many domains. However, energy consumption will soon become a hard feasibility constraint for such computational modelling, and AI in HPC will be needed to reduce the energy cost of computation in the face of trends such as the slowing of Moore’s Law.

Thus, there is a rising and urgent need for Sustainable AI, in both research and industry. To build and strengthen CFAR’s Sustainable AI capabilities and significantly advance existing research, the team will focus on the following two paradigms, as previously described by van Wynsberghe:

  • Sustainability of AI: To reduce the carbon emissions and heavy computing power consumption of AI models.
  • AI for Sustainability and Sustainable Computing: To leverage AI to address environmental and climate problems, and to mitigate the carbon footprint of the accelerating trend towards high-performance computing in modelling and simulation.

Specifically, for the sustainability of AI, we will study how to reduce carbon emissions and heavy computing power consumption by developing advanced AI technology in the following areas:

      Resource-Efficient AI

      To improve sustainability by leveraging the characteristics of hardware to produce smaller yet accurate models that require fewer computational resources. In particular, we will study novel techniques including knowledge distillation, Edge AI, hardware/software co-optimisation for power efficiency, and energy-aware model compression. Examples of this work include recent efforts in energy-aware model compression, where the team obtained an order-of-magnitude improvement in energy efficiency for various neural networks with negligible loss of accuracy, even when compared to other state-of-the-art model compression techniques.
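      As an illustration of one of the techniques named above, the following minimal sketch (illustrative code, not CFAR's implementation) shows the temperature-scaled soft-target loss at the heart of knowledge distillation, where a compact student model is trained to mimic a larger teacher:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher targets and student predictions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    ce = -sum(pi * math.log(qi + 1e-12) for pi, qi in zip(p, q))
    return temperature ** 2 * ce

# A student that mimics the teacher incurs lower loss than one that does not.
teacher = [3.0, 1.0, 0.2]
good_student = [2.9, 1.1, 0.1]
bad_student = [0.1, 0.2, 3.0]
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

      Minimising this loss lets a small, energy-efficient student absorb the "dark knowledge" in the teacher's soft output distribution rather than only the hard labels.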

      These advancements can be applied to fall detection systems, where prompt detection is crucial for preventing serious injuries and accidents. DeepFall¹ uses a deep spatio-temporal convolutional autoencoder and a novel method for calculating anomaly scores, achieving an accuracy of 93.3% on the SDU Fall Dataset. In comparison, our Real-time Embedded Demo System (REDS²) compresses depth video data and employs a 2D object detector to detect falls, attaining an accuracy of 98.7% on the same dataset. Additionally, unlike DeepFall, REDS is designed to run efficiently in real time on devices with relatively low computational power, such as the NVIDIA Jetson AGX Xavier, achieving 23 FPS in real-time monitoring while consuming only 3 watts of power.

      Fig 1. Empowering Real-Time Fall Detection: REDS Outperforms with 98.7% Accuracy and Ultra-Low Power Consumption on NVIDIA Jetson Xavier

      ¹ DeepFall -- Non-invasive Fall Detection with Deep Spatio-Temporal Convolutional Autoencoders
      ² REDS: Real-time Embedded Demo System for Fall Detection under 15W Power

      Data-efficient AI

      To improve sustainability by a) using less data for model training while achieving high model accuracy, thereby reducing the expensive process of data collection and annotation, and b) accelerating model training when confronted with a new problem, thereby reducing the expensive process of training a new model from scratch. In particular, we will study novel techniques including semi-supervised methods, the incorporation of external knowledge, active learning, transfer learning, and few-shot learning approaches such as meta-learning and unsupervised representation learning. Examples of this work include the development of methods to facilitate domain adaptation across problems, including in natural language processing and predictive maintenance.
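      As a concrete illustration of active learning, one of the techniques above, this toy sketch (illustrative names, not CFAR code) ranks an unlabelled pool by predictive entropy so that scarce annotation effort is spent on the examples the model is least certain about:

```python
import math

def entropy(probs):
    """Predictive entropy: higher means the model is less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(unlabelled, predict, budget=1):
    """Return the `budget` most uncertain pool examples, i.e. the ones
    most worth sending to a human annotator."""
    ranked = sorted(unlabelled, key=lambda x: entropy(predict(x)), reverse=True)
    return ranked[:budget]

# Toy binary classifier: confidence depends on distance from 0.5.
def predict(x):
    p = min(max(x, 0.01), 0.99)
    return [p, 1.0 - p]

pool = [0.05, 0.5, 0.92, 0.48]
# 0.5 and 0.48 give near-uniform predictions, so they are selected first.
print(select_most_uncertain(pool, predict, budget=2))  # -> [0.5, 0.48]
```

      Labelling only the selected examples, retraining, and repeating typically reaches a target accuracy with far fewer annotations than labelling the pool uniformly.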

      Fig 2. (a) Schematic of the process of transfer learning (Pan et al.)
      (b) Demonstrated use of Contrastive Adversarial Domain Adaptation for enhanced performance across cross-domain scenarios (Ragab et al. [ASTAR])

        Quantum Machine Learning (QML)

        The development of techniques to enable machine learning on so-called noisy intermediate-scale quantum (NISQ) devices. In particular, we will conduct research on aspects such as classical data encoding, variational embedding layers, quantum measurement, hybrid quantum-classical models, and large-scale QML.

        Fig 3. List of topics of interest in Sustainable AI

        For AI for sustainability, we will leverage AI to address pressing environmental problems and to mitigate the growing carbon footprint of the increasing use of HPC in modelling and simulation. Digital modelling and simulation are of great importance in promoting sustainability and mitigating climate change.

        AI can play an important role in both (i) enabling more accurate modelling, and (ii) reducing the cost of computing by reducing time-to-solution or the need for high-resolution models, hence improving sustainability. These efforts will also leverage the resource- and data-efficient AI developments under sustainability of AI, to ensure that applying AI to these challenges does not itself become energy- and resource-intensive.
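        One common way a learned model reduces time-to-solution is surrogate modelling: a cheap model trained on a few expensive solver runs replaces further calls to the solver. The sketch below is an illustrative toy (the `expensive_simulation` function is a quadratic stand-in, not a real solver), using Lagrange interpolation as the surrogate:

```python
def expensive_simulation(x):
    """Stand-in for a costly high-fidelity solver (here just a quadratic)."""
    return 2.0 * x * x - 3.0 * x + 1.0

def lagrange_surrogate(samples):
    """Build a cheap interpolating surrogate from a few solver evaluations."""
    def surrogate(x):
        total = 0.0
        for i, (xi, yi) in enumerate(samples):
            basis = 1.0
            for j, (xj, _) in enumerate(samples):
                if i != j:
                    basis *= (x - xj) / (xi - xj)
            total += yi * basis
        return total
    return surrogate

# Three expensive evaluations suffice to reproduce a quadratic exactly;
# every later query hits the cheap surrogate instead of the solver.
xs = [0.0, 1.0, 2.0]
samples = [(x, expensive_simulation(x)) for x in xs]
model = lagrange_surrogate(samples)
assert abs(model(1.5) - expensive_simulation(1.5)) < 1e-9
```

        In practice the surrogate would be a neural network trained on high-fidelity simulation outputs, and the energy saved depends on how many solver calls the surrogate displaces relative to its own training cost.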