News

4 Papers Accepted at KDD 2025

Congratulations to the following scientists from A*STAR Centre for Frontier AI Research (A*STAR CFAR) on having their papers accepted at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD):

  • Prof Ong Yew Soon, Chief Artificial Intelligence (AI) Scientist and Advisor
  • Dr Li Xiaoli, Senior Principal Scientist
  • Dr Chen Zhenghua, Senior Scientist
  • Dr Qian Hangwei, Scientist
  • Dr He Xin, Scientist

KDD 2025, a premier conference in data science, will be held from 3 to 7 August in Toronto, Canada, featuring both research and applied data science tracks.

List of accepted papers:

1. BurstGPT: A Real-World Workload Dataset to Optimise LLM Serving Systems
Yuxin Wang, Yuhan Chen, Zeyu Li, Xueze Kang, Yuchu Fang, Yeju Zhou, Yang Zheng, Zhenheng Tang, Xin He, Rui Guo, Xin Wang, Qiang Wang, Amelie Chi Zhou, Xiaowen Chu

This paper presents BurstGPT, a real-world large language model (LLM) serving workload dataset comprising 10.31 million traces collected from Azure OpenAI GPT services over 213 days. The dataset captures user request concurrency, conversation patterns, model response lengths, and system failures, enabling LLM serving systems to be optimised for real-world efficiency, stability, and reliability.
2. Temporal Restoration and Spatial Rewiring for Source-Free Multivariate Time Series Domain Adaptation
Peiliang Gong, Yucheng Wang, Min Wu, Zhenghua Chen, Xiaoli Li, Daoqiang Zhang

We propose Temporal Restoration and Spatial Rewiring (TERSE), a novel Source-Free Domain Adaptation (SFDA) method for multivariate time series (MTS), which explicitly models spatial-temporal consistency via temporal restoration and spatial rewiring. By leveraging pre-trained networks to guide target adaptation, TERSE achieves effective feature alignment without accessing source data.
3. FreRA: A Frequency-Refined Augmentation for Contrastive Learning on Time Series Classification
Tian Tian, Chunyao Miao, Hangwei Qian

We propose Frequency Refined Augmentation (FreRA), a lightweight yet effective approach designed for time series contrastive learning on classification tasks. FreRA demonstrates superior capabilities in contrastive representation learning and generalisation in transfer learning scenarios across diverse datasets.
4. Stabilising Modality Gap & Lowering Gradient Norms Improve Zero-Shot Adversarial Robustness of VLMs
Junhao Dong, Piotr Koniusz, Xinghua Qu, Yew-Soon Ong

Contemporary Vision-Language Models (VLMs) such as CLIP enable powerful zero-shot classification but remain susceptible to adversarial attacks. While robust fine-tuning improves generalisation and natural performance, current methods rely heavily on the vision branch and static text prompts. This work finds that zero-shot adversarial robustness in CLIP can be enhanced by stabilising the modality gap between image and text features.

Learn more about KDD 2025.