News

6 Papers Accepted at ACL 2025

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) will take place in Vienna, Austria from 27 July – 1 August 2025. This premier annual conference highlights cutting-edge research in computational linguistics and natural language processing (NLP).

Congratulations to the following scientists from A*STAR Centre for Frontier AI Research (A*STAR CFAR) on having their papers accepted:

  • Dr Joey Zhou, Deputy Director
  • Dr Cheston Tan, Senior Principal Scientist
  • Dr Foo Chuan Sheng, Principal Scientist
  • Dr Basura Fernando, Principal Scientist

List of accepted papers:

1. WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data
Xinyang Lu, Jingtan Wang, Zitong Zhao, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng, Bryan Kian Hsiang Low

This paper proposes the WAtermarking for Source Attribution (WASA) framework, the first of its kind, which enables large language models (LLMs) to generate texts that allow for effective source attribution.

Note: This paper was accepted to the Findings Track.

2. Understanding Large Language Model Vulnerabilities to Social Bias Attacks
Jiaxu Zhao, Meng Fang, Fanghua Ye, Ke Xu, Qin Zhang, Joey Tianyi Zhou, Mykola Pechenizkiy

In this paper, we comprehensively investigate the vulnerabilities of contemporary LLMs to various social bias attacks, including prefix injection, refusal suppression, and learned attack prompts. Insights from the study contribute to the development of more inclusive and ethically responsible LLMs.

3. DiffPO: Diffusion-styled Preference Optimisation for Inference Time Alignment of Large Language Models
Ruizhe Chen, Wenhao Chai, Zhifei Yang, Xiaotian Zhang, Ziyang Wang, Tony Quek, Joey Tianyi Zhou, Soujanya Poria, Zuozhu Liu

Inference-time alignment provides an efficient alternative for aligning LLMs with humans. However, these approaches still face challenges, such as limited scalability due to policy-specific value functions and latency during the inference phase. We propose a novel approach, Diffusion-styled Preference Optimisation (DiffPO), providing an efficient and policy-agnostic solution for aligning LLMs with humans.

4. How do Transformer Embeddings Represent Compositions? A Functional Analysis
Aishik Nagar, Ishaan Singh Rawal, Mansi Dhanania*, Cheston Tan

While transformer-based models have become the de facto standard for many language modelling tasks, little is known about how they represent compound words, and whether these representations are compositional. First, we evaluate compositionality in the representations by examining six diverse models of compositionality (addition, multiplication, dilation, regression, etc.). Surprisingly, we find that the classic vector addition model performs almost as well as any other model.

5. Theory of Mind in Large Language Models: Assessment and Enhancement
Ruirui Chen, Weifeng Jiang, Chengwei Qin, Cheston Tan

As LLMs become increasingly integrated into our daily lives, it is crucial to assess and enhance their capacity to interpret and respond to human mental states. In this paper, we review LLMs' theory of mind (ToM) capabilities by examining both evaluation benchmarks and strategies designed to improve them.

6. PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
Xinyu Zhang, Yuxuan Dong, Yanrui Wu, Jiaxing Huang, Chengyou Jia, Basura Fernando, Mike Zheng Shou, Lingling Zhang, Jun Liu

LLMs demonstrate remarkable capabilities across various domains, especially in mathematics and logical reasoning. However, current evaluations overlook physics-based reasoning, a complex task requiring the application of physics theorems and constraints. We present PhysReason, a novel and comprehensive benchmark for evaluating physics-based reasoning capabilities in LLMs.

*denotes former CFAR student

More on ACL 2025.