News

4 Papers Accepted at ACL 2026

Held from 2–7 July 2026 in San Diego, California, the 64th Annual Meeting of the Association for Computational Linguistics (ACL 2026) is one of the top global conferences in natural language processing (NLP) and computational linguistics.

Congratulations to the following researchers from A*STAR Centre for Frontier AI Research (A*STAR CFAR) on having their papers accepted at ACL 2026:

  • Prof Ivor Tsang, Director, A*STAR CFAR
  • Dr Qu Bohao, Scientist
  • Dr Yu Xingrui, Scientist
  • Dr Zhang Jie, Scientist

List of accepted papers:

1. CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems Based on Large Language Models
Zhenhong Zhou*, Zherui Li, Jie Zhang, Yuanhe Zhang, Kun Wang, Yang Liu, Qing Guo**

We introduce Denial-of-Collaboration (DoC), a new attack class that disrupts the collaborative structure of LLM multi-agent systems rather than individual components. Our proposed CORBA attack induces benign but recursive communication loops, causing resource exhaustion and system paralysis.

2. From Language to Driving: A Dual-Loop SLM-Enhanced Framework for Multi-Planner Scheduling via a Domain-Specific Language
Jiawei Liu***, Xun Gong, Muli Yang, Xingrui Yu, Fen Fang, Xulei Yang, Ivor Tsang, Yunfeng Hu, Hong Chen, Qing Guo**

This work frames instruction realisation for autonomous driving as a scheduling problem across multiple motion planners and introduces a dual-loop framework that translates natural language into safe, reliable vehicle control. A small language model handles high-level reasoning and scheduling, while a fast inner loop executes control, with receding-horizon planning, a constrained DSL, and reinforcement learning jointly improving long-horizon performance, safety, and instruction completion.

3. Safety Sidecar: Reflection-Driven Runtime Control for Safer Agents (ACL Findings)
Bin Wang***, Jiazheng Quan, Xingrui Yu, Hansen Hu, Hao Yu, Anjun Gao, Zhenglin Wan*, Hui Li, Ivor Tsang

Autonomous LLM agents are powerful but fragile, as small reasoning or retrieval errors can cascade into unsafe actions, and existing defences lack real-time control and portability. We introduce Safety Sidecar, a model-agnostic runtime module that uses reflective, evidence-driven intervention with external verification to enforce safe execution, improving secure-solution rates across multiple CWE scenarios while maintaining efficiency and correctness.

4. Learn Like Humans: Use Meta-cognitive Reflection for Efficient Self-Improvement
Xinmeng Hou*, Peiliang Gong, Bohao Qu, Wuqi Wang, Qing Guo**, Yang Liu

MARS is a self-improving agent framework that enables efficient self-evolution within a single recurrence cycle, avoiding the high computational cost of multi-turn recursive refinement. By combining principle-based and procedural reflection inspired by human learning, it generates optimized instructions that significantly improve reasoning performance across multiple benchmarks.


* denotes former CFAR student
** denotes former CFAR researcher
*** denotes current CFAR student
(accurate at time of posting)

More on ACL 2026.