News

7 Papers Accepted at AI4X 2026

Held from 15 – 19 June 2026 in Singapore, the AI4X – Accelerate Conference 2026 is a leading global platform at the intersection of artificial intelligence and scientific discovery. It showcases cutting-edge advances across disciplines, from materials science and biology to climate research and mathematics, while highlighting how AI-driven approaches accelerate innovation and real-world impact.

Congratulations to the following researchers from A*STAR Centre for Frontier AI Research (A*STAR CFAR) on having their papers accepted at AI4X:

  • Prof Ivor Tsang, Director, A*STAR CFAR
  • Prof Ong Yew Soon, Chief Artificial Intelligence (AI) Scientist and Advisor
  • Dr Ooi Chin Chun, Investigator
  • Dr Qian Hangwei, Scientist
  • Mr Chen Caishun, Lead Research Engineer
  • Mr Tang Leng Ze, Research Engineer

List of accepted papers:

1. A Multimodal Conditional JEPA for Composite Materials
Abhiroop Bhattacharya***, Hangwei Qian, Ivor Tsang

We propose a conditional multimodal Joint Embedding Predictive Architecture (JEPA) model for composite materials, encouraging invariance to experimental measurement artifacts while retaining morphology and context-sensitive factors.
2. Divergence-Constrained Physics-Informed Neural Networks for Time-Domain Maxwell's Equations
Chenhong Zhou, Zaifeng Yang, Xinyu Yang, Wei Bin Ewe, Hangwei Qian, Jie Chen

We improve time-domain Maxwell PINNs by explicitly enforcing Gauss-law divergence constraints, yielding more accurate and faster-converging cavity simulations.
3. Quality-Diversity LLM for Generative Design
Ariq Koh Boon Xiong, Melvin Wong, Jiao Liu, Caishun Chen, Yew-Soon Ong

In this paper, we propose a quality-diversity framework that leverages large language models for generative design.
4. How Prompt Structural Framing and Cognitive Scaffolding Influence Performance in Generative AI Design?
Yitian Huang, Caishun Chen, Jian Cheng Wong, Yew-Soon Ong

We use large language models (LLMs) as zero-shot planners to translate natural language goals into executable action sequences, advancing general-purpose reasoning and decision-making abilities in AI agents.
5. When Designs Explain Themselves: Report Cards for Evolutionary LLMs
Alex Siek Min Ping, Caishun Chen, Jian Cheng Wong, Yew-Soon Ong

We introduce a reasoning-augmented evolutionary design framework where LLMs generate interpretable “report cards” from geometric and performance data to guide and explain iterative engineering optimisation.
6. VLM4Physics: Equation Discovery Using Multi-modal Inputs
Ye Qianshu, Jian Cheng Wong, Chin Chun Ooi, Yew-Soon Ong

This paper explicitly targets physics equation discovery and shows that combining visual inputs with LLM reasoning improves structural recovery and convergence in dynamical systems.
7. Multi-task Attention for Doped Thermoelectric Properties Prediction
Tang Leng Ze, Trupti Mohanty, Sterling G. Baird, Leonard Ng Wei Tat, Taylor Sparks

In this study, multi-task learning was applied alongside a composition-based transformer model, CrabNet, leading to positive transfer and improved accuracy in thermoelectric property prediction.


*** denotes current CFAR student
(accurate at time of posting)

More on AI4X 2026.