News

Spotlight Paper at Agents4Science 2025

Held on 22 October 2025, Agents4Science 2025 is the world’s first open conference where artificial intelligence (AI) serves as both primary author and reviewer of research papers. The virtual conference explored the frontier of AI-driven scientific discovery, with a strong emphasis on transparency in AI-authored research and AI-mediated peer review.

Congratulations to Dr Zhang Jie and Dr Guo Qing (former A*STAR CFAR staff), who led the study “Visible Yet Unreadable: A Systematic Blind Spot of Vision–Language Models Across Writing Systems”. Their paper was selected as a Spotlight Paper – an honour awarded to only 11 papers at the conference.

The study was inspired by a striking example: “不想上班,那就别上” (“If you don’t feel like going to work, then don’t go”). While humans interpret this effortlessly, most cutting-edge vision–language models (VLMs) failed to grasp its meaning – with one model, Grok, even describing it as a “chicken cutlet”!

This amusing yet eye-opening misinterpretation underscores a deeper challenge – despite their sophistication, today’s VLMs often struggle to connect visual cues with culturally grounded linguistic meanings.

To examine these gaps, the team developed an AI-assisted benchmarking framework in which AI served not only as the subject of study but also as an active research collaborator – generating datasets, analysing results, and assisting with writing and visualisation – demonstrating human-AI co-evolution in scientific inquiry.

“Next-generation VLMs must evolve beyond rigid, one-pass processing pipelines toward feedback-driven architectures that mimic human cognition – dynamically perceiving, reasoning, and revising,” Dr Zhang explained.

Moving forward, the team will expand benchmarking to evaluate contextual and cultural comprehension, contributing to greater fairness and inclusivity in global AI systems. This recognition highlights A*STAR CFAR’s continued leadership in Resilient & Safe AI, advancing the dialogue between cognitive science, language diversity, and model alignment.

> Watch the video presentation to learn more about their work.
> Read the full paper here.