|
Xiangming Gu
I am a final-year Ph.D. candidate at the National University of Singapore. I received my bachelor's degrees from Tsinghua University in 2021. I was a student researcher at Google DeepMind and a research intern at Sea AI Lab.
My most representative work thoroughly demystifies the attention sink phenomenon [1, 2] and promotes architectural modifications, such as attention biases, to facilitate long-context modeling and training stability [post1, 2, 3].
I have research experience with data across multiple modalities, including text, vision, and audio. My current research interests include pre-training, architecture design, and reasoning for generative models from first principles.
Email  / 
Google Scholar  / 
Openreview  / 
Linkedin  / 
Twitter  / 
Github
|
|
|
* denotes equal contribution. Please see my
Google Scholar for the full list.
My most representative papers are highlighted.
|
|
Generative Models
|
When Attention Sink Emerges in Language Models: An Empirical View
Xiangming Gu,
Tianyu Pang,
Chao Du,
Qian Liu,
Fengzhuo Zhang,
Cunxiao Du,
Ye Wang,
Min Lin
International Conference on Learning Representations (ICLR), Singapore, 2025. (Spotlight)
Also in Annual Conference on Neural Information Processing Systems Workshop on Attributing Model Behavior at Scale (ATTRIB @ NeurIPS), Vancouver, Canada, 2024. (Oral)
pdf /
code /
video /
long talk /
slides /
poster /
post1 /
post2 /
post3
|
Why Do LLMs Attend to the First Token?
Federico Barbero*,
Álvaro Arroyo*,
Xiangming Gu,
Christos Perivolaropoulos,
Michael Bronstein,
Petar Veličković,
Razvan Pascanu
Conference on Language Modeling (COLM), Montreal, Canada, 2025.
pdf /
slides
|
Parallel and Sequential Test-Time-Scaling in Large Reasoning Models
Xiangming Gu and the Team
Google DeepMind Internal Technical Report, 2025.
|
On Memorization in Diffusion Models
Xiangming Gu,
Chao Du,
Tianyu Pang,
Chongxuan Li,
Min Lin,
Ye Wang
Transactions on Machine Learning Research (TMLR), 2025.
pdf /
code
|
SkyLadder: Better and Faster Pretraining via Context Window Scheduling
Tongyao Zhu,
Qian Liu,
Haonan Wang,
Shiqi Chen,
Xiangming Gu,
Tianyu Pang,
Min-Yen Kan
Annual Conference on Neural Information Processing Systems (NeurIPS), San Diego, USA, 2025.
Also in International Conference on Learning Representations Workshop on Open Science for Foundation Models (SCI-FM @ ICLR), Singapore, 2025.
pdf /
code
|
|
Safety and Security
|
Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
Xiangming Gu*,
Xiaosen Zheng*,
Tianyu Pang*,
Chao Du,
Qian Liu,
Ye Wang,
Jing Jiang,
Min Lin
International Conference on Machine Learning (ICML), Vienna, Austria, 2024.
Also in International Conference on Learning Representations Workshop on Large Language Model Agents (LLMAgents @ ICLR), Vienna, Austria, 2024.
pdf /
project page /
code /
video /
slides /
ICML poster /
GYSS poster /
WIRED press
|
Extracting Alignment Data in Open Models
Federico Barbero,
Xiangming Gu,
Christopher A. Choquette-Choo,
Chawin Sitawarin,
Matthew Jagielski,
Itay Yona,
Petar Veličković,
Ilia Shumailov,
Jamie Hayes
Technical Report, 2025.
pdf
|
On Calibration of LLM-based Guard Models for Reliable Content Moderation
Hongfu Liu,
Hengguan Huang,
Xiangming Gu,
Hao Wang,
Ye Wang
International Conference on Learning Representations (ICLR), Singapore, 2025.
Also in Annual Conference on Neural Information Processing Systems Safe Generative AI Workshop (SafeGenAI @ NeurIPS), Vancouver, Canada, 2024. (Oral)
pdf /
code
|
|
Google DeepMind
Student Researcher
05.2025 - 10.2025 (London, United Kingdom), 11.2025 - 01.2026 (Singapore)
Hosted by Petar Veličković and Larisa Markeeva.
Also worked closely with Razvan Pascanu and Soham De.
Research on reasoning and test-time scaling of LLMs. Developing gemma_penzai to debug LLMs.
|
|
Sea AI Lab (Sea Limited)
Research Intern
03.2023 - 04.2025 (Singapore)
Mentored by Tianyu Pang and Chao Du.
Also worked closely with Qian Liu and Min Lin.
Understanding, advancing, and safely deploying generative models and agents.
|
|
National University of Singapore
Ph.D. candidate in Computer Science
08.2021 - 02.2026 (Singapore)
Supervised by Prof. Ye Wang.
Research on speech, singing and multi-modality.
|
|
Tsinghua University
B.E. degree in Electronic Engineering and B.S. degree in Finance
08.2017 - 06.2021 (Beijing, China)
Supervised by Prof. Jiansheng Chen.
Research on computer vision.
|
Dean's Graduate Research Excellence Award, National University of Singapore, 2024
Research Achievement Award, National University of Singapore, 2025/2022
MM'22 Top Paper Award, Association for Computing Machinery, 2022
President's Graduate Fellowship, National University of Singapore, 2021-2025
Tsinghua's Friend-Zheng Geru Scholarship (Academic Excellence Scholarship), Tsinghua University, 2018
|
[2026.03]: NextGen Data 2026, invited keynote talk on Demystifying Attention Sink in LLMs and its Applications to Architecture Design.
[2026.03]: National University of Singapore SIAM, invited talk on Demystifying Attention Sink in LLMs and its Applications to Architecture Design.
[2026.01]: AER Labs and Network School, invited talk on Demystifying Attention Sink in LLMs and its Applications to Architecture Design.
[2025.11]: Department of Electronic Engineering, Tsinghua University and Tencent Hunyuan, invited talk on Attention Sink in LLMs and its Applications.
[2025.10]: Google DeepMind Team DL: Agent Frontier, talk on Looking into LLMs: From Tokens to Solutions.
[2025.06]: Google DeepMind Team DL: Agent Frontier, talk on Understanding Attention Sink in (Large) Language Models.
[2025.05]: ASAP Seminar Series, invited talk on When Attention Sink Emerges in Language Models: An Empirical View.
[2025.04]: Singapore Alignment Workshop, poster presentation on Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast.
[2025.02]: NUS Research Week Open House, invited talk on On the Interpretability and Safety of Generative Models.
[2025.01]: Global Young Scientists Summit, poster presentation on Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast.
|
Conference reviewer for NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, ACL ARR, MM, IJCAI, AISTATS
Journal reviewer for TPAMI, TOMM, TASLP, RA-L
|
Teaching Assistant, CS4347/CS5647, Sound and Music Computing, Fall 2024
Teaching Assistant, CS6212, Topics in Media, Spring 2024
Teaching Assistant, CS5242, Neural Networks and Deep Learning, Spring 2023
Teaching Assistant, CS3244, Machine Learning, Fall 2022
Teaching Assistant, CS4243, Computer Vision and Pattern Recognition, Spring 2022
|
I love tourism, movies, food, etc. I have lived in 🇨🇳🇸🇬🇬🇧, and travelled to 🇹🇭🇫🇮🇵🇹🇧🇪🇺🇸🇭🇰🇲🇾🇨🇦🇦🇪🇦🇹🇯🇵🇭🇺🇨🇿🇮🇹🇻🇦🇭🇷🇫🇷🇨🇭🇩🇪🇳🇱🇰🇷 for holidays/conferences.
|
You've probably seen this website template before, thanks to Jon Barron.
|
|