My research focuses on developing algorithms that enable training capable language models with fewer computational resources. I am fascinated by simple, general, and scalable approaches. Specifically, my work spans:
- Efficient Pre-training: Exploring efficient methods for building small-scale yet competitive models through model compression (Sheared Llama, CoFiPruning) and conditional computation (Lory), and studying their strengths and limitations through training trajectories.
- Understanding and optimizing data's impact on model behaviors: Investigating how data influences model capabilities (LESS), safety (Benign Data Attack), and transparency (Mink-Prob) during the post-training stage, and how on-policy data interacts with novel objectives (SimPO).
- Evaluating and advancing model reasoning: I am particularly excited about further enhancing the reasoning capabilities of models (e.g., adapting efficiently to novel scenarios, learning to perform sequential decision-making). We have released several challenging reasoning-intensive benchmarks, including CharXiv, BRIGHT, and LitSearch.
Please find me on Google Scholar, Semantic Scholar, GitHub, and X; here is my updated CV.
I am on the job market for 2025! Please reach out if you think I could be a fit for your institution or organization!
News
- [12/2024] I will attend NeurIPS 2024 in Vancouver!
- [10/2024] I attended the MIT Rising Stars in EECS Workshop!
- [09/2024] Our gemma-2-9b-it-SimPO model turned out to be the strongest <10B model on Chatbot Arena, check it out!
- [09/2024] SimPO and CharXiv were accepted to NeurIPS 2024!
- [08/2024] Gave a talk on CharXiv at Google Research.
- [07/2024] I will attend ICLR and ICML in Vienna, and will be co-organizing the High-dimensional Learning Dynamics (HiLD) workshop at ICML!
- [07/2024] Gave a talk on SimPO at Microsoft Research.
- [05/2024] Our benign data attack paper won the Best Paper Award at the ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models.
- [03/2024] I received the Apple Scholars in AIML PhD fellowship!
- [01/2024] Three papers were accepted to ICLR 2024! See you in Vienna 🥳
Selected Publications and Preprints
For a full list of publications, please refer to
this page.
- SimPO: Simple Preference Optimization with a Reference-Free Reward
Yu Meng*, Mengzhou Xia*, Danqi Chen
NeurIPS 2024;
[arXiv]
[Code]
- CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, Danqi Chen
NeurIPS 2024 Datasets and Benchmarks Track;
[arXiv]
[Code]
[Project Page]
- LESS: Selecting Influential Data for Targeted Instruction Tuning
Mengzhou Xia*, Sadhika Malladi*, Suchin Gururangan, Sanjeev Arora, Danqi Chen
ICML 2024;
[arXiv]
[Code]
[Blog]
- What is in Your Safe Data? Identifying Benign Data that Breaks Safety
Luxi He*, Mengzhou Xia*, Peter Henderson
COLM 2024;
DPFM Workshop@ICLR 2024 (Best Paper);
[arXiv]
[Code]
- Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen
ICLR 2024;
[arXiv]
[Code]
[Blog]