
My research focuses on developing algorithms that enable the training of capable language models with fewer computational resources. I am fascinated by simple, general, and scalable approaches. Specifically, my work spans:
- Efficient pre-training: Exploring efficient methods for building small-scale yet competitive models through model compression (Sheared LLaMA, CoFiPruning) and conditional computation (Lory), and studying their strengths and limitations through training trajectories.
- Understanding and optimizing data's impact on model behaviors: Investigating how data influences model capabilities (LESS), safety (Benign Data Attack), and transparency (Min-K% Prob) during the post-training stage, and how on-policy data interacts with novel objectives (SimPO).
- Evaluating and advancing model reasoning: Lately, I have been particularly excited about further enhancing the reasoning capabilities of models (e.g., adapting to novel scenarios efficiently and learning to conduct sequential decision-making). We have released several challenging, reasoning-intensive benchmarks, such as CharXiv, BRIGHT, and LitSearch.
Please find me on Google Scholar, Semantic Scholar, GitHub, and X; here is my updated CV.
Selected Publications and Preprints
For a full list of publications, please refer to this page.
- SimPO: Simple Preference Optimization with a Reference-Free Reward
Yu Meng*, Mengzhou Xia*, Danqi Chen
NeurIPS 2024; [arXiv] [Code]
- CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, Richard Zhu, Kaiqu Liang, Xindi Wu, Haotian Liu, Sadhika Malladi, Alexis Chevalier, Sanjeev Arora, Danqi Chen
NeurIPS 2024 Datasets and Benchmarks Track; [arXiv] [Code] [Project Page]
- LESS: Selecting Influential Data for Targeted Instruction Tuning
Mengzhou Xia*, Sadhika Malladi*, Suchin Gururangan, Sanjeev Arora, Danqi Chen
ICML 2024; [arXiv] [Code] [Blog]
- What is in Your Safe Data? Identifying Benign Data that Breaks Safety
Luxi He*, Mengzhou Xia*, Peter Henderson
COLM 2024; DPFM Workshop@ICLR 2024 (Best Paper); [arXiv] [Code]
- Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
Mengzhou Xia, Tianyu Gao, Zhiyuan Zeng, Danqi Chen
ICLR 2024; [arXiv] [Code] [Blog]