Invited Talks
- Aligning Language Models with LESS Data and a Simple (SimPO) Objective
- CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
- Training and Aligning Language Models: Algorithmic Advances in Objectives and Data Curation
- Exploring the Pareto-Frontier of Performance and Efficiency of Large Language Models
- Data- and Parameter-Efficient Adaptation of Large Language Models
- Sheared-LLaMA: Accelerating Language Model Pre-training via Structured Pruning
- Training Trajectories of Large Language Models Across Scales
- Towards Building Efficient Language Models
- Structured Pruning Learns Compact and Accurate Models