MLGym: A New Framework and Benchmark for Advancing AI Research Agents • Paper • 2502.14499 • Published Feb 2025
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark • Paper • 2501.05444 • Published Jan 9, 2025
Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models • Paper • 2502.14191 • Published Feb 2025
CodeCriticBench: A Holistic Code Critique Benchmark for Large Language Models • Paper • 2502.16614 • Published Feb 2025