LatestEval/full-latest
We build an LLM benchmark and refresh it every two weeks to keep it "uncheatable".
Humans receive new test questions at every exam, but LLMs? They have been evaluated with the same benchmarks for too long. Why not assess LLMs with fresh tests, just as we test our students? In this project, we introduce LatestEval, which automatically constructs language model benchmarks from the latest materials (e.g., arXiv, BBC, GitHub) to prevent "cheating" and data contamination.
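The core idea above can be sketched as a freshness filter: keep only documents published after a model's training cutoff, so benchmark questions are drawn from material the model cannot have seen. The helper below is a minimal hypothetical illustration, not the project's actual pipeline.

```python
from datetime import date

def filter_fresh(documents, cutoff):
    """Keep only documents published strictly after the training cutoff,
    so benchmark questions come from unseen, contamination-free material."""
    return [doc for doc in documents if doc["published"] > cutoff]

# Toy corpus with illustrative source IDs (hypothetical entries).
docs = [
    {"id": "arxiv:2301.00001", "published": date(2023, 1, 1)},
    {"id": "bbc:article-42", "published": date(2024, 6, 15)},
]

# Assuming a model trained on data up to 2023-09-30, only the
# later BBC article survives the filter.
fresh = filter_fresh(docs, cutoff=date(2023, 9, 30))
```

In the real pipeline this filter would sit in front of the question-construction step, re-run on each two-week refresh so the test set always trails the newest public material.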
News!!