Why is the C-Eval result 76.8 for the base model but only 38.9 for the instruct model?

#8 opened by xianf

I used lm-eval to run the benchmark. The base model performs well, matching the README, but the instruct model scores only 38.9 on this test set. What happened?

Thanks for sharing. We evaluated C-Eval internally and it performed reasonably. Could you please check your logs to see whether the prompts are off or anything else unexpected happened?
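
For reference, a minimal sketch of how one might re-run the instruct model with the chat template applied, since missing chat formatting is a common cause of large base-vs-instruct gaps. This assumes a recent lm-evaluation-harness (roughly v0.4.3 or later, where `apply_chat_template` is available) and uses a placeholder model path, not our exact internal setup:

```python
# Sketch: re-run C-Eval on the instruct model with the chat template applied.
# Assumes lm-eval >= 0.4.3 and the "ceval-valid" task group; the model path is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=PATH_TO_INSTRUCT_MODEL,dtype=bfloat16",
    tasks=["ceval-valid"],        # C-Eval validation split in lm-eval
    num_fewshot=5,
    apply_chat_template=True,     # format prompts with the model's chat template
)
print(results["results"])
```

If the score is still low with the chat template applied, comparing a few logged prompts and completions between the base and instruct runs usually shows whether the formatting or answer extraction is the problem.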
