---
license: mit
task_categories:
- text-classification
- question-answering
- multiple-choice
language:
- zh
size_categories:
- 10K<n<100K
arxiv:
- 2305.10263
---
M3KE (Massive Multi-Level Multi-Subject Knowledge Evaluation) is a benchmark for assessing the knowledge acquired by large Chinese language models, measuring their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information about M3KE, please consult our paper or visit our GitHub page.
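As an illustration of the evaluation described above, the sketch below computes per-task accuracy and its macro average over tasks from model predictions on multiple-choice questions. The record layout and task names here are hypothetical, not the dataset's actual schema; consult the paper and GitHub page for the official evaluation protocol.

```python
from collections import defaultdict

def multitask_accuracy(records):
    """Compute per-task accuracy and the macro average over tasks.

    `records` is a list of (task, predicted_option, gold_option) tuples;
    these field names are illustrative, not M3KE's actual schema.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for task, pred, gold in records:
        total[task] += 1
        correct[task] += int(pred == gold)
    per_task = {t: correct[t] / total[t] for t in total}
    macro = sum(per_task.values()) / len(per_task)
    return per_task, macro

# Hypothetical predictions on two tasks (options are A/B/C/D letters).
records = [
    ("task_physics", "A", "A"),
    ("task_physics", "B", "C"),
    ("task_ancient_chinese", "D", "D"),
    ("task_ancient_chinese", "D", "D"),
]
per_task, macro = multitask_accuracy(records)
print(per_task)  # → {'task_physics': 0.5, 'task_ancient_chinese': 1.0}
print(macro)     # → 0.75
```

Macro averaging weights every task equally, so small tasks count as much as large ones; micro averaging (pooling all questions) would instead weight tasks by question count.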