
1. Introduction

(1) To the best of our knowledge, this is the largest question-answering (QA) dataset for Chinese Construction Laws and Regulations (CCLR). Well-known benchmarks such as C-Eval typically contain only about 500 questions per domain, whereas our dataset focuses specifically on the CCLR domain and includes 6,179 questions.

(2) The dataset comprises 2,060 questions from the Registered Constructor Qualification Examination (RCQE) and 4,119 self-designed questions covering eight CCLR subdomains.

(3) This dataset is developed and maintained by Southeast University, University of Cambridge, and City University of Hong Kong.

(4) Make sure to read the specification and follow the rules.

2. Submission of your LLM’s answers

Answers can be submitted through https://forms.gle/bKLj6GgyxSnGenXS8. Please use “Template of answer submission.xls” in this repository to format and submit your LLM's answers.
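Before copying answers into the official template, it can help to collect them in a consistent structure. The sketch below is only illustrative: the column names (`question_id`, `answer`) are hypothetical, and the actual headers must be taken from “Template of answer submission.xls”.

```python
# Sketch: gather an LLM's answers into ordered rows before transferring
# them to the official submission template. The header names used here
# ("question_id", "answer") are assumptions -- follow the headers in
# "Template of answer submission.xls".

def build_submission_rows(answers):
    """answers: dict mapping a question ID to the LLM's chosen option."""
    rows = [("question_id", "answer")]          # hypothetical header row
    for qid in sorted(answers):                 # keep a stable question order
        rows.append((qid, answers[qid]))
    return rows

rows = build_submission_rows({"Q0002": "B", "Q0001": "A"})
```

Sorting by question ID keeps the row order deterministic, which makes it easy to diff two submission files.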

3. Citation requirement

Reuse of this repository requires citation. Any individual or entity that uses this repository without appropriate acknowledgment and citation has no right to use our data. We will take measures to protect our copyright, including, but not limited to, requesting retraction of their papers and initiating legal action.

4. LLM Leaderboard for CCLR QA

D1–D8 denote scoring rates on the eight CCLR subdomains.

| Large Language Model | Contributors | Overall Scoring Rate | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | Ranking |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ERNIE-Bot 4.0 with knowledge graph | Baidu & The authors | 0.822 | 0.850 | 0.828 | 0.836 | 0.803 | 0.843 | 0.844 | 0.800 | 0.860 | 1 |
| ERNIE-Bot 4.0 | Baidu | 0.757 | 0.783 | 0.716 | 0.763 | 0.769 | 0.718 | 0.725 | 0.732 | 0.785 | 2 |
| GPT-4 with knowledge graph | OpenAI & The authors | 0.663 | 0.720 | 0.731 | 0.668 | 0.656 | 0.754 | 0.685 | 0.668 | 0.688 | 3 |
| GPT-4 | OpenAI | 0.537 | 0.602 | 0.487 | 0.559 | 0.537 | 0.565 | 0.517 | 0.513 | 0.566 | 4 |
| GPT-3.5-turbo with knowledge graph | OpenAI & The authors | 0.503 | 0.532 | 0.505 | 0.523 | 0.468 | 0.613 | 0.522 | 0.544 | 0.464 | 5 |
| ChatGLM3-6B with knowledge graph | Tsinghua, Zhipu.AI & The authors | 0.483 | 0.497 | 0.450 | 0.509 | 0.428 | 0.536 | 0.499 | 0.545 | 0.445 | 6 |
| Text-davinci-003 with knowledge graph | OpenAI & The authors | 0.481 | 0.507 | 0.522 | 0.470 | 0.479 | 0.576 | 0.514 | 0.515 | 0.514 | 7 |
| Qianfan-Chinese-Llama-2-7B with knowledge graph | Baidu & The authors | 0.474 | 0.474 | 0.490 | 0.493 | 0.467 | 0.560 | 0.530 | 0.517 | 0.474 | 8 |
| ChatGLM2-6B with knowledge graph | Tsinghua, Zhipu.AI & The authors | 0.467 | 0.469 | 0.468 | 0.488 | 0.462 | 0.515 | 0.504 | 0.530 | 0.465 | 9 |
| ChatGLM2-6B | Tsinghua & Zhipu.AI | 0.427 | 0.452 | 0.411 | 0.475 | 0.412 | 0.461 | 0.447 | 0.492 | 0.421 | 10 |
| ChatGLM3-6B | Tsinghua & Zhipu.AI | 0.399 | 0.454 | 0.391 | 0.412 | 0.362 | 0.410 | 0.388 | 0.416 | 0.400 | 11 |
| Qianfan-Chinese-Llama-2-7B | Baidu | 0.373 | 0.419 | 0.380 | 0.368 | 0.360 | 0.415 | 0.376 | 0.416 | 0.357 | 12 |
| GPT-3.5-turbo | OpenAI | 0.346 | 0.425 | 0.316 | 0.362 | 0.326 | 0.432 | 0.332 | 0.406 | 0.334 | 13 |
| Llama-2-70b with knowledge graph | MetaAI & The authors | 0.377 | 0.336 | 0.368 | 0.320 | 0.331 | 0.411 | 0.354 | 0.337 | 0.336 | 14 |
| Text-davinci-003 | OpenAI | 0.327 | 0.352 | 0.316 | 0.341 | 0.337 | 0.381 | 0.344 | 0.363 | 0.343 | 15 |
| Llama-2-70b | MetaAI | 0.284 | 0.285 | 0.338 | 0.255 | 0.316 | 0.312 | 0.288 | 0.302 | 0.295 | 16 |
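The scoring-rate arithmetic behind the leaderboard can be sketched as the fraction of questions answered correctly, both overall and per subdomain. This is a minimal illustration, assuming each graded result is tagged with its subdomain (D1–D8); the grading and grouping details are assumptions, not the authors' exact pipeline.

```python
# Sketch of the assumed scoring-rate arithmetic:
# scoring rate = correct answers / questions attempted,
# computed overall and per subdomain (D1-D8).

from collections import defaultdict

def scoring_rates(results):
    """results: iterable of (domain, is_correct) pairs, one per question."""
    totals = defaultdict(lambda: [0, 0])  # domain -> [correct, attempted]
    for domain, correct in results:
        totals[domain][0] += int(correct)
        totals[domain][1] += 1
    per_domain = {d: c / n for d, (c, n) in totals.items()}
    overall = (sum(c for c, _ in totals.values())
               / sum(n for _, n in totals.values()))
    return overall, per_domain

overall, per_domain = scoring_rates([("D1", True), ("D1", False), ("D2", True)])
```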