---

license: other
license_name: tencent-ai-lab-naturalconv-dataset-terms-and-conditions
license_link: LICENSE
task_categories:
- text-generation
language:
- zh
tags:
- dialogue
- multi-turn
- topic-driven
- document
- news
- conversation
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files: dialog_release.json
---


# NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation

## Introduction

This dataset is described in the paper [NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation](https://arxiv.org/abs/2103.02548). It consists of five data files.

### 1. dialog_release.json:

A JSON file containing a list of dictionaries. It can be loaded in Python as follows:

```python
import json

with open("dialog_release.json", "r", encoding="utf-8") as f:
    dialog_list = json.load(f)
```

Each element of `dialog_list` is a dictionary with three keys: "dialog_id", "document_id", and "content":

- "dialog_id" is a unique id for the dialogue.
- "document_id" indicates which document the dialogue is grounded on.
- "content" is a list of the utterances of the whole dialogue session.

Altogether there are 19,919 dialogues, with approximately 400K dialogue utterances.
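As a concrete illustration of this schema, the sketch below parses a tiny in-memory sample in the same shape (the ids and utterances here are invented, not taken from the dataset) and reports the turn count of each dialogue:

```python
import json

# Illustrative records following the schema described above; the ids
# and utterances are made up, not taken from the real dataset.
sample = json.loads("""
[
  {"dialog_id": "0001", "document_id": 12,
   "content": ["你好！", "你好，你看昨天的比赛了吗？", "看了，太精彩了。"]}
]
""")

for dialog in sample:
    # Each dialogue carries its grounding document id and a list of turns.
    print(dialog["dialog_id"], dialog["document_id"], len(dialog["content"]))
```

The same loop works unchanged on the real `dialog_list` once the file is loaded.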


### 2. document_url_release.json:

A JSON file containing a list of dictionaries. It can be loaded in Python as follows:

```python
import json

with open("document_url_release.json", "r", encoding="utf-8") as f:
    document_list = json.load(f)
```

Each element of `document_list` is a dictionary with three keys: "document_id", "topic", and "url":

- "document_id" is a unique id for the document.
- "topic" indicates which topic the document comes from.
- "url" is the URL of the original document.

Altogether there are 6,500 documents.
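Since each dialogue references a document by "document_id", a common first step is to index `document_list` by that key so dialogues can be joined to their topic and source URL. A minimal sketch with made-up entries:

```python
# Illustrative entries following the described schema; the values are made up.
document_list = [
    {"document_id": 12, "topic": "体育", "url": "https://example.com/doc12"},
    {"document_id": 34, "topic": "科技", "url": "https://example.com/doc34"},
]

# Index documents by id for constant-time lookup when joining
# a dialogue's "document_id" to its grounding document.
doc_index = {doc["document_id"]: doc for doc in document_list}

print(doc_index[12]["topic"])
```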





### 3, 4, and 5. train.txt, dev.txt, and test.txt:

Each file contains the "dialog_id"s of the train, dev, and test splits, respectively.
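Assuming each split file lists one "dialog_id" per line (a reasonable reading of the description above; the ids below are invented), the splits can be applied to `dialog_list` like this:

```python
# Hypothetical contents of train.txt: one dialog_id per line (ids made up).
train_txt = "0001\n0003\n"
train_ids = set(train_txt.split())

# Tiny stand-in for the real dialog_list (same schema, invented values).
dialog_list = [
    {"dialog_id": "0001", "document_id": 12, "content": ["你好！"]},
    {"dialog_id": "0002", "document_id": 34, "content": ["早上好。"]},
    {"dialog_id": "0003", "document_id": 12, "content": ["晚上好。"]},
]

# Keep only the dialogues whose ids belong to the training split.
train_dialogs = [d for d in dialog_list if d["dialog_id"] in train_ids]
print(len(train_dialogs))
```

In practice, `train_txt` would be read from train.txt, e.g. `open("train.txt", encoding="utf-8").read()`.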


## Document Downloading

For research purposes only, you can use the code shared in this [repository](https://github.com/naturalconv/NaturalConvDataSet) to download the document texts via the URLs released in the document_url_release.json file.


## Citation

Please kindly cite our paper if you find this dataset useful:

```bibtex
@inproceedings{aaai-2021-naturalconv,
    title={NaturalConv: A Chinese Dialogue Dataset Towards Multi-turn Topic-driven Conversation},
    author={Wang, Xiaoyang and Li, Chen and Zhao, Jianqiao and Yu, Dong},
    booktitle={Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21)},
    year={2021}
}
```

## License

The dataset is released for non-commercial usage only. By downloading, you agree to the terms and conditions in our [LICENSE](https://huggingface.co./datasets/xywang1/NaturalConv/blob/main/LICENSE). For the authorization of commercial usage, please contact [email protected] for details.


## Disclaimers

The dataset is provided AS-IS, without warranty of any kind, express or implied. The views and opinions expressed in the dataset including the documents and the dialogues do not necessarily reflect those of Tencent or the authors of the above paper.