IlyaGusev committed on
Commit 7da8fbe
1 Parent(s): b04a69c

Update README.md

Files changed (1)
  1. README.md +116 -0
README.md CHANGED
@@ -1,5 +1,11 @@
---
license: other
+ task_categories:
+ - text-generation
+ language:
+ - ru
+ size_categories:
+ - 100K<n<1M
dataset_info:
  features:
  - name: question_id
@@ -71,3 +77,113 @@ dataset_info:
  download_size: 670468664
  dataset_size: 3013377174
---
+
+ # Dataset Card for Russian StackOverflow
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Languages](#languages)
+   - [Usage](#usage)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Licensing Information](#licensing-information)
+
+ ## Dataset Description
+
+ **Summary:** A dataset of questions, answers, and comments from [ru.stackoverflow.com](https://ru.stackoverflow.com/).
+
+ **Script:** [create_stackoverflow.py](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py)
+
+ **Point of Contact:** [Ilya Gusev]([email protected])
+
+ ### Languages
+
+ The dataset is in Russian, with occasional programming code inside questions, answers, and comments.
+
+ ### Usage
+
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train")
+ ```
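+
+ The full train split is about 3 GB on disk (see `dataset_size` above), so streaming may be preferable; a minimal sketch using the standard `datasets` streaming mode:
+
+ ```python
+ from datasets import load_dataset
+
+ # Iterate over records without downloading the whole split up front
+ dataset = load_dataset("IlyaGusev/ru_stackoverflow", split="train", streaming=True)
+ for row in dataset:
+     print(row["title"], row["url"])
+     break
+ ```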
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance is a question together with its metadata (id, URL, score, tags, title, view count, answer count, author, and timestamp), the question text in both HTML and Markdown, a list of comments, and a list of answers. Every answer carries the same text fields plus an acceptance flag and its own comments.
+
+ ```
+ {
+   "question_id": 11235,
+   "answer_count": 1,
+   "url": "https://ru.stackoverflow.com/questions/11235",
+   "score": 2,
+   "tags": ["c++", "сериализация"],
+   "title": "Извлечение из файла, запись в файл",
+   "views": 1309,
+   "author": "...",
+   "timestamp": 1303205289,
+   "text_html": "...",
+   "text_markdown": "...",
+   "comments": [
+     {
+       "text": "...",
+       "author": "...",
+       "comment_id": 11236,
+       "score": 0,
+       "timestamp": 1303205411
+     }
+   ],
+   "answers": [
+     {
+       "answer_id": 11243,
+       "timestamp": 1303207791,
+       "is_accepted": 1,
+       "text_html": "...",
+       "text_markdown": "...",
+       "score": 3,
+       "author": "...",
+       "comments": [
+         {
+           "text": "...",
+           "author": "...",
+           "comment_id": 11246,
+           "score": 0,
+           "timestamp": 1303207961
+         }
+       ]
+     }
+   ]
+ }
+ ```
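+
+ For text-generation use, the nested structure can be flattened into a single document per question. A minimal sketch over the fields shown above (`to_plain_text` is a hypothetical helper, not taken from the processing script):
+
+ ```python
+ def to_plain_text(row: dict) -> str:
+     # Hypothetical helper: concatenate the question title, question body,
+     # and answer bodies, using the markdown fields from the instance above.
+     parts = [row["title"], row["text_markdown"]]
+     for answer in row["answers"]:
+         parts.append(answer["text_markdown"])
+     return "\n\n".join(parts)
+ ```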
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [TBW]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ * The data source is the [Russian StackOverflow](https://ru.stackoverflow.com/) website.
+ * Original XMLs: [ru.stackoverflow.com.7z](https://ia600107.us.archive.org/27/items/stackexchange/ru.stackoverflow.com.7z).
+ * The processing script is [here](https://github.com/IlyaGusev/rulm/blob/hf/data_processing/create_stackoverflow.py); a sketch of the dump layout it reads is shown below.
+
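+ The actual conversion lives in the script linked above; as a rough, hypothetical sketch of the dump layout, assuming the standard StackExchange export (a `Posts.xml` file with one `<row>` element per post, where `PostTypeId="1"` marks questions):
+
+ ```python
+ import xml.etree.ElementTree as ET
+
+ def iter_questions(path: str = "Posts.xml"):
+     # Hypothetical reader: stream question rows from the StackExchange dump
+     for _, elem in ET.iterparse(path, events=("end",)):
+         if elem.tag == "row" and elem.get("PostTypeId") == "1":
+             yield {
+                 "question_id": int(elem.get("Id")),
+                 "title": elem.get("Title"),
+                 "score": int(elem.get("Score", 0)),
+                 "text_html": elem.get("Body"),
+             }
+         elem.clear()
+ ```
+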
+ #### Who are the source language producers?
+
+ Questions, answers, and comments were written by the website's users.
+
+ ### Personal and Sensitive Information
+
+ The dataset is not anonymized, so individuals' names can be found in it. Information about the original authors is included where possible.
+
+ ## Licensing Information
+
+ In accordance with the license of the original data, this dataset is distributed under [CC BY-SA 2.5](https://creativecommons.org/licenses/by-sa/2.5/).