KaraKaraWitch committed on
Commit c51a17e · verified · 1 Parent(s): 1e08f0f

Create README.md

Files changed (1): README.md (+472, -0)
---
size_categories:
- 100M<n<1B
configs:
- config_name: default
  data_files:
  - split: train
    path: data/chunk_*/*.jsonl
pretty_name: OKReddit (Release Candidate 3)
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
---

# OKReddit α (Alpha)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/iIT11kzCFgbKSc0E-p5S4.png)

# Dataset Summary

OKReddit is a filtered collection of **5TiB** of Reddit submissions and comments from 2005 to 2023. This dataset has been prepared for research or archival purposes.

As the name implies, this dataset contains only a filtered list of subreddits.

- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Mainly English. Other languages are present in smaller quantities.
- **License:** The `Scripts` folder is Apache 2.0. Refer to [Licensing Information](#licensing-information) for the data license.

Known issues are currently being addressed (the fixed script is being re-run) and will be resolved in the next release.

### Dataset Sources

- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) by stuck_in_the_matrix, Watchful1, RaiderBDev & the Pushshift folks.

## Supported Tasks and Leaderboards

The dataset may be used for a variety of natural language processing (NLP) tasks, including:

- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit.
- Language Modeling: Training language models to understand and generate conversational text.
- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.
- Topic Modeling: Identifying and modeling topics discussed in the posts and comments.

## Languages

The primary language of the dataset is English, as the majority of Reddit users write in English. However, posts in other languages are present in smaller quantities.

## Dataset Structure

### Data Instances

Each data instance represents a submission thread within a subreddit.

- `thread_id`: The submission thread ID, inclusive of the `t3_` prefix that Reddit uses to mark an ID as a thread: `https://reddit.com/r/<SUBREDDIT>/comments/<THREAD_ID>/`
- `subreddit`: The name of the subreddit. Case-insensitive; Reddit simply redirects you to the correctly-cased subreddit.
- `namedconversation`: An OpenAI-"compatible" conversation:
  - `from`: The username of the author who posted the content. **It is not `user`, `system` or `model`!**
  - `content`: The Reddit markdown that was posted.
  - The first value of `namedconversation` is the submission; the rest are replies.
  - If a submission is marked as NSFW / Mature, `[R-18]` is prepended to the title.
- `submission` / `comments`: The raw submission and comments, respectively.

Unsure or confused? We have provided a real sample below, followed by a loading sketch.

### Data Sample

<details>
<summary>Sample Thread</summary>
<pre>
<code class="language-json">
{
    "thread_id": "t3_of7h2",
    "subreddit": "Gaben",
    "namedconversation": [
        {
            "from": "[deleted]",
            "content": "[13 Jan 2012, 07:01:07] TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"\n\nLink: half-life.wikia.com"
        },
        {
            "from": "clydethefrog",
            "content": "[15 Jan 2012, 18:01:06] That's my password too"
        },
        {
            "from": "Dunge",
            "content": "[29 Feb 2012, 02:02:34] \"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad"
        },
        {
            "from": "captainregularr",
            "content": "[13 Jan 2012, 14:01:14] Did you know gaben makes me gaben my gaben?"
        },
        {
            "from": "Turellio",
            "content": "[13 Jan 2012, 17:01:53] that's what gaben gaben"
        },
        {
            "from": "captainregularr",
            "content": "[13 Jan 2012, 17:01:05] I gaben to gaben's demands."
        },
        {
            "from": "RagingRetard",
            "content": "[13 Jan 2012, 17:01:49] Oh, quit your incessant gaben."
        }
    ],
    "submission": {
        "sub": {
            "name": "Gaben",
            "id": "2scx1",
            "subs": null,
            "type": null
        },
        "author": null,
        "title": "TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"",
        "score": 23,
        "created": 1326440407.0,
        "id": "of7h2",
        "flags": "",
        "link_flair": null,
        "url": "http://half-life.wikia.com/wiki/Half-Life_2_Beta#Source_code_leak",
        "text": "",
        "removed": [],
        "cross": []
    },
    "comments": [
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "clydethefrog",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "That's my password too",
            "score": 1,
            "created": "1326652326",
            "id": "c3hge04",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": []
        },
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "Dunge",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "\"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad",
            "score": 3,
            "created": "1330483894",
            "id": "c3w2ulz",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": []
        },
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "captainregularr",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "Did you know gaben makes me gaben my gaben?",
            "score": 5,
            "created": "1326463514",
            "id": "c3gsfkx",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": [
                {
                    "sub": {
                        "name": "Gaben",
                        "id": "2scx1",
                        "subs": -1,
                        "type": ""
                    },
                    "author": {
                        "name": "Turellio",
                        "uid": "",
                        "create": -1,
                        "flair": null,
                        "patreon": false,
                        "premium": false
                    },
                    "text": "that's what gaben gaben",
                    "score": 3,
                    "created": "1326476873",
                    "id": "c3guihp",
                    "parent_id": "t1_c3gsfkx",
                    "thread_id": "t3_of7h2",
                    "flags": "A",
                    "children": [
                        {
                            "sub": {
                                "name": "Gaben",
                                "id": "2scx1",
                                "subs": -1,
                                "type": ""
                            },
                            "author": {
                                "name": "captainregularr",
                                "uid": "",
                                "create": -1,
                                "flair": null,
                                "patreon": false,
                                "premium": false
                            },
                            "text": "I gaben to gaben's demands.",
                            "score": 5,
                            "created": "1326477005",
                            "id": "c3guje0",
                            "parent_id": "t1_c3guihp",
                            "thread_id": "t3_of7h2",
                            "flags": "AE",
                            "children": [
                                {
                                    "sub": {
                                        "name": "Gaben",
                                        "id": "2scx1",
                                        "subs": -1,
                                        "type": ""
                                    },
                                    "author": {
                                        "name": "RagingRetard",
                                        "uid": "",
                                        "create": -1,
                                        "flair": null,
                                        "patreon": false,
                                        "premium": false
                                    },
                                    "text": "Oh, quit your incessant gaben.",
                                    "score": 2,
                                    "created": "1326477409",
                                    "id": "c3gulzh",
                                    "parent_id": "t1_c3guje0",
                                    "thread_id": "t3_of7h2",
                                    "flags": "A",
                                    "children": []
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
</code>
</pre>
</details>
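
The `namedconversation` field can be consumed directly. Below is a minimal loading sketch; it assumes the Hub repo id `KaraKaraWitch/OKReddit` and the schema documented above, and the truncated preview formatting is purely illustrative:

```py
# A minimal sketch (not one of the bundled scripts) for streaming threads
# and walking the `namedconversation` structure described above.
from datasets import load_dataset

# Streaming avoids downloading the full ~5TiB corpus up front.
dataset = load_dataset("KaraKaraWitch/OKReddit", split="train", streaming=True)

for thread in dataset:
    convo = thread["namedconversation"]
    submission, replies = convo[0], convo[1:]  # first entry is the submission
    print(f"[{thread['subreddit']}] {thread['thread_id']}")
    print(f"{submission['from']}: {submission['content'][:80]}")
    for reply in replies:
        print(f"  {reply['from']}: {reply['content'][:80]}")
    break  # inspect only the first thread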

# Dataset Creation

### Curation Rationale

Reddit's distinctive architecture and commenting system, featuring deeply nested comment threads,
offer a wealth of conversational data that can be transformed into coherent, linear dialogues.

Given Reddit's longevity since 2005, a vast reservoir of information awaits exploration and utilization,
particularly as modern language models have already tapped into this resource.

#### Filtering Subreddit Quality

In contrast to UpVoteWeb's methodology, our aim is to construct a more inclusive dataset while still maintaining standards.
To achieve this balance, we implemented a pruning process targeting subreddits lacking valuable content, according to three key metrics:

1. Engagement: The ratio of total comments to total submissions, reflecting a subreddit's activity level.
2. Richness: The square of the proportion of media submissions among all posts, indicating multimedia content density.
3. Diversity: The combined count of unique comment and submission authors divided by the total submission count, signaling the breadth of community participation.

In practice, it looks something like this:

```py
# ...

engagement = comment_data["comments"] / submission_data["submissions"]
richness = (submission_data["media"] / submission_data["submissions"]) ** 2
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]
```
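
To make the formulas concrete, here is a small worked example with made-up counts (none of these numbers come from the dataset):

```py
# Toy numbers (purely hypothetical) to illustrate the three metrics above.
comment_data = {"comments": 12_000, "authors": 3_000}
submission_data = {"submissions": 1_500, "media": 900, "authors": 700}

engagement = comment_data["comments"] / submission_data["submissions"]       # 8.0
richness = (submission_data["media"] / submission_data["submissions"]) ** 2  # 0.36
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]                                           # ~2.47
```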

Furthermore, we enforce certain baseline thresholds for submission and author counts:

```py
if (
    stats_data["submission"]["authors"] < 70  # Total unique submission authors
    or stats_data["comment"]["authors"] < 20  # Total unique commenters
    or stats_data["submission"]["submissions"] < 450  # Total submission count
    or stats_data["comment"]["comments"] < 585  # Total comment count
):
    continue  # Skip the subreddit
```

By applying these criteria, we have narrowed the dataset down to approximately 62,000 high-quality subreddits.

#### Valuable Submissions

To eliminate subreddits with an insufficient number of meaningful contributions, we first identify "useful threads", characterized by either (a minimal sketch follows the list):

- At least five responses, or
- If the original post is textual, a length exceeding 2,500 characters.
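
A minimal sketch of this check, assuming a thread is stored as a submission dict plus a list of comment dicts (the helper name `is_useful_thread` is illustrative, not from the bundled scripts):

```py
def is_useful_thread(submission: dict, comments: list[dict]) -> bool:
    """Mark a thread as useful if it has >= 5 responses, or its
    self-text (for textual posts) exceeds 2,500 characters."""
    if len(comments) >= 5:
        return True
    return len(submission.get("text") or "") > 2500
```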

A randomized threshold between 5 and 20 useful threads is established, and any subreddit falling short of this randomly generated requirement is excluded.

In practice:

```py
fuzz_threads = random.randrange(FUZZY_SUBREDDIT[0], FUZZY_SUBREDDIT[1])
if usable_threads <= fuzz_threads:
    logger.debug(
        f"/r/{REDDIT} has {usable_threads}, which is less than {fuzz_threads}. Skipping subreddit entirely..."
    )
    # Cleanup...
    return
```

This step streamlines the dataset by removing less active communities.

#### Refining Comment Selection

After thread-level filtration, comments undergo additional scrutiny based on the following rules (sketched in code below):

1. Comments with a score lower than -4 are discarded.
2. Within threads with over 50 comments, those nested deeper than six levels are removed.
3. If a comment chain's cumulative score dips below zero, the remainder of that chain is pruned.
4. Child comments linked to any parent pruned under points 2 or 3 are also eliminated.

An occasional challenge arises when comments lack an identifiable parent comment, disrupting the conversation flow; in such cases, the entire comment chain is deleted.
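
A minimal sketch of rules 1, 2 and 4 (rule 3's cumulative-score pruning is omitted for brevity; the names here are illustrative, and the actual logic lives in `RedditThreader.py`):

```py
def prune_comments(comments: list[dict], total_comments: int) -> list[dict]:
    """Prune comments per the rules above. `comments` is assumed to be
    ordered parent-before-child, with `t1_`-prefixed parent ids."""
    depths: dict[str, int] = {}  # comment fullname -> nesting depth
    pruned: set[str] = set()     # fullnames whose subtrees are dropped
    kept: list[dict] = []
    for c in comments:
        fullname = "t1_" + c["id"]
        # Top-level comments have a `t3_` parent, which maps to depth 0.
        depths[fullname] = depth = depths.get(c["parent_id"], 0) + 1
        if (
            c["parent_id"] in pruned                # rule 4: parent was pruned
            or c["score"] < -4                      # rule 1: heavy downvotes
            or (total_comments > 50 and depth > 6)  # rule 2: nested too deep
        ):
            pruned.add(fullname)
            continue
        kept.append(c)
    return kept
```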

For more information, refer to the scripts provided alongside this repo: `RedditScoring.py` for subreddit filtering and `RedditThreader.py` for per-thread filtering.

### Source Data

This dataset is a filtered collection of posts and comments from the beginning of Reddit (2005) up to the end of 2023.

# Considerations for Using the Data

### Social Impact of Dataset

With the release of this dataset, we aim to make this development resource available to the community at large.

### Discussion of Biases

We've decided **not to censor out NSFW or toxic content.** This allows for better toxicity analysis and a more varied dataset.

# Additional Information

## Recursal's Vision

> To make AI accessible to everyone, regardless of language or economic status.

This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.

We believe that AI should not be controlled by a select few organizations, and that it should be accessible to everyone, whether rich or poor, native English speaker or not.

### About RWKV

RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.

The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it is able to provide performance similar to leading transformer models, while having the compute and energy efficiency of an RNN-based architecture.

You can find out more about the project, and the latest models, at the following links:

- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)

### About Recursal AI

Recursal AI is the commercial entity built to support RWKV model development and users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.

As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.

The datasets and models provided here are part of that commitment.

You can find out more about Recursal AI here:

- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)

### Licensing Information

Since this dataset is derived from a public crawl of Reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators.
Additionally, this dataset is for research and archival purposes only.

### Citation Information

If you use this dataset in your research or project, please cite it as follows:

```TeX
@dataset{OKReddit,
  title     = {OKReddit},
  year      = {2024},
  publisher = {KaraKaraWitch},
  url       = {https://huggingface.co/datasets/KaraKaraWitch/OKReddit}
}
```

Additionally, please cite the following source BibTeX as well:

```TeX
@article{pushshift_reddit_dumps,
  title    = {Reddit comments/submissions 2005-06 to 2023-12},
  journal  = {},
  author   = {stuck_in_the_matrix and Watchful1 and RaiderBDev},
  year     = {},
  url      = {},
  abstract = {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev.

These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps

The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift},
  keywords = {reddit},
  terms    = {},
  license  = {},
  superseded = {}
}
```

## ...

```
Qngnfrg Mra
- XnenXnenJvgpu @ erphefny.nv FRCG 24

- Nalguvat, naq rirelguvat pna or pbyyngrq vagb qngnfrg.
- Gb orpbzr bar jvgu gur qngn, bar zhfg or jvyyvat gb bcra gurve zvaqf.
- Ab znggre ubj phefrq vg znl frra, gurer'f nyjnlf zber jbefr guvatf bhg gurer.
- NCV Yvzvgf, Cnljnyyf, Fhofpevcgvbaf naq bgure yvzvgngvbaf ner n "fhttrfgvba".
- Vs nyy ryfr snvyf, cebkvrf naq nppbhagf.
- Bar funyy arire cehar pbagrag jvgubhg eulzr be ernfba.
- Hayrff vg'f pyrneyl NV-Fybc. Lbh'er serr gb tb unz.
- Qngnfrgf ner Rireterra, arire qrpvqhbhf.
- Ohvyq gb fpnyr, arire fvatyr-guernqrq.
```