---
size_categories:
- 100M<n<1B
configs:
- config_name: default
  data_files:
  - split: chunk_0000_sample
    path: OKReddit_Sample.jsonl
pretty_name: OKReddit (Release Candidate 3)
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
tags:
- not-for-all-audiences
---

# OKReddit - Release Candidate 3

<div>
  <a href="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/iIT11kzCFgbKSc0E-p5S4.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/iIT11kzCFgbKSc0E-p5S4.png" style="margin-left:auto;margin-right:auto"></a>
</div>

# Dataset Summary

OKReddit is a filtered collection of Reddit submissions and comments from 2005 to 2023, totalling **6.5 TiB** (an estimated 600M rows). This dataset has been prepared for research or archival purposes.

This dataset covers only a filtered list of subreddits.

- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Mainly English. Other languages are present in smaller quantities.
- **License:** The `Scripts` folder is Apache 2.0. Refer to [Licensing Information](#licensing-information) for the data license.

### Dataset Sources

- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4), compiled by stuck_in_the_matrix, Watchful1, RaiderBDev & the Pushshift folks.

## Supported Tasks and Leaderboards

The dataset may be used for a variety of natural language processing (NLP) tasks including:

- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit.

- Language Modeling: Training language models to understand and generate conversational text.

- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.

- Topic Modeling: Identifying and modeling topics discussed in the posts and comments.

## Languages

The primary language of the dataset is English, as the majority of Reddit users post in English. However, posts in other languages are also present in smaller quantities.

## Dataset Structure

### Data Instances

Each data instance represents a submission thread within a subreddit.

- `thread_id`: The submission thread ID. Inclusive of the `t3_` that reddit uses to mark an id as a thread. `https://reddit.com/r/<SUBREDDIT>/comments/<THREAD_ID>/`
- `subreddit`: The name of the subreddit. Case-insensitive. Reddit just redirects you to the correct-cased subreddit.
- `namedconversation`: An OpenAI "compatible" conversation:
  - `from`: The username of the author who posted the content. **It is not `user`, `system`, or `model`!**
  - `content`: The reddit markdown posted.
- The first value of `namedconversation` is the submission. The rest are replies.
- If a submission is marked as NSFW / Mature, `[R-18]` is prepended to the title.
- `submission` / `comments`: The raw submission and comments respectively.
  - Refer to the sample below for the full structure.
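
For instance, here is a minimal sketch of streaming threads from the bundled sample file (assuming one JSON object per line, as the `.jsonl` extension suggests) and flattening `namedconversation` into plain text:

```py
import json

# Stream threads from the bundled sample file; each line is assumed to
# hold one JSON-encoded thread.
with open("OKReddit_Sample.jsonl", encoding="utf-8") as fp:
    for line in fp:
        thread = json.loads(line)
        print(f"Thread {thread['thread_id']} in /r/{thread['subreddit']}:")
        # The first entry is the submission; the rest are replies.
        for turn in thread["namedconversation"]:
            print(f"{turn['from']}: {turn['content']}")
        break  # Show only the first thread.
```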

Unsure or Confused? We have provided a real sample below.

### Data Sample

<details>
  <summary>Sample Thread</summary>
  <pre>
    <code class="language-json">
{
    "thread_id": "t3_of7h2",
    "subreddit": "Gaben",
    "namedconversation": [
        {
            "from": "[deleted]",
            "content": "[13 Jan 2012, 07:01:07] TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"\n\nLink: half-life.wikia.com"
        },
        {
            "from": "clydethefrog",
            "content": "[15 Jan 2012, 18:01:06] That's my password too"
        },
        {
            "from": "Dunge",
            "content": "[29 Feb 2012, 02:02:34] \"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad"
        },
        {
            "from": "captainregularr",
            "content": "[13 Jan 2012, 14:01:14] Did you know gaben makes me gaben my gaben?"
        },
        {
            "from": "Turellio",
            "content": "[13 Jan 2012, 17:01:53] that's what gaben gaben"
        },
        {
            "from": "captainregularr",
            "content": "[13 Jan 2012, 17:01:05] I gaben to gaben's demands."
        },
        {
            "from": "RagingRetard",
            "content": "[13 Jan 2012, 17:01:49] Oh, quit your incessant gaben."
        }
    ],
    "submission": {
        "sub": {
            "name": "Gaben",
            "id": "2scx1",
            "subs": null,
            "type": null
        },
        "author": null,
        "title": "TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"",
        "score": 23,
        "created": 1326440407.0,
        "id": "of7h2",
        "flags": "",
        "link_flair": null,
        "url": "http://half-life.wikia.com/wiki/Half-Life_2_Beta#Source_code_leak",
        "text": "",
        "removed": [],
        "cross": []
    },
    "comments": [
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "clydethefrog",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "That's my password too",
            "score": 1,
            "created": "1326652326",
            "id": "c3hge04",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": []
        },
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "Dunge",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "\"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad",
            "score": 3,
            "created": "1330483894",
            "id": "c3w2ulz",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": []
        },
        {
            "sub": {
                "name": "Gaben",
                "id": "2scx1",
                "subs": -1,
                "type": ""
            },
            "author": {
                "name": "captainregularr",
                "uid": "",
                "create": -1,
                "flair": null,
                "patreon": false,
                "premium": false
            },
            "text": "Did you know gaben makes me gaben my gaben?",
            "score": 5,
            "created": "1326463514",
            "id": "c3gsfkx",
            "parent_id": "t3_of7h2",
            "thread_id": "t3_of7h2",
            "flags": "A",
            "children": [
                {
                    "sub": {
                        "name": "Gaben",
                        "id": "2scx1",
                        "subs": -1,
                        "type": ""
                    },
                    "author": {
                        "name": "Turellio",
                        "uid": "",
                        "create": -1,
                        "flair": null,
                        "patreon": false,
                        "premium": false
                    },
                    "text": "that's what gaben gaben",
                    "score": 3,
                    "created": "1326476873",
                    "id": "c3guihp",
                    "parent_id": "t1_c3gsfkx",
                    "thread_id": "t3_of7h2",
                    "flags": "A",
                    "children": [
                        {
                            "sub": {
                                "name": "Gaben",
                                "id": "2scx1",
                                "subs": -1,
                                "type": ""
                            },
                            "author": {
                                "name": "captainregularr",
                                "uid": "",
                                "create": -1,
                                "flair": null,
                                "patreon": false,
                                "premium": false
                            },
                            "text": "I gaben to gaben's demands.",
                            "score": 5,
                            "created": "1326477005",
                            "id": "c3guje0",
                            "parent_id": "t1_c3guihp",
                            "thread_id": "t3_of7h2",
                            "flags": "AE",
                            "children": [
                                {
                                    "sub": {
                                        "name": "Gaben",
                                        "id": "2scx1",
                                        "subs": -1,
                                        "type": ""
                                    },
                                    "author": {
                                        "name": "RagingRetard",
                                        "uid": "",
                                        "create": -1,
                                        "flair": null,
                                        "patreon": false,
                                        "premium": false
                                    },
                                    "text": "Oh, quit your incessant gaben.",
                                    "score": 2,
                                    "created": "1326477409",
                                    "id": "c3gulzh",
                                    "parent_id": "t1_c3guje0",
                                    "thread_id": "t3_of7h2",
                                    "flags": "A",
                                    "children": []
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
    </code>
  </pre>
</details>


### Extra dataset notes

**Flags**: Reddit exposes a number of boolean fields, which we compact into a single string to reduce the amount of data that needs to be stored.

For submissions, the mapping from flag characters to boolean names is as follows:

```py
flag_map = {
    "!": "spoiler",
    "#": "stickied",
    ">": "pinned",
    "A": "archived",
    "C": "is_crosspostable",
    "c": "is_original_content",
    "E": "edited",
    "e": "is_meta",
    "G": "can_gild",
    "H": "hidden",
    "i": "is_robot_indexable",
    "L": "allow_live_comments",
    "l": "locked",
    "m": "is_reddit_media_domain",
    "M": "over_18",
    "O": "contest_mode",
    "q": "quarantine",
    "s": "is_self",
    "v": "is_video",
}
```

For comments:

```py
flag_map = {
    "#": "stickied",
    "A": "archived",
    "E": "edited",
    "G": "can_gild",
    "H": "hidden",
    "l": "locked",
    "=": "score_hidden",
    "P": "author_premium",
    "R": "send_replies",
    "O": "can_mod_post",
    "N": "no_follow",
}
```

Within `namedconversation`, only the `over_18` flag for submissions is used.
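
As an illustration, a compacted flag string can be expanded back into booleans with a small helper (hypothetical; not part of the provided scripts):

```py
# Hypothetical helper: expand a compacted flag string back into booleans
# using one of the maps above.
def decode_flags(flags: str, flag_map: dict) -> dict:
    return {name: char in flags for char, name in flag_map.items()}

# e.g. the sample comment flagged "AE" decodes with the comment map to
# archived=True, edited=True, and everything else False.
```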

# Dataset Creation

### Curation Rationale

Reddit's distinctive architecture and commenting system, featuring deeply nested comment threads,
offer a wealth of conversational data that can be transformed into coherent, linear dialogues.

Given Reddit's longevity since 2005, a vast reservoir of information awaits exploration and utilization, particularly as modern language models have already tapped into this resource.

#### Filtering Subreddit Quality

In contrast to UpVoteWeb's methodologies, our aim is to construct a more inclusive dataset while still maintaining standards.
To achieve this balance, we implemented a pruning process targeting subreddits lacking valuable content according to three key metrics:

1. Engagement: The ratio of total comments to total submissions, reflecting a subreddit's activity level.
2. Richness: The squared proportion of media submissions among all posts, indicating multimedia content density.
3. Diversity: The combined count of unique comment and submission authors divided by the total number of submissions, signaling the breadth of community participation.

In practice, it looks something like this:

```py
# ...

# Engagement: comments per submission (activity level).
engagement = comment_data["comments"] / submission_data["submissions"]
# Richness: squared share of media submissions (multimedia density).
richness = (submission_data["media"] / submission_data["submissions"]) ** 2
# Diversity: unique comment + submission authors per submission.
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]
```

Furthermore, we enforce certain baseline thresholds for submissions and author counts:

```py
if (
    stats_data["submission"]["authors"] < 70  # Total unique submission authors
    or stats_data["comment"]["authors"] < 20  # Total unique commenters
    or stats_data["submission"]["submissions"] < 450  # Total submission count
    or stats_data["comment"]["comments"] < 585  # Total comment count
):
    continue  # Skip the subreddit
```

By applying these criteria, we have narrowed down to approximately 62,000 high-quality subreddits.

#### Valuable Submissions

To eliminate subreddits with an insufficient number of meaningful contributions, we first identify "useful threads" characterized by either:

- At least five responses,
- Or, if the original post is textual, exceeding 2,500 characters.
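
In code, that test might look like this (a hypothetical sketch; the actual logic lives in `RedditThreader.py`):

```py
def is_useful_thread(thread: dict) -> bool:
    # Useful if the thread has at least five responses...
    if len(thread["comments"]) >= 5:
        return True
    # ...or if the original post is textual and exceeds 2,500 characters.
    return len(thread["submission"].get("text") or "") > 2500
```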

A randomized threshold between 5 and 20 is established, and any subreddit falling short of this randomly generated requirement is excluded.

In practice:

```py
fuzz_threads = random.randrange(FUZZY_SUBREDDIT[0], FUZZY_SUBREDDIT[1])
if usable_threads <= fuzz_threads:
    logger.debug(
        f"/r/{REDDIT} has {usable_threads}, which is less than {fuzz_threads}. Skipping subreddit entirely..."
    )
    # Cleanup...
    return
```

This step streamlines the dataset by removing less active communities.

#### Refining Comment Selection

After thread-level filtration, comments undergo additional scrutiny based on the following rules:

1. Comments with a score lower than -4 are discarded.
2. Within threads boasting over 50 comments, those nested deeper than six levels are removed.
3. If a comment thread's cumulative score dips below zero, the remainder of that thread is pruned.
4. Child comments linked to any pruned parent under points 2 or 3 are also eliminated.

An occasional challenge arises when a comment lacks an identifiable parent, disrupting the conversation flow; in such cases, the entire comment chain is deleted.
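
A minimal sketch of these rules over the nested comment structure (hypothetical; see `RedditThreader.py` for the actual implementation):

```py
def prune_comments(comments, total_comments, depth=0, running_score=0):
    """Apply the four pruning rules to a nested comment tree."""
    kept = []
    for comment in comments:
        # Rule 1: drop comments scored below -4.
        if comment["score"] < -4:
            continue
        # Rule 2: in threads with over 50 comments, drop replies nested
        # deeper than six levels.
        if total_comments > 50 and depth >= 6:
            continue
        # Rule 3: once the cumulative score dips below zero, prune the
        # remainder of the chain.
        running_score += comment["score"]
        if running_score < 0:
            break
        # Rule 4 falls out naturally: children of pruned comments are
        # never visited.
        comment["children"] = prune_comments(
            comment["children"], total_comments, depth + 1, running_score
        )
        kept.append(comment)
    return kept
```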

For more information, refer to the scripts provided alongside this repo: `RedditScoring.py` for subreddit filtering and `RedditThreader.py` for per-thread filtering. We have also included a `NOTES.md` for those looking to replicate the process on their own hardware. You might need a lot of storage (3x the current dataset size) though.

### Source Data

This dataset is a filtered collection of posts and comments from the beginning of Reddit in 2005 up to the end of 2023.

# Considerations for Using the Data

### Social Impact of Dataset

With the release of this dataset, we aim to make this development resource available to the community at large. 

### Discussion of Biases

We've decided **not to censor out NSFW or toxic content.** This allows for better toxicity analysis and a more varied dataset.

# Additional Information

## Recursal's Vision

> To make AI accessible to everyone, regardless of language or economic status

This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.

We believe that AI should not be controlled by a select few organizations, and that it should be made accessible to everyone, whether rich or poor, native English speaker or not.

### About RWKV

RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.

The RWKV architecture scales efficiently and economically. As an RNN-Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.

You can find out more about the project and the latest models at the following links:

- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)


### About Recursal AI

Recursal AI is the commercial entity built to support RWKV model development and users, while providing commercial services via its public cloud and private-cloud / on-premise offerings.

As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.

The datasets and models provided here are part of that commitment.

You can find out more about Recursal AI here:

- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)

### Licensing Information

Since this dataset is derived from a public crawl of reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators.  
Additionally, this dataset is for research and archival purposes only.

### Citation Information

If you use this dataset in your research or project, please cite it as follows:
```TeX
@dataset{OKReddit,
  title = {OKReddit},
  year = {2024},
  publisher = {KaraKaraWitch},
  url = {https://huggingface.co/datasets/recursal/OKReddit-ReleaseCandidate3}
}
```

Additionally, please cite the source dataset as well:
```TeX
@article{,
title= {Reddit comments/submissions 2005-06 to 2023-12},
journal= {},
author= {stuck_in_the_matrix, Watchful1, RaiderBDev},
year= {},
url= {},
abstract= {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev.

These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps

The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift},
keywords= {reddit},
terms= {},
license= {},
superseded= {}
}
```

## ...

```
Qngnfrg Mra
- XnenXnenJvgpu @ erphefny.nv FRCG 24

- Nalguvat, naq rirelguvat pna or pbyyngrq vagb qngnfrg.
- Gb orpbzr bar jvgu gur qngn, bar zhfg or jvyyvat gb bcra gurve zvaqf.
- Ab znggre ubj phefrq vg znl frra, gurer'f nyjnlf zber jbefr guvatf bhg gurer.
- NCV Yvzvgf, Cnljnyyf, Fhofpevcgvbaf naq bgure yvzvgngvbaf ner n "fhttrfgvba".
- Vs nyy ryfr snvyf, cebkvrf naq nppbhagf.
- Bar funyy arire cehar pbagrag jvgubhg eulzr be ernfba.
- Hayrff vg'f pyrneyl NV-Fybc. Lbh'er serr gb tb unz.
- Qngnfrgf ner Rireterra, arire qrpvqhbhf.
- Ohvyq gb fpnyr, arire fvatyr-guernqrq.
```