|
--- |
|
size_categories: |
|
- 100M<n<1B |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: chunk_0000_sample |
|
path: OKReddit_Sample.jsonl |
|
pretty_name: OKReddit (Release Candidate 3) |
|
task_categories: |
|
- text-generation |
|
- fill-mask |
|
task_ids: |
|
- language-modeling |
|
- masked-language-modeling |
|
source_datasets: |
|
- original |
|
language: |
|
- en |
|
tags: |
|
- not-for-all-audiences |
|
--- |
|
|
|
# OKReddit - Release Candidate 3
|
|
|
<div> |
|
<a href="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/iIT11kzCFgbKSc0E-p5S4.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/iIT11kzCFgbKSc0E-p5S4.png" style="margin-left:auto;margin-right:auto"></a> |
|
</div> |
|
|
|
# Dataset Summary |
|
|
|
OKReddit is a filtered collection of **6.5 TiB** of Reddit submissions and comments (an estimated 600M rows) spanning 2005 to 2023. This dataset has been prepared for research and archival purposes.
|
|
|
As the name suggests, this dataset contains only a filtered list of subreddits.
|
|
|
- **Curated by:** KaraKaraWitch |
|
- **Funded by:** Recursal.ai |
|
- **Shared by:** KaraKaraWitch |
|
- **Language(s) (NLP):** Mainly English. Other languages are present in smaller quantities.
|
- **License:** The `Scripts` folder is licensed under Apache 2.0. Refer to [Licensing Information](#licensing-information) for the data license.
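
For a quick look at the data, you can fetch the bundled sample file with `huggingface_hub`. This is a minimal sketch; the repository id and filename are assumptions taken from this card's metadata.

```py
import json

from huggingface_hub import hf_hub_download

# Download only the bundled sample file, not the full 6.5 TiB release.
path = hf_hub_download(
    repo_id="recursal/OKReddit-ReleaseCandidate3",
    filename="OKReddit_Sample.jsonl",
    repo_type="dataset",
)

# Each line of the JSONL file is one submission thread.
with open(path, encoding="utf-8") as f:
    first_thread = json.loads(f.readline())

print(first_thread["thread_id"], first_thread["subreddit"])
```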
|
|
|
### Dataset Sources |
|
|
|
- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) by stuck_in_the_matrix, Watchful1, RaiderBDev and the pushshift folks.
|
|
|
## Supported Tasks and Leaderboards |
|
|
|
The dataset may be used for a variety of natural language processing (NLP) tasks including: |
|
|
|
- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit. |
|
|
|
- Language Modeling: Training language models to understand and generate conversational text. |
|
|
|
- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics. |
|
|
|
- Topic Modeling: Identifying and modeling topics discussed in the posts and comments. |
|
|
|
## Languages |
|
|
|
The primary language of the dataset is English, as the majority of Reddit users post in English. However, posts in other languages are also present in smaller quantities.
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
Each data instance represents a submission thread within a subreddit.
|
|
|
- `thread_id`: The submission thread ID, including the `t3_` prefix that Reddit uses to mark an ID as a thread: `https://reddit.com/r/<SUBREDDIT>/comments/<THREAD_ID>/`
|
- `subreddit`: The name of the subreddit. Case-insensitive. Reddit just redirects you to the correct-cased subreddit. |
|
- `namedconversation`: An OpenAI "compatible" conversation:
|
- `from`: The author username that posted the content. **Note that it is not `user`, `system`, or `model`!**
|
- `content`: The reddit markdown posted. |
|
- The first value of `namedconversation` is the submission. The rest are replies. |
|
- If a submission is marked as NSFW / Mature, `[R-18]` is prepended to the title.
|
- `submission` / `comments`: The raw submission and comments respectively. |
|
- Refer to the sample below for the full structure.
|
|
|
Unsure or Confused? We have provided a real sample below. |
|
|
|
### Data Sample |
|
|
|
<details> |
|
<summary>Sample Thread</summary> |
|
<pre> |
|
<code class="language-json"> |
|
{ |
|
"thread_id": "t3_of7h2", |
|
"subreddit": "Gaben", |
|
"namedconversation": [ |
|
{ |
|
"from": "[deleted]", |
|
"content": "[13 Jan 2012, 07:01:07] TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"\n\nLink: half-life.wikia.com" |
|
}, |
|
{ |
|
"from": "clydethefrog", |
|
"content": "[15 Jan 2012, 18:01:06] That's my password too" |
|
}, |
|
{ |
|
"from": "Dunge", |
|
"content": "[29 Feb 2012, 02:02:34] \"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad" |
|
}, |
|
{ |
|
"from": "captainregularr", |
|
"content": "[13 Jan 2012, 14:01:14] Did you know gaben makes me gaben my gaben?" |
|
}, |
|
{ |
|
"from": "Turellio", |
|
"content": "[13 Jan 2012, 17:01:53] that's what gaben gaben" |
|
}, |
|
{ |
|
"from": "captainregularr", |
|
"content": "[13 Jan 2012, 17:01:05] I gaben to gaben's demands." |
|
}, |
|
{ |
|
"from": "RagingRetard", |
|
"content": "[13 Jan 2012, 17:01:49] Oh, quit your incessant gaben." |
|
} |
|
], |
|
"submission": { |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": null, |
|
"type": null |
|
}, |
|
"author": null, |
|
"title": "TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"", |
|
"score": 23, |
|
"created": 1326440407.0, |
|
"id": "of7h2", |
|
"flags": "", |
|
"link_flair": null, |
|
"url": "http://half-life.wikia.com/wiki/Half-Life_2_Beta#Source_code_leak", |
|
"text": "", |
|
"removed": [], |
|
"cross": [] |
|
}, |
|
"comments": [ |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "clydethefrog", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "That's my password too", |
|
"score": 1, |
|
"created": "1326652326", |
|
"id": "c3hge04", |
|
"parent_id": "t3_of7h2", |
|
"thread_id": "t3_of7h2", |
|
"flags": "A", |
|
"children": [] |
|
}, |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "Dunge", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "\"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad", |
|
"score": 3, |
|
"created": "1330483894", |
|
"id": "c3w2ulz", |
|
"parent_id": "t3_of7h2", |
|
"thread_id": "t3_of7h2", |
|
"flags": "A", |
|
"children": [] |
|
}, |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "captainregularr", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "Did you know gaben makes me gaben my gaben?", |
|
"score": 5, |
|
"created": "1326463514", |
|
"id": "c3gsfkx", |
|
"parent_id": "t3_of7h2", |
|
"thread_id": "t3_of7h2", |
|
"flags": "A", |
|
"children": [ |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "Turellio", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "that's what gaben gaben", |
|
"score": 3, |
|
"created": "1326476873", |
|
"id": "c3guihp", |
|
"parent_id": "t1_c3gsfkx", |
|
"thread_id": "t3_of7h2", |
|
"flags": "A", |
|
"children": [ |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "captainregularr", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "I gaben to gaben's demands.", |
|
"score": 5, |
|
"created": "1326477005", |
|
"id": "c3guje0", |
|
"parent_id": "t1_c3guihp", |
|
"thread_id": "t3_of7h2", |
|
"flags": "AE", |
|
"children": [ |
|
{ |
|
"sub": { |
|
"name": "Gaben", |
|
"id": "2scx1", |
|
"subs": -1, |
|
"type": "" |
|
}, |
|
"author": { |
|
"name": "RagingRetard", |
|
"uid": "", |
|
"create": -1, |
|
"flair": null, |
|
"patreon": false, |
|
"premium": false |
|
}, |
|
"text": "Oh, quit your incessant gaben.", |
|
"score": 2, |
|
"created": "1326477409", |
|
"id": "c3gulzh", |
|
"parent_id": "t1_c3guje0", |
|
"thread_id": "t3_of7h2", |
|
"flags": "A", |
|
"children": [] |
|
} |
|
] |
|
} |
|
] |
|
} |
|
] |
|
} |
|
] |
|
} |
|
</code> |
|
</pre> |
|
</details> |
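
Working with the raw JSONL directly is straightforward: each line is a single thread object. Below is a minimal sketch of walking a thread's `namedconversation`, assuming `OKReddit_Sample.jsonl` is available locally (e.g. via the download sketch above).

```py
import json

with open("OKReddit_Sample.jsonl", encoding="utf-8") as f:
    for line in f:
        thread = json.loads(line)
        # The first entry of namedconversation is the submission; the rest are replies.
        submission, *replies = thread["namedconversation"]
        print(f"[{thread['subreddit']}] {submission['content'][:80]}")
        for turn in replies:
            print(f"  {turn['from']}: {turn['content'][:60]}")
        break  # Inspect only the first thread.
```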
|
|
|
|
|
### Extra dataset notes |
|
|
|
**Flags**: Reddit exposes a number of boolean switches per submission and comment. To reduce the amount of data stored, we compact them into a single string of flag characters.
|
|
|
For submissions, the mapping of flag characters to boolean names is as follows:
|
|
|
```py |
|
flag_map = { |
|
"!": "spoiler", |
|
"#": "stickied", |
|
">": "pinned", |
|
"A": "archived", |
|
"C": "is_crosspostable", |
|
"c": "is_original_content", |
|
"E": "edited", |
|
"e": "is_meta", |
|
"G": "can_gild", |
|
"H": "hidden", |
|
"i": "is_robot_indexable", |
|
"L": "allow_live_comments", |
|
"l": "locked", |
|
"m": "is_reddit_media_domain", |
|
"M": "over_18", |
|
"O": "contest_mode", |
|
"q": "quarantine", |
|
"s": "is_self", |
|
"v": "is_video", |
|
} |
|
``` |
|
|
|
For comments: |
|
|
|
```py
|
flag_map = { |
|
"#": "stickied", |
|
"A": "archived", |
|
"E": "edited", |
|
"G": "can_gild", |
|
"H": "hidden", |
|
"l": "locked", |
|
"=": "score_hidden", |
|
"P": "author_premium", |
|
"R": "send_replies", |
|
"O": "can_mod_post", |
|
"N": "no_follow", |
|
} |
|
``` |
|
|
|
Within `namedconversation`, only the `over_18` flag for submissions is used.
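
To expand a flags string back into individual booleans, the mapping can simply be inverted. A minimal sketch using the maps above:

```py
def decode_flags(flags: str, flag_map: dict) -> dict:
    # A character's presence in the string means the corresponding switch is True.
    return {name: char in flags for char, name in flag_map.items()}

# Using the comment flag map above, "AE" decodes to
# {"stickied": False, "archived": True, "edited": True, ...}
```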
|
|
|
# Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
Reddit's distinctive architecture and commenting system, featuring deeply nested comment threads, offer a wealth of conversational data that can be transformed into coherent, linear dialogues.
|
|
|
Given Reddit's longevity since 2005, a vast reservoir of information awaits exploration and utilization, particularly as modern language models have already tapped into this resource. |
|
|
|
#### Filtering Subreddit Quality |
|
|
|
In contrast to UpVoteWeb's methodologies, our aim is to construct a more inclusive dataset while still maintaining quality standards. To achieve this balance, we implemented a pruning process targeting subreddits lacking valuable content, according to three key metrics:
|
|
|
1. **Engagement**: The ratio of total comments to total submissions, reflecting a subreddit's activity level.

2. **Richness**: The square of the proportion of media submissions among all posts, indicating multimedia content density.

3. **Diversity**: The combined count of unique comment and submission authors, divided by the total number of submissions, signaling the breadth of community participation.
|
|
|
In practice, it looks something like this: |
|
|
|
```py
# ... (per-subreddit statistics gathered beforehand)

# Comments per submission: how active the community is.
engagement = comment_data["comments"] / submission_data["submissions"]
# Squared media share: how multimedia-heavy the subreddit is.
richness = (submission_data["media"] / submission_data["submissions"]) ** 2
# Unique authors per submission: how broad participation is.
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]
```
|
|
|
Furthermore, we enforce certain baseline thresholds for submissions and author counts: |
|
|
|
```py
if (
    stats_data["submission"]["authors"] < 70  # Total unique submission authors
    or stats_data["comment"]["authors"] < 20  # Total unique commenters
    or stats_data["submission"]["submissions"] < 450  # Total submission count
    or stats_data["comment"]["comments"] < 585  # Total comment count
):
    return  # Skip the subreddit
```
|
|
|
By applying these criteria, we have narrowed down to approximately 62,000 high-quality subreddits. |
|
|
|
#### Valuable Submissions |
|
|
|
To eliminate subreddits with an insufficient number of meaningful contributions, we first identify "useful threads" characterized by either: |
|
|
|
- At least five responses, |
|
- Or, if the original post is textual, exceeding 2,500 characters. |
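
As a rough sketch, the check might look like the following (a hypothetical helper; the authoritative logic lives in the provided scripts):

```py
def is_useful_thread(thread: dict) -> bool:
    # At least five responses (counting top-level comments here)...
    if len(thread["comments"]) >= 5:
        return True
    # ...or a textual original post exceeding 2,500 characters.
    return len(thread["submission"].get("text") or "") > 2500
```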
|
|
|
A randomized threshold between 5 and 20 is then drawn, and any subreddit with fewer useful threads than this randomly generated requirement is excluded.
|
|
|
In practice: |
|
|
|
```py
import random

FUZZY_SUBREDDIT = (5, 20)  # Randomized useful-thread cutoff range

fuzz_threads = random.randrange(FUZZY_SUBREDDIT[0], FUZZY_SUBREDDIT[1])
if usable_threads <= fuzz_threads:
    logger.debug(
        f"/r/{REDDIT} has {usable_threads}, which is less than {fuzz_threads}. Skipping subreddit entirely..."
    )
    # Cleanup...
    return
```
|
|
|
This step streamlines the dataset by removing less active communities. |
|
|
|
#### Refining Comment Selection |
|
|
|
After thread-level filtering, comments undergo additional scrutiny based on the following rules:
|
|
|
1. Comments with a score lower than -4 are discarded. |
|
2. Within threads boasting over 50 comments, those nested deeper than six levels are removed. |
|
3. If a comment thread's cumulative score dips below zero, the remainder of that thread is pruned. |
|
4. Child comments linked to any pruned parent under points 2 or 3 are also eliminated. |
|
|
|
An occasional challenge arises when comments lack identifiable parent comments, disrupting the conversation flow; in such cases, the entire comment chain is deleted.
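
For illustration, these rules can be expressed as a recursive prune over the comment tree. This is a minimal sketch, not the exact implementation; refer to `RedditThreader.py` for the authoritative logic.

```py
def prune(comment: dict, depth: int, total: int, chain_score: int):
    # Rule 1: discard heavily downvoted comments.
    if comment["score"] < -4:
        return None
    # Rule 2: in threads with over 50 comments, drop replies nested deeper than six levels.
    if total > 50 and depth > 6:
        return None
    # Rule 3: once the chain's cumulative score dips below zero, prune the remainder.
    chain_score += comment["score"]
    if chain_score < 0:
        return None
    # Rule 4: children of pruned parents vanish implicitly, since we only recurse into kept comments.
    comment["children"] = [
        kept
        for child in comment["children"]
        if (kept := prune(child, depth + 1, total, chain_score)) is not None
    ]
    return comment
```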
|
|
|
For more information, refer to the scripts provided alongside this repo: `RedditScoring.py` for subreddit filtering and `RedditThreader.py` for per-thread filtering. We have also included a `NOTES.md` for those looking to replicate the process on their own hardware. You might need a lot of storage (roughly 3x the current dataset size) though.
|
|
|
### Source Data |
|
|
|
This dataset is a filtered collection of posts and comments from the beginning of Reddit in 2005 up to the end of 2023.
|
|
|
# Considerations for Using the Data |
|
|
|
### Social Impact of Dataset |
|
|
|
With the release of this dataset, we aim to make this development resource available to the community at large. |
|
|
|
### Discussion of Biases |
|
|
|
We have decided **not to censor out NSFW or toxic content.** This allows for better toxicity analysis and a more varied dataset.
|
|
|
# Additional Information |
|
|
|
## Recursal's Vision |
|
|
|
> To make AI accessible to everyone, regardless of language or economic status
|
|
|
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.
|
|
|
We believe that AI should not be controlled by a select few organizations, and that it should be accessible regardless of whether you are rich or poor, or a native speaker of English.
|
|
|
### About RWKV |
|
|
|
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
|
|
|
The RWKV architecture scales efficiently and economically. As an RNN-Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.
|
|
|
You can find out more about the project and the latest models at the following links:
|
|
|
- [https://blog.rwkv.com](https://blog.rwkv.com) |
|
- [https://wiki.rwkv.com](https://wiki.rwkv.com) |
|
|
|
|
|
### About Recursal AI |
|
|
|
Recursal AI is the commercial entity built to support RWKV model development and its users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.
|
|
|
As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.
|
|
|
The dataset provided here is part of that commitment.
|
|
|
You can find out more about Recursal AI here:
|
|
|
- [https://recursal.ai](https://recursal.ai) |
|
- [https://blog.recursal.ai](https://blog.recursal.ai) |
|
|
|
### Licensing Information |
|
|
|
Since this dataset is derived from a public crawl of Reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators. Additionally, this dataset is for research and archival purposes only.
|
|
|
### Citation Information |
|
|
|
If you use this dataset in your research or project, please cite it as follows: |
|
```TeX |
|
@dataset{OKReddit, |
|
title = {OKReddit}, |
|
year = {2024}, |
|
publisher = {KaraKaraWitch}, |
|
url = {https://huggingface.co/datasets/recursal/OKReddit-ReleaseCandidate3}
|
} |
|
``` |
|
|
|
Additionally, please cite the following source BibTeX as well.
|
```TeX |
|
@article{, |
|
title= {Reddit comments/submissions 2005-06 to 2023-12}, |
|
journal= {}, |
|
author= {stuck_in_the_matrix, Watchful1, RaiderBDev}, |
|
year= {}, |
|
url= {}, |
|
abstract= {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev. |
|
|
|
These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps |
|
|
|
The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift}, |
|
keywords= {reddit}, |
|
terms= {}, |
|
license= {}, |
|
superseded= {} |
|
} |
|
``` |
|
|
|
## ... |
|
|
|
``` |
|
Qngnfrg Mra |
|
- XnenXnenJvgpu @ erphefny.nv FRCG 24 |
|
|
|
- Nalguvat, naq rirelguvat pna or pbyyngrq vagb qngnfrg. |
|
- Gb orpbzr bar jvgu gur qngn, bar zhfg or jvyyvat gb bcra gurve zvaqf. |
|
- Ab znggre ubj phefrq vg znl frra, gurer'f nyjnlf zber jbefr guvatf bhg gurer. |
|
- NCV Yvzvgf, Cnljnyyf, Fhofpevcgvbaf naq bgure yvzvgngvbaf ner n "fhttrfgvba". |
|
- Vs nyy ryfr snvyf, cebkvrf naq nppbhagf. |
|
- Bar funyy arire cehar pbagrag jvgubhg eulzr be ernfba. |
|
- Hayrff vg'f pyrneyl NV-Fybc. Lbh'er serr gb tb unz. |
|
- Qngnfrgf ner Rireterra, arire qrpvqhbhf. |
|
- Ohvyq gb fpnyr, arire fvatyr-guernqrq. |
|
``` |