---
license: mit
task_categories:
- text-generation
tags:
- human-feedback
- rlhf
- preferences
- reddit
size_categories:
- 100K<n<1M
---

# Stanford Human Preferences Dataset (SHP)

## Dataset Design

The data is sourced from Reddit, which is a public forum organized into topic-specific fora called *subreddits*. For example, the `askculinary` subreddit is where users ask cooking-related questions and are answered by other users.

### Subreddit Selection

SHP contains a train, validation, and test split for comments scraped from 18 different subreddits. We chose subreddits based on:

1. whether they were well-known (subscriber count >= 50K)
2. whether they were actively moderated
3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits. Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%:

| subreddit | train | validation | test | total |
| ------------------ | -------: | ---------: | ---: | ----: |
| askacademia | 31450 | 2095 | 1708 | 35253 |
| askanthropology | 3910 | 203 | 268 | 4381 |
| askbaking | 44007 | 2096 | 1544 | 47647 |
| askcarguys | 3227 | 159 | 117 | 3503 |
| askculinary | 45710 | 2094 | 2563 | 50367 |
| askdocs | 6449 | 315 | 455 | 7219 |
| askengineers | 57096 | 3154 | 2638 | 62888 |
| askhistorians | 3264 | 113 | 164 | 3541 |
| askhr | 8295 | 641 | 395 | 9331 |
| askphilosophy | 10307 | 608 | 677 | 11592 |
| askphysics | 7364 | 409 | 587 | 8360 |
| askscience | 13316 | 899 | 977 | 15192 |
| asksciencefiction | 29382 | 1576 | 1987 | 32945 |
| asksocialscience | 2706 | 147 | 188 | 3041 |
| askvet | 3300 | 170 | 224 | 3694 |
| changemyview | 38173 | 1637 | 1836 | 41646 |
| explainlikeimfive | 19592 | 1014 | 1070 | 21676 |
| legaladvice | 21170 | 1106 | 1011 | 23287 |
| ALL | 348718 | 18436 | 18409 | 385563 |

### Data Selection

The input in SHP contains more [FLAN-T5-usable information](https://icml.cc/virtual/2022/oral/16634) about the preference label than the Anthropic data does. This may be because the aggregate human preferences in SHP are more stable and easier to predict than the individual human preferences in the Anthropic data, and because of the strict data filtering described below. Specifically, given a post P and two comments (A, B), we only included the preference A > B in the dataset if

1. A was written *no later than* B.
2. A has a score that is at least 2 times as high as B's.
3. Both comments have a score >= 2 and the post has a score >= 10.
4. The post is a self-post (i.e., a body of text and not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
5. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.

Since comments made earlier get more visibility, the first condition is needed to ensure that A's higher score is not the result of a first-mover advantage. Since a comment's score is only a noisy estimate of the comment's utility, the second and third conditions were enforced to ensure that the preference is genuine.
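To make these filtering conditions concrete, here is a minimal sketch of the filter as a single Python predicate. The dictionary fields (`created_utc`, `is_self`, `over_18`, `edited`, `author`, `is_moderator`) are hypothetical stand-ins for values obtainable from the Reddit API, not the actual code used to build SHP.

```python
# Hypothetical illustration of the five filtering conditions above.
def is_valid_preference(post: dict, a: dict, b: dict) -> bool:
    """Return True iff the preference A > B passes all five conditions."""
    authors_ok = all(
        c["author"] is not None            # not a deleted user
        and not c["is_moderator"]          # not a moderator
        and c["author"] != post["author"]  # not the post creator
        for c in (a, b)
    )
    return (
        a["created_utc"] <= b["created_utc"]   # 1. A written no later than B
        and a["score"] >= 2 * b["score"]       # 2. A's score at least 2x B's
        and min(a["score"], b["score"]) >= 2   # 3. both comment scores >= 2 ...
        and post["score"] >= 10                #    ... and post score >= 10
        and post["is_self"]                    # 4. self-post, not a link ...
        and post["created_utc"] < 1672531200   #    ... made before 2023-01-01 UTC
        and not post["edited"]                 #    ... not edited
        and not post["over_18"]                #    ... not NSFW
        and authors_ok                         # 5. comment authors valid ...
        and post["author"] is not None         #    ... post author not deleted
        and not post["is_moderator"]           #    ... and not a moderator
    )
```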
## Disclaimer

Although we filtered out posts with NSFW (over 18) content, some of the data may contain discriminatory or harmful language. The data does not reflect the views of the dataset creators. Please only engage with the data in accordance with your own personal risk tolerance. Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data. As always, remember to evaluate!

## FAQs

**Q**: *I'm trying to train a FLAN-T5/T5 model on these preferences, but the loss won't converge. Help!*

**A**: The most likely problem is that you're feeding in the post text AND one or both comments as input, which is far more than the 512 tokens these models can support. Even though they use relative position embeddings, in our experience this does not help when training a preference/reward model on this data. To avoid this, truncate the post text as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however). If the input is still over 512 tokens, simply skip the example. This should allow you to train on most of the examples and still get a preference model that is ~75% accurate at predicting human preferences. We are currently training a preference model on this data and will make it available shortly.

**Q**: *Why did you threshold on the score ratio rather than the score difference when filtering preferences?*

**A**: Some Reddit posts get far less traffic than others, which means their comments have lower absolute scores. An absolute difference threshold would disproportionately exclude comments from these posts, a kind of bias that we didn't want to introduce.

**Q**: *Did you scrape every post on those 18 subreddits?*

**A**: No. Reddit makes it very difficult to get anything beyond the top 1000 posts. We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using the Reddit search function. By doing this recursively, we scraped up to 7500 post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post. We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.

**Q**: *How did you preprocess the text?*

**A**: We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that"). In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).

## Contact

Please contact kawin@stanford.edu if you have any questions about the data.