_id stringlengths 24-24 | id stringlengths 5-121 | author stringlengths 2-42 | cardData stringlengths 2-1.07M ⌀ | disabled bool 2 classes | gated null | lastModified timestamp[ns] | likes int64 0-6.81k | trendingScore float64 0-108 | private bool 1 class | sha stringlengths 40-40 | description stringlengths 0-6.67k ⌀ | downloads int64 0-2.19M | tags sequencelengths 1-7.92k | createdAt timestamp[ns] | key stringclasses 1 value | citation stringlengths 0-10.7k ⌀ | paperswithcode_id stringclasses 645 values |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
63990f21cc50af73d29ecfa3 | fka/awesome-chatgpt-prompts | fka | {"license": "cc0-1.0", "tags": ["ChatGPT"], "task_categories": ["question-answering"], "size_categories": ["100K<n<1M"]} | false | null | 2025-01-06T00:02:53 | 6,812 | 108 | false | 68ba7694e23014788dcc8ab5afe613824f45a05c | 🧠 Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts
View All Prompts on GitHub
License
CC-0
| 5,256 | [
"task_categories:question-answering",
"license:cc0-1.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"ChatGPT"
] | 2022-12-13T23:47:45 | null | null |
|
66cbf7ef92e9f5b19fcd65aa | cfahlgren1/react-code-instructions | cfahlgren1 | {"license": "mit", "pretty_name": "React Code Instructions"} | false | null | 2025-01-12T00:23:14 | 101 | 56 | false | 809623581765243ac82ba0bd09553f36c9f6ac9c |
React Code Instructions
Popular Queries
Number of instructions by Model
Unnested Messages
Instructions Added Per Day
Dataset of Claude Artifact-esque React Apps generated by Llama 3.1 70B, Llama 3.1 405B, and Deepseek Chat V3.
Examples
Virtual Fitness Trainer Website
LinkedIn Clone
iPhone Calculator
Chipotle Waitlist
Apple Store
| 520 | [
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | 2024-08-26T03:35:11 | null | null |
|
67750882633d421965733171 | DAMO-NLP-SG/multimodal_textbook | DAMO-NLP-SG | {"license": "apache-2.0", "task_categories": ["text-generation", "summarization"], "language": ["en"], "tags": ["Pretraining", "Interleaved", "Reasoning"], "size_categories": ["1M<n<10M"]} | false | null | 2025-01-11T11:48:45 | 64 | 55 | false | b83d307b2682d6b12420f5b93f4360880ea89df4 |
Multimodal-Textbook-6.5M
Overview
This dataset accompanies "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining", containing 6.5M images interleaved with 0.8B text tokens from instructional videos.
It provides a pre-training corpus in an interleaved image-text format. Specifically, our multimodal textbook comprises 6.5M keyframes extracted from instructional videos, interleaved with 0.8B ASR texts.
All the images and text are extracted from… See the full description on the dataset page: https://huggingface.co./datasets/DAMO-NLP-SG/multimodal_textbook. | 2,087 | [
"task_categories:text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2501.00958",
"region:us",
"Pretraining",
"Interleaved",
"Reasoning"
] | 2025-01-01T09:18:58 | null | null |
|
6758176e04e2f15d7bfacd54 | PowerInfer/QWQ-LONGCOT-500K | PowerInfer | {"license": "apache-2.0", "language": ["en"]} | false | null | 2024-12-26T10:19:19 | 92 | 33 | false | 10a787d967281599e9be6761717147817c018424 | This repository contains approximately 500,000 instances of responses generated using QwQ-32B-Preview language model. The dataset combines prompts from multiple high-quality sources to create diverse and comprehensive training data.
The dataset is available under the Apache 2.0 license.
Over 75% of the responses exceed 8,000 tokens in length. The majority of prompts were carefully created using persona-based methods to create challenging instructions.
Bias, Risks, and Limitations… See the full description on the dataset page: https://huggingface.co./datasets/PowerInfer/QWQ-LONGCOT-500K. | 582 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-10T10:26:54 | null | null |
|
66a6da71f0dc7c8df2e0f979 | OpenLeecher/lmsys_chat_1m_clean | OpenLeecher | {"language": ["en"], "size_categories": ["100K<n<1M"], "pretty_name": "Cleaned LMSYS dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "grounded", "dtype": "bool"}, {"name": "deepseek_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "phi-3-mini_response", "struct": [{"name": "moralization", "dtype": "int64"}, {"name": "reward", "dtype": "float64"}, {"name": "value", "dtype": "string"}]}, {"name": "flaw", "dtype": "string"}, {"name": "agreement", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1673196622, "num_examples": 273402}], "download_size": 906472159, "dataset_size": 1673196622}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2024-12-31T22:35:13 | 55 | 23 | false | e9f2f6838a2dbba87c216bb6bc406e8d7ce0f389 |
Cleaning and Categorizing
A few weeks ago, I had the itch to do some data crunching, so I began this project - to clean and classify lmsys-chat-1m. The process was somewhat long and tedious, but here is the quick overview:
1. Removing Pure Duplicate Instructions
The first step was to eliminate pure duplicate instructions. This involved:
Removing whitespace and punctuation.
Ensuring that if two instructions matched after that, only one was retained.
This step… See the full description on the dataset page: https://huggingface.co./datasets/OpenLeecher/lmsys_chat_1m_clean. | 609 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-07-28T23:55:29 | null | null |
|
67449661149efb6edaa63b98 | HuggingFaceTB/finemath | HuggingFaceTB | {"license": "odc-by", "dataset_info": [{"config_name": "finemath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 137764105388.93857, "num_examples": 21405610}], "download_size": 65039196945, "dataset_size": 137764105388.93857}, {"config_name": "finemath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "fetch_time", "dtype": "int64"}, {"name": "content_mime_type", "dtype": "string"}, {"name": "warc_filename", "dtype": "string"}, {"name": "warc_record_offset", "dtype": "int32"}, {"name": "warc_record_length", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "token_count", "dtype": "int32"}, {"name": "char_count", "dtype": "int32"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "crawl", "dtype": "string"}, {"name": "snapshot_type", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "language_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 39101488149.09091, "num_examples": 6699493}], "download_size": 18365184633, "dataset_size": 39101488149.09091}, {"config_name": "infiwebmath-3plus", "features": [{"name": "url", "dtype": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96485696853.10182, "num_examples": 13882669}], "download_size": 46808660851, "dataset_size": 96485696853.10182}, {"config_name": "infiwebmath-4plus", "features": [{"name": "url", "dtype": "string"}, {"name": "metadata", "dtype": "string"}, {"name": "score", "dtype": "float64"}, {"name": "int_score", "dtype": "int64"}, {"name": "token_count", "dtype": "int64"}, {"name": "char_count", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40002719500.1551, "num_examples": 6296212}], "download_size": 19234328998, "dataset_size": 40002719500.1551}], "configs": [{"config_name": "finemath-3plus", "data_files": [{"split": "train", "path": "finemath-3plus/train-*"}]}, {"config_name": "finemath-4plus", "data_files": [{"split": "train", "path": "finemath-4plus/train-*"}]}, {"config_name": "infiwebmath-3plus", "data_files": [{"split": "train", "path": "infiwebmath-3plus/train-*"}]}, {"config_name": "infiwebmath-4plus", "data_files": [{"split": "train", "path": "infiwebmath-4plus/train-*"}]}]} | false | null | 2024-12-23T11:19:16 | 242 | 21 | false | 8f233cf84cff0b817b3ffb26d5be7370990dd557 |
📐 FineMath
What is it?
📐 FineMath consists of 34B tokens (FineMath-3+) and 54B tokens (FineMath-3+ with InfiMM-WebMath-3+) of mathematical educational content filtered from CommonCrawl. To curate this dataset, we trained a mathematical content classifier using annotations generated by LLama-3.1-70B-Instruct. We used the classifier to retain only the most educational mathematics content, focusing on clear explanations and step-by-step problem solving rather than… See the full description on the dataset page: https://huggingface.co./datasets/HuggingFaceTB/finemath. | 35,291 | [
"license:odc-by",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"doi:10.57967/hf/3847",
"region:us"
] | 2024-11-25T15:23:13 | null | null |
|
6763e94724dee5a47c7c77f7 | agibot-world/AgiBotWorld-Alpha | agibot-world | {"pretty_name": "AgiBot World", "size_categories": ["n>1T"], "task_categories": ["other"], "language": ["en"], "tags": ["real-world", "dual-arm", "Robotics manipulation"], "extra_gated_prompt": "### AgiBot World COMMUNITY LICENSE AGREEMENT\nAgiBot World Alpha Release Date: December 30, 2024 All the data and code within this repo are under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Email": "text", "Country": "country", "Affiliation": "text", "Phone": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "Research interest": "text", "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the AgiBot Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the AgiBot Privacy Policy.", "extra_gated_button_content": "Submit"} | false | null | 2025-01-09T02:59:03 | 156 | 21 | false | 53f3739cc041164023f988d7c7b98f6af3f0d2c0 |
Key Features 🔑
1 million+ trajectories from 100 robots.
100+ real-world scenarios across 5 target domains.
Cutting-edge hardware: visual tactile sensors / 6-DoF dexterous hand / mobile dual-arm robots
Tasks involving:
Contact-rich manipulation
Long-horizon planning
Multi-robot collaboration
… See the full description on the dataset page: https://huggingface.co./datasets/agibot-world/AgiBotWorld-Alpha. | 9,556 | [
"task_categories:other",
"language:en",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us",
"real-world",
"dual-arm",
"Robotics manipulation"
] | 2024-12-19T09:37:11 | null | null |
|
676f70846bf205795346d2be | FreedomIntelligence/medical-o1-reasoning-SFT | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "medical_o1_sft.json"}]}]} | false | null | 2025-01-04T13:01:37 | 48 | 20 | false | 06ac0b8d4960fa84ef55198ea8086266f1e3da81 |
Introduction
This dataset is used to fine-tune HuatuoGPT-o1, a medical LLM designed for advanced medical reasoning. This dataset is constructed using GPT-4o, which searches for solutions to verifiable medical problems and validates them through a medical verifier.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1… See the full description on the dataset page: https://huggingface.co./datasets/FreedomIntelligence/medical-o1-reasoning-SFT. | 446 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:08 | null | null |
|
677c1f196b1653e3955dbce7 | Rapidata/text-2-image-Rich-Human-Feedback | Rapidata | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "prompt", "dtype": "string"}, {"name": "word_scores", "dtype": "string"}, {"name": "alignment_score_norm", "dtype": "float32"}, {"name": "coherence_score_norm", "dtype": "float32"}, {"name": "style_score_norm", "dtype": "float32"}, {"name": "alignment_heatmap", "sequence": {"sequence": "float16"}}, {"name": "coherence_heatmap", "sequence": {"sequence": "float16"}}, {"name": "alignment_score", "dtype": "float32"}, {"name": "coherence_score", "dtype": "float32"}, {"name": "style_score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 25257389633.104, "num_examples": 13024}], "download_size": 17856619960, "dataset_size": 25257389633.104}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "task_categories": ["text-to-image", "text-classification", "image-classification", "image-to-text", "image-segmentation"], "language": ["en"], "tags": ["t2i", "preferences", "human", "flux", "midjourney", "imagen", "dalle", "heatmap", "coherence", "alignment", "style", "plausiblity"], "pretty_name": "Rich Human Feedback for Text to Image Models", "size_categories": ["1M<n<10M"]} | false | null | 2025-01-11T13:23:04 | 18 | 18 | false | e77afd00e481d9d2ca41a5b5c4f89cb704de45c6 |
Building upon Google's research Rich Human Feedback for Text-to-Image Generation we have collected over 1.5 million responses from 152'684 individual humans using Rapidata via the Python API. Collection took roughly 5 days.
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
We asked humans to evaluate AI-generated images in style, coherence and prompt alignment. For images that contained flaws, participants were… See the full description on the dataset page: https://huggingface.co./datasets/Rapidata/text-2-image-Rich-Human-Feedback. | 375 | [
"task_categories:text-to-image",
"task_categories:text-classification",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:image-segmentation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2312.10240",
"region:us",
"t2i",
"preferences",
"human",
"flux",
"midjourney",
"imagen",
"dalle",
"heatmap",
"coherence",
"alignment",
"style",
"plausiblity"
] | 2025-01-06T18:21:13 | null | null |
|
6695831f2d25bd04e969b0a2 | AI-MO/NuminaMath-CoT | AI-MO | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "problem", "dtype": "string"}, {"name": "solution", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2495457595.0398345, "num_examples": 859494}, {"name": "test", "num_bytes": 290340.31593470514, "num_examples": 100}], "download_size": 1234351634, "dataset_size": 2495747935.355769}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "license": "apache-2.0", "task_categories": ["text-generation"], "language": ["en"], "tags": ["aimo", "math"], "pretty_name": "NuminaMath CoT"} | false | null | 2024-11-25T05:31:43 | 308 | 17 | false | 9d8d210c9f6a36c8f3cd84045668c9b7800ef517 |
Dataset Card for NuminaMath CoT
Dataset Summary
Approximately 860k math problems, where each solution is formatted in a Chain of Thought (CoT) manner. The sources of the dataset range from Chinese high school math exercises to US and international mathematics olympiad competition problems. The data were primarily collected from online exam paper PDFs and mathematics discussion forums. The processing steps include (a) OCR from the original PDFs, (b) segmentation… See the full description on the dataset page: https://huggingface.co./datasets/AI-MO/NuminaMath-CoT. | 3,424 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"aimo",
"math"
] | 2024-07-15T20:14:23 | null | null |
|
66a1d16a27fd84b81d732482 | TEAMREBOOTT-AI/SciCap-MLBCAP | TEAMREBOOTT-AI | {"license": "cc-by-nc-sa-4.0", "task_categories": ["text-generation", "image-to-text"], "language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "int64"}, {"name": "figure_type", "dtype": "string"}, {"name": "ocr", "dtype": "string"}, {"name": "paragraph", "dtype": "string"}, {"name": "mention", "dtype": "string"}, {"name": "figure_description", "dtype": "string"}, {"name": "mlbcap_long", "dtype": "string"}, {"name": "mlbcap_short", "dtype": "string"}, {"name": "categories", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2444177418.129, "num_examples": 47639}], "download_size": 2487129056, "dataset_size": 2444177418.129}, "size_categories": ["10K<n<100K"]} | false | null | 2025-01-07T13:56:33 | 15 | 15 | false | 44f062ec4e5ec42898326cbea2f80f147a1ba861 |
MLBCAP: Multi-LLM Collaborative Caption Generation in Scientific Documents
📄 Paper
MLBCAP has been accepted for presentation at AI4Research @ AAAI 2025. 🎉
📌 Introduction
Scientific figure captioning is a challenging task that demands contextually accurate descriptions of visual content. Existing approaches often oversimplify the task by treating it as either an image-to-text conversion or text summarization problem, leading to suboptimal results. Furthermore… See the full description on the dataset page: https://huggingface.co./datasets/TEAMREBOOTT-AI/SciCap-MLBCAP. | 200 | [
"task_categories:text-generation",
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2501.02552",
"region:us"
] | 2024-07-25T04:15:38 | null | null |
|
673a1149a7a311f5bed5c624 | HuggingFaceTB/smoltalk | HuggingFaceTB | {"language": ["en"], "tags": ["synthetic"], "pretty_name": "SmolTalk", "size_categories": ["1M<n<10M"], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "data/all/train-*"}, {"split": "test", "path": "data/all/test-*"}]}, {"config_name": "smol-magpie-ultra", "data_files": [{"split": "train", "path": "data/smol-magpie-ultra/train-*"}, {"split": "test", "path": "data/smol-magpie-ultra/test-*"}]}, {"config_name": "smol-constraints", "data_files": [{"split": "train", "path": "data/smol-constraints/train-*"}, {"split": "test", "path": "data/smol-constraints/test-*"}]}, {"config_name": "smol-rewrite", "data_files": [{"split": "train", "path": "data/smol-rewrite/train-*"}, {"split": "test", "path": "data/smol-rewrite/test-*"}]}, {"config_name": "smol-summarize", "data_files": [{"split": "train", "path": "data/smol-summarize/train-*"}, {"split": "test", "path": "data/smol-summarize/test-*"}]}, {"config_name": "apigen-80k", "data_files": [{"split": "train", "path": "data/apigen-80k/train-*"}, {"split": "test", "path": "data/apigen-80k/test-*"}]}, {"config_name": "everyday-conversations", "data_files": [{"split": "train", "path": "data/everyday-conversations/train-*"}, {"split": "test", "path": "data/everyday-conversations/test-*"}]}, {"config_name": "explore-instruct-rewriting", "data_files": [{"split": "train", "path": "data/explore-instruct-rewriting/train-*"}, {"split": "test", "path": "data/explore-instruct-rewriting/test-*"}]}, {"config_name": "longalign", "data_files": [{"split": "train", "path": "data/longalign/train-*"}, {"split": "test", "path": "data/longalign/test-*"}]}, {"config_name": "metamathqa-50k", "data_files": [{"split": "train", "path": "data/metamathqa-50k/train-*"}, {"split": "test", "path": "data/metamathqa-50k/test-*"}]}, {"config_name": "numina-cot-100k", "data_files": [{"split": "train", "path": "data/numina-cot-100k/train-*"}, {"split": "test", "path": "data/numina-cot-100k/test-*"}]}, {"config_name": "openhermes-100k", "data_files": [{"split": "train", "path": "data/openhermes-100k/train-*"}, {"split": "test", "path": "data/openhermes-100k/test-*"}]}, {"config_name": "self-oss-instruct", "data_files": [{"split": "train", "path": "data/self-oss-instruct/train-*"}, {"split": "test", "path": "data/self-oss-instruct/test-*"}]}, {"config_name": "systemchats-30k", "data_files": [{"split": "train", "path": "data/systemchats-30k/train-*"}, {"split": "test", "path": "data/systemchats-30k/test-*"}]}]} | false | null | 2024-11-26T11:02:25 | 276 | 14 | false | 5a40ecb185e55dd30edf3c24b77e67f6ea0d659b |
SmolTalk
Dataset description
This is a synthetic dataset designed for supervised finetuning (SFT) of LLMs. It was used to build the SmolLM2-Instruct family of models and contains 1M samples.
During the development of SmolLM2, we observed that models finetuned on public SFT datasets underperformed compared to other models with proprietary instruction datasets. To address this gap, we created new synthetic datasets that improve instruction following while covering… See the full description on the dataset page: https://huggingface.co./datasets/HuggingFaceTB/smoltalk. | 6,294 | [
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic"
] | 2024-11-17T15:52:41 | null | null |
|
673e9e53cdad8a9744b0bf1b | O1-OPEN/OpenO1-SFT | O1-OPEN | {"license": "apache-2.0", "task_categories": ["question-answering"], "language": ["en", "zh"], "size_categories": ["10K<n<100K"]} | false | null | 2024-12-17T02:30:09 | 321 | 14 | false | 63112de109aa755e9cdfad63a13f08a92dd7df36 |
SFT Data for CoT Activation
🎉🎉🎉This repository contains the dataset used for fine-tuning a language model using SFT for Chain-of-Thought Activation.
🌈🌈🌈The dataset is designed to enhance the model's ability to generate coherent and logical reasoning sequences.
☄☄☄By using this dataset, the model can learn to produce detailed and structured reasoning steps, enhancing its performance on complex reasoning tasks.
Statistics
1️⃣Total Records: 77,685… See the full description on the dataset page: https://huggingface.co./datasets/O1-OPEN/OpenO1-SFT. | 2,119 | [
"task_categories:question-answering",
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-11-21T02:43:31 | null | null |
|
67734d5c7ec2413faa8d3c85 | PowerInfer/LONGCOT-Refine-500K | PowerInfer | {"language": ["en"], "license": "apache-2.0"} | false | null | 2025-01-02T06:10:43 | 35 | 14 | false | 88bf8410db01197006e572a46c88311720a23577 | This repository contains approximately 500,000 instances of responses generated using Qwen2.5-72B-Instruct. The dataset combines prompts from multiple high-quality sources to create diverse and comprehensive training data.
The dataset is available under the Apache 2.0 license.
Bias, Risks, and Limitations
This dataset is mainly in English.
The dataset inherits the biases, errors, and omissions known to exist in data used for seed sources and models used for data generation.… See the full description on the dataset page: https://huggingface.co./datasets/PowerInfer/LONGCOT-Refine-500K. | 280 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-31T01:48:12 | null | null |
|
6649d353babc0b33565e1a4a | HumanLLMs/Human-Like-DPO-Dataset | HumanLLMs | {"language": ["en"], "license": "other", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.json"}]}]} | false | null | 2024-09-23T16:30:29 | 40 | 13 | false | 77522f471820b963b4f81e7492a3e37febde5f18 | null | 253 | [
"language:en",
"license:other",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-05-19T10:24:19 | null | null |
|
676593a303cc6dbb6e857610 | Rapidata/text-2-video-human-preferences | Rapidata | {"license": "apache-2.0", "task_categories": ["text-to-video", "video-classification"], "tags": ["human", "preferences", "coherence", "plausibilty", "style", "alignment"], "language": ["en"], "pretty_name": "Human Preferences for Text to Video Models", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "video1", "dtype": "string"}, {"name": "video2", "dtype": "string"}, {"name": "weighted_results1_Alignment", "dtype": "float64"}, {"name": "weighted_results2_Alignment", "dtype": "float64"}, {"name": "detailedResults_Alignment", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Coherence", "dtype": "float64"}, {"name": "weighted_results2_Coherence", "dtype": "float64"}, {"name": "detailedResults_Coherence", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "weighted_results1_Preference", "dtype": "float64"}, {"name": "weighted_results2_Preference", "dtype": "float64"}, {"name": "detailedResults_Preference", "list": [{"name": "userDetails", "struct": [{"name": "country", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "userScore", "dtype": "float64"}]}, {"name": "votedFor", "dtype": "string"}]}, {"name": "file_name1", "dtype": "string"}, {"name": "file_name2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 478042, "num_examples": 316}], "download_size": 121718, "dataset_size": 478042}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-01-11T13:23:47 | 13 | 13 | false | ec394c772df65ab3377c9c481e76459c23028aff |
Rapidata Video Generation Preference Dataset
This dataset was collected in ~12 hours using the Rapidata Python API, accessible to anyone and ideal for large scale data annotation.
The data collected in this dataset informs our text-2-video model benchmark. We just started so currently only two models are represented in this set:
Sora
Hunyuan
Pika 2.0 is currently being evaluated and will be added next.
Explore our latest model rankings on our website.
If you get value… See the full description on the dataset page: https://huggingface.co./datasets/Rapidata/text-2-video-human-preferences. | 130 | [
"task_categories:text-to-video",
"task_categories:video-classification",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"human",
"preferences",
"coherence",
"plausibilty",
"style",
"alignment"
] | 2024-12-20T15:56:19 | null | null |
|
67744720363e2be467b7c2b5 | qingy2024/FineQwQ-142k | qingy2024 | {"language": ["en"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "10k", "num_bytes": 87273156.45129532, "num_examples": 10000}, {"name": "25k", "num_bytes": 218182891.12823832, "num_examples": 25000}, {"name": "50k", "num_bytes": 436365782.25647664, "num_examples": 50000}, {"name": "100k", "num_bytes": 872731564.5129533, "num_examples": 100000}, {"name": "142k", "num_bytes": 1239278821.6083937, "num_examples": 142000}], "download_size": 1265768860, "dataset_size": 2853832215.9573574}, "configs": [{"config_name": "default", "data_files": [{"split": "10k", "path": "data/10k-*"}, {"split": "25k", "path": "data/25k-*"}, {"split": "50k", "path": "data/50k-*"}, {"split": "100k", "path": "data/100k-*"}, {"split": "142k", "path": "data/142k-*"}]}]} | false | null | 2025-01-07T18:00:44 | 14 | 13 | false | f7443bb54d207f590a5d13924c80c9eacfd66fe1 |
Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design
Original Sources: qingy2024/QwQ-LongCoT-Verified-130K (amphora/QwQ-LongCoT-130K), amphora/QwQ-LongCoT-130K-2, PowerInfer/QWQ-LONGCOT-500K.
Source | Information | Rows | %
---|---|---|---
powerinfer/qwq-500k | Only coding problems kept to avoid overlap | 50,899 | 35.84%
qwq-longcot-verified | Verified math problems | 64,096 | 45.14%
amphora-magpie | Diverse general purpose reasoning | 27,015 | 19.02%
| 285 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2024-12-31T19:33:52 | null | null |
|
677396c13cd7faf7e8f9dc8c | PRIME-RL/Eurus-2-RL-Data | PRIME-RL | {"license": "mit"} | false | null | 2025-01-06T11:21:52 | 19 | 12 | false | 5cbc5bc54c9c8417afd3539fb267422c33b525e6 |
Eurus-2-RL-Data
Links
📜 Blog
🤗 PRIME Collection
Introduction
Eurus-2-RL-Data is a high-quality RL training dataset of mathematics and coding problems with outcome verifiers (LaTeX answers for math and test cases for coding).
For math, we source from NuminaMath-CoT. The problems span from Chinese high school mathematics to International Mathematical Olympiad competition questions.
For coding, we source from APPS, CodeContests, TACO, and… See the full description on the dataset page: https://huggingface.co./datasets/PRIME-RL/Eurus-2-RL-Data. | 182 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.01981",
"region:us"
] | 2024-12-31T07:01:21 | null | null |
|
6775e1c326815bf20d874413 | fal/cosmos-openvid-1m | fal | {"size_categories": ["100K<n<1M"], "viewer": true, "license": "apache-2.0"} | false | null | 2025-01-09T02:12:51 | 17 | 12 | false | 10b41fc29006eff62ff64b8795b8ae8ef7ff9cde |
Cosmos-Tokenized OpenVid-1M
How to use
Shards are stored in parquet format.
It has 4 columns: serialized_latent, caption, fps, video.
serialized_latent is the latent vector of the video, serialized using torch.save().
Please use the following function to deserialize it:
def deserialize_tensor(
serialized_tensor: bytes, device: Optional[str] = None
) -> torch.Tensor:
return torch.load(
io.BytesIO(serialized_tensor)… See the full description on the dataset page: https://huggingface.co./datasets/fal/cosmos-openvid-1m. | 735 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-02T00:45:55 | null | null |
|
677e5956e84a20259e43d869 | Rapidata/Translation-gpt4o_mini-v-gpt4o-v-deepl | Rapidata | {"dataset_info": {"features": [{"name": "original_text", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "total_responses", "dtype": "int64"}, {"name": "weighted_votes_1", "dtype": "float64"}, {"name": "weighted_votes_2", "dtype": "float64"}, {"name": "translation_model_1", "dtype": "string"}, {"name": "translation_model_2", "dtype": "string"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}, {"name": "detailed_results", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10792019, "num_examples": 746}], "download_size": 1059070, "dataset_size": 10792019}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | false | null | 2025-01-11T13:23:54 | 12 | 12 | false | c243b1f54587f0ec225fdc6ff910d48e44668aa3 |
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
This dataset compares the translation capabilities of GPT-4o and GPT-4o-mini against DeepL across different languages. The comparison involved 100 distinct texts in 4 languages, with each translation being rated by 100 native speakers. Texts that were translated identically across platforms were excluded from the analysis.
Results
The comparative… See the full description on the dataset page: https://huggingface.co./datasets/Rapidata/Translation-gpt4o_mini-v-gpt4o-v-deepl. | 61 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | 2025-01-08T10:54:14 | null | null |
|
676f70968756741d47c691df | FreedomIntelligence/medical-o1-verifiable-problem | FreedomIntelligence | {"license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "language": ["en"], "tags": ["medical", "biology"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "medical_o1_verifiable_problem.json"}]}]} | false | null | 2024-12-30T02:56:46 | 21 | 11 | false | 46d5175eb74fdef3516d51d52e8c40db04bbdf35 |
Introduction
This dataset features open-ended medical problems designed to improve LLMs' medical reasoning. Each entry includes an open-ended question and a ground-truth answer based on challenging medical exams. The verifiable answers enable checking LLM outputs and refining their reasoning processes.
For details, see our paper and GitHub repository.
Citation
If you find our data useful, please consider citing our work!… See the full description on the dataset page: https://huggingface.co./datasets/FreedomIntelligence/medical-o1-verifiable-problem. | 177 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2412.18925",
"region:us",
"medical",
"biology"
] | 2024-12-28T03:29:26 | null | null |
|
660e7b9b4636ce2b0e77b699 | mozilla-foundation/common_voice_17_0 | mozilla-foundation | {"pretty_name": "Common Voice Corpus 17.0", "annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ab", "af", "am", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gn", "ha", "he", "hi", "hsb", "ht", "hu", "hy", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lij", "lo", "lt", "ltg", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan", "ne", "nhi", "nl", "nn", "nso", "oc", "or", "os", "pa", "pl", "ps", "pt", "quy", "rm", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yi", "yo", "yue", "zgh", "zh", "zu", "zza"], "language_bcp47": ["zh-CN", "zh-HK", "zh-TW", "sv-SE", "rm-sursilv", "rm-vallader", "pa-IN", "nn-NO", "ne-NP", "nan-tw", "hy-AM", "ga-IE", "fy-NL"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["extended|common_voice"], "paperswithcode_id": "common-voice", "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."} | false | null | 2024-06-16T13:50:23 | 208 | 10 | false | b10d53980ef166bc24ce3358471c1970d7e6b5ec |
Dataset Card for Common Voice Corpus 17.0
Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 31175 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 20408 validated hours in 124 languages, but more voices and languages are always added.
Take a look at the Languages… See the full description on the dataset page: https://huggingface.co./datasets/mozilla-foundation/common_voice_17_0. | 20,031 | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lij",
"language:lo",
"language:lt",
"language:ltg",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nan",
"language:ne",
"language:nhi",
"language:nl",
"language:nn",
"language:nso",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yi",
"language:yo",
"language:yue",
"language:zgh",
"language:zh",
"language:zu",
"language:zza",
"license:cc0-1.0",
"size_categories:10M<n<100M",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1912.06670",
"region:us"
] | 2024-04-04T10:06:19 | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | common-voice |
|
66c84764a47b2d6c582bbb02 | amphion/Emilia-Dataset | amphion | {"license": "cc-by-nc-4.0", "task_categories": ["text-to-speech", "automatic-speech-recognition"], "language": ["zh", "en", "ja", "fr", "de", "ko"], "pretty_name": "Emilia", "size_categories": ["10M<n<100M"], "extra_gated_prompt": "Terms of Access: The researcher has requested permission to use the Emilia dataset and the Emilia-Pipe preprocessing pipeline. In exchange for such permission, the researcher hereby agrees to the following terms and conditions:\n1. The researcher shall use the dataset ONLY for non-commercial research and educational purposes.\n2. The authors make no representations or warranties regarding the dataset, \n including but not limited to warranties of non-infringement or fitness for a particular purpose.\n\n3. The researcher accepts full responsibility for their use of the dataset and shall defend and indemnify the authors of Emilia, \n including their employees, trustees, officers, and agents, against any and all claims arising from the researcher's use of the dataset, \n including but not limited to the researcher's use of any copies of copyrighted content that they may create from the dataset.\n\n4. The researcher may provide research associates and colleagues with access to the dataset,\n provided that they first agree to be bound by these terms and conditions.\n \n5. The authors reserve the right to terminate the researcher's access to the dataset at any time.\n6. If the researcher is employed by a for-profit, commercial entity, the researcher's employer shall also be bound by these terms and conditions, and the researcher hereby represents that they are fully authorized to enter into this agreement on behalf of such employer.", "extra_gated_fields": {"Name": "text", "Email": "text", "Affiliation": "text", "Position": "text", "Your Supervisor/manager/director": "text", "I agree to the Terms of Access": "checkbox"}} | false | null | 2024-09-06T13:29:55 | 186 | 10 | false | bcaad00d13e7c101485990a46e88f5884ffed3fc |
Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation
This is the official repository 👑 for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
News 🔥
2024/08/28: Welcome to join Amphion's Discord channel to stay connected and engage with our community!
2024/08/27: The Emilia dataset is now publicly available! Discover the most extensive and diverse speech generation… See the full description on the dataset page: https://huggingface.co./datasets/amphion/Emilia-Dataset. | 38,534 | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"language:zh",
"language:en",
"language:ja",
"language:fr",
"language:de",
"language:ko",
"license:cc-by-nc-4.0",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2407.05361",
"region:us"
] | 2024-08-23T08:25:08 | null | null |
|
677ca887f1edc5b457c1eb22 | lianghsun/tw-instruct-500k | lianghsun | {"license": "cc-by-nc-sa-4.0", "task_categories": ["text-generation"], "language": ["zh", "en"], "tags": ["Taiwan", "ROC", "tw", "zh-tw", "chat", "instruction"], "pretty_name": "Common Task-Oriented Dialogues in Taiwan", "size_categories": ["100K<n<1M"]} | false | null | 2025-01-10T05:03:52 | 10 | 10 | false | 8cce79fe4783e14eeaa1a279a6e0d29f65bb5862 |
Dataset Card for Dataset Name
[👋 Welcome to join the Discord discussion; we are looking for people to help expand this dialogue collection 🎉]
Common Task-Oriented Dialogues in Taiwan (台灣常見任務對話集) covers task-oriented dialogues commonly encountered in Taiwanese society; it is a 500k-sample subset extracted from lianghsun/tw-instruct.
Dataset Details
Dataset Description
This is a synthetic dataset composed of (a) reference-based and (b) reference-free sub-datasets. To generate the reference-based portion, the Traditional Chinese texts we collected for training lianghsun/Llama-3.2-Taiwan-3B serve as reference texts, and an LLM generates instruction dialogues from them; when a reference text involves a domain-specific style of questioning, we design questions tailored to that domain or text. To generate the reference-free… See the full description on the dataset page: https://huggingface.co./datasets/lianghsun/tw-instruct-500k. | 24 | [
"task_categories:text-generation",
"language:zh",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"Taiwan",
"ROC",
"tw",
"zh-tw",
"chat",
"instruction"
] | 2025-01-07T04:07:35 | null | null |
|
66212f29fb07c3e05ad0432e | HuggingFaceFW/fineweb | HuggingFaceFW | {"license": "odc-by", "task_categories": ["text-generation"], "language": ["en"], "pretty_name": "FineWeb", "size_categories": ["n>1T"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/*/*"}]}, {"config_name": "sample-10BT", "data_files": [{"split": "train", "path": "sample/10BT/*"}]}, {"config_name": "sample-100BT", "data_files": [{"split": "train", "path": "sample/100BT/*"}]}, {"config_name": "sample-350BT", "data_files": [{"split": "train", "path": "sample/350BT/*"}]}, {"config_name": "CC-MAIN-2024-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-51/*"}]}, {"config_name": "CC-MAIN-2024-46", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-46/*"}]}, {"config_name": "CC-MAIN-2024-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-42/*"}]}, {"config_name": "CC-MAIN-2024-38", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-38/*"}]}, {"config_name": "CC-MAIN-2024-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-33/*"}]}, {"config_name": "CC-MAIN-2024-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-30/*"}]}, {"config_name": "CC-MAIN-2024-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-26/*"}]}, {"config_name": "CC-MAIN-2024-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-22/*"}]}, {"config_name": "CC-MAIN-2024-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-18/*"}]}, {"config_name": "CC-MAIN-2024-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2024-10/*"}]}, {"config_name": "CC-MAIN-2023-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-50/*"}]}, {"config_name": "CC-MAIN-2023-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-40/*"}]}, {"config_name": "CC-MAIN-2023-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-23/*"}]}, {"config_name": "CC-MAIN-2023-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-14/*"}]}, {"config_name": "CC-MAIN-2023-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2023-06/*"}]}, {"config_name": "CC-MAIN-2022-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-49/*"}]}, {"config_name": "CC-MAIN-2022-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-40/*"}]}, {"config_name": "CC-MAIN-2022-33", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-33/*"}]}, {"config_name": "CC-MAIN-2022-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-27/*"}]}, {"config_name": "CC-MAIN-2022-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-21/*"}]}, {"config_name": "CC-MAIN-2022-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2022-05/*"}]}, {"config_name": "CC-MAIN-2021-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-49/*"}]}, {"config_name": "CC-MAIN-2021-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-43/*"}]}, {"config_name": "CC-MAIN-2021-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-39/*"}]}, {"config_name": "CC-MAIN-2021-31", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-31/*"}]}, {"config_name": "CC-MAIN-2021-25", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-25/*"}]}, {"config_name": "CC-MAIN-2021-21", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-21/*"}]}, {"config_name": "CC-MAIN-2021-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-17/*"}]}, 
{"config_name": "CC-MAIN-2021-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-10/*"}]}, {"config_name": "CC-MAIN-2021-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2021-04/*"}]}, {"config_name": "CC-MAIN-2020-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-50/*"}]}, {"config_name": "CC-MAIN-2020-45", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-45/*"}]}, {"config_name": "CC-MAIN-2020-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-40/*"}]}, {"config_name": "CC-MAIN-2020-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-34/*"}]}, {"config_name": "CC-MAIN-2020-29", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-29/*"}]}, {"config_name": "CC-MAIN-2020-24", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-24/*"}]}, {"config_name": "CC-MAIN-2020-16", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-16/*"}]}, {"config_name": "CC-MAIN-2020-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-10/*"}]}, {"config_name": "CC-MAIN-2020-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2020-05/*"}]}, {"config_name": "CC-MAIN-2019-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-51/*"}]}, {"config_name": "CC-MAIN-2019-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-47/*"}]}, {"config_name": "CC-MAIN-2019-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-43/*"}]}, {"config_name": "CC-MAIN-2019-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-39/*"}]}, {"config_name": "CC-MAIN-2019-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-35/*"}]}, {"config_name": "CC-MAIN-2019-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-30/*"}]}, {"config_name": "CC-MAIN-2019-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-26/*"}]}, {"config_name": "CC-MAIN-2019-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-22/*"}]}, {"config_name": "CC-MAIN-2019-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-18/*"}]}, {"config_name": "CC-MAIN-2019-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-13/*"}]}, {"config_name": "CC-MAIN-2019-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-09/*"}]}, {"config_name": "CC-MAIN-2019-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2019-04/*"}]}, {"config_name": "CC-MAIN-2018-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-51/*"}]}, {"config_name": "CC-MAIN-2018-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-47/*"}]}, {"config_name": "CC-MAIN-2018-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-43/*"}]}, {"config_name": "CC-MAIN-2018-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-39/*"}]}, {"config_name": "CC-MAIN-2018-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-34/*"}]}, {"config_name": "CC-MAIN-2018-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-30/*"}]}, {"config_name": "CC-MAIN-2018-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-26/*"}]}, {"config_name": "CC-MAIN-2018-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-22/*"}]}, {"config_name": "CC-MAIN-2018-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-17/*"}]}, {"config_name": "CC-MAIN-2018-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-13/*"}]}, {"config_name": "CC-MAIN-2018-09", "data_files": 
[{"split": "train", "path": "data/CC-MAIN-2018-09/*"}]}, {"config_name": "CC-MAIN-2018-05", "data_files": [{"split": "train", "path": "data/CC-MAIN-2018-05/*"}]}, {"config_name": "CC-MAIN-2017-51", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-51/*"}]}, {"config_name": "CC-MAIN-2017-47", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-47/*"}]}, {"config_name": "CC-MAIN-2017-43", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-43/*"}]}, {"config_name": "CC-MAIN-2017-39", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-39/*"}]}, {"config_name": "CC-MAIN-2017-34", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-34/*"}]}, {"config_name": "CC-MAIN-2017-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-30/*"}]}, {"config_name": "CC-MAIN-2017-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-26/*"}]}, {"config_name": "CC-MAIN-2017-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-22/*"}]}, {"config_name": "CC-MAIN-2017-17", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-17/*"}]}, {"config_name": "CC-MAIN-2017-13", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-13/*"}]}, {"config_name": "CC-MAIN-2017-09", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-09/*"}]}, {"config_name": "CC-MAIN-2017-04", "data_files": [{"split": "train", "path": "data/CC-MAIN-2017-04/*"}]}, {"config_name": "CC-MAIN-2016-50", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-50/*"}]}, {"config_name": "CC-MAIN-2016-44", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-44/*"}]}, {"config_name": "CC-MAIN-2016-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-40/*"}]}, {"config_name": "CC-MAIN-2016-36", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-36/*"}]}, {"config_name": "CC-MAIN-2016-30", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-30/*"}]}, {"config_name": "CC-MAIN-2016-26", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-26/*"}]}, {"config_name": "CC-MAIN-2016-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-22/*"}]}, {"config_name": "CC-MAIN-2016-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-18/*"}]}, {"config_name": "CC-MAIN-2016-07", "data_files": [{"split": "train", "path": "data/CC-MAIN-2016-07/*"}]}, {"config_name": "CC-MAIN-2015-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-48/*"}]}, {"config_name": "CC-MAIN-2015-40", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-40/*"}]}, {"config_name": "CC-MAIN-2015-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-35/*"}]}, {"config_name": "CC-MAIN-2015-32", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-32/*"}]}, {"config_name": "CC-MAIN-2015-27", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-27/*"}]}, {"config_name": "CC-MAIN-2015-22", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-22/*"}]}, {"config_name": "CC-MAIN-2015-18", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-18/*"}]}, {"config_name": "CC-MAIN-2015-14", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-14/*"}]}, {"config_name": "CC-MAIN-2015-11", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-11/*"}]}, {"config_name": "CC-MAIN-2015-06", "data_files": [{"split": "train", "path": "data/CC-MAIN-2015-06/*"}]}, {"config_name": "CC-MAIN-2014-52", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-52/*"}]}, 
{"config_name": "CC-MAIN-2014-49", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-49/*"}]}, {"config_name": "CC-MAIN-2014-42", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-42/*"}]}, {"config_name": "CC-MAIN-2014-41", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-41/*"}]}, {"config_name": "CC-MAIN-2014-35", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-35/*"}]}, {"config_name": "CC-MAIN-2014-23", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-23/*"}]}, {"config_name": "CC-MAIN-2014-15", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-15/*"}]}, {"config_name": "CC-MAIN-2014-10", "data_files": [{"split": "train", "path": "data/CC-MAIN-2014-10/*"}]}, {"config_name": "CC-MAIN-2013-48", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-48/*"}]}, {"config_name": "CC-MAIN-2013-20", "data_files": [{"split": "train", "path": "data/CC-MAIN-2013-20/*"}]}]} | false | null | 2025-01-03T11:58:46 | 1,813 | 9 | false | e31fdfd3918d4b48e837d69d274e624a067d7091 |
🍷 FineWeb
15 trillion tokens of the finest data the 🌐 web has to offer
What is it?
The 🍷 FineWeb dataset consists of more than 15T tokens of cleaned and deduplicated english web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and ran on the 🏭 datatrove library, our large scale data processing library.
🍷 FineWeb was originally meant to be a fully open replication of 🦅 RefinedWeb, with a release of the full… See the full description on the dataset page: https://huggingface.co./datasets/HuggingFaceFW/fineweb. | 148,387 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.01116",
"arxiv:2109.07445",
"arxiv:2406.17557",
"doi:10.57967/hf/2493",
"region:us"
] | 2024-04-18T14:33:13 | null | null |
|
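A minimal sketch of how one of the per-snapshot configs enumerated in the FineWeb row above could be loaded with the `datasets` library. The config name `CC-MAIN-2017-13` is taken from the card data in this row; streaming is assumed to be preferable given the corpus's multi-terabyte size, and the `text` field is assumed from the FineWeb card rather than verified here.

```python
from datasets import load_dataset

# Stream one CommonCrawl snapshot config instead of materializing the full ~15T-token corpus.
# "CC-MAIN-2017-13" is one of the per-snapshot configs listed in the card data above.
fw = load_dataset(
    "HuggingFaceFW/fineweb",
    name="CC-MAIN-2017-13",
    split="train",
    streaming=True,
)

# Peek at a few documents; the "text" field is assumed from the FineWeb card.
for i, doc in enumerate(fw):
    print(doc["text"][:200].replace("\n", " "))
    if i == 2:
        break
```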
66bffb77453a7ef6c587560c | edinburgh-dawg/mmlu-redux-2.0 | edinburgh-dawg | {"dataset_info": [{"config_name": "abstract_algebra", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "anatomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "astronomy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "business_ethics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "clinical_knowledge", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": 
"string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "college_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "computer_security", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "conceptual_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "econometrics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "electrical_engineering", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "elementary_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "formal_logic", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "global_facts", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, 
{"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_biology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_chemistry", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_computer_science", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_european_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_geography", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_government_and_politics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_macroeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_mathematics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": 
"high_school_microeconomics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_physics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_statistics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_us_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "high_school_world_history", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "human_aging", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "human_sexuality", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "international_law", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": 
"potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "jurisprudence", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "logical_fallacies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "machine_learning", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "management", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "marketing", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "medical_genetics", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "miscellaneous", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "moral_disputes", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "moral_scenarios", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": 
"correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "nutrition", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "philosophy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "prehistory", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_accounting", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_law", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_medicine", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "professional_psychology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "public_relations", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "security_studies", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", 
"dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "sociology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "us_foreign_policy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "virology", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}, {"config_name": "world_religions", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "error_type", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "correct_answer", "dtype": "string"}, {"name": "potential_reason", "dtype": "string"}], "splits": [{"name": "test", "num_examples": 100}]}], "configs": [{"config_name": "abstract_algebra", "data_files": [{"split": "test", "path": "abstract_algebra/data-*"}]}, {"config_name": "anatomy", "data_files": [{"split": "test", "path": "anatomy/data-*"}]}, {"config_name": "astronomy", "data_files": [{"split": "test", "path": "astronomy/data-*"}]}, {"config_name": "business_ethics", "data_files": [{"split": "test", "path": "business_ethics/data-*"}]}, {"config_name": "clinical_knowledge", "data_files": [{"split": "test", "path": "clinical_knowledge/data-*"}]}, {"config_name": "college_biology", "data_files": [{"split": "test", "path": "college_biology/data-*"}]}, {"config_name": "college_chemistry", "data_files": [{"split": "test", "path": "college_chemistry/data-*"}]}, {"config_name": "college_computer_science", "data_files": [{"split": "test", "path": "college_computer_science/data-*"}]}, {"config_name": "college_mathematics", "data_files": [{"split": "test", "path": "college_mathematics/data-*"}]}, {"config_name": "college_medicine", "data_files": [{"split": "test", "path": "college_medicine/data-*"}]}, {"config_name": "college_physics", "data_files": [{"split": "test", "path": "college_physics/data-*"}]}, {"config_name": "computer_security", "data_files": [{"split": "test", "path": "computer_security/data-*"}]}, {"config_name": "conceptual_physics", "data_files": [{"split": "test", "path": "conceptual_physics/data-*"}]}, {"config_name": "econometrics", "data_files": [{"split": "test", "path": "econometrics/data-*"}]}, {"config_name": "electrical_engineering", "data_files": [{"split": "test", "path": "electrical_engineering/data-*"}]}, {"config_name": "elementary_mathematics", "data_files": [{"split": "test", "path": "elementary_mathematics/data-*"}]}, {"config_name": 
"formal_logic", "data_files": [{"split": "test", "path": "formal_logic/data-*"}]}, {"config_name": "global_facts", "data_files": [{"split": "test", "path": "global_facts/data-*"}]}, {"config_name": "high_school_biology", "data_files": [{"split": "test", "path": "high_school_biology/data-*"}]}, {"config_name": "high_school_chemistry", "data_files": [{"split": "test", "path": "high_school_chemistry/data-*"}]}, {"config_name": "high_school_computer_science", "data_files": [{"split": "test", "path": "high_school_computer_science/data-*"}]}, {"config_name": "high_school_european_history", "data_files": [{"split": "test", "path": "high_school_european_history/data-*"}]}, {"config_name": "high_school_geography", "data_files": [{"split": "test", "path": "high_school_geography/data-*"}]}, {"config_name": "high_school_government_and_politics", "data_files": [{"split": "test", "path": "high_school_government_and_politics/data-*"}]}, {"config_name": "high_school_macroeconomics", "data_files": [{"split": "test", "path": "high_school_macroeconomics/data-*"}]}, {"config_name": "high_school_mathematics", "data_files": [{"split": "test", "path": "high_school_mathematics/data-*"}]}, {"config_name": "high_school_microeconomics", "data_files": [{"split": "test", "path": "high_school_microeconomics/data-*"}]}, {"config_name": "high_school_physics", "data_files": [{"split": "test", "path": "high_school_physics/data-*"}]}, {"config_name": "high_school_psychology", "data_files": [{"split": "test", "path": "high_school_psychology/data-*"}]}, {"config_name": "high_school_statistics", "data_files": [{"split": "test", "path": "high_school_statistics/data-*"}]}, {"config_name": "high_school_us_history", "data_files": [{"split": "test", "path": "high_school_us_history/data-*"}]}, {"config_name": "high_school_world_history", "data_files": [{"split": "test", "path": "high_school_world_history/data-*"}]}, {"config_name": "human_aging", "data_files": [{"split": "test", "path": "human_aging/data-*"}]}, {"config_name": "human_sexuality", "data_files": [{"split": "test", "path": "human_sexuality/data-*"}]}, {"config_name": "international_law", "data_files": [{"split": "test", "path": "international_law/data-*"}]}, {"config_name": "jurisprudence", "data_files": [{"split": "test", "path": "jurisprudence/data-*"}]}, {"config_name": "logical_fallacies", "data_files": [{"split": "test", "path": "logical_fallacies/data-*"}]}, {"config_name": "machine_learning", "data_files": [{"split": "test", "path": "machine_learning/data-*"}]}, {"config_name": "management", "data_files": [{"split": "test", "path": "management/data-*"}]}, {"config_name": "marketing", "data_files": [{"split": "test", "path": "marketing/data-*"}]}, {"config_name": "medical_genetics", "data_files": [{"split": "test", "path": "medical_genetics/data-*"}]}, {"config_name": "miscellaneous", "data_files": [{"split": "test", "path": "miscellaneous/data-*"}]}, {"config_name": "moral_disputes", "data_files": [{"split": "test", "path": "moral_disputes/data-*"}]}, {"config_name": "moral_scenarios", "data_files": [{"split": "test", "path": "moral_scenarios/data-*"}]}, {"config_name": "nutrition", "data_files": [{"split": "test", "path": "nutrition/data-*"}]}, {"config_name": "philosophy", "data_files": [{"split": "test", "path": "philosophy/data-*"}]}, {"config_name": "prehistory", "data_files": [{"split": "test", "path": "prehistory/data-*"}]}, {"config_name": "professional_accounting", "data_files": [{"split": "test", "path": "professional_accounting/data-*"}]}, 
{"config_name": "professional_law", "data_files": [{"split": "test", "path": "professional_law/data-*"}]}, {"config_name": "professional_medicine", "data_files": [{"split": "test", "path": "professional_medicine/data-*"}]}, {"config_name": "professional_psychology", "data_files": [{"split": "test", "path": "professional_psychology/data-*"}]}, {"config_name": "public_relations", "data_files": [{"split": "test", "path": "public_relations/data-*"}]}, {"config_name": "security_studies", "data_files": [{"split": "test", "path": "security_studies/data-*"}]}, {"config_name": "sociology", "data_files": [{"split": "test", "path": "sociology/data-*"}]}, {"config_name": "us_foreign_policy", "data_files": [{"split": "test", "path": "us_foreign_policy/data-*"}]}, {"config_name": "virology", "data_files": [{"split": "test", "path": "virology/data-*"}]}, {"config_name": "world_religions", "data_files": [{"split": "test", "path": "world_religions/data-*"}]}], "license": "cc-by-4.0", "task_categories": ["question-answering"], "language": ["en"], "pretty_name": "MMLU-Redux-2.0", "size_categories": ["1K<n<10K"]} | false | null | 2024-11-07T15:38:08 | 9 | 9 | false | 63f54ebd32c36485c679f53b8e2f576d689b9b34 |
Dataset Card for MMLU-Redux-2.0
MMLU-Redux is a subset of 5,700 manually re-annotated questions across 57 MMLU subjects.
Dataset Details
Dataset Description
Each data point in MMLU-Redux contains seven columns:
question (str): The original MMLU question.
choices (List[str]): The original list of four choices associated with the question from the MMLU dataset.
answer (int): The MMLU ground truth label in the form of an array index between 0 and… See the full description on the dataset page: https://huggingface.co./datasets/edinburgh-dawg/mmlu-redux-2.0. | 312 | [
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2406.04127",
"doi:10.57967/hf/3469",
"region:us"
] | 2024-08-17T01:23:03 | null | null |
|
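To make the per-subject structure described in the MMLU-Redux-2.0 row above concrete, here is a hedged sketch of loading a single 100-question config with the `datasets` library. The config name `abstract_algebra`, the `test` split, and the column names all come from the card data in this row; treat the snippet as illustrative rather than an official loading recipe.

```python
from datasets import load_dataset

# Each of the 57 subject configs exposes a single 100-example "test" split.
redux = load_dataset("edinburgh-dawg/mmlu-redux-2.0", "abstract_algebra", split="test")

example = redux[0]
print(example["question"])     # original MMLU question text
print(example["choices"])      # list of the four original answer options
print(example["answer"])       # index of the original MMLU ground-truth label
print(example["error_type"])   # re-annotation verdict for this item
```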
674dc01bf413e32210acb235 | Rapidata/human-style-preferences-images | Rapidata | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "votes_image1", "dtype": "int64"}, {"name": "votes_image2", "dtype": "int64"}, {"name": "model1", "dtype": "string"}, {"name": "model2", "dtype": "string"}, {"name": "detailed_results", "dtype": "string"}, {"name": "image1_path", "dtype": "string"}, {"name": "image2_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26229461236, "num_examples": 63752}], "download_size": 17935847407, "dataset_size": 26229461236}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license": "cdla-permissive-2.0", "task_categories": ["text-to-image", "image-to-text", "image-classification", "reinforcement-learning"], "language": ["en"], "tags": ["Human", "Preference", "country", "language", "flux", "midjourney", "dalle3", "stabeldiffusion", "alignment", "flux1.1", "flux1", "imagen3"], "size_categories": ["100K<n<1M"], "pretty_name": "imagen-3 vs. Flux-1.1-pro vs. Flux-1-pro vs. Dalle-3 vs. Midjourney-5.2 vs. Stabel-Diffusion-3 - Human Preference Dataset"} | false | null | 2025-01-10T21:59:31 | 12 | 9 | false | 79acd5ebcc535309c08d996ab1f88c01077a7b12 |
Rapidata Image Generation Preference Dataset
This dataset was collected in ~4 Days using the Rapidata Python API, accessible to anyone and ideal for large scale data annotation.
Explore our latest model rankings on our website.
If you get value from this dataset and would like to see more in the future, please consider liking it.
Overview
One of the largest human preference datasets for text-to-image models, this release contains over 1,200,000 human… See the full description on the dataset page: https://huggingface.co./datasets/Rapidata/human-style-preferences-images. | 405 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-classification",
"task_categories:reinforcement-learning",
"language:en",
"license:cdla-permissive-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"Human",
"Preference",
"country",
"language",
"flux",
"midjourney",
"dalle3",
"stabeldiffusion",
"alignment",
"flux1.1",
"flux1",
"imagen3"
] | 2024-12-02T14:11:39 | null | null |
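As an illustrative, unverified sketch, the pairwise-preference rows described in the Rapidata row above could be inspected as follows. The column names (`prompt`, `votes_image1`, `votes_image2`, `model1`, `model2`) come from the card data in this row, and streaming is assumed because the single `train` split holds roughly 64k image pairs (~18 GB download).

```python
from datasets import load_dataset

# Stream the pairwise preference rows to avoid downloading all image data up front.
prefs = load_dataset(
    "Rapidata/human-style-preferences-images",
    split="train",
    streaming=True,
)

for i, row in enumerate(prefs):
    total_votes = row["votes_image1"] + row["votes_image2"]
    share_1 = row["votes_image1"] / total_votes if total_votes else 0.0
    print(f'{row["model1"]} vs {row["model2"]}: '
          f'{share_1:.0%} of votes for image1 on prompt "{row["prompt"][:60]}"')
    if i == 4:
        break
```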