Update README.md
README.md
CHANGED
@@ -163,6 +163,33 @@ configs:
# Sangraha
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ef3cd11e695b35aa48bebc/nDnyidcqIOLAP9dTw9GrK.png" />
</p>

Sangraha is the largest Indic-language pretraining dataset, comprising 251B tokens across 22 languages, extracted from curated sources, existing multilingual corpora, and large-scale translations.

**Coming Soon**:
- Sangraha Synthetic
- Sangraha Verified: Hindi YouTube data

More information:

- Read more about Sangraha [on Arxiv](https://arxiv.org/).
- Check out the scraping and cleaning pipelines used to curate Sangraha [on GitHub](https://github.com/AI4Bharat/IndicLLMSuite).
## Getting Started

You can download the dataset using Hugging Face `datasets`:

```python
from datasets import load_dataset

# Sangraha's repository id on the Hugging Face Hub.
ds = load_dataset("ai4bharat/sangraha")
```
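Given the corpus size (251B tokens), a full download is often unnecessary. Below is a minimal sketch of streaming a single subset instead; the `data_dir` value `verified/hin` is an assumed component/language-code layout, so check the repository's file listing for the actual directory names.

```python
from datasets import load_dataset

# Stream instead of downloading everything up front. "verified/hin" is an
# assumed component/language-code directory; verify it against the repo's
# file layout before use.
ds = load_dataset("ai4bharat/sangraha", data_dir="verified/hin", streaming=True)

# Inspect a few examples without materialising the whole split.
for example in ds["train"].take(5):
    print(example)
```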
## Background

Sangraha contains three broad components:

- **Sangraha Verified**: data scraped from "human-verified" websites, OCR-extracted text from high-quality Indic-language PDFs, and transcriptions of Indic-language videos, podcasts, movies, courses, etc.
- **Sangraha Unverified**: high-quality Indic-language data extracted from existing multilingual corpora via perplexity filtering, using n-gram language models trained on Sangraha Verified (see the first sketch after this list).
- **Sangraha Synthetic**: WikiMedia English translated into 14 Indic languages and further "romanised" by transliterating those 14 languages into Latin script (see the second sketch below).
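To make the Unverified filtering step concrete, here is a minimal sketch of perplexity filtering with a KenLM n-gram model: text that scores low perplexity under a model trained on verified data resembles the verified distribution, which is why perplexity doubles as a quality signal. The model filename and threshold are illustrative assumptions, not values from the Sangraha pipeline.

```python
import kenlm  # KenLM Python bindings

# Hypothetical model file: an n-gram LM trained on Sangraha Verified text.
model = kenlm.Model("sangraha_verified_hin.arpa")

# Illustrative cutoff; a real pipeline would tune this per language.
PERPLEXITY_THRESHOLD = 1000.0

def keep(text: str) -> bool:
    """Keep a document only if the verified-data LM finds it fluent enough."""
    return model.perplexity(text) < PERPLEXITY_THRESHOLD

docs = ["यह एक साफ़ हिंदी वाक्य है।", "asdf qwer zxcv 1234"]
clean = [d for d in docs if keep(d)]
```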
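Similarly, the "romanisation" in Sangraha Synthetic amounts to script transliteration. Below is a small sketch using the `indic_transliteration` package; this library choice and the ITRANS scheme are assumptions for illustration, and the actual pipeline may use different tooling, such as AI4Bharat's own transliteration models.

```python
from indic_transliteration import sanscript

# Transliterate Devanagari into a Roman scheme; ITRANS stands in for
# whatever romanisation scheme the real pipeline used.
print(sanscript.transliterate("संग्रह", sanscript.DEVANAGARI, sanscript.ITRANS))
# Prints "saMgraha" (exact output depends on the chosen scheme).
```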