### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. No need to rely on decades-old, hacky shell scripts and C/C++ pre-processing scripts anymore. The minimalistic API ensures that you can plug-and-play this dataset into your existing Machine Learning workflow with just a few lines of code.

The entire dataset (or a particular split) can be downloaded and prepared in one call to your local drive by using the `load_dataset` function. To download the Hindi split, simply specify the corresponding dataset config name:

```python
from datasets import load_dataset

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

cv_11 = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)

print(next(iter(cv_11)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your local/streaming datasets.

```python
from datasets import load_dataset
from torch.utils.data.sampler import BatchSampler, RandomSampler
```