Update README with code snippets and scripts
README.md
CHANGED
```diff
@@ -39,6 +39,7 @@ source_datasets:
 - extended|other-common-voice
 task_categories:
 - automatic-speech-recognition
+- audio-to-audio
 task_ids: []
 paperswithcode_id: null
 pretty_name: CoVoST 2
@@ -916,6 +917,7 @@ dataset_info:
 - [Dataset Summary](#dataset-summary)
 - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
+- [How to use](#how-to-use)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
```
```diff
@@ -957,6 +959,34 @@ crowdsourced voice recordings. There are 2,900 hours of speech represented in th
 
 The dataset contains the audio, transcriptions, and translations in the following languages, French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.
 
+### How to use
+The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
+
+For example, to download the English-German config, simply specify the corresponding language config name (i.e., "en_de" for English-to-German):
+```python
+from datasets import load_dataset
+
+covost2 = load_dataset("covost2", "en_de", split="train")
+```
+Note: For a successful load, you'd first need to download the Common Voice 4.0 `en` split from the Hugging Face Hub. You can download it via `cv4 = load_dataset("mozilla-foundation/common_voice_4_0", "en", split="all")`.
+
+*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets.
+
+```python
+from datasets import load_dataset
+from torch.utils.data import DataLoader
+from torch.utils.data.sampler import BatchSampler, RandomSampler
+
+covost2 = load_dataset("covost2", "en_de", split="train")
+batch_sampler = BatchSampler(RandomSampler(covost2), batch_size=32, drop_last=False)
+dataloader = DataLoader(covost2, batch_sampler=batch_sampler)
+```
+
+To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
+
+### Example scripts
+
+Train your own CTC or Seq2Seq Speech Translation models on CoVoST 2 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
+
 ## Dataset Structure
 
 ### Data Instances
```
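As a complement to the new "How to use" snippets, here is a minimal sketch of inspecting a loaded example and resampling its audio on the fly with `datasets.Audio`. It assumes the `audio`, `sentence`, and `translation` fields described in the Data Fields section of this README, and (as noted above) that the Common Voice 4.0 `en` data has already been downloaded.

```python
from datasets import Audio, load_dataset

# Sketch only: field names ("audio", "sentence", "translation") are taken
# from the Data Fields section of this README, not from this snippet.
covost2 = load_dataset("covost2", "en_de", split="validation")

# Resample the audio on the fly to the 16 kHz most speech models expect.
covost2 = covost2.cast_column("audio", Audio(sampling_rate=16_000))

sample = covost2[0]
print(sample["audio"]["array"].shape)  # decoded waveform as a NumPy array
print(sample["sentence"])              # English transcription
print(sample["translation"])           # German translation
```

The `cast_column` call does not rewrite any files; decoding and resampling happen lazily each time an example is accessed.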