Update README.md
README.md CHANGED
@@ -1,4 +1,13 @@
 ---
+license: cc-by-4.0
+task_categories:
+- automatic-speech-recognition
+- text-to-speech
+language:
+- vi
+pretty_name: VAIS-1000
+size_categories:
+- n<1K
 dataset_info:
   features:
   - name: audio
@@ -7,13 +16,36 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 187348211
+    num_bytes: 187348211
     num_examples: 1000
   download_size: 169120503
-  dataset_size: 187348211
+  dataset_size: 187348211
 configs:
 - config_name: default
   data_files:
   - split: train
     path: data/train-*
 ---
+
+# unofficial mirror of VAIS-1000
+
+official announcement: https://vais.vn/vi/tai-ve/hts_for_vietnamese (dead link)
+
+mirror: https://github.com/undertheseanlp/text_to_speech/tree/run/data/vais1000/raw
+
+small dataset: only 1h40min of audio - 1 speaker (female, northern accent) - 1k samples
+
+pre-processing: none
+
+to do: check misspellings, restore foreign words that were phonetised to Vietnamese
+
+usage with HuggingFace:
+```python
+# pip install -q "datasets[audio]"
+from datasets import load_dataset
+from torch.utils.data import DataLoader
+
+dataset = load_dataset("doof-ferb/vais1000", split="train")
+dataset.set_format(type="torch", columns=["audio", "transcription"])
+dataloader = DataLoader(dataset, batch_size=4)
+```
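
For ASR pipelines that expect 16 kHz input (e.g. Whisper-style feature extractors), the `audio` column can be decoded and resampled on the fly with `datasets.Audio`. A minimal sketch extending the snippet above, assuming only the `audio` and `transcription` columns declared in the metadata:

```python
# pip install -q "datasets[audio]"
from datasets import Audio, load_dataset

dataset = load_dataset("doof-ferb/vais1000", split="train")

# decode + resample lazily to 16 kHz (adjust to whatever your model expects)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

# peek at one example: transcription text plus the resampled waveform
sample = dataset[0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"], sample["audio"]["array"].shape)
```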
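
For the clean-up listed as to-do (misspellings, foreign words phonetised to Vietnamese), one rough starting point is to flag transcriptions containing letters outside the Vietnamese alphabet. This is only a heuristic sketch; the alphabet set and filtering rule below are assumptions, not part of the dataset:

```python
from datasets import load_dataset

# Vietnamese alphabet: base letters plus all tone-marked vowels
# (heuristic only - digits and punctuation are ignored)
VN_CHARS = set(
    "aăâbcdđeêghiklmnoôơpqrstuưvxy"
    "áàảãạắằẳẵặấầẩẫậéèẻẽẹếềểễệíìỉĩị"
    "óòỏõọốồổỗộớờởỡợúùủũụứừửữựýỳỷỹỵ"
)

dataset = load_dataset("doof-ferb/vais1000", split="train")

# flag transcriptions containing letters outside the Vietnamese alphabet
# (likely foreign words, typos, or characters worth a manual review)
suspicious = [
    t for t in dataset["transcription"]
    if any(c.isalpha() and c not in VN_CHARS for c in t.lower())
]
print(len(suspicious), "transcriptions flagged for review")
```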
|