Róger Nascimento Santos committed: Update README.md

README.md CHANGED
@@ -1,33 +1,33 @@
 ---
-language:
-- pt
 dataset_info:
   features:
   - name: audio
     dtype: audio
   - name: rttm
     dtype: string
-  - name: episode_name
-    dtype: string
   splits:
   - name: train
-    num_bytes:
-    num_examples:
-  download_size:
-  dataset_size:
+    num_bytes: 3326774904
+    num_examples: 126
+  download_size: 3323845215
+  dataset_size: 3326774904
 configs:
 - config_name: default
   data_files:
   - split: train
     path: data/train-*
+language:
+- pt
 ---
 
 # What is this?
 
 This is a dataset extracted from the Brazilian cartoon **Fudêncio (by MTV)**, which is somewhat **similar to South Park**.
 
-This dataset has three features
+### This dataset has three features
+**rttm** (*string*, to identify speakers), **episode_name** (for reference), and **audio**, the acapella voicelines separated with Demucs (the no_voice files are not included).
 
 The separation from Demucs is *not perfect*, but it can help people train **RVC** voices for the different characters.
 
+### Plans for future upgrades
 I plan to use this later to split the voicelines by character name into another dataset.