---
tags:
- DiffSVC
- pre-trained_model
- basemodel
- diff-svc
license: "gpl"
datasets:
- 512rc_50k
- 512rc_80k
- 512rc_100k
---
**English** | [简体中文](./README_CN.md)
# DiffSVCBaseModel
A Diff-SVC base model for all kinds of voices.
## How to use?
1. Choose and download a model below
2. Fill in your config and put your dataset into ```(diffsvc-root)/data/raw/{speaker_name}/```
3. Place the base model (the ```.ckpt``` file only) into ```(diffsvc-root)/checkpoints/{speaker_name}/```
4. Start preprocessing and training as usual
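The directory layout implied by the steps above can be sketched as follows. This is only an illustration: the speaker name ```my_speaker```, the ```.wav``` filename, and the ```.ckpt``` filename are placeholders, and a temporary directory stands in for ```(diffsvc-root)```.

```shell
# Sketch of the expected layout; all names below are placeholders.
root=$(mktemp -d)                                  # stands in for (diffsvc-root)
mkdir -p "$root/data/raw/my_speaker" "$root/checkpoints/my_speaker"

# your dataset audio goes under data/raw/{speaker_name}/
touch "$root/data/raw/my_speaker/sample_0001.wav"

# the downloaded base model checkpoint goes under checkpoints/{speaker_name}/
touch "$root/checkpoints/my_speaker/model_ckpt_steps_100000.ckpt"

ls -R "$root"
```

After this, preprocessing and training are run from the Diff-SVC repo as usual.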
## How much data was used?
I used two public datasets (Opencpop and M4Singer), more than 40 hours of audio in total.
## I want to train my own base model!
OK, you can download [this binary file](./BaseModelBinary.tar.gz).
## Download
**Please choose a model that matches your ```config.yaml``` or ```config_nsf.yaml```.**
| Version | URL | Reference value of lr |
| -------------- | ------------------------------------ | --------------------- |
| 384rc,50k_step | [Click here](./384rc_50k_step.zip) | 0.0016 |
| 384rc,80k_step | [Click here](./384rc_80k_step.zip) | 0.0032 |
| 384rc,100k_step | [Click here](./384rc_100k_step.zip) | 0.0032 |
> rc: residual_channels
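The table maps onto two fields of the training config. As a hedged sketch (key names follow the stock Diff-SVC ```config.yaml```; values here are the table's reference values for the 384rc, 50k-step model), the relevant fragment would look like:

```yaml
# Fragment of config.yaml — only the fields tied to the table above.
# residual_channels must match the "rc" of the downloaded model,
# and lr should be set to the reference value for that model.
residual_channels: 384
lr: 0.0016
```

A mismatch between ```residual_channels``` and the downloaded checkpoint will prevent the weights from loading.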
More coming soon...
## Repos
| Repo | URL |
| -------------------------------------------------------- | ------------------------------------------------------------------- |
| Diff-SVC | [Click here](https://github.com/prophesier/diff-svc) |
| 44.1KHz Vocoder | [Click here](https://openvpi.github.io/vocoders) |
| M4Singer | [Click here](https://github.com/M4Singer/M4Singer) |
| OpenCPOP | [Click here](https://github.com/wenet-e2e/opencpop) |
| Pre-trained_Models (a friend's pre-trained models)        | [Click here](https://huggingface.co./Erythrocyte/Pre-trained_Models) |