End-To-End TEXT-2-ASMR with Transformers
This repository contains pretrained text2asmr model files, audio files, and training + inference notebooks.
Dataset Details
This dataset is tailored for training and deploying text-to-speech (TTS) systems focused on ASMR (Autonomous Sensory Meridian Response) content. It includes pretrained model files, audio files, and training code suitable for TTS applications.
Dataset Description
This dataset contains the following zipped folders:
- wavs_original: original WAV files as converted from the source videos
- wavs: the original WAV files split into 1-minute chunks (see the chunking sketch after this list)
- transcripts_original: transcribed scripts of the original WAV files
- transcripts: transcribed scripts of the files in the wavs folder
- models: text-to-spectrogram model trained with Glow-TTS
- ljspeech: alignment files and corresponding checkpoint models (text-to-phoneme)
- transformer_tts_data.ljspeech: trained checkpoint models and related files
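The 1-minute chunking was done as a preprocessing step; the original script is not included here, so the sketch below is only an assumption of how it could be reproduced with pydub. The folder names match the layout above, while the output naming scheme is hypothetical.

```python
# Sketch: split each original recording into 1-minute chunks (assumed preprocessing).
from pathlib import Path
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

SRC = Path("wavs_original")   # folder names taken from the dataset layout above
DST = Path("wavs")
DST.mkdir(exist_ok=True)

CHUNK_MS = 60 * 1000  # 1-minute chunks, as described above

for wav_path in sorted(SRC.glob("*.wav")):
    audio = AudioSegment.from_wav(wav_path)
    for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
        chunk = audio[start:start + CHUNK_MS]
        # hypothetical naming: "clip.wav" -> "clip_000.wav", "clip_001.wav", ...
        chunk.export(DST / f"{wav_path.stem}_{i:03d}.wav", format="wav")
```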
In addition, the dataset contains the following files:
- Glow-TTS.ipynb: Training and inference code for GlowTTS models
- TransformerTTS.ipynb: Training and inference code for Transformer models
- VITS_TTS.ipynb: Optional code for training VITS models; follows the same format as GlowTTS
- metadata_original.csv: LJSpeech-formatted transcriptions of the wavs_original folder; ready for TTS training
- metadata.csv: LJSpeech-formatted transcriptions of the wavs folder; ready for TTS training (see the loading sketch after this list)
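LJSpeech-formatted here means pipe-delimited rows with no header, where the first field is the audio file ID and the last field is the transcription. The snippet below is a minimal sketch of loading metadata.csv and pairing each row with its chunked WAV file; the dictionary keys are illustrative.

```python
# Sketch: load the LJSpeech-style metadata and pair each row with its WAV file.
from pathlib import Path

rows = []
with open("metadata.csv", encoding="utf-8") as f:
    for line in f:
        # LJSpeech convention: pipe-delimited, no header row
        parts = line.rstrip("\n").split("|")
        file_id, text = parts[0], parts[-1]  # last column = (normalized) transcription
        rows.append({"audio": Path("wavs") / f"{file_id}.wav", "text": text})

print(len(rows), "utterances")
print(rows[0])
```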
Latest Update: End-To-End TEXT-2-ASMR with Diffusion
Based on the paper "E3 TTS: Easy End-to-End Diffusion-Based Text to Speech" (Yuan Gao, Nobuyuki Morioka, Yu Zhang, Nanxin Chen; Google).
A text-to-ASMR UNet diffusion model, differing slightly from the framework described in the paper, was trained on the same audio-transcript paired dataset with 1000 DDPM steps for 10 epochs.
Model metrics:
- General Loss: 0.000134
- MSE Loss: 0.000027
- RMSE Loss: 0.000217
- MAE Loss: 0.000018
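For context, a DDPM-style model of this kind is trained to predict the noise injected by a forward diffusion process, with an MSE between predicted and true noise as the core loss. The sketch below illustrates that objective for 1000 diffusion steps; it is not the repository's training code, and the model call, noise schedule, and tensor shapes are placeholders.

```python
# Sketch: generic DDPM noise-prediction loss (illustrative; not the repo's training code).
import torch
import torch.nn.functional as F

T = 1000                                 # 1000 DDPM steps, as used for the text-2-ASMR model
betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule (assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0, text_emb):
    """x0: clean waveform batch [B, L]; text_emb: conditioning from the text encoder."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,), device=x0.device)            # random timestep per sample
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1)          # cumulative alpha at step t
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise      # forward diffusion q(x_t | x_0)
    noise_pred = model(x_t, t, text_emb)                        # UNet predicts the injected noise
    return F.mse_loss(noise_pred, noise)                        # MSE objective
```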
- Curated by: Alosh Denny, Anish S
- Language(s) (NLP): English
- License: MIT
Dataset Sources
YouTube: Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.
Uses
The dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.
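As an illustration of how the LJSpeech-formatted files plug into a standard TTS toolchain, the sketch below configures a Glow-TTS run with the Coqui TTS toolkit. This is an assumption about tooling (the actual training code is in Glow-TTS.ipynb), and the paths and hyperparameters are illustrative.

```python
# Sketch: Glow-TTS training setup with Coqui TTS (assumed toolkit; see Glow-TTS.ipynb for the real code).
from trainer import Trainer, TrainerArgs
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

# wavs/ and metadata.csv as described in the dataset layout above
dataset_config = BaseDatasetConfig(formatter="ljspeech", meta_file_train="metadata.csv", path=".")

config = GlowTTSConfig(
    batch_size=32,
    epochs=100,                       # illustrative hyperparameters
    text_cleaner="english_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    run_name="text2asmr_glow_tts",
    output_path="models",
    datasets=[dataset_config],
)

ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
Trainer(TrainerArgs(), config, config.output_path,
        model=model, train_samples=train_samples, eval_samples=eval_samples).fit()
```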
Direct Use
Pretrained models can be tested with the TransformerTTS and Glow-TTS notebooks.
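Outside the notebooks, a trained checkpoint can also be loaded through the Coqui TTS Python API, again assuming the checkpoints were produced with that toolkit; the file names below are placeholders.

```python
# Sketch: synthesize speech from a trained checkpoint (file names are placeholders).
from TTS.api import TTS

tts = TTS(
    model_path="models/best_model.pth",   # placeholder checkpoint path
    config_path="models/config.json",     # placeholder config path
    # vocoder_path / vocoder_config_path can be added; otherwise Griffin-Lim is used
)
tts.tts_to_file(text="Close your eyes and relax.", file_path="sample_asmr.wav")
```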
Dataset Card Authors
Alosh Denny, Anish S
Dataset Card Contact