kodoqmc committed on

Commit 6246ffc • 1 Parent(s): 177805f

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +103 -103
  3. fromtts.wav +3 -0
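The commit message above ("Upload folder using huggingface_hub") indicates the files were pushed with the `huggingface_hub` client rather than a manual git push. A minimal sketch of how such a commit is typically produced (the local folder path is an assumption; the repo ID is taken from the README URLs below):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes `huggingface-cli login` (or HF_TOKEN) has already been set up

# Push the local model folder as a single commit; audio binaries such as
# fromtts.wav end up tracked by Git LFS, as the .gitattributes change below shows.
api.upload_folder(
    folder_path="./XTTS-v2_PeterDrury",    # local directory -- assumption
    repo_id="kodoqmc/XTTS-v2_PeterDrury",  # target model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```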
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  reference.wav filter=lfs diff=lfs merge=lfs -text
+ fromtts.wav filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,103 +1,103 @@
- ---
- license: other
- license_name: coqui-public-model-license
- license_link: https://coqui.ai/cpml
- library_name: coqui
- pipeline_tag: text-to-speech
- widget:
- - text: "Once when I was six years old I saw a magnificent picture"
- ---
-
- # ⓍTTS_v2 - Peter Drury Fine-Tuned Model
-
- This repository hosts a fine-tuned version of the ⓍTTS model.
-
- ![Peter Drury](peterdrury.jpg)
-
- Listen to a sample of the ⓍTTS_v2 - Peter Drury Fine-Tuned Model:
-
- <audio controls>
- <source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/blob/main/reference.wav" type="audio/wav">
- Your browser does not support the audio element.
- </audio>
-
- Here's a Peter Drury mp3 voice line clip from the training data:
-
- <audio controls>
- <source src="https://huggingface.co/kodoqmc/XTTS-v2_PeterDrury/raw/main/reference.wav" type="audio/wav">
- Your browser does not support the audio element.
- </audio>
-
- ## Features
- - 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- - 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining Peter Drury's voice.
- - 😃 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- - 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- - 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.
-
- ## Supported Languages
- The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
-
- ## Usage in Roll Cage
- 🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 🔍 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ gens. Includes Windows macro controls 🖥️ and DuckDuckGo search.
-
- [ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.
-
- ## CoquiTTS and Resources
- - 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- - 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- - 👩‍💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- - 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)
-
- ## License
- This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).
-
- ## Contact
- Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].
-
- Using 🐸TTS API:
-
- ```python
- from TTS.api import TTS
-
- tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/",
-           config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/config.json", progress_bar=False, gpu=True).to(self.device)
-
- # generate speech by cloning a voice using default settings
- tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
-                 file_path="output.wav",
-                 speaker_wav="/path/to/target/speaker.wav",
-                 language="en")
-
- ```
-
- Using 🐸TTS Command line:
-
- ```console
- tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
-     --text "Bugün okula gitmek istemiyorum." \
-     --speaker_wav /path/to/target/speaker.wav \
-     --language_idx tr \
-     --use_cuda true
- ```
-
- Using the model directly:
-
- ```python
- from TTS.tts.configs.xtts_config import XttsConfig
- from TTS.tts.models.xtts import Xtts
-
- config = XttsConfig()
- config.load_json("/path/to/xtts/config.json")
- model = Xtts.init_from_config(config)
- model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
- model.cuda()
-
- outputs = model.synthesize(
-     "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
-     config,
-     speaker_wav="/data/TTS-public/_refclips/3.wav",
-     gpt_cond_len=3,
-     language="en",
- )
- ```
+ ---
+ license: other
+ license_name: coqui-public-model-license
+ license_link: https://coqui.ai/cpml
+ library_name: coqui
+ pipeline_tag: text-to-speech
+ widget:
+ - text: "Once when I was six years old I saw a magnificent picture"
+ ---
+
+ # ⓍTTS_v2 - Peter Drury Fine-Tuned Model
+
+ This repository hosts a fine-tuned version of the ⓍTTS model.
+
+ ![Peter Drury](peterdrury.jpg)
+
+ Listen to a sample of the ⓍTTS_v2 - Peter Drury Fine-Tuned Model:
+
+ <audio controls>
+ <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/sample_c3po_generated.wav" type="audio/wav">
+ Your browser does not support the audio element.
+ </audio>
+
+ Here's a Peter Drury mp3 voice line clip from the training data:
+
+ <audio controls>
+ <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/reference2.mp3" type="audio/wav">
+ Your browser does not support the audio element.
+ </audio>
+
+ ## Features
+ - 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
+ - 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining Peter Drury's voice.
+ - 😃 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
+ - 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
+ - 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.
+
+ ## Supported Languages
+ The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
+
+ ## Usage in Roll Cage
+ 🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 🔍 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ gens. Includes Windows macro controls 🖥️ and DuckDuckGo search.
+
+ [ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.
+
+ ## CoquiTTS and Resources
+ - 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
+ - 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
+ - 👩‍💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
+ - 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)
+
+ ## License
+ This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).
+
+ ## Contact
+ Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].
+
+ Using 🐸TTS API:
+
+ ```python
+ from TTS.api import TTS
+
+ tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/",
+           config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/config.json", progress_bar=False, gpu=True).to(self.device)
+
+ # generate speech by cloning a voice using default settings
+ tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
+                 file_path="output.wav",
+                 speaker_wav="/path/to/target/speaker.wav",
+                 language="en")
+
+ ```
+
+ Using 🐸TTS Command line:
+
+ ```console
+ tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
+     --text "Bugün okula gitmek istemiyorum." \
+     --speaker_wav /path/to/target/speaker.wav \
+     --language_idx tr \
+     --use_cuda true
+ ```
+
+ Using the model directly:
+
+ ```python
+ from TTS.tts.configs.xtts_config import XttsConfig
+ from TTS.tts.models.xtts import Xtts
+
+ config = XttsConfig()
+ config.load_json("/path/to/xtts/config.json")
+ model = Xtts.init_from_config(config)
+ model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
+ model.cuda()
+
+ outputs = model.synthesize(
+     "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
+     config,
+     speaker_wav="/data/TTS-public/_refclips/3.wav",
+     gpt_cond_len=3,
+     language="en",
+ )
+ ```
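One note on the README's API snippet above: `TTS(...).to(self.device)` only works inside a class that defines `self.device`, and `gpu=True` is redundant once `.to(...)` is used. The following is a standalone sketch under the same `TTS.api` interface, also illustrating the cross-language cloning the card describes; the local paths, the reference clip, and the Spanish text are assumptions, not part of the commit:

```python
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Local clone of this repository -- adjust to wherever the checkpoint lives.
model_dir = "./XTTS-v2_PeterDrury/"

tts = TTS(model_path=model_dir,
          config_path=model_dir + "config.json",
          progress_bar=False).to(device)

# Same reference clip, different language code from the supported list:
# the cloned voice is kept while the speech is generated in Spanish.
tts.tts_to_file(text="Una vez, cuando tenía seis años, vi una imagen magnífica.",
                file_path="output_es.wav",
                speaker_wav=model_dir + "reference.wav",
                language="es")
```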
fromtts.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f523fe36e634b1f373da63346fe3fec4eb2ffe93091d9495ca199471ff649997
+ size 2027600
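The added fromtts.wav is stored through Git LFS, so the pointer above records only the object's SHA-256 and byte size. A quick integrity check after downloading the resolved audio file (the local path is an assumption) can recompute both values:

```python
import hashlib
from pathlib import Path

path = Path("fromtts.wav")  # local, LFS-resolved copy -- assumption

# Hash in 1 MiB chunks so large audio files stay memory-friendly.
digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print("size  :", path.stat().st_size)  # pointer says 2027600
print("sha256:", digest.hexdigest())   # pointer says f523fe36...ff649997
```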