---
title: Linly-Talker
app_file: app.py
sdk: gradio
sdk_version: 4.16.0
---

# Digital Human Intelligent Dialogue System - Linly-Talker: "Interactive Dialogue with Your Virtual Self"

<div align="center">
<h1>Linly-Talker WebUI</h1>

[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/Kedreamix/Linly-Talker)

<img src="docs/linly_logo.png" /><br>

[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/Kedreamix/Linly-Talker/blob/main/colab_webui.ipynb)
[![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/Kedreamix/Linly-Talker/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Models%20Repo-yellow.svg?style=for-the-badge)](https://huggingface.co./Kedreamix/Linly-Talker)

[**English**](./README.md) | [**中文简体**](./README_zh.md)

</div>

**2023.12 Update** 📆

**Users can upload any images for the conversation**

**2024.01 Update** 📆📆

- **Exciting news! I've now incorporated both the powerful GeminiPro and Qwen large models into our conversational scene. Users can now upload images during the conversation, adding a whole new dimension to the interactions.**
- **The deployment invocation method for FastAPI has been updated.**
- **The advanced settings options for Microsoft TTS have been updated, increasing the variety of voice types. Additionally, video subtitles have been introduced to enhance visualization.**
- **Updated the GPT multi-turn conversation system to establish contextual connections in dialogue, enhancing the interactivity and realism of the digital persona.**

**2024.02 Update** 📆

- **Updated Gradio to the latest version 4.16.0, providing the interface with additional functionalities such as capturing images from the camera to create digital personas, among others.**
- **ASR and THG have been updated. FunASR from Alibaba has been integrated into ASR, enhancing its speed significantly. Additionally, the THG section now incorporates the Wav2Lip model, while ER-NeRF is currently in preparation (Coming Soon).**
- **I have incorporated the GPT-SoVITS model, which is a voice cloning method. By fine-tuning it with just one minute of a person's speech data, it can effectively clone their voice. The results are quite impressive and worth recommending.**
- **I have integrated a web user interface (WebUI) that allows for better execution of Linly-Talker.**

**2024.04 Update** 📆

- **Updated the offline mode for Paddle TTS, excluding Edge TTS.**
- **Updated ER-NeRF as one of the choices for avatar generation.**
- **Updated app_talk.py to allow for the free upload of voice and images/videos for generation without being based on a dialogue scenario.**



---



<details>

<summary>Content</summary>

<!-- TOC -->

- [Digital Human Intelligent Dialogue System - Linly-Talker: "Interactive Dialogue with Your Virtual Self"](#digital-human-intelligent-dialogue-system---linly-talker-interactive-dialogue-with-your-virtual-self)
  - [Introduction](#introduction)
  - [TO DO LIST](#to-do-list)
  - [Example](#example)
  - [Setup Environment](#setup-environment)
  - [ASR - Speech Recognition](#asr---speech-recognition)
    - [Whisper](#whisper)
    - [FunASR](#funasr)
  - [TTS - Text To Speech](#tts---text-to-speech)
    - [Edge TTS](#edge-tts)
    - [PaddleTTS](#paddletts)
  - [Voice Clone](#voice-clone)
    - [GPT-SoVITS (Recommended)](#gpt-sovits-recommended)
    - [XTTS](#xtts)
  - [THG - Avatar](#thg---avatar)
    - [SadTalker](#sadtalker)
    - [Wav2Lip](#wav2lip)
    - [ER-NeRF (Coming Soon)](#er-nerf-coming-soon)
  - [LLM - Conversation](#llm---conversation)
    - [Linly-AI](#linly-ai)
    - [Qwen](#qwen)
    - [Gemini-Pro](#gemini-pro)
    - [LLM Model Selection](#llm-model-selection)
  - [Optimizations](#optimizations)
  - [Gradio](#gradio)
  - [Start WebUI](#start-webui)
  - [Folder structure](#folder-structure)
  - [Reference](#reference)
  - [Star History](#star-history)

<!-- /TOC -->

</details>





## Introduction



Linly-Talker is an innovative digital human conversation system that integrates the latest artificial intelligence technologies, including Large Language Models (LLM) 🤖, Automatic Speech Recognition (ASR) 🎙️, Text-to-Speech (TTS) 🗣️, and voice cloning technology 🎤. This system offers an interactive web interface through the Gradio platform 🌐, allowing users to upload images 📷 and engage in personalized dialogues with AI 💬.



The core features of the system include:



1. **Multi-Model Integration**: Linly-Talker combines large language models such as Linly, GeminiPro, and Qwen with models like Whisper (speech recognition) and SadTalker (talking-head generation) to achieve high-quality dialogue and visual generation.

2. **Multi-Turn Conversational Ability**: Through the multi-turn dialogue system powered by GPT models, Linly-Talker can understand and maintain contextually relevant and coherent conversations, significantly enhancing the authenticity of the interaction.

3. **Voice Cloning**: Utilizing technologies like GPT-SoVITS, users can upload a one-minute voice sample for fine-tuning, and the system will clone the user's voice, enabling the digital human to converse in the user's voice.

4. **Real-Time Interaction**: The system supports real-time speech recognition and video captioning, allowing users to communicate naturally with the digital human via voice.

5. **Visual Enhancement**: With digital human generation technologies, Linly-Talker can create realistic digital human avatars, providing a more immersive experience.



The design philosophy of Linly-Talker is to create a new form of human-computer interaction that goes beyond simple Q&A. By integrating advanced technologies, it offers an intelligent digital human capable of understanding, responding to, and simulating human communication.



![The system architecture of multimodal human-computer interaction.](docs/HOI_en.png)



> You can watch the demo video [here](https://www.bilibili.com/video/BV1rN4y1a76x/).

>

> I have recorded a series of videos on Bilibili, which also represent every step of my updates and methods of use. For detailed information, please refer to [Digital Human Dialogue System - Linly-Talker Collection](https://space.bilibili.com/241286257/channel/collectiondetail?sid=2065753).

>

> - [🔥🔥🔥 Digital Human Dialogue System Linly-Talker 🔥🔥🔥](https://www.bilibili.com/video/BV1rN4y1a76x/)

> - [🚀 The Future of Digital Humans: The Empowerment Path of Linly-Talker + GPT-SoVITS Voice Cloning Technology](https://www.bilibili.com/video/BV1S4421A7gh/)

> - [Deploying Linly-Talker on AutoDL Platform (Super Detailed Tutorial for Beginners)](https://www.bilibili.com/video/BV1uT421m74z/)

> - [Linly-Talker Update: Offline TTS Integration and Customized Digital Human Solutions](https://www.bilibili.com/video/BV1Mr421u7NN/)



## TO DO LIST



- [x] Completed the basic conversation system flow, capable of `voice interactions`.

- [x] Integrated the LLM large model, including the usage of `Linly`, `Qwen`, and `GeminiPro`.

- [x] Enabled the ability to upload `any digital person's photo` for conversation.

- [x] Integrated `FastAPI` invocation for Linly.

- [x] Utilized Microsoft `TTS` with advanced options, allowing customization of voice and tone parameters to enhance audio diversity.

- [x] `Added subtitles` to video generation for improved visualization.

- [x] GPT `Multi-turn Dialogue System` (Enhance the interactivity and realism of digital entities, bolstering their intelligence)

- [x] Optimized the Gradio interface by incorporating additional models such as Wav2Lip, FunASR, and others.

- [x] `Voice Cloning` Technology (Synthesize one's own voice using voice cloning to enhance the realism and interactive experience of digital entities)

- [x] Integrate offline TTS (Text-to-Speech) along with NeRF-based methods and models.

- [ ] `Real-time` Speech Recognition (Enable conversation and communication between humans and digital entities using voice)



🔆 The Linly-Talker project is ongoing - pull requests are welcome! If you have any suggestions regarding new model approaches, research, techniques, or if you discover any runtime errors, please feel free to edit and submit a pull request. You can also open an issue or contact me directly via email. 📩⭐ If you find this repository useful, please give it a star! 🤩



> If you encounter any issues during deployment, please consult the [Common Issues Summary](https://github.com/Kedreamix/Linly-Talker/blob/main/常见问题汇总.md) section, where I have compiled a list of all potential problems. Additionally, a discussion group is available here, and I will provide regular updates. Thank you for your attention and use of Linly-Talker!



## Example



|                    Text/Voice Dialogue                    |                  Digital Human Response                   |
| :-------------------------------------------------------: | :-------------------------------------------------------: |
| What is the most effective way to cope with stress? | <video src="https://github.com/Kedreamix/Linly-Talker/assets/61195303/f1deb189-b682-4175-9dea-7eeb0fb392ca"></video> |
| How do you manage your time? | <video src="https://github.com/Kedreamix/Linly-Talker/assets/61195303/968b5c43-4dce-484b-b6c6-0fd4d621ac03"></video> |
| Write a review of a symphony concert, discussing the orchestra's performance and the overall audience experience. | <video src="https://github.com/Kedreamix/Linly-Talker/assets/61195303/f052820f-6511-4cf0-a383-daf8402630db"></video> |
| Translate into Chinese: Luck is a dividend of sweat. The more you sweat, the luckier you get. | <video src="https://github.com/Kedreamix/Linly-Talker/assets/61195303/118eec13-a9f7-4c38-b4ad-044d36ba9776"></video> |



## Setup Environment



To install the environment using Anaconda and PyTorch, follow the steps below:



```bash
conda create -n linly python=3.10
conda activate linly

# PyTorch Installation Method 1: Conda Installation (Recommended)
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

# PyTorch Installation Method 2: Pip Installation
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install -q ffmpeg # ffmpeg==4.2.2

pip install -r requirements_app.txt
```



If you want to use features such as voice cloning, you will need a newer version of PyTorch built against CUDA 11.8, which also enables a wider range of functionality; choose the setup that matches your driver.



```bash
conda create -n linly python=3.10
conda activate linly

pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118

conda install -q ffmpeg # ffmpeg==4.2.2

pip install -r requirements_app.txt

# Install dependencies for voice cloning
pip install -r VITS/requirements_gptsovits.txt
```



If you wish to use NeRF-based models, you may need to set up the corresponding environment:



```bash
# Install dependencies for NeRF
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install -r TFG/requirements_nerf.txt

# If there are issues with PyAudio, install the corresponding system dependencies
# sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0

# Note the following modules. If installation is unsuccessful, navigate to the path
# and use `pip install .` or `python setup.py install` to compile and install:
# NeRF/freqencoder
# NeRF/gridencoder
# NeRF/raymarching
# NeRF/shencoder
```



If you are using PaddleTTS, you can set up the corresponding environment with:



```bash
pip install -r TTS/requirements_paddle.txt
```



Next, you need to download the corresponding model weights using one of the following methods. Once downloaded, place the files in the folder structure explained at the end of this document.



- [Baidu (百度云盘)](https://pan.baidu.com/s/1eF13O-8wyw4B3MtesctQyg?pwd=linl) (Password: `linl`)

- [huggingface](https://huggingface.co./Kedreamix/Linly-Talker)

- [modelscope](https://www.modelscope.cn/models/Kedreamix/Linly-Talker/summary)



**HuggingFace Download**

If the download speed is too slow, consider using a mirror site. For more information, refer to [Efficiently Obtain Hugging Face Models Using Mirror Sites](https://kedreamix.github.io/2024/01/05/Note/HuggingFace/?highlight=镜像).

```bash
# Download pre-trained models from Hugging Face
git lfs install
git clone https://huggingface.co./Kedreamix/Linly-Talker
```

**ModelScope Download**

```bash
# Download pre-trained models from ModelScope
# 1. Git method
git lfs install
git clone https://www.modelscope.cn/Kedreamix/Linly-Talker.git

# 2. Python SDK method
pip install modelscope
```

```python
from modelscope import snapshot_download

model_dir = snapshot_download('Kedreamix/Linly-Talker')
```

**Move All Models to the Current Directory**

If you downloaded from Baidu Netdisk, you can refer to the directory structure at the end of the document to move the models.

```bash
# Move all models to the current directory
# Checkpoints contain SadTalker and Wav2Lip weights
mv Linly-Talker/checkpoints/* ./checkpoints

# Enhanced GFPGAN for SadTalker
# pip install gfpgan
# mv Linly-Talker/gfpgan ./

# Voice cloning models
mv Linly-Talker/GPT_SoVITS/pretrained_models/* ./GPT_SoVITS/pretrained_models/

# Qwen large model
mv Linly-Talker/Qwen ./
```

For the convenience of deployment and usage, a `configs.py` file has been provided; you can modify some of its hyperparameters for customization:

```python
# Device running port
port = 7870

# API running port and IP
# Use 127.0.0.1 for localhost only; use "0.0.0.0" for global port forwarding
ip = '127.0.0.1'
api_port = 7871

# Linly model configuration
# mode = 'api'  # For 'api' mode, Linly-api-fast.py must be run first
mode = 'offline'
model_path = 'Linly-AI/Chinese-LLaMA-2-7B-hf'

# SSL certificate (required for microphone interaction)
# Preferably an absolute path
ssl_certfile = "./https_cert/cert.pem"
ssl_keyfile = "./https_cert/key.pem"
```

This file allows you to adjust parameters such as the device running port, API running port, Linly model path, and SSL certificate paths for ease of deployment and configuration.

## ASR - Speech Recognition

For detailed information about the usage and code implementation of Automatic Speech Recognition (ASR), please refer to [ASR - Bridging the Gap with Digital Humans](./ASR/README.md).

### Whisper

To implement ASR (Automatic Speech Recognition) using OpenAI's Whisper, you can refer to the specific usage methods provided in the GitHub repository: [https://github.com/openai/whisper](https://github.com/openai/whisper)
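
As a quick illustration, a minimal transcription sketch with the `openai-whisper` package might look like the following (the audio filename is a placeholder):

```python
# pip install -U openai-whisper
import whisper

# Load a small multilingual model; larger variants ("medium", "large") are more accurate but slower.
model = whisper.load_model("base")

# Transcribe an audio file (placeholder path); Whisper auto-detects the language.
result = model.transcribe("question_audio.wav")
print(result["text"])
```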



### FunASR

The speech recognition performance of Alibaba's FunASR is quite impressive, and it actually outperforms Whisper for Chinese. Additionally, FunASR can run in real time, making it a great choice. You can experience FunASR via the FunASR file in the ASR folder. Please refer to [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR) for more information.
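
For reference, a minimal FunASR sketch might look like this (assuming a recent `funasr` package with the `AutoModel` interface and the Paraformer Chinese model; the file path is a placeholder):

```python
# pip install -U funasr
from funasr import AutoModel

# Paraformer is FunASR's non-autoregressive recognizer, which is what makes it fast.
model = AutoModel(model="paraformer-zh")

# Transcribe an audio file (placeholder path).
result = model.generate(input="question_audio.wav")
print(result[0]["text"])
```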



## TTS - Text To Speech

For detailed information about the usage and code implementation of Text-to-Speech (TTS), please refer to [TTS - Empowering Digital Humans with Natural Speech Interaction](./TTS/README.md).

### Edge TTS

To use Microsoft Edge's online text-to-speech service from Python without needing Microsoft Edge or Windows or an API key, you can refer to the GitHub repository at [https://github.com/rany2/edge-tts](https://github.com/rany2/edge-tts). It provides a Python module called "edge-tts" that allows you to utilize the service. You can find detailed installation instructions and usage examples in the repository's README file.
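
A minimal sketch with the `edge-tts` module (the voice name and output path below are illustrative):

```python
# pip install edge-tts
import asyncio

import edge_tts

async def main() -> None:
    # Synthesize text with a chosen neural voice and save it as an MP3 file.
    communicate = edge_tts.Communicate("Hello, I am your digital human.", "en-US-AriaNeural")
    await communicate.save("answer.mp3")

asyncio.run(main())
```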

### PaddleTTS

In practical use, there may be scenarios that require offline operation. Since Edge TTS requires an online environment to generate speech, we have chosen PaddleSpeech, another open-source alternative, for Text-to-Speech (TTS). Although there might be some differences in the quality, PaddleSpeech supports offline operations. For more information, you can refer to the GitHub page of PaddleSpeech: [https://github.com/PaddlePaddle/PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech).
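
For an offline sketch, PaddleSpeech exposes a CLI-style executor in Python (the text and output path below are placeholders, following the project's published examples):

```python
# pip install paddlepaddle paddlespeech
from paddlespeech.cli.tts.infer import TTSExecutor

tts = TTSExecutor()

# Synthesize Chinese speech fully offline once the models have been downloaded.
tts(text="欢迎使用数字人对话系统", output="answer.wav")
```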



## Voice Clone

For detailed information about the usage and code implementation of Voice Clone, please refer to [Voice Clone - Stealing Your Voice Quietly During Conversations](./VITS/README.md).

### GPT-SoVITS (Recommended)

Thanks to the authors for their open-source contribution: the `GPT-SoVITS` voice cloning model is quite impressive. You can find the project at [https://github.com/RVC-Boss/GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS).



### XTTS

Coqui XTTS is a leading deep learning toolkit for Text-to-Speech (TTS) tasks, allowing for voice cloning and voice transfer to different languages using a 5-second or longer audio clip.

๐Ÿธ TTS is a library for advanced text-to-speech generation.

๐Ÿš€ Over 1100 pre-trained models for various languages.

๐Ÿ› ๏ธ Tools for training new models and fine-tuning existing models in any language.

๐Ÿ“š Utility programs for dataset analysis and management.

- Experience XTTS online [https://huggingface.co./spaces/coqui/xtts](https://huggingface.co./spaces/coqui/xtts)
- Official GitHub repository: [https://github.com/coqui-ai/TTS](https://github.com/coqui-ai/TTS)
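
A hedged sketch of XTTS-based voice cloning with the Coqui `TTS` package (the model id follows Coqui's published naming; the reference clip and output paths are placeholders):

```python
# pip install TTS
from TTS.api import TTS

# Load the multilingual XTTS v2 model (downloads weights on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice in `speaker_wav` (a short reference clip) and speak the given text.
tts.tts_to_file(
    text="Hello, this is my cloned voice.",
    speaker_wav="reference_clip.wav",
    language="en",
    file_path="cloned_output.wav",
)
```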



## THG - Avatar

Detailed information about the usage and code implementation of digital human generation can be found in [THG - Building Intelligent Digital Humans](./TFG/README.md).

### SadTalker

Digital persona generation can utilize SadTalker (CVPR 2023). For detailed information, please visit [https://sadtalker.github.io](https://sadtalker.github.io).

Before usage, download the SadTalker model:

```bash
bash scripts/sadtalker_download_models.sh
```

[Baidu (百度云盘)](https://pan.baidu.com/s/1eF13O-8wyw4B3MtesctQyg?pwd=linl) (Password: `linl`)

> If downloading from Baidu Cloud, remember to place it in the `checkpoints` folder. The model downloaded from Baidu Cloud is named `sadtalker` by default, but it should be renamed to `checkpoints`.
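
Once the weights are in place, a typical standalone invocation following the upstream SadTalker repository's CLI looks like the sketch below (the image and audio paths are placeholders):

```bash
# Animate a portrait image with a driving audio clip (upstream SadTalker CLI).
python inference.py \
    --driven_audio examples/driven_audio/answer.wav \
    --source_image examples/source_image/portrait.png \
    --result_dir ./results \
    --enhancer gfpgan
```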

### Wav2Lip

Digital persona generation can also utilize Wav2Lip (ACM 2020). For detailed information, refer to [https://github.com/Rudrabha/Wav2Lip](https://github.com/Rudrabha/Wav2Lip).

Before usage, download the Wav2Lip model:

| Model                        | Description                                           | Link to the model                                            |
| ---------------------------- | ----------------------------------------------------- | ------------------------------------------------------------ |
| Wav2Lip                      | Highly accurate lip-sync                              | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN                | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator         | Weights of the expert discriminator                   | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup     | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |
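
With `wav2lip_gan.pth` under `checkpoints/`, a typical standalone invocation following the upstream Wav2Lip repository's CLI is sketched below (the face and audio paths are placeholders):

```bash
# Lip-sync a face image/video to an audio clip (upstream Wav2Lip CLI).
python inference.py \
    --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face examples/face.mp4 \
    --audio examples/answer.wav
```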



### ER-NeRF (Coming Soon)

ER-NeRF (ICCV 2023) is a digital human built using the latest NeRF technology. It allows for the customization of digital characters and can reconstruct them using just a five-minute video of a person. For more details, please refer to [https://github.com/Fictionarry/ER-NeRF](https://github.com/Fictionarry/ER-NeRF).

Support for ER-NeRF has been added in app_talk.py. If better results are desired, consider cloning and customizing the digital human's voice to achieve improved effects.







## LLM - Conversation



For detailed information about the usage and code implementation of Large Language Models (LLM), please refer to [LLM - Empowering Digital Humans with Powerful Language Models](./LLM/README.md).



### Linly-AI



Linly-AI is a large language model developed by CVI at Shenzhen University. You can find more information about Linly-AI on its GitHub repository: https://github.com/CVI-SZU/Linly



Download Linly models: [https://huggingface.co./Linly-AI/Chinese-LLaMA-2-7B-hf](https://huggingface.co./Linly-AI/Chinese-LLaMA-2-7B-hf)



You can use `git` to download:



```bash
git lfs install
git clone https://huggingface.co./Linly-AI/Chinese-LLaMA-2-7B-hf
```



Alternatively, you can use the `huggingface` download tool `huggingface-cli`:



```bash
pip install -U huggingface_hub

# Set up mirror acceleration
# Linux
export HF_ENDPOINT="https://hf-mirror.com"
# Windows PowerShell
$env:HF_ENDPOINT="https://hf-mirror.com"

huggingface-cli download --resume-download Linly-AI/Chinese-LLaMA-2-7B-hf --local-dir Linly-AI/Chinese-LLaMA-2-7B-hf
```
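
Once downloaded, a minimal generation sketch with Hugging Face `transformers` might look like the following (the prompt is illustrative; see the LLM README for the project's actual wrapper and prompt format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Linly-AI/Chinese-LLaMA-2-7B-hf"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)

# Generate a short reply to a user prompt.
inputs = tokenizer("What is the most effective way to cope with stress?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```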







### Qwen



Qwen is an AI model developed by Alibaba Cloud. You can check out the GitHub repository for Qwen here: https://github.com/QwenLM/Qwen



If you want to quickly use Qwen, you can choose the 1.8B model, which has fewer parameters and can run smoothly even with limited GPU memory. Of course, this part can be replaced with other options.



You can download the Qwen 1.8B model from this link: https://huggingface.co./Qwen/Qwen-1_8B-Chat



You can use `git` to download:



```bash
git lfs install
git clone https://huggingface.co./Qwen/Qwen-1_8B-Chat
```

Alternatively, you can use the `huggingface` download tool `huggingface-cli`:

```bash
pip install -U huggingface_hub

# Set up mirror acceleration
# Linux
export HF_ENDPOINT="https://hf-mirror.com"
# Windows PowerShell
$env:HF_ENDPOINT="https://hf-mirror.com"

huggingface-cli download --resume-download Qwen/Qwen-1_8B-Chat --local-dir Qwen/Qwen-1_8B-Chat
```
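
A minimal chat sketch using Qwen's own `chat` helper via `transformers`, following the model card's usage (`trust_remote_code` is required because Qwen ships custom modeling code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Qwen/Qwen-1_8B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, device_map="auto", trust_remote_code=True
).eval()

# Multi-turn dialogue: pass the returned history back in on the next turn.
response, history = model.chat(tokenizer, "How do you manage your time?", history=None)
print(response)
```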



### Gemini-Pro

Gemini-Pro is an AI model developed by Google. To learn more about Gemini-Pro, you can visit their website: https://deepmind.google/technologies/gemini/

If you want to request an API key for Gemini-Pro, you can visit this link: https://makersuite.google.com/
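
A minimal sketch with Google's `google-generativeai` package (the API key is a placeholder you must supply):

```python
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder: your own key from Google AI Studio
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Write a review of a symphony concert.")
print(response.text)
```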



### LLM Model Selection

In the `app.py` file, you can tailor your model choice with ease.

```python
# Uncomment and configure the model of your choice:
# llm = LLM(mode='offline').init_model('Linly', 'Linly-AI/Chinese-LLaMA-2-7B-hf')
# llm = LLM(mode='offline').init_model('Gemini', 'gemini-pro', api_key="your api key")
# llm = LLM(mode='offline').init_model('Qwen', 'Qwen/Qwen-1_8B-Chat')

# Manual download with a specific path
llm = LLM(mode=mode).init_model('Qwen', model_path)
```



## Optimizations

Some optimizations:

- Use fixed input face images and extract their features in advance to avoid re-reading them on every request
- Remove unnecessary libraries to reduce total time
- Save only the final video output, not intermediate results, to improve performance
- Use OpenCV to generate the final video instead of `mimwrite` for a faster runtime (see the sketch below)
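
As an illustration of the last point, a minimal OpenCV-based writer might look like this (the frame data and output path are placeholders):

```python
import cv2
import numpy as np

def write_video(frames: list[np.ndarray], path: str, fps: int = 25) -> None:
    """Write BGR frames to an MP4 file with OpenCV's VideoWriter."""
    height, width = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(path, fourcc, fps, (width, height))
    for frame in frames:
        writer.write(frame)
    writer.release()

# Example: two seconds of black frames at 25 fps (placeholder data).
write_video([np.zeros((256, 256, 3), dtype=np.uint8)] * 50, "output.mp4")
```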

## Gradio

Gradio is a Python library that provides an easy way to deploy machine learning models as interactive web apps. 

For Linly-Talker, Gradio serves two main purposes:

1. **Visualization & Demo**: Gradio provides a simple web GUI for the model, allowing users to see the results intuitively by uploading an image and entering text. This is an effective way to showcase the capabilities of the system.

2. **User Interaction**: The Gradio GUI can serve as a frontend to allow end users to interact with Linly-Talker. Users can upload their own images and ask arbitrary questions or have conversations to get real-time responses. This provides a more natural speech interaction method.

Specifically, we create a Gradio Interface in app.py that takes image and text inputs, calls our function to generate the response video, and displays it in the GUI. This enables browser-based interaction without needing to build a complex frontend.
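
A stripped-down sketch of that pattern (the `generate_video` function below is a placeholder standing in for the project's actual pipeline):

```python
import gradio as gr

def generate_video(image, text):
    # Placeholder for the real pipeline: LLM answer -> TTS -> talking-head video.
    # It should return a path to the generated video file.
    raise NotImplementedError

demo = gr.Interface(
    fn=generate_video,
    inputs=[gr.Image(type="filepath"), gr.Textbox(label="Question")],
    outputs=gr.Video(),
    title="Linly-Talker",
)
demo.launch()
```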

In summary, Gradio provides visualization and user interaction interfaces for Linly-Talker, serving as effective means for showcasing system capabilities and enabling end users.

## Start WebUI

Previously, I maintained many separate versions, but running multiple versions became cumbersome. Therefore, I have added a WebUI feature to provide a single interface for a seamless experience. I will continue to update it in the future.

The current features available in the WebUI are as follows:

- [x] Text/Voice-based dialogue with virtual characters (fixed characters with male and female roles)
- [x] Dialogue with virtual characters using any image (you can upload any character image)
- [x] Multi-turn GPT dialogue (incorporating historical dialogue data to maintain context)
- [x] Voice cloning dialogue (based on GPT-SoVITS settings for voice cloning, including a built-in smoky voice that can be cloned based on the voice of the dialogue)

```bash
# WebUI
python webui.py
```

![](docs/WebUI.png)

There are three modes for the current startup, and you can choose a specific setting based on the scenario.

The first mode involves fixed Q&A with a predefined character, eliminating preprocessing time.

```bash
python app.py
```

![](docs/UI.png)

The first mode has recently been updated to include the Wav2Lip model for dialogue.

```bash
python appv2.py
```



The second mode allows for conversing with any uploaded image.

```bash
python app_img.py
```

![](docs/UI2.png)

The third mode builds upon the first one by incorporating a large language model for multi-turn GPT conversations.

```bash
python app_multi.py
```

![](docs/UI3.png)

Voice cloning support has now been added, allowing you to freely switch between cloned voice models and corresponding character images. Here, I have chosen a deep, smoky voice and an image of a male character.

```bash
python app_vits.py
```

A fourth mode has been added, which is not tied to a specific dialogue scenario. Instead, it allows direct input of voice, or generation of voice, for the creation of a digital human. It incorporates methods such as SadTalker, Wav2Lip, and ER-NeRF.

> ER-NeRF is trained on videos of a single individual, so a specific model needs to be replaced to render and obtain the correct results. It comes with pre-installed weights for Obama, which can be used directly with the following command:

```bash
python app_talk.py
```

![](docs/UI4.png)



## Folder structure

The folder structure of the weight files is as follows:

- `Baidu (百度云盘)`: You can download the weights from [here](https://pan.baidu.com/s/1eF13O-8wyw4B3MtesctQyg?pwd=linl) (Password: `linl`).
- `huggingface`: You can access the weights at [this link](https://huggingface.co./Kedreamix/Linly-Talker).
- `modelscope`: The weights will be available soon at [this link](https://www.modelscope.cn/models/Kedreamix/Linly-Talker/files).

```bash
Linly-Talker/
├── checkpoints
│   ├── hub
│   │   └── checkpoints
│   │       └── s3fd-619a316812.pth
│   ├── lipsync_expert.pth
│   ├── mapping_00109-model.pth.tar
│   ├── mapping_00229-model.pth.tar
│   ├── SadTalker_V0.0.2_256.safetensors
│   ├── visual_quality_disc.pth
│   ├── wav2lip_gan.pth
│   └── wav2lip.pth
├── gfpgan
│   └── weights
│       ├── alignment_WFLW_4HG.pth
│       └── detection_Resnet50_Final.pth
├── GPT_SoVITS
│   └── pretrained_models
│       ├── chinese-hubert-base
│       │   ├── config.json
│       │   ├── preprocessor_config.json
│       │   └── pytorch_model.bin
│       ├── chinese-roberta-wwm-ext-large
│       │   ├── config.json
│       │   ├── pytorch_model.bin
│       │   └── tokenizer.json
│       ├── README.md
│       ├── s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
│       ├── s2D488k.pth
│       ├── s2G488k.pth
│       └── speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
├── Qwen
│   └── Qwen-1_8B-Chat
│       ├── assets
│       │   ├── logo.jpg
│       │   ├── qwen_tokenizer.png
│       │   ├── react_showcase_001.png
│       │   ├── react_showcase_002.png
│       │   └── wechat.png
│       ├── cache_autogptq_cuda_256.cpp
│       ├── cache_autogptq_cuda_kernel_256.cu
│       ├── config.json
│       ├── configuration_qwen.py
│       ├── cpp_kernels.py
│       ├── examples
│       │   └── react_prompt.md
│       ├── generation_config.json
│       ├── LICENSE
│       ├── model-00001-of-00002.safetensors
│       ├── model-00002-of-00002.safetensors
│       ├── modeling_qwen.py
│       ├── model.safetensors.index.json
│       ├── NOTICE
│       ├── qwen_generation_utils.py
│       ├── qwen.tiktoken
│       ├── README.md
│       ├── tokenization_qwen.py
│       └── tokenizer_config.json
└── README.md
```



## Reference

**ASR**

- [https://github.com/openai/whisper](https://github.com/openai/whisper)
- [https://github.com/alibaba-damo-academy/FunASR](https://github.com/alibaba-damo-academy/FunASR)

**TTS**

- [https://github.com/rany2/edge-tts](https://github.com/rany2/edge-tts)  

**LLM**

- [https://github.com/CVI-SZU/Linly](https://github.com/CVI-SZU/Linly)
- [https://github.com/QwenLM/Qwen](https://github.com/QwenLM/Qwen)
- [https://deepmind.google/technologies/gemini/](https://deepmind.google/technologies/gemini/)

**THG**

- [https://github.com/OpenTalker/SadTalker](https://github.com/OpenTalker/SadTalker)
- [https://github.com/Rudrabha/Wav2Lip](https://github.com/Rudrabha/Wav2Lip)

**Voice Clone**

- [https://github.com/RVC-Boss/GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS)
- [https://github.com/coqui-ai/TTS](https://github.com/coqui-ai/TTS)

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=Kedreamix/Linly-Talker&type=Date)](https://star-history.com/#Kedreamix/Linly-Talker&Date)