poeroz committed on
Commit
2c21386
1 Parent(s): 9074c64

Update README.md

Files changed (1)
  1. README.md +5 -0
README.md CHANGED
@@ -30,6 +30,9 @@ LLaMA-Omni is a speech-language model built upon Llama-3.1-8B-Instruct. It suppo
 
 ♻️ **Trained in less than 3 days using just 4 GPUs.**
 
+
+<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/65b7573482d384513443875e/dr4XWUxzuVQ52lBuzNBTt.mp4"></video>
+
 ## Install
 
 1. Clone this repository.
@@ -99,6 +102,8 @@ python -m omni_speech.serve.model_worker --host 0.0.0.0 --controller http://loca
 
 4. Visit [http://localhost:8000/](http://localhost:8000/) and interact with LLaMA-3.1-8B-Omni!
 
+**Note: Due to the instability of streaming audio playback in Gradio, we have only implemented streaming audio synthesis without enabling autoplay. If you have a good solution, feel free to submit a PR. Thanks!**
+
 ## Local Inference
 
 To run inference locally, please organize the speech instruction files according to the format in the `omni_speech/infer/examples` directory, then refer to the following script.