hellork committed
Commit d5692b8
Parent: fe14c76

Update README.md

Files changed (1): README.md +22 -1
README.md CHANGED
@@ -36,7 +36,28 @@ llama-cli --hf-repo hellork/falcon-mamba-7b-instruct-IQ4_NL-GGUF --hf-file falcon-mamba-7b-instruct-iq4_nl-imat.gguf
  llama-server --hf-repo hellork/falcon-mamba-7b-instruct-IQ4_NL-GGUF --hf-file falcon-mamba-7b-instruct-iq4_nl-imat.gguf -c 2048
  ```
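Once the server is running, a quick smoke test over HTTP confirms the model loaded; a minimal sketch, assuming llama-server's default address of 127.0.0.1:8080 and its OpenAI-compatible chat endpoint:

```bash
# Query the llama-server started above (defaults: --host 127.0.0.1 --port 8080).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello in one sentence."}]}'
```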
 
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+ ### The Ship's Computer:
+
+ [whisper_dictation](https://github.com/themanyone/whisper_dictation)
+
+ Interact with this model by speaking to it. Lean, fast, and private: networked speech-to-text, AI image generation, multi-modal voice chat, and voice control of apps, webcam, and sound, all in under 4 GiB of VRAM.
+
+ ```bash
+ git clone -b main --single-branch https://github.com/themanyone/whisper_dictation.git
+ pip install -r whisper_dictation/requirements.txt
+
+ git clone https://github.com/ggerganov/whisper.cpp
+ cd whisper.cpp
+ GGML_CUDA=1 make -j  # assuming CUDA is available; see the whisper.cpp docs
+ ln -s "$PWD/server" ~/.local/bin/whisper_cpp_server  # absolute link target, placed somewhere in $PATH
+
+ ./models/download-ggml-model.sh tiny.en  # fetch the model the server loads below
+ whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777  # blocks; run the client from a second terminal
+ cd ../whisper_dictation
+ ./whisper_cpp_client.py
+ ```
+ See [the docs](https://github.com/themanyone/whisper_dictation) for tips on integrating with the llama.cpp server, enabling the computer to talk back, drawing AI images, carrying out voice commands, and more.
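Before starting the client, it is worth confirming that the transcription server answers; a minimal sketch, assuming the port 7777 chosen above and a local sample.wav (a placeholder name for any short 16 kHz WAV), posted to whisper.cpp's /inference endpoint:

```bash
# Transcribe a short clip through the whisper.cpp server started above.
# sample.wav is a placeholder; the server expects multipart form data.
curl http://127.0.0.1:7777/inference \
  -F file="@sample.wav" \
  -F response_format="json"
```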
+
+ ### Install Llama.cpp via git:

  Step 1: Clone llama.cpp from GitHub.
  ```