Llama.cpp compatible versions of an original [13B model](https://huggingface.co/IlyaGusev/saiga2_13b_lora).

* Download one of the versions, for example `ggml-model-q4_K.gguf`.
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py).

How to run:
```
sudo apt-get install git-lfs
pip install llama-cpp-python fire

python3 interact_llamacpp.py ggml-model-q4_K.gguf
```

System requirements:

* 18GB RAM for q8_K
* 8GB RAM for q4_K
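As a rough sanity check on these figures, weight memory can be estimated from parameter count times bits per weight. This is a back-of-envelope sketch, not an exact formula: the ~8.5 and ~4.5 bits-per-weight values are approximations for llama.cpp 8-bit and 4-bit quantization schemes, and the fixed 1GB overhead allowance is an assumption; the requirements listed above leave additional headroom for context and buffers.

```python
# Back-of-envelope RAM estimate for a quantized 13B model.
# Bits-per-weight values are approximate for llama.cpp quant schemes.

PARAMS = 13e9  # 13B parameters


def estimate_gb(bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    """Weight memory in GB plus a rough allowance for KV cache/buffers."""
    return PARAMS * bits_per_weight / 8 / 1e9 + overhead_gb


q8 = estimate_gb(8.5)  # ~8.5 bits per weight for 8-bit quantization
q4 = estimate_gb(4.5)  # ~4.5 bits per weight for 4-bit quantization
print(f"q8: ~{q8:.1f} GB, q4: ~{q4:.1f} GB")  # q8: ~14.8 GB, q4: ~8.3 GB
```

Both estimates land in the same ballpark as the listed requirements, with q4_K roughly halving the memory needed relative to q8_K.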