Commit 6bbda19
Parent(s): c840bb0
Add link to Q8 checkpoint
README.md CHANGED

@@ -102,7 +102,7 @@ We followed the instructions in the [dpo repo](https://github.com/eric-mitchell/
 
 Please follow these steps to use a quantized version of AmberSafe on your personal computer or laptop:
 
-1. First, install Ollama by following the instructions provided [here](https://github.com/jmorganca/ollama/tree/main?tab=readme-ov-file#ollama). Next, create a quantized version of AmberSafe model (say ambersafe.Q8_0.gguf for 8 bit quantized version) following instructions [here](https://github.com/jmorganca/ollama/blob/main/docs/import.md#manually-converting--quantizing-models).
+1. First, install Ollama by following the instructions provided [here](https://github.com/jmorganca/ollama/tree/main?tab=readme-ov-file#ollama). Next, create a quantized version of AmberSafe model (say ambersafe.Q8_0.gguf for 8 bit quantized version) following instructions [here](https://github.com/jmorganca/ollama/blob/main/docs/import.md#manually-converting--quantizing-models). Alternatively, you can download the 8bit quantized version that we created [ambersafe.Q8_0.gguf](https://huggingface.co/LLM360/AmberSafe/resolve/Q8_0/ambersafe.Q8_0.gguf?download=true)
 
 2. Create an Ollama Modelfile locally using the template provided below:
 ```
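For reference, a minimal sketch of how the linked Q8_0 checkpoint could be used with Ollama, assuming it is saved locally as ambersafe.Q8_0.gguf; the model name "ambersafe" and the one-line Modelfile below are illustrative placeholders, not the template from the README:

    # Download the 8-bit quantized checkpoint linked in the commit above.
    curl -L -o ambersafe.Q8_0.gguf \
      "https://huggingface.co/LLM360/AmberSafe/resolve/Q8_0/ambersafe.Q8_0.gguf?download=true"

    # Minimal Modelfile pointing Ollama at the local GGUF file
    # (the actual template/parameters are given in the README, not reproduced here).
    cat > Modelfile <<'EOF'
    FROM ./ambersafe.Q8_0.gguf
    EOF

    # Register the model with Ollama and start an interactive session.
    ollama create ambersafe -f Modelfile
    ollama run ambersafe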