---
license: apache-2.0
---

# Test in Python

* [test_transformers_gguf.py](test_transformers_gguf.py): test the GGUF model with the `transformers` package (WARNING: loading the model can take a long time)
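
For orientation, the test script likely resembles the sketch below. It assumes a recent `transformers` release with GGUF support (plus the `gguf` package) and reuses the repository id and file name from the download link in the Ollama section; treat it as an illustration rather than the exact content of `test_transformers_gguf.py`.

```python
# Sketch of loading the GGUF checkpoint with transformers.
# Assumes: pip install "transformers>=4.41" gguf torch
# Repository id and GGUF filename are taken from the download link in this README.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OpenLLM-France/Lucie-7B-Baby-Instruct"
gguf_file = "Lucie-7B-q4_k_m.gguf"

# The GGUF weights are dequantized at load time, which is why loading is slow.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

inputs = tokenizer("Bonjour, comment vas-tu ?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```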

# Test with Ollama

* Download and install [Ollama](https://ollama.com/download)
* Download the [GGUF model](https://huggingface.co./OpenLLM-France/Lucie-7B-Baby-Instruct/resolve/main/Lucie-7B-q4_k_m.gguf)
* Copy the [`Modelfile`](Modelfile), adapting if necessary the path to the GGUF file (the line starting with `FROM`); a minimal example is sketched after this list.
* Run in a shell:
    * `ollama create -f Modelfile Lucie`
    * `ollama run Lucie`
* Once ">>>" appears, type your prompt(s) and press Enter.
* Optionally, reset the conversation by typing "`/clear`".
* End the session by typing "`/bye`".
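
For reference, a minimal `Modelfile` might look like the sketch below (the actual `Modelfile` shipped in this repository may set additional parameters, such as the chat template); the path after `FROM` assumes the GGUF file was downloaded next to the `Modelfile`.

```
# Hypothetical minimal Modelfile; the one in this repository may differ.
# Adjust the path if the GGUF file was downloaded elsewhere.
FROM ./Lucie-7B-q4_k_m.gguf
```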

Useful for debugging:
* [How to print input requests and output responses in Ollama server?](https://stackoverflow.com/a/78831840)