Test in Python
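A minimal sketch using the llama-cpp-python package is shown below (an assumption; any GGUF-capable runtime works). The GGUF filename is hypothetical and should be replaced with the file you actually downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model filename below is hypothetical -- replace it with the GGUF
# file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./Lucie-7B-Instruct-q4_k_m.gguf",  # hypothetical filename
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU only
)

# Simple chat-style generation
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Quelle est la capitale de la France ?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```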

Test with Ollama

  • Download and install Ollama
  • Download the GGUF model
  • Copy the Modelfile, adapting the path to the GGUF file (the line starting with FROM) if necessary; a minimal sketch is shown after this list.
  • Run in a shell:
    • ollama create -f Modelfile Lucie
    • ollama run Lucie
  • Once ">>>" appears, type your prompt(s) and press Enter.
  • Optionally, restart the conversation by typing "/clear".
  • End the session by typing "/bye".
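For reference, a minimal Modelfile looks like the sketch below. The GGUF path is hypothetical; prefer the Modelfile shipped with the model if one is provided.

```
# Minimal Modelfile sketch -- the GGUF path is hypothetical; point FROM at
# the file you downloaded. Use the Modelfile provided with the model if present.
FROM ./Lucie-7B-Instruct-q4_k_m.gguf

# Optional: default sampling parameters
PARAMETER temperature 0.7
```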

Useful for debugging:
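Assuming the standard Ollama CLI, commands along these lines can help inspect what was created:

```
# List installed models and confirm Lucie was created
ollama list

# Show the Modelfile that Ollama stored for the model
ollama show Lucie --modelfile
```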

GGUF files: llama architecture, 6.71B parameters, with quantizations available in 2-, 3-, 4-, 5-, 6-, 8-, and 16-bit variants.
