Support for ollama?
I converted this model to a .gguf file (and also tried a .gguf file from another link), but it is not supported in ollama (version 0.3.8):
ollama create: success
ollama run: failed
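For reference, a typical HF-to-GGUF conversion goes through llama.cpp's converter script (a sketch only; the script name and flags match recent llama.cpp checkouts, and the paths are placeholders):
$ git clone https://github.com/ggerganov/llama.cpp
$ pip install -r llama.cpp/requirements.txt
# Convert the Hugging Face checkpoint to a single GGUF file in f16 precision.
$ python llama.cpp/convert_hf_to_gguf.py /path/to/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct --outfile exaone-3.0-7.8b-f16.gguf --outtype f16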
Here are simple guidelines for using the EXAONE model on ollama:
Download the EXAONE model from HuggingFace and save it to /path/to/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct.
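For example, the download can be scripted with the Hugging Face CLI (a sketch; assumes the huggingface_hub package and its CLI are installed):
# Fetch the full checkpoint into the expected local directory.
$ huggingface-cli download LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct --local-dir /path/to/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct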
Llamafy the EXAONE model by referring to one of the following repositories:
- maywell/EXAONE-3.0-7.8B-Instruct-Llamafied
- CarrotAI/EXAONE-3.0-7.8B-Instruct-Llamafied-cpu
Create the EXAONE Modelfile. See https://github.com/ollama/ollama/blob/main/docs/modelfile.md for more information. This is an example of the EXAONE Modelfile:
# Set the base model.
FROM /path/to/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct-Llamafied
# Set the parameter values according to your application.
PARAMETER stop "[|endofturn|]"
PARAMETER num_predict -2
PARAMETER top_k 1
# Set the template.
TEMPLATE """{{ if .System }}[|system|]{{ .System }}[|endofturn|]
{{ end }}{{ if .Prompt }}[|user|]{{ .Prompt }}
{{ end }}[|assistant|]{{ .Response }}[|endofturn|]
"""
# Set the system prompt.
SYSTEM """You are EXAONE model from LG AI Research, a helpful assistant."""
# Set the license.
LICENSE """EXAONE AI Model License Agreement 1.1 - NC """
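For clarity, with this template a request that has a system message and one user turn is rendered into the prompt below; generation then continues after [|assistant|] until the model emits the [|endofturn|] stop token (the user text "Who are you?" is only an illustration):
[|system|]You are EXAONE model from LG AI Research, a helpful assistant.[|endofturn|]
[|user|]Who are you?
[|assistant|]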
- Convert the EXAONE model saved as PyTorch safetensors to ollama. To quantize the EXAONE model, add the --quantize flag; please refer to https://github.com/ollama/ollama/blob/main/docs/import.md for details.
$ ollama create exaone3 -f <the EXAONE Modelfile>
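For example, to quantize to q4_K_M while creating the model (flag per the import docs linked above):
$ ollama create exaone3 --quantize q4_K_M -f <the EXAONE Modelfile>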
Good luck to you.
@yireun
Thank you for your reply. But when I tried it, the following message occurred again:
ollama create: ok
ollama run: failed
"Error: llama runner process has terminated: this model is not supported by your version of Ollama. You may need to upgrade"
I think the llama.cpp library inside ollama should be updated.
Would you check your path again?
According to https://github.com/ollama/ollama/blob/4a8069f9c4c8cb761cd6c10ca5f4be6af21fa0ae/cmd/cmd.go#L222,
the error "Error: no safetensors or torch files found" occurs when ollama cannot find files matching "model*.safetensors".
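As a quick check, listing the model directory should show the sharded weights (the file names below are illustrative; the shard count depends on the checkpoint):
$ ls /path/to/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct-Llamafied
config.json  generation_config.json  model-00001-of-00004.safetensors  ...  model.safetensors.index.json  tokenizer.json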
When I used the maywell/EXAONE-3.0-7.8B-Instruct-Llamafied weights, no errors occurred.
If you use an ollama docker container, these two paths must point to paths inside the container (see the sketch after this list):
- FROM <the EXAONE-Llamafied model path>
- $ ollama create exaone3 -f <the EXAONE Modelfile>
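A sketch of the docker case, assuming the llamafied weights and Modelfile live under /path/to/LGAI-EXAONE on the host and are mounted at /models inside the container (the basic run flags follow ollama's docker instructions):
# Start the ollama container with the model directory mounted.
$ docker run -d --name ollama -v /path/to/LGAI-EXAONE:/models -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
# Both the Modelfile's FROM line and the -f argument must now use /models/... paths.
$ docker exec -it ollama ollama create exaone3 -f /models/EXAONE-3.0-7.8B-Instruct-Llamafied/Modelfile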
Good luck to you.
Thank you for your help!
My mistake was the Modelfile path. I succeeded in loading EXAONE in ollama:
$ ollama create -q q4_K_M EXAONE-3.0 -f EXAONE-3.0-7.8B-Instruct-Llamafied/Modelfile
$ ollama run EXAONE-3.0:latest
안녕하세요 ("Hello")
@yireun
By any chance, is there also a Modelfile (template) that would let me load EXAONE into ollama and use it for tool calling?
Hello @hunie,
The publicly released EXAONE v3.0 does not support the Tool Calling feature.
However, we believe this feature is needed, and we are carrying out research and development to add Tool Calling to the EXAONE model.
Thank you.