tags:
- llama
- chat
- Q3_K_M
- 7b
This model was converted to GGUF format from `NousResearch/Llama-2-7b-chat-hf` and quantized to `Q3_K_M` using the `llama.cpp` library.
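One way to run the quantized file locally is through the `llama-cpp-python` bindings for `llama.cpp`. The snippet below is a minimal sketch, not a definitive recipe: the `.gguf` filename is an assumption and should be replaced with the actual file from this repository, and parameters such as the context size can be tuned as needed.

```python
# Minimal sketch: loading a Q3_K_M GGUF file with the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q3_K_M.gguf",  # hypothetical local path; use the file from this repo
    n_ctx=2048,                                # context window size
)

# Run a simple chat-style completion against the quantized model.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```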