---
tags:
- llamafile
- GGUF
base_model: QuantFactory/Phi-3-mini-4k-instruct-GGUF
---
## phi-3-mini-llamafile-nonAVX
llamafile lets you distribute and run LLMs with a single file. See the [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/) for details.
#### Downloads
- [Phi-3-mini-4k-instruct.Q4_0.llamafile](https://huggingface.co./blueprintninja/phi-3-mini-llamafile-nonAVX/resolve/main/Phi-3-mini-4k-instruct.Q4_0.llamafile)
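
A minimal quick-start sketch for Linux/macOS, assuming `curl` is available and the file is kept under the name above:

```sh
# Download the llamafile from this repository
curl -L -o Phi-3-mini-4k-instruct.Q4_0.llamafile \
  "https://huggingface.co./blueprintninja/phi-3-mini-llamafile-nonAVX/resolve/main/Phi-3-mini-4k-instruct.Q4_0.llamafile"

# Mark it executable (on Windows, rename the file to add a .exe extension instead)
chmod +x Phi-3-mini-4k-instruct.Q4_0.llamafile

# Run it; by default this should start a local llama.cpp server and open a chat UI in your browser
./Phi-3-mini-4k-instruct.Q4_0.llamafile
```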

This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder).