---
tags:
- llamafile
- GGUF
base_model: lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF
---

## llama-3-8b-llamafile-q8-nonAVX

llamafile lets you distribute and run LLMs with a single file. See the [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/) for details.

#### Downloads

- [Meta-Llama-3-8B-Instruct-Q8_0.llamafile](https://huggingface.co./blueprintninja/llama-3-8b-llamafile-q8-nonAVX/resolve/main/Meta-Llama-3-8B-Instruct-Q8_0.llamafile)

This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder).
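
#### Quickstart (sketch)

The snippet below is a minimal sketch of the usual llamafile workflow: download the file linked above, mark it executable, and run it. The URL and filename come from the Downloads list; the assumption of a Unix-like system and of llamafile's default behavior of starting a local chat server is not verified here.

```python
import os
import stat
import subprocess
import urllib.request

# URL and filename taken from the Downloads section above.
URL = ("https://huggingface.co./blueprintninja/llama-3-8b-llamafile-q8-nonAVX/"
       "resolve/main/Meta-Llama-3-8B-Instruct-Q8_0.llamafile")
PATH = "Meta-Llama-3-8B-Instruct-Q8_0.llamafile"

# Download once (large file); skip if it is already present.
if not os.path.exists(PATH):
    urllib.request.urlretrieve(URL, PATH)

# Mark the llamafile executable (Unix-like systems), then launch it.
# By default a llamafile is expected to start a local chat UI/server,
# but flags and ports may vary; see the llamafile docs for specifics.
os.chmod(PATH, os.stat(PATH).st_mode | stat.S_IXUSR)
subprocess.run([f"./{PATH}"])
```

On Windows, the equivalent is renaming the file to end in `.exe` and running it directly, per the llamafile project documentation.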