---
language:
- en
license: llama3
library_name: transformers
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
datasets:
- arcee-ai/EvolKit-20k
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Llama-3.1-SuperNova-Lite
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 80.17
      name: strict accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.57
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 15.48
      name: exact match
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.49
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.67
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.97
      name: accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=arcee-ai/Llama-3.1-SuperNova-Lite
      name: Open LLM Leaderboard
---

# Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF

This model was converted to GGUF format from [`arcee-ai/Llama-3.1-SuperNova-Lite`](https://huggingface.co./arcee-ai/Llama-3.1-SuperNova-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co./spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co./arcee-ai/Llama-3.1-SuperNova-Lite) for more details on the model.

---

## Model details

Llama-3.1-SuperNova-Lite is an 8B-parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B-parameter variant. This 8B variant of Llama-3.1-SuperNova maintains high performance while offering strong instruction-following capabilities and domain-specific adaptability.

The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.

Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, delivering the power of large-scale models in a compact, efficient form that is well suited to organizations seeking high performance with reduced resource requirements.
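If you want the quantized file on disk directly (for example, to use it with another GGUF-compatible runtime, or to avoid re-downloading it on each run), you can fetch it from this repo with `huggingface-cli`. A minimal sketch, assuming a recent `huggingface_hub` with the CLI extra installed:

```bash
# Download the Q8_0 GGUF file from this repo into the current directory.
# Requires: pip install -U "huggingface_hub[cli]"
huggingface-cli download Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF \
  llama-3.1-supernova-lite-q8_0.gguf --local-dir .
```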
---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF --hf-file llama-3.1-supernova-lite-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF --hf-file llama-3.1-supernova-lite-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF --hf-file llama-3.1-supernova-lite-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Llama-3.1-SuperNova-Lite-Q8_0-GGUF --hf-file llama-3.1-supernova-lite-q8_0.gguf -c 2048
```
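Once the server is running, you can query it over HTTP through its OpenAI-compatible chat endpoint. A minimal sketch with `curl`, assuming the default host and port (adjust the URL if you started `llama-server` with `--host` or `--port`):

```bash
# Send a chat request to the local llama-server instance.
# The server binds to http://localhost:8080 by default; since it serves
# a single model, no "model" field is needed in the request body.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ],
    "temperature": 0.7
  }'
```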