This model was converted to GGUF format from [`arcee-ai/Llama-3.1-SuperNova-Lite`](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) for more details on the model.

---

Model details:

Overview

Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.
The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
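A typical invocation follows the standard GGUF-my-repo pattern sketched below. The `<this-repo>` and `<quant-file>` placeholders are assumptions, not values from this card — substitute the actual repo id and the GGUF filename present in this repository:

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run a one-off prompt with the CLI; replace the placeholders
# with this repo's id and its actual quantized GGUF filename.
llama-cli --hf-repo <this-repo> \
  --hf-file <quant-file>.gguf \
  -p "Explain GGUF quantization in one sentence."

# Or serve the model over an OpenAI-compatible HTTP API
llama-server --hf-repo <this-repo> \
  --hf-file <quant-file>.gguf \
  -c 2048
```

`llama-cli` downloads and caches the file from the Hub on first use; `llama-server` exposes the same model for repeated requests without reloading it each time.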