---
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
---
# Llama-3-11.5B-v2
Thank you to Meta for releasing the Meta-Llama-3-8B weights.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png)
This is an upscaled version of Meta-Llama-3-8B, built using the depth up-scaling technique created for chargoddard/mistral-11b-slimorca. The model has been expanded from 8B to 11.5B parameters without any continued pretraining or fine-tuning.
Unlike version 1, this model has no issues at fp16 or at any quantization level.
The model that was used to create this one is linked below:
https://huggingface.co./meta-llama/Meta-Llama-3-8B
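The depth up-scaling approach mentioned above works by duplicating a range of transformer layers from the base model (a "passthrough" merge), which grows the parameter count with no additional training. The sketch below estimates the resulting size for Llama-3-8B. The slice boundaries (layers 0–23 followed by a repeat of layers 8–31) are an assumption borrowed from mistral-11b-slimorca, not a confirmed detail of this model:

```python
# Hedged sketch: estimate the parameter count of a passthrough
# depth up-scaling merge of Llama-3-8B (32 -> 48 layers).
# The layer-slice plan is an assumption; small norm tensors are omitted.

HIDDEN = 4096          # hidden size of Llama-3-8B
INTERMEDIATE = 14336   # MLP intermediate size
KV_HEADS, HEAD_DIM = 8, 128  # grouped-query attention
VOCAB = 128256

def per_layer_params():
    # q and o projections are HIDDEN x HIDDEN; k and v are smaller under GQA
    attn = 2 * HIDDEN * HIDDEN + 2 * HIDDEN * KV_HEADS * HEAD_DIM
    mlp = 3 * HIDDEN * INTERMEDIATE  # gate, up, down projections
    return attn + mlp

def total_params(n_layers):
    embeddings = 2 * VOCAB * HIDDEN  # input embeddings + untied lm_head
    return embeddings + n_layers * per_layer_params()

# Assumed passthrough plan: keep layers 0-23, then repeat layers 8-31
plan = list(range(0, 24)) + list(range(8, 32))
print(len(plan))                                 # 48 layers after duplication
print(round(total_params(len(plan)) / 1e9, 2))   # ~11.52B parameters
```

With 48 layers this lands at roughly 11.5B parameters, which matches the model's name; the original 32-layer configuration gives about 8B.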
- Llama-3-11.5B-v2 benchmark results:
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.89 |
| AI2 Reasoning Challenge (25-Shot) | 57.68 |
| HellaSwag (10-Shot)               | 78.59 |
| MMLU (5-Shot)                     | 65.39 |
| TruthfulQA (0-shot)               | 35.86 |
| Winogrande (5-shot)               | 74.74 |
| GSM8k (5-shot)                    | 69.37 |
- Original Meta-Llama-3-8B benchmark results:
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 62.87 |
| AI2 Reasoning Challenge (25-Shot) | 59.47 |
| HellaSwag (10-Shot)               | 82.09 |
| MMLU (5-Shot)                     | 66.69 |
| TruthfulQA (0-shot)               | 43.90 |
| Winogrande (5-shot)               | 77.35 |
| GSM8k (5-shot)                    | 45.34 |