---
library_name: transformers
license: llama3
---
# Llama-3-Smaug-2.1-8B
### Built with Meta Llama 3
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f95cac5f9ba52bbcd7f/FRnFDqYCvPkEYqC2cZX9n.png)
This model was built using the Smaug recipe for improving performance on real-world multi-turn conversations, applied to
[meta-llama/Meta-Llama-3-8B](https://huggingface.co./meta-llama/Meta-Llama-3-8B).
### Model Description
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **License:** https://llama.meta.com/llama3/license/
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co./meta-llama/Meta-Llama-3-8B)
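The card lists `library_name: transformers`, so the checkpoint can be loaded with the standard `transformers` API. Below is a minimal usage sketch; the repo id `abacusai/Llama-3-Smaug-2.1-8B` and the presence of a chat template are assumptions, not confirmed by this card.

```python
# Minimal usage sketch (assumed repo id and chat template, see note above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Llama-3-Smaug-2.1-8B"  # assumption: hosting org/name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Multi-turn chat-style prompt, formatted via the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarise the Smaug recipe in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```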
## Evaluation
```
########## First turn ##########
                                    score
model                          turn
llama3-8b-smaug-2-merged-600   1    8.79375
llama3-8b-smaug-2-merged-150   1    8.71250
llama3-8b-smaug-2-merged-300   1    8.66250
base_Meta-Llama-3-8B-Instruct  1    8.53125
llama3-8b-smaug-2-merged-450   1    8.42500

########## Second turn ##########
                                    score
model                          turn
llama3-8b-smaug-2-merged-450   2    7.8125
llama3-8b-smaug-2-merged-300   2    7.7375
llama3-8b-smaug-2-merged-600   2    7.7250
llama3-8b-smaug-2-merged-150   2    7.7125
base_Meta-Llama-3-8B-Instruct  2    7.5500

########## Average ##########
                                  score
model
llama3-8b-smaug-2-merged-600   8.259375
llama3-8b-smaug-2-merged-150   8.212500
llama3-8b-smaug-2-merged-300   8.200000
llama3-8b-smaug-2-merged-450   8.118750
base_Meta-Llama-3-8B-Instruct  8.040625
```
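The "Average" block is simply the mean of each model's first-turn and second-turn scores. A quick check (not part of the original card) that reproduces those numbers from the tables above:

```python
# Recompute the "Average" column from the per-turn scores listed above.
first_turn = {
    "llama3-8b-smaug-2-merged-600": 8.79375,
    "llama3-8b-smaug-2-merged-150": 8.71250,
    "llama3-8b-smaug-2-merged-300": 8.66250,
    "llama3-8b-smaug-2-merged-450": 8.42500,
    "base_Meta-Llama-3-8B-Instruct": 8.53125,
}
second_turn = {
    "llama3-8b-smaug-2-merged-600": 7.7250,
    "llama3-8b-smaug-2-merged-150": 7.7125,
    "llama3-8b-smaug-2-merged-300": 7.7375,
    "llama3-8b-smaug-2-merged-450": 7.8125,
    "base_Meta-Llama-3-8B-Instruct": 7.5500,
}
for name in first_turn:
    avg = (first_turn[name] + second_turn[name]) / 2
    print(f"{name:<30} {avg:.6f}")
# e.g. llama3-8b-smaug-2-merged-600 -> 8.259375, base_Meta-Llama-3-8B-Instruct -> 8.040625
```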