---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- nbeerbower/llama-3-stella-8B
- Hastagaras/llama-3-8b-okay
- nbeerbower/llama-3-gutenberg-8B
- openchat/openchat-3.6-8b-20240522
- Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
- cstr/llama3-8b-spaetzle-v20
- mlabonne/ChimeraLlama-3-8B-v3
- flammenai/Mahou-1.1-llama3-8B
- KingNish/KingNish-Llama3-8b
---
**ExLlamaV2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21.
Other EXL2 quants:
| **Quant (bpw)** | **Model Size** | **lm_head (bits)** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-2_5bpw_exl2)**</center> | <center>3478 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-3_0bpw_exl2)**</center> | <center>3894 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-3_75bpw_exl2)**</center> | <center>4518 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-5_0bpw_exl2)**</center> | <center>5556 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-6_0bpw_exl2)**</center> | <center>6497 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-6_5bpw_exl2)**</center> | <center>6893 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co./Zoyd/mlabonne_Daredevil-8B-8_0bpw_exl2)**</center> | <center>8125 MB</center> | <center>8</center> |
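A minimal sketch of loading one of these quants with the exllamav2 Python API (the local path, prompt, and sampling values are illustrative assumptions; see the ExLlamaV2 repository's examples for the full interface):
```python
# pip install exllamav2
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of one of the quants above (hypothetical path)
config = ExLlamaV2Config()
config.model_dir = "./Daredevil-8B-4_0bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache lazily
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.95

print(generator.generate_simple("What is a large language model?", settings, 256))
```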
# Daredevil-8B
**tl;dr: It looks like a successful merge**
Daredevil-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nbeerbower/llama-3-stella-8B](https://huggingface.co./nbeerbower/llama-3-stella-8B)
* [Hastagaras/llama-3-8b-okay](https://huggingface.co./Hastagaras/llama-3-8b-okay)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co./nbeerbower/llama-3-gutenberg-8B)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co./openchat/openchat-3.6-8b-20240522)
* [Kukedlc/NeuralLLaMa-3-8b-DT-v0.1](https://huggingface.co./Kukedlc/NeuralLLaMa-3-8b-DT-v0.1)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co./cstr/llama3-8b-spaetzle-v20)
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co./mlabonne/ChimeraLlama-3-8B-v3)
* [flammenai/Mahou-1.1-llama3-8B](https://huggingface.co./flammenai/Mahou-1.1-llama3-8B)
* [KingNish/KingNish-Llama3-8b](https://huggingface.co./KingNish/KingNish-Llama3-8b)
## 🔎 Applications
It is a highly capable censored model. You might want to add `<end_of_turn>` as an additional stop string, as sketched below.
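A minimal sketch of wiring in that stop string with `transformers` (assumes transformers >= 4.41, which added `stop_strings` to `generate()`; the prompt and generation settings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mlabonne/Daredevil-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    tokenizer=tokenizer,             # required when stop_strings is set
    stop_strings=["<end_of_turn>"],  # the extra stop string mentioned above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```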
## ⚡ Quantization
* **GGUF**: https://huggingface.co./mlabonne/Daredevil-8B-GGUF
## 🏆 Evaluation
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: |
| [**mlabonne/Daredevil-8B**](https://huggingface.co./mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | **55.87** | **44.13** | **73.52** | **59.05** | **46.77** |
| [mlabonne/ChimeraLlama-3-8B](https://huggingface.co./mlabonne/Chimera-8B) [📄](https://gist.github.com/mlabonne/28d31153628dccf781b74f8071c7c7e4) | 51.58 | 39.12 | 71.81 | 52.4 | 42.98 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co./meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## 🌳 Model family tree
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LplqNg6iXHm_JXfX02Aj1.png)
## 🧩 Configuration
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: nbeerbower/llama-3-stella-8B
    parameters:
      density: 0.6
      weight: 0.16
  - model: Hastagaras/llama-3-8b-okay
    parameters:
      density: 0.56
      weight: 0.1
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.6
      weight: 0.18
  - model: openchat/openchat-3.6-8b-20240522
    parameters:
      density: 0.56
      weight: 0.12
  - model: Kukedlc/NeuralLLaMa-3-8b-DT-v0.1
    parameters:
      density: 0.58
      weight: 0.18
  - model: cstr/llama3-8b-spaetzle-v20
    parameters:
      density: 0.56
      weight: 0.08
  - model: mlabonne/ChimeraLlama-3-8B-v3
    parameters:
      density: 0.56
      weight: 0.08
  - model: flammenai/Mahou-1.1-llama3-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: KingNish/KingNish-Llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
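To reproduce the merge outside the LazyMergekit notebook, one option is the mergekit CLI; the flags below mirror the notebook's defaults and assume the YAML above is saved as `config.yaml`:
```python
!pip install -qU mergekit
# Run the DARE-TIES merge described by config.yaml into ./merge
!mergekit-yaml config.yaml merge --copy-tokenizer --allow-crimes --out-shard-size 1B --lazy-unpickle
```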
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Daredevil-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a half-precision text-generation pipeline spread across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```