---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/Marcoro14-7B-slerp
- mlabonne/NeuralBeagle14-7B
---

![Maverick-7B](https://cdn-uploads.huggingface.co/production/uploads/65d1f383351255ba48a4f831/iWFpAuiEEn6NFQpQLb3s8.png)


# Maverick-7B

Maverick-7B is a merge of the following models, made with [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)

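## 💻 Usage

A minimal inference sketch with 🤗 Transformers, assuming the merged weights are hosted on the Hugging Face Hub (the repo id below is a placeholder) and that the model inherits the chat template of its Mistral-based parents:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<username>/Maverick-7B"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights for a 7B model need ~14 GB of VRAM
    device_map="auto",
)

# Build a chat prompt, generate, and decode the completion
messages = [{"role": "user", "content": "Explain what a model merge is."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```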

## 🏆 Evaluation

### TruthfulQA

|    Task     |Version|Metric|Value |   |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc|      1|mc1   |0.5165|±  |0.0175|
|             |       |mc2   |0.6661|±  |0.0152|

### GPT4All

|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.6442|±  |0.0140|
|             |       |acc_norm|0.6570|±  |0.0139|
|arc_easy     |      0|acc     |0.8645|±  |0.0070|
|             |       |acc_norm|0.8304|±  |0.0077|
|boolq        |      1|acc     |0.8850|±  |0.0056|
|hellaswag    |      0|acc     |0.6813|±  |0.0047|
|             |       |acc_norm|0.8571|±  |0.0035|
|openbookqa   |      0|acc     |0.3640|±  |0.0215|
|             |       |acc_norm|0.4800|±  |0.0224|
|piqa         |      0|acc     |0.8324|±  |0.0087|
|             |       |acc_norm|0.8460|±  |0.0084|
|winogrande   |      0|acc     |0.7869|±  |0.0115|

### AGIEval

|             Task             |Version| Metric |Value |   |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |0.2717|±  |0.0280|
|                              |       |acc_norm|0.2559|±  |0.0274|
|agieval_logiqa_en             |      0|acc     |0.3902|±  |0.0191|
|                              |       |acc_norm|0.3856|±  |0.0191|
|agieval_lsat_ar               |      0|acc     |0.2565|±  |0.0289|
|                              |       |acc_norm|0.2478|±  |0.0285|
|agieval_lsat_lr               |      0|acc     |0.5118|±  |0.0222|
|                              |       |acc_norm|0.5216|±  |0.0221|
|agieval_lsat_rc               |      0|acc     |0.6543|±  |0.0291|
|                              |       |acc_norm|0.6506|±  |0.0291|
|agieval_sat_en                |      0|acc     |0.7961|±  |0.0281|
|                              |       |acc_norm|0.8010|±  |0.0279|
|agieval_sat_en_without_passage|      0|acc     |0.4660|±  |0.0348|
|                              |       |acc_norm|0.4709|±  |0.0349|
|agieval_sat_math              |      0|acc     |0.3227|±  |0.0316|
|                              |       |acc_norm|0.3045|±  |0.0311|
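
### Reproducing the results

The tables above follow the output format of EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). A minimal reproduction sketch, assuming the legacy v0.3-style Python API whose task names match the tables (the exact harness version, few-shot settings, and repo id are not documented in this card):

```python
# Hedged sketch: harness version, batch size, and repo id are assumptions.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                               # plain HF causal-LM backend
    model_args="pretrained=<username>/Maverick-7B",  # placeholder repo id
    tasks=["truthfulqa_mc", "arc_challenge", "hellaswag", "winogrande"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)  # e.g. truthfulqa_mc -> {'mc1': ..., 'mc2': ...}
```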