---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- udkai/Turdus
---

# Marcoroni-7b-DPO-Merge

Marcoroni-7b-DPO-Merge is a TIES merge of the following models onto the base model [madatnlp/marcoroni-7b-v3-safetensor](https://huggingface.co./madatnlp/marcoroni-7b-v3-safetensor), built with [mergekit](https://github.com/cg123/mergekit) and inspired by [Maxime Labonne's work](https://medium.com/@mlabonne):
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co./fblgit/UNA-TheBeagle-7b-v1)
* [udkai/Turdus](https://huggingface.co./udkai/Turdus)

## 🧩 Configuration

```yaml
models:
  - model: madatnlp/marcoroni-7b-v3-safetensor
    # no parameters necessary for base model
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.3
      weight: 0.5
  - model: udkai/Turdus
    parameters:
      density: 0.7
      weight: 0.3
merge_method: ties
base_model: madatnlp/marcoroni-7b-v3-safetensor
parameters:
  normalize: true
dtype: float16
```
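With the TIES method, `density` is the fraction of each fine-tune's delta parameters kept after sparsification, `weight` scales that model's contribution, and `normalize: true` rescales the combined task vectors. To reproduce the merge, save the block above as `config.yaml` and run it through mergekit. Below is a minimal sketch using mergekit's Python entry points (equivalent to the `mergekit-yaml config.yaml <out_dir>` CLI; exact names can shift between mergekit versions):

```python
# Sketch: run the TIES merge from config.yaml with mergekit
# (pip install mergekit). The API follows mergekit's README and may
# differ across versions; the output path is just an example.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Marcoroni-7b-DPO-Merge",  # example output directory
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```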


## 💻 Example Python Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "nfaheem/Marcoroni-7b-DPO-Merge"

# device_map="auto" places the weights on the available GPU(s), falling back to CPU.
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"  # unused below: no chat template is documented for this merge
prompt_template = f'''{prompt}
'''

print("\n\n*** Generate:")

# Move the input ids to the model's device instead of hard-coding .cuda(),
# so the snippet also runs on CPU-only machines.
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.to(model.device)
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Inference can also be done using transformers' pipeline.

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```

## 📋 Summary Eval

| Average |  ARC  | HellaSwag |  MMLU  | TruthfulQA | Winogrande | GSM8K |
|---------|-------|-----------|--------|------------|------------|-------|
|  74.9   | 73.04 |   88.8    |  64.24 |    70.47   |    85.24   | 67.63 |
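
These appear to be Open LLM Leaderboard-style scores (the leaderboard runs ARC 25-shot, HellaSwag 10-shot, MMLU 5-shot, TruthfulQA 0-shot, Winogrande 5-shot, and GSM8K 5-shot). For a rough local sanity check, EleutherAI's lm-evaluation-harness can score the model on the same benchmarks; the snippet below is only a sketch, since task names, few-shot settings, and harness version all move the numbers:

```python
# Sketch: approximate the table above with lm-evaluation-harness
# (pip install lm-eval). Defaults here do NOT match the leaderboard's
# per-task few-shot settings, so expect some deviation.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nfaheem/Marcoroni-7b-DPO-Merge,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "winogrande", "gsm8k"],
    batch_size=8,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```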


## 📈 Hugging Face Leaderboard
It ranked #1 on the Hugging Face Open LLM Leaderboard among models of up to ~13B parameters (as of 01/15/2024).

| Model                              | Average | ARC   | HellaSwag | MMLU  | TruthfulQA  | Winogrande  | GSM8K |
| ---------------------------------- | ------- | ----- | --------- | ----- | ----------- | ------------| ----- |
| nfaheem/Marcoroni-7b-DPO-Merge     | 74.9    | 73.04 | 88.8      | 64.24 | 70.47       | 85.24       | 67.63 |
| mlabonne/Beagle14-7b               | 74.76   | 72.95 | 87.95     | 64.7  | 68.38       | 82.64       | 71.42 |
| udkai/Turdus                       | 74.66   | 73.38 | 88.56     | 64.52 | 67.11       | 86.66       | 67.7  |
| CultriX/MergeTrix-7B               | 74.33   | 72.24 | 87.84     | 64.88 | 66.27       | 83.5        | 71.19 |
| fblgit/UNA-TheBeagle-7b-v1         | 73.87   | 73.04 | 88        | 63.48 | 69.85       | 82.16       | 66.72 |


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b6cc785fd617abdfec6bed/0PE-ffmkezG1S6CqScPAv.png)