---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
license: apache-2.0
language:
- de
---

# DiscoPhoenix-7B

![image/png](https://huggingface.co./mayflowergmbh/DiscoPhoenix-7B-dpo/resolve/main/german%20phoenix%20discolm.png)

DiscoPhoenix-7B is a DPO-tuned merge of the following models, made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co./DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co./DRXD1000/Phoenix)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co./OpenPipe/mistral-ft-optimized-1227)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.3
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.3
  - model: OpenPipe/mistral-ft-optimized-1227
    parameters:
      density: 0.6
      weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
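
To reproduce the merge locally, the configuration above can be passed to mergekit's `mergekit-yaml` command. A minimal sketch; the file name `config.yaml` and the output directory are assumptions, not part of this card:

```python
# Minimal sketch (illustrative paths): save the YAML above as config.yaml,
# then let mergekit download the source models and write the merged weights.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./DiscoPhoenix-7B
```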

## mt-bench-de results

```json
{
    "first_turn": 7.3354430379746836,
    "second_turn": 6.65,
    "categories": {
        "writing": 8.7,
        "roleplay": 7.605263157894737,
        "reasoning": 5.75,
        "math": 3.3,
        "coding": 5.3,
        "extraction": 7.55,
        "stem": 8.4,
        "humanities": 9.35
    },
    "average": 6.9927215189873415
}
```
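
The reported `average` appears to be the mean of the first- and second-turn scores rather than of the category scores; a quick check (up to floating-point rounding):

```python
# Verify the overall mt-bench-de average from the two turn scores.
first_turn = 7.3354430379746836
second_turn = 6.65
print((first_turn + second_turn) / 2)  # ~6.9927215189873415
```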

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/DiscoPhoenix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model and sample a response
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
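
Since the model is tuned for German, you can also prompt it in German and drive generation directly with `generate()` instead of the pipeline helper. A brief sketch; the German question is only an illustration:

```python
# Variant without the pipeline wrapper (illustrative, not the card's official recipe).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mayflowergmbh/DiscoPhoenix-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Was ist ein großes Sprachmodell?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```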