---
license: apache-2.0
language:
- en
- de
pipeline_tag: text-generation
---

![image/png](https://huggingface.co./datasets/malteos/images/resolve/main/occiglot.medium.png)

# Occiglot-7B-DE-EN-Instruct

> A [polyglot](https://en.wikipedia.org/wiki/Multilingualism#In_individuals) language model for the [Occident](https://en.wikipedia.org/wiki/Occident).
> 

**Occiglot-7B-DE-EN-Instruct** is the instruct version of [occiglot-7b-de-en](https://huggingface.co./occiglot/occiglot-7b-de-en), a generative language model with 7B parameters supporting German and English, trained by the [Occiglot Research Collective](https://occiglot.github.io/occiglot/).
It was trained on an additional 180M tokens of multilingual and code instruction data.
Note that the model has not been safety-aligned and might generate problematic outputs.

This is the first release of an ongoing open research project for multilingual language models. 
If you want to train a model for your own language or are working on evaluations, please contact us or join our [Discord server](https://discord.gg/wUpvYs4XvM). **We are open to collaboration!**

*Special thanks go to **[Disco Research](https://huggingface.co./DiscoResearch)**, **[Jan Philipp Harries](https://huggingface.co./jphme)**, and **[Björn Plüster](https://huggingface.co./bjoernp)** for sharing the German dataset with us.*

### Model details

- **Instruction tuned from:** [occiglot-7b-de-en](https://huggingface.co./occiglot/occiglot-7b-de-en)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** English, German, and code.
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Compute resources:** [DFKI cluster](https://www.dfki.de/en/web)
- **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
- **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology)
- **Contact:** [Discord](https://discord.gg/wUpvYs4XvM) 

### How to use

The model was trained with the ChatML instruction template, so you can use the `transformers` chat-template feature for interaction.
Since generation relies on sampling randomness, we set a seed for reproducibility:

```python
>>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
>>> model = MistralForCausalLM.from_pretrained("occiglot/occiglot-7b-de-en-instruct")  # optionally load in bfloat16 to save memory
>>> model = model.to("cuda")  # generation below assumes the model is on GPU
>>> messages = [
...     {"role": "system", "content": "You are a helpful assistant. Please give short and concise answers."},
...     {"role": "user", "content": "Wer ist der deutsche Bundeskanzler?"},  # "Who is the German Federal Chancellor?"
... ]
>>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
>>> set_seed(42)  # fix the sampling seed for reproducibility
>>> outputs = model.generate(tokenized_chat.to("cuda"), max_new_tokens=200)
>>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):])
'Der deutsche Bundeskanzler ist Olaf Scholz.'
```
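
Because the prompt format is ChatML, you can also render the template to plain text instead of token IDs, e.g., to inspect exactly what the model sees. A minimal sketch using the same `apply_chat_template` API (the `<|im_start|>`/`<|im_end|>` markers are the usual ChatML convention, not something this card specifies):

```python
>>> # Render the chat as a string rather than token IDs to debug prompt formatting.
>>> prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> print(prompt)  # ChatML typically wraps each turn in <|im_start|> ... <|im_end|>
```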

## Dataset

The training data was split evenly between German and English based on the total number of tokens. We would like to thank [Disco Research](https://huggingface.co./DiscoResearch), [Jan Philipp Harries](https://huggingface.co./jphme), and [Björn Plüster](https://huggingface.co./bjoernp) for making their dataset available to us. 

**English and Code**
 - [OpenHermes-2.5](https://huggingface.co./datasets/teknium/OpenHermes-2.5)

**German**
 - [DiscoLM German Dataset](https://huggingface.co./DiscoResearch), which includes the publicly available [germanrag](https://huggingface.co./datasets/DiscoResearch/germanrag) dataset
 - [OASST-2](https://huggingface.co./datasets/OpenAssistant/oasst2) (German subset)
 - [Aya-Dataset](https://huggingface.co./datasets/CohereForAI/aya_dataset) (German subset)
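
For illustration only, here is a hedged sketch of how the public German subsets above could be extracted with the `datasets` library; the column names (`language` in Aya, `lang` in OASST-2) are assumptions taken from the upstream dataset cards, not something this model card specifies:

```python
from datasets import load_dataset

# Aya: keep German rows (assumes a "language" column as documented upstream).
aya = load_dataset("CohereForAI/aya_dataset", split="train")
aya_de = aya.filter(lambda ex: ex["language"] == "German")

# OASST-2: keep German messages (assumes a "lang" column as documented upstream).
oasst2 = load_dataset("OpenAssistant/oasst2", split="train")
oasst2_de = oasst2.filter(lambda ex: ex["lang"] == "de")
```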


## Training settings

- Full instruction fine-tuning on 8×H100 GPUs
- 0.6 to 4 training epochs (depending on dataset sampling)
- Framework: [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
- Precision: bf16
- Optimizer: AdamW
- Global batch size: 128 (with 8192 context length)
- Learning-rate schedule: cosine annealing with warmup (see the sketch below)
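
The optimizer and schedule named above correspond to standard `transformers`/PyTorch utilities. The following is a minimal sketch, not the actual training code: the learning rate, warmup steps, and total steps are illustrative assumptions, and `model`/`dataloader` are placeholders.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # lr is assumed
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,       # assumed
    num_training_steps=10_000,  # assumed
)

for batch in dataloader:  # global batches of 128 sequences at 8192 tokens
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):  # bf16 precision
        loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```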


## Tokenizer

The tokenizer is unchanged from [Mistral-7B-v0.1](https://huggingface.co./mistralai/Mistral-7B-v0.1).
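
Since the tokenizer is inherited unchanged, loading it from either repository should give the same vocabulary. A quick sanity-check sketch (the equality assumes no special tokens were added on top of the base tokenizer):

```python
from transformers import AutoTokenizer

occiglot_tok = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-de-en-instruct")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Identical vocabularies imply identical tokenization of any input.
assert occiglot_tok.get_vocab() == mistral_tok.get_vocab()
```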

## Evaluation

Preliminary evaluation results can be found below.
Please note that the non-English results are based on partially machine-translated datasets and English prompts ([Belebele](https://huggingface.co./datasets/facebook/belebele) and the [Okapi framework](https://github.com/nlp-uoregon/Okapi)) and should therefore be interpreted with caution; for example, they may be biased towards English model performance.
We are currently working on more suitable benchmarks for Spanish, French, German, and Italian.
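
The `avg` column in the tables below is the unweighted mean of the five per-task scores in each row. For example, for Occiglot-7b-de-en-instruct in the all-languages table:

```python
# Scores copied from the "All 5 Languages" row for Occiglot-7b-de-en-instruct.
scores = {
    "arc_challenge": 0.530826,
    "belebele": 0.745778,
    "hellaswag": 0.676760,
    "mmlu": 0.411326,
    "truthfulqa": 0.351176,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 6))  # 0.543173
```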

<details>
<summary>Evaluation results</summary>

### All 5 Languages

|                            |      avg |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |
|:---------------------------|---------:|----------------:|-----------:|------------:|---------:|-------------:|
| Occiglot-7b-eu5            | 0.516895 |        0.508109 |   0.675556 |    0.718963 | 0.402064 |     0.279782 |
| Occiglot-7b-eu5-instruct   | 0.537799 |        0.53632  |   0.691111 |    0.731918 | 0.405198 |     0.32445  |
| Occiglot-7b-de-en          | 0.518337 |        0.496297 |   0.715111 |    0.669034 | 0.412545 |     0.298697 |
| Occiglot-7b-de-en-instruct | 0.543173 |        0.530826 |   0.745778 |    0.67676  | 0.411326 |     0.351176 |
| Leo-mistral-hessianai-7b   | 0.484806 |        0.462103 |   0.653556 |    0.642242 | 0.379208 |     0.28692  |
| Mistral-7b-v0.1            | 0.547111 |        0.528937 |   0.768444 |    0.682516 | 0.448253 |     0.307403 |
| Mistral-7b-instruct-v0.2   | 0.56713  |        0.547228 |   0.741111 |    0.69455  | 0.422501 |     0.430262 |


### English

|                            |      avg |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |
|:---------------------------|---------:|----------------:|-----------:|------------:|---------:|-------------:|
| Occiglot-7b-eu5            | 0.59657  |        0.530717 |   0.726667 |    0.789882 | 0.531904 |     0.403678 |
| Occiglot-7b-eu5-instruct   | 0.617905 |        0.558874 |   0.746667 |    0.799841 | 0.535109 |     0.449    |
| Occiglot-7b-de-en          | 0.518337 |        0.496297 |   0.715111 |    0.669034 | 0.412545 |     0.298697 |
| Occiglot-7b-de-en-instruct | 0.543173 |        0.530826 |   0.745778 |    0.67676  | 0.411326 |     0.351176 |
| Leo-mistral-hessianai-7b   | 0.600949 |        0.522184 |   0.736667 |    0.777833 | 0.538812 |     0.429248 |
| Mistral-7b-v0.1            | 0.668385 |        0.612628 |   0.844444 |    0.834097 | 0.624555 |     0.426201 |
| Mistral-7b-instruct-v0.2   | 0.713657 |        0.637372 |   0.824444 |    0.846345 | 0.59201  |     0.668116 |

### German

|                            |      avg |   arc_challenge_de |   belebele_de |   hellaswag_de |   mmlu_de |   truthfulqa_de |
|:---------------------------|---------:|-------------------:|--------------:|---------------:|----------:|----------------:|
| Occiglot-7b-eu5            | 0.508311 |           0.493584 |      0.646667 |       0.666631 |  0.483406 |        0.251269 |
| Occiglot-7b-eu5-instruct   | 0.531506 |           0.529512 |      0.667778 |       0.685205 |  0.488234 |        0.286802 |
| Occiglot-7b-de-en          | 0.540085 |           0.50556  |      0.743333 |       0.67421  |  0.514633 |        0.26269  |
| Occiglot-7b-de-en-instruct | 0.566474 |           0.54491  |      0.772222 |       0.688407 |  0.515915 |        0.310914 |
| Leo-mistral-hessianai-7b   | 0.517766 |           0.474765 |      0.691111 |       0.682109 |  0.488309 |        0.252538 |
| Mistral-7b-v0.1            | 0.527957 |           0.476476 |      0.738889 |       0.610589 |  0.529567 |        0.284264 |
| Mistral-7b-instruct-v0.2   | 0.535215 |           0.485885 |      0.688889 |       0.622438 |  0.501961 |        0.376904 |

</details>

## Acknowledgements

Training of the pre-trained base model was supported by a compute grant at the [42 supercomputer](https://hessian.ai/), a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Arts (HMWK)](https://wissenschaft.hessen.de) and the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)), and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).


## License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)

## See also

- https://huggingface.co./collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01
- https://huggingface.co./NikolayKozloff/occiglot-7b-de-en-GGUF