johannhartmann committed on
Commit
de1935c
1 Parent(s): 3c2ee06

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ occiglot-7b-es-en-instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,150 @@
+ ---
+ language:
+ - en
+ - es
+ license: apache-2.0
+ tags:
+ - gguf
+ pipeline_tag: text-generation
+ ---
+
+ ![image/png](https://huggingface.co/datasets/malteos/images/resolve/main/occiglot.medium.png)
+
+ # Occiglot-7B-ES-EN-Instruct
+
+ > A [polyglot](https://en.wikipedia.org/wiki/Multilingualism#In_individuals) language model for the [Occident](https://en.wikipedia.org/wiki/Occident).
+
+ **Occiglot-7B-ES-EN-Instruct** is the instruct version of [occiglot-7b-es-en](https://huggingface.co/occiglot/occiglot-7b-es-en), a generative language model with 7B parameters supporting Spanish and English, trained by the [Occiglot Research Collective](https://occiglot.github.io/occiglot/).
+ It was trained on 160M tokens of additional multilingual and code instructions.
+ Note that the model has not been safety-aligned and might generate problematic outputs.
+
+ This is the first release of an ongoing open research project for multilingual language models.
+ If you want to train a model for your own language or are working on evaluations, please contact us or join our [Discord server](https://discord.gg/wUpvYs4XvM). **We are open to collaborations!**
+
+ ### Model details
+
+ - **Instruction tuned from:** [occiglot-7b-es-en](https://huggingface.co/occiglot/occiglot-7b-es-en)
+ - **Model type:** Causal decoder-only transformer language model
+ - **Languages:** English, Spanish, and code.
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
+ - **Compute resources:** [DFKI cluster](https://www.dfki.de/en/web)
+ - **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting
+ - **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology)
+ - **Contact:** [Discord](https://discord.gg/wUpvYs4XvM)
+
+ ### How to use
+
+ The model was trained with the ChatML instruction template, so you can use the transformers chat-template feature for interaction.
+ Since generation relies on some randomness, we set a seed for reproducibility:
+
+ ```python
+ >>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
+ >>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-es-en-instruct")
+ >>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-es-en-instruct')  # you may want to use bfloat16 and/or move the model to GPU here
+ >>> set_seed(42)  # fix the sampling seed so the generation is reproducible
+ >>> messages = [
+ ...     {"role": "system", "content": "You are a helpful assistant. Please give short and concise answers."},
+ ...     {"role": "user", "content": "¿quién es el presidente del gobierno español?"},
+ ... ]
+ >>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
+ >>> outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=200)
+ >>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):])
+ 'Actualmente el presidente del gobierno español es Pedro Sánchez Pérez-Castejón'
+ ```
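+
+ For reference, `apply_chat_template` renders the conversation in ChatML markup. For the example above, the prompt handed to the model looks roughly like this (a sketch of the standard ChatML layout, not captured tokenizer output):
+
+ ```
+ <|im_start|>system
+ You are a helpful assistant. Please give short and concise answers.<|im_end|>
+ <|im_start|>user
+ ¿quién es el presidente del gobierno español?<|im_end|>
+ <|im_start|>assistant
+ ```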
+
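+ Since this repository ships GGUF quantizations of the model, you can also run it without transformers. A minimal sketch, assuming `llama-cpp-python` is installed and using the Q4_K_M file from this repository (the `repo_id` below is a placeholder; point it at this repository's actual id):
+
+ ```python
+ # Minimal sketch: fetch one quantization and chat with it via
+ # llama-cpp-python (pip install llama-cpp-python huggingface_hub).
+ from huggingface_hub import hf_hub_download
+ from llama_cpp import Llama
+
+ # NOTE: repo_id is a placeholder, not a confirmed repository name.
+ path = hf_hub_download(
+     repo_id="<this-repo-id>",
+     filename="occiglot-7b-es-en-instruct.Q4_K_M.gguf",
+ )
+
+ llm = Llama(
+     model_path=path,
+     n_ctx=8192,       # the model was fine-tuned with an 8192 context length
+     n_gpu_layers=-1,  # offload all layers to the GPU if one is available
+     seed=42,          # fix the seed, mirroring the transformers example
+ )
+
+ # create_chat_completion should pick up the ChatML template stored in the
+ # GGUF metadata; otherwise pass chat_format="chatml" to the constructor.
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant. Please give short and concise answers."},
+         {"role": "user", "content": "¿quién es el presidente del gobierno español?"},
+     ],
+     max_tokens=200,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+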
+ ## Dataset
+
+ The training data was split evenly between Spanish and English based on the total number of tokens.
+
+ **English and Code**
+ - [Open-Hermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
+
+ **Spanish**
+ - [Mentor-ES](https://huggingface.co/datasets/projecte-aina/MentorES)
+ - [Squad-es](https://huggingface.co/datasets/squad_es)
+ - [OASST-2](https://huggingface.co/datasets/OpenAssistant/oasst2) (Spanish subset)
+ - [Aya-Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) (Spanish subset)
+
+ ## Training settings
+
+ - Full instruction fine-tuning on 8x H100 GPUs.
+ - 0.6 - 4 training epochs (depending on dataset sampling).
+ - Framework: [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) (an illustrative config sketch follows below)
+ - Precision: bf16
+ - Optimizer: AdamW
+ - Global batch size: 128 (with 8192 context length)
+ - Learning rate schedule: cosine annealing with warmup
+
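+ As a rough illustration, the settings above map onto an axolotl configuration like the following. This is a sketch for orientation, not the actual training config; the warmup length, batch split, and epoch count are assumptions:
+
+ ```yaml
+ # Illustrative sketch only -- NOT the actual training configuration.
+ base_model: occiglot/occiglot-7b-es-en
+ sequence_len: 8192
+ bf16: true
+ optimizer: adamw_torch
+ lr_scheduler: cosine
+ warmup_steps: 100               # assumption; warmup length not stated
+ micro_batch_size: 2             # assumption; 2 x 8 GPUs x 8 accumulation = 128 global
+ gradient_accumulation_steps: 8  # assumption; see above
+ num_epochs: 4                   # per-dataset sampling varied between 0.6 and 4 epochs
+ ```
+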
+ ## Tokenizer
+
+ The tokenizer is unchanged from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
+
+ ## Evaluation
+
+ Preliminary evaluation results can be found below.
+ Please note that the non-English results are based on partially machine-translated datasets and English prompts ([Belebele](https://huggingface.co/datasets/facebook/belebele) and the [Okapi framework](https://github.com/nlp-uoregon/Okapi)) and should therefore be interpreted with caution, as they may be biased towards English model performance.
+ Currently, we are working on more suitable benchmarks for Spanish, French, German, and Italian.
+
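+ For the English benchmarks, a run along these lines should produce comparable numbers. This is a sketch assuming EleutherAI's lm-evaluation-harness, not the authors' exact setup (the translated benchmarks used the Okapi framework, so non-English scores will not reproduce this way):
+
+ ```python
+ # Sketch: evaluate the instruct model on the English tasks with
+ # lm-evaluation-harness (pip install lm-eval).
+ import lm_eval
+
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     model_args="pretrained=occiglot/occiglot-7b-es-en-instruct,dtype=bfloat16",
+     tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
+ )
+ print(results["results"])
+ ```
+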
+ <details>
+ <summary>Evaluation results</summary>
+
+ ### All 5 Languages
+
+ |                            |      avg |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |
+ |:---------------------------|---------:|----------------:|-----------:|------------:|---------:|-------------:|
+ | Occiglot-7b-eu5            | 0.516895 |        0.508109 |   0.675556 |    0.718963 | 0.402064 |     0.279782 |
+ | Occiglot-7b-eu5-instruct   | 0.537799 |        0.53632  |   0.691111 |    0.731918 | 0.405198 |     0.32445  |
+ | Occiglot-7b-es-en          | 0.483388 |        0.482949 |   0.606889 |    0.653902 | 0.398922 |     0.274277 |
+ | Occiglot-7b-es-en-instruct | 0.504023 |        0.494576 |   0.65     |    0.670847 | 0.406176 |     0.298513 |
+ | Lince-mistral-7b-it-es     | 0.543427 |        0.540222 |   0.745111 |    0.692931 | 0.426241 |     0.312629 |
+ | Mistral-7b-v0.1            | 0.547111 |        0.528937 |   0.768444 |    0.682516 | 0.448253 |     0.307403 |
+ | Mistral-7b-instruct-v0.2   | 0.56713  |        0.547228 |   0.741111 |    0.69455  | 0.422501 |     0.430262 |
+
+ ### English
+
+ |                            |      avg |   arc_challenge |   belebele |   hellaswag |     mmlu |   truthfulqa |
+ |:---------------------------|---------:|----------------:|-----------:|------------:|---------:|-------------:|
+ | Occiglot-7b-eu5            | 0.59657  |        0.530717 |   0.726667 |    0.789882 | 0.531904 |     0.403678 |
+ | Occiglot-7b-eu5-instruct   | 0.617905 |        0.558874 |   0.746667 |    0.799841 | 0.535109 |     0.449    |
+ | Occiglot-7b-es-en          | 0.593609 |        0.543515 |   0.697778 |    0.788289 | 0.548355 |     0.390109 |
+ | Occiglot-7b-es-en-instruct | 0.615707 |        0.552048 |   0.736667 |    0.797451 | 0.557328 |     0.435042 |
+ | Leo-mistral-hessianai-7b   | 0.600949 |        0.522184 |   0.736667 |    0.777833 | 0.538812 |     0.429248 |
+ | Mistral-7b-v0.1            | 0.668385 |        0.612628 |   0.844444 |    0.834097 | 0.624555 |     0.426201 |
+ | Mistral-7b-instruct-v0.2   | 0.713657 |        0.637372 |   0.824444 |    0.846345 | 0.59201  |     0.668116 |
+
+ ### Spanish
+
+ |                            |      avg |   arc_challenge_es |   belebele_es |   hellaswag_es |   mmlu_es |   truthfulqa_es |
+ |:---------------------------|---------:|-------------------:|--------------:|---------------:|----------:|----------------:|
+ | Occiglot-7b-eu5            | 0.533194 |           0.508547 |      0.676667 |       0.725411 |  0.499325 |        0.25602  |
+ | Occiglot-7b-eu5-instruct   | 0.548155 |           0.535043 |      0.68     |       0.737039 |  0.503525 |        0.285171 |
+ | Occiglot-7b-es-en          | 0.527264 |           0.529915 |      0.627778 |       0.72253  |  0.512749 |        0.243346 |
+ | Occiglot-7b-es-en-instruct | 0.5396   |           0.545299 |      0.636667 |       0.734372 |  0.524374 |        0.257288 |
+ | Lince-mistral-7b-it-es     | 0.547212 |           0.52906  |      0.721111 |       0.687967 |  0.512749 |        0.285171 |
+ | Mistral-7b-v0.1            | 0.554817 |           0.528205 |      0.747778 |       0.672712 |  0.544023 |        0.281369 |
+ | Mistral-7b-instruct-v0.2   | 0.568575 |           0.54188  |      0.73     |       0.685406 |  0.511699 |        0.373891 |
+
+ </details>
+
+ ## Acknowledgements
+
+ Training of the pre-trained model was supported by a compute grant at the [42 supercomputer](https://hessian.ai/), a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Arts (HMWK)](https://wissenschaft.hessen.de) and the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)).
+ The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)
+ through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D).
+
+ ## License
+
+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
+
+ ## See also
+
+ - https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01
+ - https://huggingface.co/NikolayKozloff/occiglot-7b-es-en-GGUF
occiglot-7b-es-en-instruct.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45dec37aed141c79e059592225a9b6b372e5d0efcf74f9308847fb4226d90e05
+ size 2719252192
occiglot-7b-es-en-instruct.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfb2e8c762a052f984f1e2f8f8ed1f28b70aa415b62ef385cdd48b25f8352b41
+ size 3822035488
occiglot-7b-es-en-instruct.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9871cd46045c25c1cc6e7b4891be7ba48b697bc56400f85f93f32ca859fea985
+ size 3518997024
occiglot-7b-es-en-instruct.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2cd63ae4f3c4372a72dd42be27e322e6576695f168a8040312b31ffc08e1d64
+ size 3164578336
occiglot-7b-es-en-instruct.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90ea22ce504480636ac0fd4689116d6096ee1f9b26431b4415f971652aab3515
+ size 4108928608
occiglot-7b-es-en-instruct.Q4_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52d9d5c3bb67229a7854ed0e8b60215b6201eb6b3e4054c9376e3e3a846eefcc
+ size 4553328736
occiglot-7b-es-en-instruct.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3967943c48498ea3fbfc1a2f2514cbe9aa84abe3ec5bbdfb1514c72aebbbc52
+ size 4368451168
occiglot-7b-es-en-instruct.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1beb0afe8ee3052098a0760b391684b23dfa30be793c15f9880730372ce4fd4d
+ size 4140385888
occiglot-7b-es-en-instruct.Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5ddf114fe12d83a2472bc9279b9a6f027e0f2d91bfd47cee454bc0439eb45a1
+ size 4997728864
occiglot-7b-es-en-instruct.Q5_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:269abd54718eb9b30505c0293e6b68f3da95dc506b9198b1c4e2e37a72ebf442
+ size 5442128992
occiglot-7b-es-en-instruct.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cebcf0c31859034dcb2f87a2e018f6ed287cb1f710d3283e8b587fbad6e94945
+ size 5131422304
occiglot-7b-es-en-instruct.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faae17a33886b9383989cc4e4af49d3e0ba8f10d0f0c9dbc2b99ae869b04b3da
+ size 4997728864
occiglot-7b-es-en-instruct.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:405d782cd6856c24fbef0ab2cf3bde1bf55c16eff95855223eb1e5a075da3bda
+ size 5942079136
occiglot-7b-es-en-instruct.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adf53904ba3253fed768f98b5a8f7dd0564102ac24947ed9c4c9c8f8aa09f1bc
+ size 7695875616