base_model:
  - OpenLLM-France/Lucie-7B
pipeline_tag: text-generation
---

# Model Card for Lucie-7B-Instruct

* [Model Description](#model-description)
<!-- * [Uses](#uses) -->
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Preprocessing](#preprocessing)
  * [Instruction template](#instruction-template)
  * [Training Procedure](#training-procedure)
<!-- * [Evaluation](#evaluation) -->
* [Testing the model](#testing-the-model)
  * [Test with ollama](#test-with-ollama)
  * [Test with vLLM](#test-with-vllm)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)

## Model Description

Lucie-7B-Instruct is a fine-tuned version of [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B), an open-source, multilingual causal language model created by OpenLLM-France.

Lucie-7B-Instruct is fine-tuned on synthetic instructions produced by ChatGPT and Gemma, along with a small set of customized prompts about OpenLLM and Lucie. It is optimized for the generation of French text. Note that it has not been trained for code generation or optimized for math. Such capacities can be improved through further fine-tuning and alignment with methods such as DPO, RLHF, etc.

While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B, has a context size of 32K tokens. Based on Needle-in-a-haystack evaluations, Lucie-7B-Instruct maintains the capacity of the base model to handle 32K-token context windows.


## Training details

### Training data

Lucie-7B-Instruct is trained on the following datasets:
* [Alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) (English; 51604 samples)
* [Alpaca-cleaned-fr](https://huggingface.co/datasets/cmh/alpaca_data_cleaned_fr_52k) (French; 51655 samples)
* [Magpie-Gemma](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-200K-Filtered) (English; 195167 samples)
* [Wildchat](https://huggingface.co/datasets/allenai/WildChat-1M) (French subset; 26436 samples)
* Hard-coded prompts concerning OpenLLM and Lucie (based on [allenai/tulu-3-hard-coded-10x](https://huggingface.co/datasets/allenai/tulu-3-hard-coded-10x))
  * French: openllm_french.jsonl (24x10 samples)
  * English: openllm_english.jsonl (24x10 samples)


### Preprocessing
* Filtering by language: Magpie-Gemma and Wildchat were filtered to keep only English and French samples, respectively.
* Filtering by keyword: Examples were filtered out of the four synthetic datasets if their assistant responses contained a keyword from the list [filter_strings](https://github.com/OpenLLM-France/Lucie-Training/blob/98792a1a9015dcf613ff951b1ce6145ca8ecb174/tokenization/data.py#L2012). This filter is designed to remove examples in which the assistant is presented as a model other than Lucie (e.g., ChatGPT, Gemma, Llama, ...); a minimal sketch of this filter is shown below.

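The keyword filter amounts to a substring check over assistant turns. The snippet below is only an illustrative sketch: the `FILTER_STRINGS` values and the sample format are placeholders, not the actual code, which lives in the Lucie-Training repository linked above.

```python
# Illustrative sketch of the keyword filter described above.
# FILTER_STRINGS and the sample format are placeholders, not the exact
# values used for Lucie-7B-Instruct (see the linked filter_strings list).
FILTER_STRINGS = ["ChatGPT", "Gemma", "Llama"]

def keep_sample(sample: dict) -> bool:
    """Keep a sample only if no assistant turn mentions another model name."""
    return not any(
        turn["role"] == "assistant"
        and any(k.lower() in turn["content"].lower() for k in FILTER_STRINGS)
        for turn in sample["conversations"]
    )

dataset = [
    {"conversations": [
        {"role": "user", "content": "Qui es-tu ?"},
        {"role": "assistant", "content": "Je suis Lucie, un assistant conversationnel."},
    ]},
    {"conversations": [
        {"role": "user", "content": "Who are you?"},
        {"role": "assistant", "content": "I am ChatGPT, a model trained by OpenAI."},
    ]},
]

filtered = [s for s in dataset if keep_sample(s)]
print(len(filtered))  # 1 -- the second sample is dropped
```
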
### Instruction template

Lucie-7B-Instruct was trained on the chat template from Llama 3.1, with the sole difference that `<|begin_of_text|>` is replaced with `<s>`. The resulting template:

```
<s><|start_header_id|>system<|end_header_id|>

{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>

{INPUT}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{OUTPUT}<|eot_id|>
```

An example:

```
<s><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me three tips for staying in shape.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

1. Eat a balanced diet and be sure to include plenty of fruits and vegetables. \n2. Exercise regularly to keep your body active and strong. \n3. Get enough sleep and maintain a consistent sleep schedule.<|eot_id|>
```

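With the Hugging Face `transformers` library there is usually no need to build this prompt by hand. A minimal sketch, assuming the tokenizer distributed with Lucie-7B-Instruct ships the template above as its chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-France/Lucie-7B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me three tips for staying in shape."},
]

# Render the conversation with the chat template; add_generation_prompt
# appends the assistant header so the model knows it should answer next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```
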
### Training procedure

The model architecture and hyperparameters are the same as for [Lucie-7B](https://huggingface.co/OpenLLM-France/Lucie-7B) during the annealing phase, with the following exceptions:
* context length: 4096<sup>*</sup>
* batch size: 1024
* max learning rate: 3e-5
* min learning rate: 3e-6

<sup>*</sup>As noted above, while Lucie-7B-Instruct is trained on sequences of 4096 tokens, it maintains the capacity of the base model, Lucie-7B, to handle context sizes of up to 32K tokens.

## Testing the model

### Test with ollama

* Download and install [Ollama](https://ollama.com/download).
* Download the [GGUF model](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct/resolve/main/Lucie-7B-q4_k_m.gguf).
* Copy the [`Modelfile`](Modelfile), adapting the path to the GGUF file if necessary (the line starting with `FROM`).
* Run in a shell:
  * `ollama create -f Modelfile Lucie`
  * `ollama run Lucie`
* Once ">>>" appears, type your prompt(s) and press Enter.
* Optionally, restart a conversation by typing "`/clear`".
* End the session by typing "`/bye`".

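As an alternative to the interactive prompt above, the model can also be queried programmatically once it has been created. A brief sketch using the `ollama` Python package (`pip install ollama`), assuming the model was registered under the name `Lucie` as in the steps above:

```python
import ollama  # pip install ollama

# Send a single-turn chat request to the locally running Ollama server.
response = ollama.chat(
    model="Lucie",
    messages=[{"role": "user", "content": "Quelle est la capitale de la France ?"}],
)
print(response["message"]["content"])
```
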
Useful for debugging:
* [How to print input requests and output responses in Ollama server?](https://stackoverflow.com/a/78831840)
* [Documentation on Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter)
* Examples: [Ollama model library](https://github.com/ollama/ollama#model-library)
  * Llama 3 example: https://ollama.com/library/llama3.1
* Add a GUI: https://docs.openwebui.com/

### Test with vLLM

#### 1. Run vLLM Docker Container

Use the following command to deploy the model,
replacing `INSERT_YOUR_HF_TOKEN` with your Hugging Face Hub token.

```bash
docker run --runtime nvidia --gpus=all \
    --env "HUGGING_FACE_HUB_TOKEN=INSERT_YOUR_HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model OpenLLM-France/Lucie-7B-Instruct-gguf
```

#### 2. Test using OpenAI Client in Python

To test the deployed model, use the OpenAI Python client as follows:

```python
from openai import OpenAI

# Initialize the client
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')

# Define the input content
content = "Hello Lucie"

# Generate a response
chat_response = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct-gguf",
    messages=[
        {"role": "user", "content": content}
    ],
)
print(chat_response.choices[0].message.content)
```

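Because the vLLM server is OpenAI-compatible, responses can also be streamed token by token with the same client. A short sketch, assuming the container from step 1 is still running on `localhost:8000`:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="empty")

# Stream the completion chunk by chunk instead of waiting for the full answer.
stream = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct-gguf",
    messages=[{"role": "user", "content": "Raconte-moi une courte histoire."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```
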
## Citation

When using the Lucie-7B-Instruct model, please cite the following paper:

✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour,
Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais, Anastasia Stasenko,
Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré (2025).
The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation.

```bibtex
@misc{openllm2023claire,
  title={The Lucie-7B LLM and the Lucie Training Dataset:
         open resources for multilingual language generation},
  author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Evan Dufraisse and Yaya Sy and Pierre-Carl Langlais and Anastasia Stasenko and Laura Rivière and Christophe Cerisara and Jean-Pierre Lorré},
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```


## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444). We gratefully acknowledge support from GENCI and IDRIS and from Pierre-François Lavallée (IDRIS) and Stephane Requena (GENCI) in particular.


Lucie-7B was created by members of [LINAGORA](https://labs.linagora.com/) and the [OpenLLM-France](https://www.openllm-france.fr/) community, including in alphabetical order:
Olivier Gouvert (LINAGORA),
Ismaïl Harrando (LINAGORA/SciencesPo),
Julie Hunter (LINAGORA),
Jean-Pierre Lorré (LINAGORA),
Jérôme Louradour (LINAGORA),
Michel-Marie Maudet (LINAGORA), and
Laura Rivière (LINAGORA).


We thank
Clément Bénesse (Opsci),
Christophe Cerisara (LORIA),
Émile Hazard (Opsci),
Evan Dufraisse (CEA),
Guokan Shang (MBZUAI),
Joël Gombin (Opsci),
Jordan Ricker (Opsci),
and
Olivier Ferret (CEA)
for their helpful input.

Finally, we thank the entire OpenLLM-France community, whose members have helped in diverse ways.

## Contact