Upload README.md
README.md CHANGED
@@ -5,7 +5,17 @@ license: apache-2.0
 model_creator: Intel
 model_name: Neural Chat 7B v3-1
 model_type: mistral
-prompt_template: '
+prompt_template: '### System:
+
+  {system_message}
+
+
+  ### User:
+
+  {prompt}
+
+
+  ### Assistant:
 
   '
 quantized_by: TheBloke
@@ -64,11 +74,17 @@ It is supported by:
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template:
+## Prompt template: Orca-Hashes
 
 ```
+### System:
+{system_message}
+
+### User:
 {prompt}
 
+### Assistant:
+
 ```
 
 <!-- prompt-template end -->
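For reference, the Orca-Hashes template this commit documents can be rendered as follows. This sketch is not part of the commit: it uses a plain string with `str.format` so that neither `system_message` nor `prompt` needs to exist when the template is defined (the f-string form in the README's code snippets evaluates both names immediately), and the system message is an illustrative placeholder.

```python
# Minimal sketch of rendering the Orca-Hashes prompt template.
ORCA_HASHES = """### System:
{system_message}

### User:
{prompt}

### Assistant:
"""

rendered = ORCA_HASHES.format(
    system_message="You are a helpful assistant.",  # assumed system prompt
    prompt="Tell me about AI",
)
print(rendered)
```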
@@ -133,7 +149,13 @@ prompts = [
     "What is 291 - 150?",
     "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
 ]
-prompt_template=f'''
+prompt_template=f'''### System:
+{system_message}
+
+### User:
+{prompt}
+
+### Assistant:
 '''
 
 prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
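The hunk above patches only the template literal; for context, a self-contained sketch of the surrounding vLLM flow might look like the following. The model id, `quantization` argument, and sampling settings are assumptions for illustration, not taken from the diff.

```python
# Hedged sketch of the vLLM example this hunk patches.
from vllm import LLM, SamplingParams

prompt_template = """### System:
{system_message}

### User:
{prompt}

### Assistant:
"""
system_message = "You are a helpful assistant."  # assumed system prompt

prompts = [
    "Tell me about AI",
    "What is 291 - 150?",
]
prompts = [
    prompt_template.format(system_message=system_message, prompt=p)
    for p in prompts
]

sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)
llm = LLM(model="TheBloke/neural-chat-7B-v3-1-AWQ", quantization="awq")  # assumed repo id

for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```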
@@ -175,7 +197,13 @@ from huggingface_hub import InferenceClient
 endpoint_url = "https://your-endpoint-url-here"
 
 prompt = "Tell me about AI"
-prompt_template=f'''
+prompt_template=f'''### System:
+{system_message}
+
+### User:
+{prompt}
+
+### Assistant:
 '''
 
 client = InferenceClient(endpoint_url)
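Similarly, here is a hedged sketch of driving a Text Generation Inference endpoint with the new template via `InferenceClient.text_generation`. The endpoint URL placeholder comes from the README; the generation parameters are assumptions.

```python
# Sketch of sending the templated prompt to a TGI endpoint.
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"  # placeholder from the README
client = InferenceClient(endpoint_url)

prompt_template = """### System:
{system_message}

### User:
{prompt}

### Assistant:
"""
formatted = prompt_template.format(
    system_message="You are a helpful assistant.",  # assumed system prompt
    prompt="Tell me about AI",
)

# Generation parameters below are illustrative, not from the diff.
response = client.text_generation(
    formatted,
    max_new_tokens=256,
    temperature=0.7,
    repetition_penalty=1.1,
)
print(response)
```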
@@ -238,7 +266,13 @@ model = AutoModelForCausalLM.from_pretrained(
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
 
 prompt = "Tell me about AI"
-prompt_template=f'''
+prompt_template=f'''### System:
+{system_message}
+
+### User:
+{prompt}
+
+### Assistant:
 '''
 
 # Convert prompt to tokens
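For the local transformers path, a runnable sketch of the generate-with-streamer flow this hunk sits in; the repo id and generation settings are assumptions, not from the commit.

```python
# Hedged sketch of local inference with transformers and TextStreamer.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "TheBloke/neural-chat-7B-v3-1-GPTQ"  # assumed quantized repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # assumed system prompt
formatted = f"### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:\n"

# Convert prompt to tokens and stream the completion.
input_ids = tokenizer(formatted, return_tensors="pt").input_ids.to(model.device)
model.generate(input_ids, streamer=streamer, max_new_tokens=256,
               do_sample=True, temperature=0.7)
```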
@@ -340,9 +374,9 @@ And thank you again to a16z for their generous grant.
 # Original model card: Intel's Neural Chat 7B v3-1
 
 
-##
+## Fine-tuning on [Habana](https://habana.ai/) Gaudi2
 
-This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [
+This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
 
 ## Model date
 Neural-chat-7b-v3-1 was trained between September and October, 2023.
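The restored paragraph says the model was aligned with the DPO algorithm after supervised fine-tuning. For readers unfamiliar with it, the standard DPO objective reduces to the following; this is a simplified sketch for reference, not Intel's training code.

```python
# Simplified DPO loss: prefer the chosen answer over the rejected one,
# measured relative to a frozen reference model.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratio of policy vs. reference for each answer.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen and rejected, scaled by beta.
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()
```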
@@ -372,10 +406,22 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 64
 - total_eval_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type:
-- lr_scheduler_warmup_ratio: 0.
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_ratio: 0.03
 - num_epochs: 2.0
 
+## Prompt Template
+
+```
+### System:
+{system}
+### User:
+{usr}
+### Assistant:
+
+```
+
+
 ## Inference with transformers
 
 ```shell
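The two hyperparameter lines this hunk completes (cosine schedule, 3% warmup) correspond to a standard transformers scheduler construction. In the sketch below the optimizer mirrors the Adam settings listed above, while the model, learning rate, and step count are placeholders.

```python
# Sketch of the cosine schedule with 3% warmup from the hyperparameter list.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,  # lr is a placeholder
                             betas=(0.9, 0.999), eps=1e-08)

num_training_steps = 1_000  # placeholder: 2 epochs over the real dataset
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.03 * num_training_steps),  # lr_scheduler_warmup_ratio: 0.03
    num_training_steps=num_training_steps,
)
```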
@@ -401,4 +447,3 @@ The NeuralChat team with members from Intel/SATG/AIA/AIPT. Core team members: Ka
 ## Useful links
 * Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
 * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
-* Intel Extension for PyTorch [link](https://github.com/intel/intel-extension-for-pytorch)