---
license: apache-2.0
library_name: adapter-transformers
---
# Model Card for Mistral-7B-Instruct-v0.1-MyRestaurant-Domain-Adaptation


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/KFjiY3lHHERUzt8kJRaQw.jpeg)

<font color="#0000FF" size="5"><b>
This is a domain adaptation for questions about My Restaurant.<br />
You can play by asking the model questions about the menu...
</b></font>
<br />
<br /><b>Foundation Model: https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.1<br />
Dataset: https://huggingface.co./datasets/Argen7um/restrant-qa<br /></b>
The model has been fine-tuned on 2 × T4 GPUs (2 × 14.8 GB GPU RAM) plus CPU (29 GB RAM).<br />

The model is based on the Mistral-7B foundation model.<br />
It has been tuned with the Supervised Fine-tuning Trainer and PEFT LoRA.<br />
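Since the adapter sits on top of Mistral-7B-Instruct-v0.1, prompts should follow that model's `[INST] ... [/INST]` instruction format. A minimal sketch of a prompt builder (the helper name `build_instruct_prompt` is illustrative, not part of this repository):

```python
def build_instruct_prompt(question: str) -> str:
    """Wrap a user question in the Mistral-7B-Instruct chat format.

    The [INST] ... [/INST] markers are the instruction template the
    foundation model was trained with; the fine-tuned adapter
    inherits the same convention.
    """
    return f"<s>[INST] {question.strip()} [/INST]"

prompt = build_instruct_prompt("What desserts are on the menu?")
print(prompt)  # -> <s>[INST] What desserts are on the menu? [/INST]
```

The resulting string can then be tokenized and passed to the model's `generate` method as usual.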

# <b>Notebook used for the training</b>

You can find it in the Files and versions tab.<br />
<font color="#0000FF" size="3">Direct link : https://huggingface.co./Laurent1/Mistral-7B-Instruct-v0.1-MyRestaurant-Domain-Adaptation/blob/main/laurent-restaurant-adaptation-mistral-7b-tuned.ipynb
</font>
## <b>Bias, Risks, and Limitations</b>

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.<br />
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models.<br />


## <b>Training Details</b>

<ul>
<li>per_device_train_batch_size = 1</li>
<li>gradient_accumulation_steps = 16</li>
<li>2 × T4 GPUs (14.8 GB GPU RAM each) + CPU (29 GB RAM)</li>
</ul>
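With these settings, the effective (global) batch size per optimizer step is the per-device batch size times the gradient-accumulation steps times the number of GPUs. A quick sanity check, assuming the 2 GPUs listed above are both used for data-parallel training:

```python
# Hyperparameters from the training details above.
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
num_gpus = 2  # 2 x T4, per the hardware listed in this card

# Effective (global) batch size seen by each optimizer step.
effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # -> 32
```

A small per-device batch with heavy gradient accumulation is a common way to fit a 7B-parameter model into the limited memory of T4 GPUs while keeping a reasonable effective batch size.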