Commit ed742ed by AmirMohseni (parent: 9a8a8be): Update README.md

# Model Card for `SmolLM-360M-Instruct-finetuned-sft-v2`

This is a fine-tuned version of the SmolLM-360M model, optimized for instruction-following tasks. It was trained on the [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2) dataset to generate accurate, contextually appropriate, and helpful responses, with fine-tuning performed on a single NVIDIA A100 GPU.

## Model Details

### Model Description

The `SmolLM-360M-Instruct-finetuned-sft-v2` model is a compact language model from the SmolLM family, designed for computational efficiency and strong performance across a range of tasks. This version has been fine-tuned for instruction-following scenarios, making it well suited to applications that require clear, coherent responses to detailed prompts.

- **Developed by:** Hugging Face (base model); fine-tuned by Amir Mohseni
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** SmolLM-360M

### Model Sources

- **Repository:** [SmolLM-360M-Instruct-finetuned-sft-v2 on Hugging Face](https://huggingface.co/AmirMohseni/SmolLM-360M-Instruct-finetuned-sft-v2)

## Performance Improvements After Fine-Tuning

Fine-tuning was evaluated with the NVIDIA Nemotron-4-340B-Reward model, which scores AI-generated responses on five attributes: helpfulness, correctness, coherence, complexity, and verbosity. According to this reward model, fine-tuning produced the following changes:

- **Helpfulness:** increased from **0.4343** to **0.6166**
- **Correctness:** increased from **0.5546** to **0.8130**
- **Coherence:** increased from **2.4018** to **2.5711**
- **Complexity:** decreased from **1.0023** to **0.9118**
- **Verbosity:** decreased from **1.4032** to **1.1779**

These results indicate that fine-tuning improved the model's ability to generate helpful, correct, and coherent responses, while the reductions in complexity and verbosity mean the outputs convey the same content in fewer, simpler words.
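For reference, here is a minimal sketch of how a prompt/response pair could be scored with this reward model, assuming access through an OpenAI-compatible endpoint such as NVIDIA's API catalog; the base URL, model identifier, and response handling below are assumptions to check against your provider's documentation:

```python
# Hypothetical sketch: score a prompt/response pair with Nemotron-4-340B-Reward
# via an OpenAI-compatible endpoint. The base_url, model id, and the shape of
# the returned scores are assumptions -- consult your provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

completion = client.chat.completions.create(
    model="nvidia/nemotron-4-340b-reward",  # assumed model id
    messages=[
        {"role": "user", "content": "Explain the process of photosynthesis."},
        {"role": "assistant", "content": "Photosynthesis is the process by which ..."},
    ],
)

# The reward model returns per-attribute scores (helpfulness, correctness,
# coherence, complexity, verbosity) rather than generated text; inspect the
# raw choice to see how your endpoint encodes them.
print(completion.choices[0])
```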
![Difference in Average Ratings Before and After Fine-tuning](https://cdn-uploads.huggingface.co/production/uploads/65e1bdb336a669a4ca5dab7d/IaLcCQpRmlTl5WcLskR-G.png)

## Uses

### Direct Use

The model can be used directly to generate coherent, contextually relevant responses to a wide range of prompts, particularly in scenarios where instruction following is essential.

### Out-of-Scope Use

This model should not be used for tasks that require deep reasoning or extensive context beyond the given prompt. It is also not suitable for applications requiring highly specialized knowledge unless further fine-tuned on relevant data.

## Bias, Risks, and Limitations

As with all language models, this model may reflect biases present in its training data. Users should exercise caution when deploying it in sensitive contexts and should consider further fine-tuning or bias-mitigation strategies where needed.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AmirMohseni/SmolLM-360M-Instruct-finetuned-sft-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage: tokenize a prompt, generate a continuation, and decode it
prompt = "Explain the process of photosynthesis."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Training Details

### Training Data

The model was fine-tuned on the [HelpSteer2 dataset](https://huggingface.co/datasets/nvidia/HelpSteer2), which consists of approximately 21,400 examples of instruction-based prompts and corresponding responses, designed to improve a model's ability to generate helpful, correct, and coherent outputs.
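As an illustration, the dataset can be inspected with the `datasets` library; the split and field names below follow the dataset card and may need adjusting:

```python
# Load HelpSteer2 for inspection. Split names ("train"/"validation") and
# record fields ("prompt", "response", per-attribute ratings) follow the
# dataset card -- check the card if loading fails.
from datasets import load_dataset

ds = load_dataset("nvidia/HelpSteer2")
print(ds)                        # available splits and sizes
print(ds["train"][0]["prompt"])  # each record pairs a prompt with a response
```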

### Training Procedure

The fine-tuning was performed with the following hyperparameters (a reconstruction of the setup is sketched after the list):

- **Training regime:** Mixed precision (FP16)
- **Epochs:** 5
- **Learning Rate:** 1e-5
- **Batch Size:** 16 (per device, for both training and evaluation)
- **Gradient Accumulation Steps:** 4
- **Weight Decay:** 0.02
- **Hardware:** NVIDIA A100 GPU
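A minimal sketch of what this setup could look like with the `trl` library's `SFTTrainer` follows; the original training script was not released, so the base checkpoint id and the prompt/response formatting here are assumptions:

```python
# Hypothetical reconstruction of the SFT setup using trl's SFTTrainer.
# The base checkpoint id and dataset formatting are assumptions based on
# the hyperparameters reported above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-360M")  # assumed base

# Turn each HelpSteer2 record into a single text field for SFT.
def to_text(example):
    return {"text": f"{example['prompt']}\n{example['response']}"}

dataset = load_dataset("nvidia/HelpSteer2", split="train").map(to_text)

config = SFTConfig(
    output_dir="smollm-360m-sft",
    num_train_epochs=5,
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    weight_decay=0.02,
    fp16=True,  # mixed-precision training, as reported above
    dataset_text_field="text",
)

trainer = SFTTrainer(model=model, args=config, train_dataset=dataset)
trainer.train()
```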
### Evaluation
#### Metrics

- **Training Loss:** Final loss was 4.3768.
- **Validation Loss:** Final loss was 4.1602.
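For intuition, if these are mean per-token cross-entropy losses in nats (the usual convention in `transformers`), they correspond to perplexities of roughly 80 (training) and 64 (validation):

```python
import math

# Perplexity = exp(mean cross-entropy loss); this assumes the reported
# losses are mean per-token cross-entropy in nats.
print(math.exp(4.3768))  # ~79.6 (training)
print(math.exp(4.1602))  # ~64.1 (validation)
```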

### Results

The model showed a consistent decrease in both training and validation loss across the epochs, indicating effective learning and good generalization.
## Environmental Impact

Carbon emissions for the training run were minimal: fine-tuning completed in about one hour on a single NVIDIA A100 GPU.

- **Hardware Type:** NVIDIA A100 GPU
- **Hours used:** 1 hour