PEFT
natnitaract committed
Commit
97129d1
1 parent: aa0f684

Uploading model to Hugging Face Hub.

Files changed (3)
  1. README.md +216 -0
  2. adapter_config.json +23 -0
  3. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,216 @@
+ ---
+ library_name: peft
+ ---
+ ## Training procedure
+
+ The following `bitsandbytes` quantization config was used during training:
+ - quant_method: bitsandbytes
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float32
+
+ ### Framework versions
+
+ - PEFT 0.5.0
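
The config above corresponds to 4-bit NF4 quantization with a float32 compute dtype. For reference, a minimal sketch of reproducing it with `transformers` and `bitsandbytes` (assuming a `transformers` version that ships `BitsAndBytesConfig`; the base model id comes from `adapter_config.json` below):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes config recorded in the model card:
# 4-bit NF4, no double quantization, float32 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Load the base model named in adapter_config.json with this quantization.
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-14B",
    quantization_config=bnb_config,
    trust_remote_code=True,  # Qwen models ship custom modeling code
    device_map="auto",
)
```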
adapter_config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen-14B",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 64,
+   "revision": null,
+   "target_modules": [
+     "c_attn",
+     "w1",
+     "c_proj",
+     "w2"
+   ],
+   "task_type": "CAUSAL_LM"
+ }
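
This config records a LoRA adapter (r=64, alpha=16, dropout 0.1) over Qwen-14B's `c_attn`, `w1`, `c_proj`, and `w2` modules. A hedged sketch of loading it with `peft` 0.5.0; the Hub repo id of this adapter is not shown in the diff, so a placeholder is used:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

# Placeholder -- the actual Hub repo id is not stated in this commit.
adapter_repo = "your-username/your-adapter-repo"

# PeftConfig reads adapter_config.json from the repo.
config = PeftConfig.from_pretrained(adapter_repo)

base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,  # "Qwen/Qwen-14B"
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float32,
    ),
    trust_remote_code=True,
    device_map="auto",
)

# Attach the LoRA weights from adapter_model.bin on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_repo)
model.eval()
```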
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:830ccb481a08f664997c4ef0b9b335d1132455fad56323e250bdde0d2ecf0e9d
+ size 892742669
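
The weights file is stored as a Git LFS pointer: the actual blob is addressed by the SHA-256 `oid` and byte `size` above. A minimal sketch of verifying a downloaded `adapter_model.bin` against this pointer:

```python
import hashlib
import os

path = "adapter_model.bin"  # the file fetched via git-lfs or the Hub
expected_oid = "830ccb481a08f664997c4ef0b9b335d1132455fad56323e250bdde0d2ecf0e9d"
expected_size = 892742669

# Cheap check first: the byte size recorded in the pointer.
assert os.path.getsize(path) == expected_size, "size mismatch"

# Then hash the file in 1 MiB chunks and compare against the oid.
sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == expected_oid, "hash mismatch"
print("adapter_model.bin matches the LFS pointer")
```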