---
base_model: decapod-research/Antares-11b-v1
license: cc-by-nc-4.0
datasets:
- jondurbin/bagel-v0.3
---

A fine-tune of Upstage AI's SOLAR-10.7B-Instruct-v1.0 model, trained on the OpenHermes, Platypus, and Capybara datasets, then further fine-tuned on Jon Durbin's Bagel v0.3 plus a few unreleased datasets.

Fine-tuned on 8x NVIDIA RTX 4090 GPUs for 1.25 epochs.

### Model Sources

- **Repository:** TBD
- **Demo:** TBD

## Bias, Risks, and Limitations

This fine-tune has had zero alignment, safety data, or anything else shoved down its throat.

## Training Details

### Training Data

See the sidebar for links to the relevant datasets.

### Training Procedure

Trained using QLoRA via the Axolotl tool.
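
Axolotl drives a QLoRA run from a YAML config rather than hand-written code; purely for orientation, a hand-rolled equivalent of the same setup using `peft` and `transformers` might look like the sketch below. The base model id is taken from this card's frontmatter; the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative placeholders, not the values actually used for this model.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Base model id taken from this card's frontmatter. In QLoRA the base weights
# are loaded quantized and frozen; the exact bitsandbytes settings are listed
# under "Training procedure" below.
model = AutoModelForCausalLM.from_pretrained(
    "decapod-research/Antares-11b-v1",
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter config. These hyperparameters are illustrative placeholders,
# not the values used in the actual run.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```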

## Evaluation

TBD

## Training procedure

The following `bitsandbytes` quantization config was used during training (a `transformers` equivalent is sketched after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
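
For reference, the flags above map one-to-one onto a `transformers` `BitsAndBytesConfig`; a minimal sketch, with every value copied from the list:

```python
import torch
from transformers import BitsAndBytesConfig

# Direct translation of the flags listed above: 4-bit NF4 quantization with
# nested (double) quantization and bfloat16 compute. The llm_int8_* flags sit
# at their defaults since 8-bit loading is disabled.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Passing `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` reproduces the quantized loading used during training.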

### Framework versions

- PEFT 0.6.0