Update README.md
README.md

base_model: meta-llama/Llama-2-70b-hf
---

### Finetuning Overview:

**Model Used:** meta-llama/Llama-2-70b-hf
**Dataset:** Databricks-dolly-15k

#### Dataset Insights:

The Databricks-dolly-15k dataset is a compilation of over 15,000 records contributed by a large number of Databricks professionals. It has been tailored to:

- Elevate the interactive capabilities of ChatGPT-like systems.
- Provide prompt/response pairs spanning eight distinct instruction categories: the seven categories from the InstructGPT paper plus an exploratory open-ended category.
- Ensure genuine and original content, largely offline-sourced (with exceptions for Wikipedia in particular categories) and free from generative-AI influence.

In an innovative approach, contributors could also rephrase and answer questions posed by their peers, reinforcing the focus on accuracy and clarity. Some data subsets additionally include Wikipedia-sourced reference texts marked with bracketed citation numbers such as [42]; for downstream applications it is recommended to remove these, as sketched below.
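
The bracketed citation markers can be stripped with a simple regular expression. A minimal sketch, assuming the Hugging Face `datasets` library and the public `databricks/databricks-dolly-15k` dataset, whose `context` column holds the Wikipedia-derived reference text:

```python
import re

from datasets import load_dataset

# Load the instruction-tuning records (instruction / context / response / category).
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

CITATION_RE = re.compile(r"\[\d+\]")  # matches bracketed markers such as [42]

def strip_citations(example):
    # Only the reference text ("context") carries the bracketed citation numbers.
    example["context"] = CITATION_RE.sub("", example["context"]).strip()
    return example

dataset = dataset.map(strip_citations)
```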

#### Finetuning Details:

Using [MonsterAPI](https://monsterapi.ai)'s user-friendly [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), the finetuning:

- Stands out for its cost-effectiveness.
- Was executed in a total of 17.5 hours for 3 epochs on an A100 80GB GPU.
- Broke down to roughly 5.8 hours and $19.25 per epoch, for a combined cost of $57.75 across all epochs (see the quick check below).
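
As a quick arithmetic check of the time and cost figures above (the per-epoch runtime is rounded in the bullet list):

```python
# Sanity-check the quoted finetuning runtime and cost.
total_hours = 17.5
epochs = 3
cost_per_epoch_usd = 19.25

print(total_hours / epochs)          # ~5.83 hours per epoch (quoted as 5.8)
print(epochs * cost_per_epoch_usd)   # 57.75 USD total
```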

#### Hyperparameters & Additional Details:

- **Epochs:** 3
- **Cost Per Epoch:** $19.25
- **Total Finetuning Cost:** $57.75
- **Model Path:** meta-llama/Llama-2-70b-hf
- **Learning Rate:** 0.0002
- **Data Split:** Training 90% / Validation 10%
- **Gradient Accumulation Steps:** 4
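
The card does not state which training stack MonsterAPI's finetuner uses internally, but the hyperparameters listed above map onto a Hugging Face `transformers` / `datasets` setup roughly as follows; the batch size and output directory are placeholders, not values from the card:

```python
from datasets import load_dataset
from transformers import TrainingArguments

# 90% / 10% train/validation split, as listed above.
splits = load_dataset("databricks/databricks-dolly-15k", split="train").train_test_split(test_size=0.1)

# Illustrative mapping of the listed hyperparameters; unlisted values are placeholders.
training_args = TrainingArguments(
    output_dir="llama-2-70b-dolly-15k",  # placeholder name
    num_train_epochs=3,                  # Epochs: 3
    learning_rate=2e-4,                  # Learning Rate: 0.0002
    gradient_accumulation_steps=4,       # Gradient Accumulation Steps: 4
    per_device_train_batch_size=1,       # not stated in the card
)
```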

---

### Prompt Structure:

```
### INSTRUCTION:
...
```
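
The template above opens with the `### INSTRUCTION:` marker. As a rough sketch of how a prompt in this style could be assembled, assuming the template closes with a `### RESPONSE:` marker (an assumption, since the full template is not reproduced above):

```python
# Hypothetical helper: builds a prompt in the "### INSTRUCTION:" style shown above.
# The optional context block and the "### RESPONSE:" marker are assumptions; match
# them to the exact template used during finetuning.
def build_prompt(instruction: str, context: str = "") -> str:
    prompt = f"### INSTRUCTION:\n{instruction}\n"
    if context:
        prompt += f"\n{context}\n"
    return prompt + "\n### RESPONSE:\n"

print(build_prompt("Summarize the Databricks-dolly-15k dataset in one sentence."))
```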

Loss metrics

Training loss (blue) and validation loss (orange):
![training loss](train-loss.png "Training loss")

---

license: apache-2.0