honzatoegel committed
Commit 122d2d5 · Parent(s): b10bf80
Update README.md

README.md CHANGED
@@ -2,6 +2,8 @@
 library_name: peft
 ---
 ## Examples
+As you can see from the examples below, the output is far from ideal, and far from what a plain GPT/LLama2 prompt achieves without finetuning.
+The low quality is probably caused by the very low volume of training data (100 rows), which offers few combinations and therefore generalizes poorly.
 
 ### Example 1
 #### Input
@@ -38,7 +40,8 @@ Repairs:
 - The verb "gemachen" is in the wrong form, it should be "gemacht" - this is the third person singular of the verb "machen" in the past tense.#### End AI %}
 Repairs:
 ## Training procedure
-
+Trained on: 1x RTX A6000, 30 GB RAM, 130 GB disk
+8 epochs, approx. 25 minutes, loss: 0.36
 
 The following `bitsandbytes` quantization config was used during training:
 - load_in_8bit: False
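
For readers who want to try the adapter described in this README, below is a minimal sketch of how a `peft` adapter is typically loaded for inference. The base model id, the adapter repo id, and fp16 inference are assumptions for illustration only; the diff above confirms only that the `peft` library and a `bitsandbytes` quantization config (`load_in_8bit: False`) were used.

```python
# Minimal sketch: load a PEFT adapter on top of a base causal LM.
# BASE_MODEL and ADAPTER below are hypothetical placeholders, not
# values confirmed by this README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"    # assumed base model
ADAPTER = "honzatoegel/your-adapter-repo"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the trained adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

# Example prompt in the spirit of the grammar-repair examples above.
prompt = "Correct the German sentence and list the repairs: 'Ich habe das gemachen.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```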