yefo-ufpe committed on
Commit
0d4b002
1 Parent(s): 4773b5f

first training information

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # output
 
-This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [SWAG](https://huggingface.co/datasets/allenai/swag) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.6749
 - Accuracy: 0.7503
@@ -29,11 +29,17 @@ More information needed
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to be used as an expert in the [Meteor-of-LoRA framework](https://github.com/ParagonLight/meteor-of-lora).
 
 ## Training and evaluation data
 
-More information needed
+The data were split using the default Hugging Face splits of the dataset:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("swag")
+```
 
 ## Training procedure
 
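
A note on the data format behind this change: SWAG is a multiple-choice task where each context has four candidate endings, and a BERT model with a multiple-choice head scores each (context, ending) pair. The sketch below, using the field names from the SWAG dataset schema on the Hub and a made-up example record, shows how one example is typically flattened into the four sequence pairs passed to the tokenizer (no model or dataset download involved):

```python
# Sketch: flattening one SWAG-style example into the four
# (first sequence, second sequence) pairs a multiple-choice
# BERT head scores. Field names follow the SWAG dataset schema;
# the example record itself is made up for illustration.
example = {
    "sent1": "The cook stirs the pot.",
    "sent2": "Then he",
    "ending0": "tastes the soup.",
    "ending1": "paints the wall.",
    "ending2": "drives away.",
    "ending3": "closes the book.",
    "label": 0,
}

def flatten_choices(ex):
    """Return the four sequence pairs for one SWAG example."""
    pairs = []
    for i in range(4):
        # First sequence: the context; second: the sentence start
        # completed by candidate ending i.
        pairs.append((ex["sent1"], ex["sent2"] + " " + ex["ending" + str(i)]))
    return pairs

pairs = flatten_choices(example)
# pairs[0] → ("The cook stirs the pot.", "Then he tastes the soup.")
```

Each of the four pairs is tokenized and scored; the highest-scoring choice is compared against `label` to produce the accuracy reported above.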