buruzaemon committed
Commit 21f7539 · 1 Parent(s): 9c09cfd

update model card README.md

Files changed (1)
1. README.md +3 -13
README.md CHANGED
@@ -34,10 +34,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-This is a subsequent example of knowledge distillation that used [`transformers.Trainer.hyperparameter_search`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Trainer.hyperparameter_search) with the default Optuna backend to find optimal values for the following hyperparameters:
-- `num_train_epochs`
-- `alpha`
-- `temperature`
+More information needed
 
 ## Intended uses & limitations
 
@@ -45,27 +42,20 @@ More information needed
 
 ## Training and evaluation data
 
-The training and evaluation data come straight from the `train` and `validation` splits of the clinc_oos dataset, respectively, tokenized with the `distilbert-base-uncased` tokenizer.
+More information needed
 
 ## Training procedure
 
-Hyperparameter search was done via the default Optuna backend, leading to the values below.
-
-Please see page 228 in Chapter 8, "Making Transformers Efficient in Production", of *Natural Language Processing with Transformers* (May 2022).
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- num_epochs: 10
-- alpha: 0.5858821400787321
-- temperature: 4.917005721212045
 - learning_rate: 2e-05
 - train_batch_size: 48
 - eval_batch_size: 48
 - seed: 8675309
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-
+- num_epochs: 10
 
 ### Training results
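For context on the description this commit removes: a minimal sketch, assuming the `DistillationTrainer`/`DistillationTrainingArguments` pattern from Chapter 8 of *Natural Language Processing with Transformers*, of how `transformers.Trainer.hyperparameter_search` with the Optuna backend can tune `num_train_epochs`, `alpha`, and `temperature`. The teacher checkpoint, search ranges, trial count, and `output_dir` below are illustrative assumptions, not values recorded in this repository.

```python
import numpy as np
import torch
import torch.nn.functional as F
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

class DistillationTrainingArguments(TrainingArguments):
    """TrainingArguments plus the two distillation knobs under search."""
    def __init__(self, *args, alpha=0.5, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = alpha              # weight of the hard-label CE loss
        self.temperature = temperature  # softening of teacher/student logits

class DistillationTrainer(Trainer):
    """Trainer whose loss blends cross-entropy with teacher-student KL."""
    def __init__(self, *args, teacher_model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher_model = teacher_model.to(self.args.device).eval()

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs_stu = model(**inputs)
        loss_ce = outputs_stu.loss
        with torch.no_grad():
            logits_tea = self.teacher_model(**inputs).logits
        T = self.args.temperature
        # KL divergence between temperature-softened distributions,
        # scaled by T^2 to keep gradient magnitudes comparable.
        loss_kd = T**2 * F.kl_div(
            F.log_softmax(outputs_stu.logits / T, dim=-1),
            F.softmax(logits_tea / T, dim=-1),
            reduction="batchmean",
        )
        loss = self.args.alpha * loss_ce + (1.0 - self.args.alpha) * loss_kd
        return (loss, outputs_stu) if return_outputs else loss

# Tokenize the clinc_oos train/validation splits with the student's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
clinc = load_dataset("clinc_oos", "plus")  # "plus" config: 151 intent classes
clinc_enc = clinc.map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
).rename_column("intent", "labels")

def compute_metrics(pred):
    preds = np.argmax(pred.predictions, axis=-1)
    return {"accuracy": (preds == pred.label_ids).mean()}

def student_init():
    # hyperparameter_search re-instantiates the student for every trial.
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=151
    )

def hp_space(trial):
    # Optuna trial: sample the three hyperparameters the old card listed.
    return {
        "num_train_epochs": trial.suggest_int("num_train_epochs", 5, 10),
        "alpha": trial.suggest_float("alpha", 0.0, 1.0),
        "temperature": trial.suggest_float("temperature", 2.0, 20.0),
    }

teacher = AutoModelForSequenceClassification.from_pretrained(
    "transformersbook/bert-base-uncased-finetuned-clinc"  # assumed teacher
)
trainer = DistillationTrainer(
    model_init=student_init,
    teacher_model=teacher,
    args=DistillationTrainingArguments(
        output_dir="hp-search", evaluation_strategy="epoch"
    ),
    train_dataset=clinc_enc["train"],
    eval_dataset=clinc_enc["validation"],
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
)
best_run = trainer.hyperparameter_search(
    n_trials=20, direction="maximize", hp_space=hp_space
)
print(best_run.hyperparameters)
```

`best_run.hyperparameters` is where values like the removed `num_epochs: 10`, `alpha: 0.5858821400787321`, and `temperature: 4.917005721212045` would have come from; during the search the `Trainer` applies each sampled value to its `args` by attribute name, which is why the custom `alpha`/`temperature` fields participate.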
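The hyperparameter list that survives in the card maps one-to-one onto `transformers.TrainingArguments`; here is that mapping as a sketch, with an illustrative `output_dir` (the Adam betas/epsilon and the linear scheduler are also the library defaults, spelled out only for completeness):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=8675309,
    adam_beta1=0.9,     # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # ...and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```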