robingeibel committed
Commit 19eadd0
1 Parent(s): 6cbdfe8
Files changed (3):
  1. README.md +11 -20
  2. config.json +3 -3
  3. tf_model.h5 +3 -0
README.md CHANGED
@@ -1,20 +1,19 @@
  ---
- license: apache-2.0
  tags:
- - generated_from_trainer
- datasets:
- - big_patent
+ - generated_from_keras_callback
  model-index:
  - name: led-base-16384-finetuned-big_patent
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->

  # led-base-16384-finetuned-big_patent

- This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the big_patent dataset.
+ This model was trained from scratch on an unknown dataset.
+ It achieves the following results on the evaluation set:
+


  ## Model description
@@ -33,16 +32,8 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 4
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 1
- - mixed_precision_training: Native AMP
+ - optimizer: None
+ - training_precision: float32

  ### Training results

@@ -50,7 +41,7 @@ The following hyperparameters were used during training:

  ### Framework versions

- - Transformers 4.19.4
- - Pytorch 1.11.0+cu113
- - Datasets 2.2.2
+ - Transformers 4.20.1
+ - TensorFlow 2.8.2
+ - Datasets 2.3.2
  - Tokenizers 0.12.1
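The hyperparameters removed from the old PyTorch card correspond to a standard `Trainer` run. As a reference point, here is a minimal sketch of that configuration using `Seq2SeqTrainingArguments`; the Seq2Seq variant and the `output_dir` name are assumptions for illustration, and the Adam betas/epsilon listed in the card are the library defaults, so they need no explicit arguments:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training setup described by the removed model card.
# All values below are taken verbatim from the old hyperparameter list.
training_args = Seq2SeqTrainingArguments(
    output_dir="led-base-16384-finetuned-big_patent",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```

The new Keras card records only `optimizer: None` and `training_precision: float32`, which suggests the TF model was pushed without a compiled optimizer rather than re-trained.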
config.json CHANGED
@@ -1,9 +1,9 @@
  {
- "_name_or_path": "allenai/led-base-16384",
+ "_name_or_path": "robingeibel/led-base-16384-finetuned-big_patent",
  "activation_dropout": 0.0,
  "activation_function": "gelu",
  "architectures": [
- "LEDForConditionalGeneration"
+ "LEDModel"
  ],
  "attention_dropout": 0.0,
  "attention_window": [
@@ -54,7 +54,7 @@
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "torch_dtype": "float32",
- "transformers_version": "4.19.4",
+ "transformers_version": "4.20.1",
  "use_cache": true,
  "vocab_size": 50265
  }
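The `architectures` change from `LEDForConditionalGeneration` to `LEDModel` declares the checkpoint as the bare encoder-decoder without the language-modeling head. A minimal sketch of what that means at load time; the repo id comes from this diff, and the rest is standard `from_pretrained` behavior, not anything this commit configures:

```python
from transformers import LEDForConditionalGeneration, LEDModel

repo_id = "robingeibel/led-base-16384-finetuned-big_patent"

# Matches the new "architectures" entry: encoder-decoder only, no LM head.
base = LEDModel.from_pretrained(repo_id)

# The generation class can still be loaded on top of this checkpoint;
# from_pretrained initializes any weights missing from the checkpoint
# and logs a warning recommending fine-tuning before use.
summarizer = LEDForConditionalGeneration.from_pretrained(repo_id)
```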
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48158bab6bd496f1864a9334373f6ad6d5a23eb7ac45ac686c3e7707991702ed
+ size 647701880
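The three added lines are a Git LFS pointer, not the weights themselves: the LFS spec URL, the SHA-256 object id, and the byte size (~648 MB) of the actual tf_model.h5 stored in LFS. When loading from the Hub, `from_pretrained` fetches the real file transparently; a minimal sketch using the TF class that matches the new config:

```python
from transformers import TFLEDModel

# Resolves the LFS pointer on the Hub and downloads the ~648 MB HDF5
# weights that this commit adds.
model = TFLEDModel.from_pretrained("robingeibel/led-base-16384-finetuned-big_patent")
```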