Niraya666 committed
Commit 55e4c9f
1 Parent(s): 730c606

./wmc_v2_vit_base_wm811k_cls_contra_learning_0916

Files changed (3):
  1. README.md +25 -3
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -4,6 +4,11 @@ license: apache-2.0
 base_model: google/vit-base-patch16-224
 tags:
 - generated_from_trainer
+metrics:
+- accuracy
+- precision
+- recall
+- f1
 model-index:
 - name: wmc_v2_vit_base_wm811k_cls_contra_learning_0916
   results: []
@@ -15,6 +20,12 @@ should probably proofread and complete it, then remove this comment. -->
 # wmc_v2_vit_base_wm811k_cls_contra_learning_0916
 
 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1552
+- Accuracy: 0.9504
+- Precision: 0.9212
+- Recall: 0.9099
+- F1: 0.9116
 
 ## Model description
 
@@ -34,16 +45,27 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 1
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+| 1.2292        | 0.1697 | 100  | 0.5873          | 0.8030   | 0.6718    | 0.6428 | 0.6221 |
+| 0.7744        | 0.3394 | 200  | 0.3258          | 0.8962   | 0.8629    | 0.7596 | 0.7444 |
+| 0.6117        | 0.5091 | 300  | 0.1904          | 0.9458   | 0.9262    | 0.8588 | 0.8734 |
+| 0.4829        | 0.6788 | 400  | 0.1799          | 0.9451   | 0.9028    | 0.9129 | 0.9037 |
+| 0.4838        | 0.8485 | 500  | 0.1552          | 0.9504   | 0.9212    | 0.9099 | 0.9116 |
+
+
 ### Framework versions
 
 - Transformers 4.44.2
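The batch-size changes in the README hunk above are internally consistent: with gradient accumulation, each optimizer step consumes `train_batch_size * gradient_accumulation_steps` samples, so raising the per-device batch size from 8 to 32 (at 4 accumulation steps) raises the total from 32 to 128. A minimal sketch of that arithmetic (the function name and `num_devices` parameter are illustrative, not from the card):

```python
def total_train_batch_size(train_batch_size: int,
                           gradient_accumulation_steps: int,
                           num_devices: int = 1) -> int:
    """Effective samples consumed per optimizer step: gradients are
    accumulated over `gradient_accumulation_steps` micro-batches of
    `train_batch_size` on each of `num_devices` devices."""
    return train_batch_size * gradient_accumulation_steps * num_devices

# Values from the updated card: 32 * 4 = 128
print(total_train_batch_size(32, 4))   # -> 128
# Values from the previous card: 8 * 4 = 32
print(total_train_batch_size(8, 4))    # -> 32
```

This matches the `total_train_batch_size` field Transformers reports in auto-generated model cards for single-device training.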
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:49a294504ccbacdd7d8a502f81bef3848f9e9d0c0ba26aef71c02e4df7399b60
+oid sha256:2fcee979bf6cecc80fc0261934d3b061d8266b8cb124e0c31adc0281e350a832
 size 345598832
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d486f87a46dd24443a57c8ddc1f4a99e60a0cbd2642558540b7c919ad640da6
+oid sha256:34bef05525bdb03a3973ee8b6ff5d455f148036affe193799adef2571e679185
 size 5240
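The `model.safetensors` and `training_args.bin` diffs above are Git LFS pointer files: the repository stores only a `version`/`oid`/`size` record, and the commit changes the `oid` to the SHA-256 of the new binary. A minimal sketch of parsing such a pointer and checking local bytes against it (helper names are illustrative; this is not the Git LFS client's actual code):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def matches_pointer(data: bytes, pointer: dict) -> bool:
    """True if the bytes have the oid (sha256) and size recorded in the pointer."""
    oid = "sha256:" + hashlib.sha256(data).hexdigest()
    return oid == pointer["oid"] and len(data) == int(pointer["size"])

# Example with a known payload (sha256 of b"hello"):
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824\n"
    "size 5\n"
)
print(matches_pointer(b"hello", pointer))  # -> True
```

Note that both `size` fields are unchanged in this commit (345598832 and 5240 bytes); only the content hashes differ, as expected when weights are retrained without changing the architecture.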