itsLeen committed on
Commit 47212cb · verified · 1 Parent(s): 256c84f

itsLeen/vit-large-ai-or-not

Files changed (2):
  1. README.md +13 -52
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,11 +1,9 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: google/vit-large-patch16-224
+base_model: microsoft/swin-base-patch4-window7-224
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
 model-index:
 - name: vit-large-ai-or-not
   results: []
@@ -16,10 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-large-ai-or-not
 
-This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on an unknown dataset.
+This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1282
-- Accuracy: 0.9654
+- eval_loss: 0.1679
+- eval_accuracy: 0.9801
+- eval_runtime: 50.6302
+- eval_samples_per_second: 73.553
+- eval_steps_per_second: 9.204
+- epoch: 7.9484
+- step: 7400
 
 ## Model description
 
@@ -42,57 +45,15 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 4
+- num_epochs: 30
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:------:|:----:|:---------------:|:--------:|
-| 0.1089 | 0.1074 | 200 | 0.1756 | 0.9498 |
-| 0.041 | 0.2148 | 400 | 0.2151 | 0.9503 |
-| 0.0566 | 0.3222 | 600 | 0.2125 | 0.9511 |
-| 0.1028 | 0.4296 | 800 | 0.2084 | 0.9450 |
-| 0.1722 | 0.5371 | 1000 | 0.1658 | 0.9557 |
-| 0.1486 | 0.6445 | 1200 | 0.1312 | 0.9595 |
-| 0.1446 | 0.7519 | 1400 | 0.1634 | 0.9565 |
-| 0.1281 | 0.8593 | 1600 | 0.1282 | 0.9654 |
-| 0.1584 | 0.9667 | 1800 | 0.1295 | 0.9667 |
-| 0.0549 | 1.0741 | 2000 | 0.1613 | 0.9670 |
-| 0.0373 | 1.1815 | 2200 | 0.1344 | 0.9723 |
-| 0.0293 | 1.2889 | 2400 | 0.1584 | 0.9699 |
-| 0.0251 | 1.3963 | 2600 | 0.1704 | 0.9656 |
-| 0.0249 | 1.5038 | 2800 | 0.1586 | 0.9699 |
-| 0.0383 | 1.6112 | 3000 | 0.1467 | 0.9715 |
-| 0.0213 | 1.7186 | 3200 | 0.1546 | 0.9734 |
-| 0.0544 | 1.8260 | 3400 | 0.1671 | 0.9686 |
-| 0.0401 | 1.9334 | 3600 | 0.1870 | 0.9656 |
-| 0.0288 | 2.0408 | 3800 | 0.1981 | 0.9600 |
-| 0.0078 | 2.1482 | 4000 | 0.1422 | 0.9748 |
-| 0.0037 | 2.2556 | 4200 | 0.1775 | 0.9705 |
-| 0.0035 | 2.3631 | 4400 | 0.1845 | 0.9705 |
-| 0.0043 | 2.4705 | 4600 | 0.2001 | 0.9710 |
-| 0.0049 | 2.5779 | 4800 | 0.2145 | 0.9689 |
-| 0.01 | 2.6853 | 5000 | 0.1445 | 0.9750 |
-| 0.0039 | 2.7927 | 5200 | 0.1509 | 0.9748 |
-| 0.0055 | 2.9001 | 5400 | 0.1674 | 0.9748 |
-| 0.0094 | 3.0075 | 5600 | 0.1569 | 0.9748 |
-| 0.0018 | 3.1149 | 5800 | 0.1580 | 0.9753 |
-| 0.0 | 3.2223 | 6000 | 0.1698 | 0.9761 |
-| 0.0003 | 3.3298 | 6200 | 0.1606 | 0.9761 |
-| 0.0034 | 3.4372 | 6400 | 0.1870 | 0.9737 |
-| 0.0 | 3.5446 | 6600 | 0.1697 | 0.9756 |
-| 0.0 | 3.6520 | 6800 | 0.1673 | 0.9750 |
-| 0.0053 | 3.7594 | 7000 | 0.1644 | 0.9753 |
-| 0.0 | 3.8668 | 7200 | 0.1676 | 0.9753 |
-| 0.0013 | 3.9742 | 7400 | 0.1641 | 0.9761 |
-
-
 ### Framework versions
 
 - Transformers 4.44.2
-- Pytorch 2.4.1+cu121
-- Datasets 3.0.2
+- Pytorch 2.5.0+cu121
 - Tokenizers 0.19.1
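For anyone picking up this revision, a minimal inference sketch follows; the repo id comes from this commit, while the image path is a placeholder and the label names are read from the checkpoint's own config rather than hard-coded here:

```python
# Minimal inference sketch for this checkpoint (itsLeen/vit-large-ai-or-not).
# "photo.jpg" is a placeholder path; labels come from the checkpoint's
# id2label mapping, not from this sketch.
from transformers import pipeline

classifier = pipeline("image-classification", model="itsLeen/vit-large-ai-or-not")
for pred in classifier("photo.jpg"):  # accepts a path, URL, or PIL.Image
    print(f"{pred['label']}: {pred['score']:.4f}")
```

Note that after this commit the checkpoint is a Swin fine-tune despite the `vit-large` repo name; `pipeline` resolves the actual architecture from the checkpoint's config, so the same call works for both revisions.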
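The hyperparameter list in the updated card maps onto `transformers.TrainingArguments` roughly as below. This is a sketch under stated assumptions: the card's `learning_rate` entry falls outside the diff context shown above, so it is not set here, and Native AMP is expressed as `fp16=True`:

```python
# Rough TrainingArguments equivalent of the hyperparameters listed in the card.
# learning_rate is outside the shown diff context, so it is deliberately omitted;
# the Adam betas=(0.9,0.999) and epsilon=1e-08 in the card are Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-large-ai-or-not",
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=2,  # total_train_batch_size: 8 * 2 = 16
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                      # mixed_precision_training: Native AMP
)
```

The reported `epoch: 7.9484` at `step: 7400` indicates the evaluation snapshot was taken well before the configured 30 epochs completed.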
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b25225767a7d9a1e0df5081650afac862e3b632f60325816f24c44646d850667
+oid sha256:a503ffd3fcc58dd4d65a07ba41b8cab1ebec8e20d847afe0406137eea75995e9
 size 347498816
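The model.safetensors entry is a Git LFS pointer, so the commit changes only the object hash; the payload size is unchanged. A small sketch, assuming model.safetensors has been downloaded locally, to check the file against the new pointer:

```python
# Check a locally downloaded model.safetensors against the LFS pointer above.
import hashlib
import os

EXPECTED_OID = "a503ffd3fcc58dd4d65a07ba41b8cab1ebec8e20d847afe0406137eea75995e9"
EXPECTED_SIZE = 347498816  # bytes, from the pointer file

path = "model.safetensors"  # assumed local download location
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```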