Encore02 committed
Commit 3745fb1
1 Parent(s): 49900a0

Model save
README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_trainer
+ datasets:
+ - imagefolder
+ metrics:
+ - accuracy
+ model-index:
+ - name: vit-weldclassifyv4
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: imagefolder
+       type: imagefolder
+       config: default
+       split: train
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8291814946619217
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # vit-weldclassifyv4
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0430
+ - Accuracy: 0.8292
+
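+ A minimal inference sketch with the 🤗 `pipeline` API. The repo id `Encore02/vit-weldclassifyv4` is assumed from the committer and model name; substitute the actual checkpoint path if it differs.
+
+ ```python
+ from transformers import pipeline
+ from PIL import Image
+
+ # Load the fine-tuned ViT weld classifier (repo id assumed, not confirmed by this card).
+ classifier = pipeline("image-classification", model="Encore02/vit-weldclassifyv4")
+
+ # Classify a local weld image; a PIL image or a path string both work.
+ image = Image.open("weld_sample.jpg").convert("RGB")
+ print(classifier(image))  # e.g. [{'label': ..., 'score': ...}, ...]
+ ```
+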
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
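+ The data itself is not documented here; the YAML above only records the generic `imagefolder` builder. A minimal loading sketch under that assumption (`data_dir` and the class-per-subfolder layout are hypothetical):
+
+ ```python
+ from datasets import load_dataset
+
+ # imagefolder infers labels from subdirectory names, e.g.
+ # weld_images/good/*.jpg, weld_images/defect/*.jpg
+ ds = load_dataset("imagefolder", data_dir="weld_images")
+ print(ds["train"].features)  # an 'image' column plus a 'label' ClassLabel
+ ```
+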
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a reproduction sketch follows the list):
+ - learning_rate: 0.0002
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 13
+ - mixed_precision_training: Native AMP
+
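+ A minimal `TrainingArguments` sketch matching the list above (assuming Transformers 4.44.2, per the framework versions below). `output_dir` and the 100-step evaluation cadence, inferred from the results table, are assumptions rather than recorded values; Adam betas and epsilon are the library defaults.
+
+ ```python
+ from transformers import TrainingArguments, Trainer
+
+ args = TrainingArguments(
+     output_dir="vit-weldclassifyv4",  # assumed
+     learning_rate=2e-4,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=8,
+     seed=42,
+     lr_scheduler_type="linear",
+     num_train_epochs=13,
+     fp16=True,                        # Native AMP mixed precision
+     eval_strategy="steps",            # eval every 100 steps, matching the table below
+     eval_steps=100,
+     logging_steps=100,
+ )
+
+ # trainer = Trainer(model=model, args=args, train_dataset=train_ds,
+ #                   eval_dataset=eval_ds, compute_metrics=compute_metrics)
+ # trainer.train()
+ ```
+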
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|
+ | 0.9281 | 0.6329 | 100 | 0.9793 | 0.5907 |
+ | 0.6894 | 1.2658 | 200 | 0.7117 | 0.6868 |
+ | 0.6074 | 1.8987 | 300 | 0.7031 | 0.6940 |
+ | 0.5389 | 2.5316 | 400 | 0.6998 | 0.7331 |
+ | 0.2922 | 3.1646 | 500 | 0.6140 | 0.7794 |
+ | 0.2661 | 3.7975 | 600 | 0.8140 | 0.7117 |
+ | 0.1547 | 4.4304 | 700 | 0.8582 | 0.7189 |
+ | 0.1047 | 5.0633 | 800 | 0.7366 | 0.8007 |
+ | 0.0672 | 5.6962 | 900 | 1.0770 | 0.7367 |
+ | 0.0316 | 6.3291 | 1000 | 0.7481 | 0.8078 |
+ | 0.0367 | 6.9620 | 1100 | 0.8766 | 0.7972 |
+ | 0.0185 | 7.5949 | 1200 | 0.9476 | 0.8078 |
+ | 0.0254 | 8.2278 | 1300 | 1.0394 | 0.7936 |
+ | 0.0035 | 8.8608 | 1400 | 0.9604 | 0.8256 |
+ | 0.0028 | 9.4937 | 1500 | 1.0136 | 0.8149 |
+ | 0.0026 | 10.1266 | 1600 | 1.0094 | 0.8221 |
+ | 0.0024 | 10.7595 | 1700 | 1.0215 | 0.8292 |
+ | 0.0024 | 11.3924 | 1800 | 1.0316 | 0.8292 |
+ | 0.002 | 12.0253 | 1900 | 1.0391 | 0.8292 |
+ | 0.0021 | 12.6582 | 2000 | 1.0430 | 0.8292 |
+
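+ Accuracy here is top-1 classification accuracy. A minimal `compute_metrics` hook that would produce it (the exact function is not recorded in this card, so treat this as an assumed sketch):
+
+ ```python
+ import numpy as np
+
+ def compute_metrics(eval_pred):
+     # Trainer passes (logits, labels); top-1 accuracy is the fraction of argmax hits.
+     logits, labels = eval_pred
+     predictions = np.argmax(logits, axis=-1)
+     return {"accuracy": (predictions == labels).mean()}
+ ```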
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.5.0+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.19.1
data/events.out.tfevents.1730608485.07f6fc948a6b.436.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2c99694322bbf7e2e3221448f64864fb72caebebb2d16949160fdb857f2ba1ff
- size 54567
+ oid sha256:ef752c9ab416a6cb5993f004d4f1a353a244c4edafa2cf969129b3db9ed9caa2
+ size 54921
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:81ea2291bebba6839136346d20049775a4145831c8afa0285cd36b3ac83482f6
+ oid sha256:c8a0fd1490cef2ae1f19475c5d9c2df7ce4b74580158f2e52af823fe7c422eac
  size 343230128