wcvz committed on
Commit
7df6c86
1 Parent(s): 810fd96

Model save

Files changed (2)
  1. README.md +92 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ license: mit
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: facebook/esm2_t30_150M_UR50D
+ metrics:
+ - accuracy
+ model-index:
+ - name: esm2_t130_150M-lora-classifier_2024-04-25_21-48-08
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # esm2_t130_150M-lora-classifier_2024-04-25_21-48-08
+
+ This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5189
+ - Accuracy: 0.8809
+
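+ A minimal inference sketch, assuming the adapter is hosted as `wcvz/esm2_t130_150M-lora-classifier_2024-04-25_21-48-08` and that the classifier is binary (`num_labels=2`); both are assumptions not stated in the card, so adjust them to match the actual checkpoint.
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ # Load the ESM-2 base model with a sequence-classification head,
+ # then attach the fine-tuned LoRA adapter.
+ base_model = AutoModelForSequenceClassification.from_pretrained(
+     "facebook/esm2_t30_150M_UR50D",
+     num_labels=2,  # assumption: binary classifier; the card does not state the label count
+ )
+ model = PeftModel.from_pretrained(
+     base_model,
+     "wcvz/esm2_t130_150M-lora-classifier_2024-04-25_21-48-08",  # hypothetical repo id
+ )
+ model.eval()
+
+ tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t30_150M_UR50D")
+ sequence = "MKTLLLTLVVVTIVCLDLGYT"  # illustrative protein sequence
+ inputs = tokenizer(sequence, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ print(logits.softmax(dim=-1))
+ ```
+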
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
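+ The LoRA configuration itself is not recorded in this card. Below is a generic PEFT setup sketch in which `r`, `lora_alpha`, `lora_dropout`, and `target_modules` are illustrative placeholders rather than the values actually used.
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForSequenceClassification
+
+ base_model = AutoModelForSequenceClassification.from_pretrained(
+     "facebook/esm2_t30_150M_UR50D",
+     num_labels=2,  # assumption: binary classifier
+ )
+ # All LoRA values below are placeholders; the card does not record them.
+ lora_config = LoraConfig(
+     task_type="SEQ_CLS",
+     r=8,
+     lora_alpha=16,
+     lora_dropout=0.1,
+     target_modules=["query", "key", "value"],  # ESM-2 attention projection module names
+ )
+ model = get_peft_model(base_model, lora_config)
+ model.print_trainable_parameters()
+ ```
+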
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a reproduction sketch in code follows the list):
+ - learning_rate: 0.0005701568055793089
+ - train_batch_size: 12
+ - eval_batch_size: 12
+ - seed: 8893
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
+
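+ A sketch of `TrainingArguments` mirroring the list above. `output_dir` and the epoch-level evaluation/logging strategies are assumptions (the results table reports one evaluation per epoch); the Adam betas and epsilon listed above match the Transformers defaults, so they need no explicit arguments.
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="esm2_t130_150M-lora-classifier",  # hypothetical output path
+     learning_rate=0.0005701568055793089,
+     per_device_train_batch_size=12,
+     per_device_eval_batch_size=12,
+     seed=8893,
+     lr_scheduler_type="cosine",
+     num_train_epochs=30,
+     fp16=True,  # "Native AMP" mixed-precision training
+     evaluation_strategy="epoch",  # assumed: the table reports per-epoch eval
+     logging_strategy="epoch",
+ )
+ ```
+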
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.6192 | 1.0 | 128 | 0.6737 | 0.6055 |
+ | 0.4321 | 2.0 | 256 | 0.6507 | 0.6289 |
+ | 0.5710 | 3.0 | 384 | 0.5572 | 0.7188 |
+ | 0.3053 | 4.0 | 512 | 0.5090 | 0.7852 |
+ | 0.5055 | 5.0 | 640 | 0.3370 | 0.8516 |
+ | 0.2786 | 6.0 | 768 | 0.3710 | 0.8594 |
+ | 0.1327 | 7.0 | 896 | 0.3055 | 0.8711 |
+ | 0.2127 | 8.0 | 1024 | 0.2891 | 0.8945 |
+ | 0.0913 | 9.0 | 1152 | 0.3454 | 0.8691 |
+ | 0.0134 | 10.0 | 1280 | 0.3354 | 0.8809 |
+ | 0.2597 | 11.0 | 1408 | 0.3436 | 0.8848 |
+ | 0.0276 | 12.0 | 1536 | 0.4181 | 0.8633 |
+ | 0.0929 | 13.0 | 1664 | 0.3722 | 0.8789 |
+ | 0.9377 | 14.0 | 1792 | 0.5086 | 0.8730 |
+ | 0.2894 | 15.0 | 1920 | 0.3311 | 0.8906 |
+ | 0.3138 | 16.0 | 2048 | 0.4739 | 0.8809 |
+ | 0.0088 | 17.0 | 2176 | 0.3875 | 0.8867 |
+ | 0.3591 | 18.0 | 2304 | 0.4032 | 0.8809 |
+ | 0.0436 | 19.0 | 2432 | 0.4316 | 0.8887 |
+ | 0.0037 | 20.0 | 2560 | 0.4931 | 0.8789 |
+ | 0.0322 | 21.0 | 2688 | 0.4787 | 0.8809 |
+ | 0.0035 | 22.0 | 2816 | 0.4460 | 0.8770 |
+ | 0.0859 | 23.0 | 2944 | 0.4914 | 0.8828 |
+ | 0.0390 | 24.0 | 3072 | 0.4955 | 0.8770 |
+ | 0.4208 | 25.0 | 3200 | 0.5211 | 0.8828 |
+ | 0.1874 | 26.0 | 3328 | 0.5376 | 0.8711 |
+ | 0.4433 | 27.0 | 3456 | 0.5319 | 0.8750 |
+ | 0.2976 | 28.0 | 3584 | 0.5201 | 0.8809 |
+ | 0.0223 | 29.0 | 3712 | 0.5179 | 0.8809 |
+ | 0.0021 | 30.0 | 3840 | 0.5189 | 0.8809 |
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.39.3
+ - Pytorch 2.2.1
+ - Datasets 2.16.1
+ - Tokenizers 0.15.2
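+
+ Reproduction is most reliable with the environment pinned to the versions above; a quick sanity check, with the expected values taken from this list:
+
+ ```python
+ import datasets, peft, tokenizers, torch, transformers
+
+ # Compare the installed versions against those recorded in the card.
+ expected = {
+     "peft": "0.10.0",
+     "transformers": "4.39.3",
+     "torch": "2.2.1",
+     "datasets": "2.16.1",
+     "tokenizers": "0.15.2",
+ }
+ installed = {
+     "peft": peft.__version__,
+     "transformers": transformers.__version__,
+     "torch": torch.__version__,
+     "datasets": datasets.__version__,
+     "tokenizers": tokenizers.__version__,
+ }
+ for name, want in expected.items():
+     print(name, installed[name], "(expected", want + ")")
+ ```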
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5ef97b905a1a9445f7b1171aef20b735f511723e3f1bdf3fc4fe3eb10b5091a2
+ oid sha256:ad77b3c10e5da2363a486e126e8b06796918766368f41d0f146a688406ceab1c
  size 3053968