sagawa committed
Commit 3934e14 · 1 Parent(s): 20109c8

Update README.md

Files changed (1)
  1. README.md +67 -3
README.md CHANGED
@@ -1,7 +1,71 @@
 ---
- license: apache-2.0
 ---

 # ZINC-t5

- We trained T5 on SMILES from ZINC using the task of masked-language modeling (MLM), and its tokenizer is also trained on ZINC data. This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
- As an example, We finetuned this model to predict products. Model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).

 ---
+ license: mit
+ datasets:
+ - sagawa/ZINC-canonicalized
+ metrics:
+ - accuracy
+ model-index:
+ - name: ZINC-t5
+   results:
+   - task:
+       name: Masked Language Modeling
+       type: fill-mask
+     dataset:
+       name: sagawa/ZINC-canonicalized
+       type: sagawa/ZINC-canonicalized
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.9497212171554565
 ---
+
 # ZINC-t5

+ This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1202
+ - Accuracy: 0.9497
+
+ ## Model description
+
+ We trained T5 on SMILES from ZINC using the masked-language modeling (MLM) objective. Its tokenizer was also trained on ZINC data.
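A minimal fill-mask sketch, assuming the repository id is `sagawa/ZINC-t5` and that the tokenizer exposes T5's `<extra_id_*>` sentinel tokens (both are assumptions, not stated in the card):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "sagawa/ZINC-t5"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Mask one fragment of a SMILES string (aspirin) with a T5 sentinel token.
masked_smiles = "CC(=O)Oc1ccccc1<extra_id_0>(=O)O"
inputs = tokenizer(masked_smiles, return_tensors="pt")

# The model generates the content of the masked span after the sentinel token.
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```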
+
+ ## Intended uses & limitations
+
+ This model can be used to predict molecular properties, reactions, or interactions with proteins, depending on how it is fine-tuned.
+ As an example, we fine-tuned this model to predict reaction products. The fine-tuned model is available [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can try the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).
+ Using its encoder, we also trained a regression model to predict reaction yields; a demo is available [here](https://huggingface.co/spaces/sagawa/predictyield-t5).
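One way to reuse the encoder for such a downstream regression task is sketched below; the repository id `sagawa/ZINC-t5`, the mean pooling, and the single linear head are illustrative assumptions, not the exact setup behind the yield-prediction demo:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

model_name = "sagawa/ZINC-t5"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = T5EncoderModel.from_pretrained(model_name)

# Illustrative regression head; in practice it would be trained on labeled data
# (e.g., reaction yields) on top of the encoder.
head = torch.nn.Linear(encoder.config.d_model, 1)

inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (batch, seq_len, d_model)
pooled = hidden.mean(dim=1)                       # mean-pool over tokens
prediction = head(pooled)                         # shape: (batch, 1)
print(prediction.shape)
```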
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a configuration sketch follows the list):
+ - learning_rate: 5e-03
+ - train_batch_size: 30
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 30.0
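The sketch below restates these hyperparameters as Hugging Face `TrainingArguments`; it is an approximation for reproduction purposes, not the original training script:

```python
from transformers import TrainingArguments

# Approximate restatement of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./zinc-t5-mlm",  # placeholder output path
    learning_rate=5e-3,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30.0,
)
```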
+
+ ### Training results
+
+ | Training Loss | Step   | Accuracy | Validation Loss |
+ |:-------------:|:------:|:--------:|:---------------:|
+ | 0.2226        | 25000  | 0.9843   | 0.2226          |
+ | 0.1783        | 50000  | 0.9314   | 0.1783          |
+ | 0.1619        | 75000  | 0.9371   | 0.1619          |
+ | 0.1520        | 100000 | 0.9401   | 0.1520          |
+ | 0.1449        | 125000 | 0.9422   | 0.1449          |
+ | 0.1404        | 150000 | 0.9436   | 0.1404          |
+ | 0.1368        | 175000 | 0.9447   | 0.1368          |
+ | 0.1322        | 200000 | 0.9459   | 0.1322          |
+ | 0.1299        | 225000 | 0.9466   | 0.1299          |
+ | 0.1268        | 250000 | 0.9473   | 0.1268          |
+ | 0.1244        | 275000 | 0.9483   | 0.1244          |
+ | 0.1216        | 300000 | 0.9491   | 0.1216          |
+ | 0.1204        | 325000 | 0.9497   | 0.1204          |