---
library_name: mamba-ssm
tags:
- safe
- mamba
- state-space-model
- molecular-generation
- smiles
- generated_from_trainer
datasets:
- sagawa/ZINC-canonicalized
model-index:
- name: SSM_100M
  results: []
---

# SSM_100M

SSM_100M is a state space model (SSM) built with the Mamba framework for molecular generation. **The model was trained using the code from [https://github.com/Anri-Lombard/Mamba-SAFE](https://github.com/Anri-Lombard/Mamba-SAFE).** It was trained from scratch on the [ZINC dataset](https://huggingface.co/datasets/sagawa/ZINC-canonicalized), converted from SMILES to the SAFE (Sequential Attachment-based Fragment Embedding) format. SSM_100M leverages the efficiency and scalability of state space models to match the performance of transformer-based models such as [SAFE_100M](https://huggingface.co/anrilombard/safe-100m) while using fewer computational resources.
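
This card does not pin down a loading recipe, so the following is a minimal generation sketch. It assumes the checkpoint loads through `mamba_ssm`'s `MambaLMHeadModel`, that a compatible tokenizer is bundled with the model repo, and that the repo id is `anrilombard/ssm-100m`; all three are assumptions, so check the [Mamba-SAFE repository](https://github.com/Anri-Lombard/Mamba-SAFE) for the exact procedure.

```python
# Hypothetical loading/generation sketch; the Mamba-SAFE repo is the
# authoritative reference. Requires a CUDA device for mamba_ssm kernels.
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

repo_id = "anrilombard/ssm-100m"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumes a bundled tokenizer
model = MambaLMHeadModel.from_pretrained(repo_id, device="cuda", dtype=torch.float16)

# Start from the BOS token and sample a SAFE string token by token.
input_ids = torch.tensor([[tokenizer.bos_token_id]], device="cuda")
out = model.generate(input_ids, max_length=128, temperature=1.0, top_k=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # SAFE-encoded molecule
```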

## Evaluation Results

SSM_100M performs similarly to the transformer-based SAFE_100M model in molecular generation, maintaining high validity and diversity of generated molecules. It achieves these results with lower computational overhead, making it a more resource-efficient option for large-scale applications.
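
No evaluation script accompanies this card, so as a rough illustration of how validity and uniqueness are typically measured, here is a sketch that decodes generated SAFE strings back to SMILES with the `safe` package and parses them with RDKit; `safe.decode` and its `canonical` flag are assumptions about that package's API.

```python
# Sketch of a validity/uniqueness check for generated SAFE strings;
# not the original evaluation code used for this model.
import safe
from rdkit import Chem

def evaluate(safe_strings):
    smiles = []
    for s in safe_strings:
        try:
            smiles.append(safe.decode(s, canonical=True))
        except Exception:
            smiles.append(None)  # undecodable generations count as invalid
    # A molecule is valid if RDKit can parse the decoded SMILES.
    valid = []
    for smi in smiles:
        if smi is not None and Chem.MolFromSmiles(smi) is not None:
            valid.append(smi)
    validity = len(valid) / max(len(safe_strings), 1)
    uniqueness = len(set(valid)) / max(len(valid), 1)
    return validity, uniqueness
```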

## Model Description

SSM_100M uses the Mamba framework's state space modeling to generate valid and diverse molecular structures efficiently. Because the ZINC training data were converted from SMILES to SAFE format, the model benefits from improved molecular encoding, which helps in areas such as:

- **Drug Discovery:** Identifying potential drug candidates with optimal properties.
- **Materials Science:** Designing novel materials with targeted characteristics.
- **Chemical Engineering:** Developing new chemical processes and compounds more efficiently.

### Mamba Framework

The Mamba framework underpins SSM_100M, offering a robust architecture for linear-time sequence modeling with selective state spaces. It was introduced in the following paper:

```bibtex
@article{gu2023mamba,
  title={Mamba: Linear-time sequence modeling with selective state spaces},
  author={Gu, Albert and Dao, Tri},
  journal={arXiv preprint arXiv:2312.00752},
  year={2023}
}
```

We thank the authors for their contributions to sequence modeling.

### SAFE Framework

SSM_100M employs the SAFE framework to enhance molecular representation using the Sequential Attachment-based Fragment Embedding (SAFE) format. The SAFE framework is detailed in the following publication:

```bibtex
@article{noutahi2024gotta,
  title={Gotta be SAFE: a new framework for molecular design},
  author={Noutahi, Emmanuel and Gabellini, Cristian and Craig, Michael and Lim, Jonathan SC and Tossou, Prudencio},
  journal={Digital Discovery},
  volume={3},
  number={4},
  pages={796--804},
  year={2024},
  publisher={Royal Society of Chemistry}
}
```

We appreciate the authors' invaluable work in molecular design.
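
For intuition about the format itself, the authors' reference `safe` package can round-trip molecules between SMILES and SAFE; here is a small sketch, assuming `safe.encode` and `safe.decode` from that package behave as documented:

```python
# Round-trip benzoic acid between SMILES and SAFE using the reference
# `safe` package (pip install safe-mol); illustrative only, not part of
# this model's code.
import safe

smiles = "c1ccccc1C(=O)O"                     # benzoic acid
safe_str = safe.encode(smiles)                # fragment-based SAFE string
print(safe_str)
print(safe.decode(safe_str, canonical=True))  # back to canonical SMILES
```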

## Intended Uses & Limitations

### Intended Uses

SSM_100M is suitable for:

- **Molecular Structure Generation:** Creating new molecules with specific properties.
- **Chemical Space Exploration:** Navigating the vast landscape of possible chemical compounds for research and development.
- **Material Design:** Assisting in the creation of new materials with desired functionalities.

### Limitations

Users should be aware of the following limitations:

- **Validation Required:** Outputs should be validated by domain experts before use.
- **Synthetic Feasibility:** Generated molecules may not always be synthesizable in the lab.
- **Dataset Boundaries:** The model is limited to the chemical space of the ZINC dataset, which may restrict its applicability to novel or rare compounds outside this space.

## Training and Evaluation Data

SSM_100M was trained on the [ZINC dataset](https://huggingface.co/datasets/sagawa/ZINC-canonicalized), a comprehensive collection of commercially available chemical compounds optimized for virtual screening. The dataset was converted from SMILES to SAFE format to improve molecular encoding for machine learning, enhancing the model's ability to generate meaningful and diverse molecular structures.
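
Here is a sketch of pulling the raw data and converting one record, assuming the Hugging Face `datasets` library and the `safe` encoder; the `smiles` column name is an assumption about the dataset schema:

```python
# Stream the canonicalized ZINC SMILES and SAFE-encode one record
# (column name "smiles" is assumed; inspect the record first).
from datasets import load_dataset
import safe

ds = load_dataset("sagawa/ZINC-canonicalized", split="train", streaming=True)
record = next(iter(ds))
print(record)                         # check the actual field names
print(safe.encode(record["smiles"]))  # SAFE string as used for training
```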

## Training Procedure

### Training Hyperparameters

SSM_100M was trained with the following hyperparameters (see the sketch after this list):

- **Learning Rate:** `0.0003`
- **Training Batch Size:** `64`
- **Evaluation Batch Size:** `64`
- **Random Seed:** `42`
- **Gradient Accumulation Steps:** `4`
- **Total Training Batch Size:** `256`
- **Optimizer:** Adam (`betas=(0.9, 0.98)`, `epsilon=1e-09`)
- **Learning Rate Scheduler:** Cosine with `50,000` warmup steps
- **Total Training Steps:** `300,000`
- **Model Parameters:** 100M
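
These settings map onto standard PyTorch and `transformers` utilities; the sketch below reconstructs the optimizer, schedule, and gradient accumulation with a stand-in model so it runs as written, and is not the repo's actual training loop:

```python
# Rebuild the optimization setup from the hyperparameters above, with a
# stand-in model and batch; the real loop lives in the Mamba-SAFE repo.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the 100M-parameter SSM

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.9, 0.98), eps=1e-9)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=50_000, num_training_steps=300_000)

accum_steps = 4  # per-device batch 64 x 4 accumulation = total batch 256
for step in range(1_000):       # 300,000 optimizer steps in the real run
    batch = torch.randn(64, 8)  # stand-in for a tokenized SAFE batch
    loss = model(batch).pow(2).mean() / accum_steps  # scale for accumulation
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        scheduler.step()        # cosine schedule advances per optimizer step
        optimizer.zero_grad()
```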

### Framework Versions

The training used the following framework versions:

- **Mamba:** `1.2.3`
- **PyTorch:** `2.0.1`
- **Datasets:** `2.20.0`
- **Tokenizers:** `0.19.1`

## Acknowledgements

We thank the authors and contributors of the following frameworks and datasets:

- **Mamba Framework:** For providing a solid foundation for state space modeling.
- **SAFE Framework:** For improving molecular representation with innovative encoding techniques.
- **ZINC Dataset Authors:** For curating a comprehensive dataset essential for training effective molecular generation models.

For more information and updates, visit the [Mamba-SAFE repository](https://github.com/Anri-Lombard/Mamba-SAFE).