ZhiyuanChen committed on
Commit 34e5967
1 Parent(s): 3f4dbce

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,265 @@
---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUGA"
    output:
      - label: "G"
        score: 0.09372635930776596
      - label: "R"
        score: 0.08816102892160416
      - label: "A"
        score: 0.08292599022388458
      - label: "<eos>"
        score: 0.07841548323631287
      - label: "V"
        score: 0.073448047041893
---

# RNAErnie

Pre-trained model on non-coding RNA (ncRNA) using a multi-stage masked language modeling (MLM) objective.

## Statement

_Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning_ is published in [Nature Machine Intelligence](https://doi.org/10.1038/s42256-024-00836-4), which is a Closed Access / Author-Fee journal.

> Machine learning has been at the forefront of the movement for free and open access to research.
>
> We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of these journals as an outlet of record for the machine learning community would be a retrograde step.

The MultiMolecule team is committed to the principles of open access and open science.

We do NOT endorse the publication of manuscripts in Closed Access / Author-Fee journals and encourage the community to support Open Access journals and conferences.

Please consider signing the [Statement on Nature Machine Intelligence](https://openaccess.engineering.oregonstate.edu).

## Disclaimer

This is an UNOFFICIAL implementation of _RNAErnie: An RNA Language Model with Structure-enhanced Representations_ by Ning Wang, Jiang Bian, Haoyi Xiong, et al.

The OFFICIAL repository of RNAErnie is at [CatIIIIIIII/RNAErnie](https://github.com/CatIIIIIIII/RNAErnie).

> [!WARNING]
> The MultiMolecule team is unable to confirm that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
> This is because the proposed method is published in a Closed Access / Author-Fee journal.

**The team releasing RNAErnie did not write this model card, so this model card has been written by the MultiMolecule team.**

## Model Details

RNAErnie is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

Note that during the conversion process, additional tokens such as `[IND]` and ncRNA class symbols are removed.

### Model Specification

| Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
| ---------- | ----------- | --------- | ----------------- | ------------------ | --------- | -------- | -------------- |
| 12         | 768         | 12        | 3072              | 86.06              | 22.36     | 11.17    | 512            |

### Links

- **Code**: [multimolecule.rnaernie](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/rnaernie)
- **Weights**: [`multimolecule/rnaernie`](https://huggingface.co/multimolecule/rnaernie)
- **Data**: [RNAcentral](https://rnacentral.org)
- **Paper**: Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning
- **Developed by**: Ning Wang, Jiang Bian, Yuchen Li, Xuhong Li, Shahid Mumtaz, Linghe Kong, Haoyi Xiong
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ERNIE](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
- **Original Repository**: [https://github.com/CatIIIIIIII/RNAErnie](https://github.com/CatIIIIIIII/RNAErnie)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='multimolecule/rnaernie')
>>> unmasker("uagc<mask>uaucagacugauguuga")

[{'score': 0.09372635930776596,
  'token': 8,
  'token_str': 'G',
  'sequence': 'U A G C G U A U C A G A C U G A U G U U G A'},
 {'score': 0.08816102892160416,
  'token': 11,
  'token_str': 'R',
  'sequence': 'U A G C R U A U C A G A C U G A U G U U G A'},
 {'score': 0.08292599022388458,
  'token': 6,
  'token_str': 'A',
  'sequence': 'U A G C A U A U C A G A C U G A U G U U G A'},
 {'score': 0.07841548323631287,
  'token': 2,
  'token_str': '<eos>',
  'sequence': 'U A G C U A U C A G A C U G A U G U U G A'},
 {'score': 0.073448047041893,
  'token': 20,
  'token_str': 'V',
  'sequence': 'U A G C V U A U C A G A C U G A U G U U G A'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, RnaErnieModel


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieModel.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')

output = model(**input)
```

#### Sequence Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForSequencePrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.tensor([1])  # dummy sequence-level label for demonstration

output = model(**input, labels=label)
```

#### Nucleotide Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for nucleotide classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForNucleotidePrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForNucleotidePrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), ))  # dummy per-nucleotide labels for demonstration

output = model(**input, labels=label)
```

#### Contact Classification / Regression

**Note**: This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, RnaErnieForContactPrediction


tokenizer = RnaTokenizer.from_pretrained('multimolecule/rnaernie')
model = RnaErnieForContactPrediction.from_pretrained('multimolecule/rnaernie')

text = "UAGCUUAUCAGACUGAUGUUGA"
input = tokenizer(text, return_tensors='pt')
label = torch.randint(2, (len(text), len(text)))  # dummy pairwise contact labels for demonstration

output = model(**input, labels=label)
```

## Training Details

RNAErnie used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The RNAErnie model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral/).
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.

RNAErnie used a subset of RNAcentral for pre-training. The subset contains 23 million sequences.
RNAErnie preprocessed all tokens by replacing "T"s with "U"s.

Note that [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`, as in the sketch below.

### Training Procedure

#### Preprocessing

RNAErnie used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (see the sketch after this list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

#### Pre-training

RNAErnie uses a special three-stage training pipeline to pre-train the model, each stage using a different masking strategy (a sketch of the subsequence-level stage follows the list):

- **Base-level Masking**: The masking applies to individual nucleotides in the sequence.
- **Subsequence-level Masking**: The masking applies to subsequences of 4-8bp in the sequence.
- **Motif-level Masking**: The model is trained on motif datasets.

The model was trained on 4 NVIDIA V100 GPUs with 32GiB memory each, with the following hyperparameters (a schedule sketch follows the list):

- Batch size: 50
- Learning rate: 1e-4
- Weight decay: 0.01
- Optimizer: AdamW
- Steps: 2,580,000
- Learning rate warm-up: 129,000 steps
- Learning rate cool-down: 129,000 steps
- Minimum learning rate: 5e-5

## Citation

Citation information is not available for papers published in Closed Access / Author-Fee journals.

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the RNAErnie paper for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```

config.json ADDED
@@ -0,0 +1,50 @@
{
  "architectures": [
    "RnaErnieForPreTraining"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": null,
    "layer_norm_eps": 1e-12,
    "num_labels": null,
    "output_name": null,
    "problem_type": null,
    "transform": null,
    "transform_act": "gelu"
  },
  "hidden_act": "relu",
  "hidden_dropout": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "lm_head": {
    "act": null,
    "bias": true,
    "dropout": 0.0,
    "hidden_size": 768,
    "layer_norm_eps": 1e-12,
    "output_name": null,
    "transform": "nonlinear",
    "transform_act": "gelu"
  },
  "mask_token_id": 4,
  "max_position_embeddings": 513,
  "model_type": "rnaernie",
  "null_token_id": 5,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.44.0",
  "type_vocab_size": 2,
  "unk_token_id": 3,
  "use_cache": true,
  "vocab_size": 26
}

model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d812e74770a231ffc703701f612b76c2f07f12b1db6c261f601596dc4c33aa80
size 346641488

pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9337164c7d83bc9c95450c5df964d20531f0f12da3b449425f1fe3616597bfda
size 346684922

special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
{
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "cls_token": "<cls>",
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "sep_token": "<eos>",
  "unk_token": "<unk>"
}

tokenizer_config.json ADDED
@@ -0,0 +1,68 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<cls>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<eos>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<mask>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "<null>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<cls>",
  "codon": false,
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "model_max_length": 513,
  "nmers": 1,
  "pad_token": "<pad>",
  "replace_T_with_U": true,
  "sep_token": "<eos>",
  "tokenizer_class": "RnaTokenizer",
  "unk_token": "<unk>"
}

vocab.txt ADDED
@@ -0,0 +1,26 @@
<pad>
<cls>
<eos>
<unk>
<mask>
<null>
A
C
G
U
N
R
Y
S
W
K
M
B
D
H
V
.
X
*
-
I