SagiPolaczek committed on
Commit 5c71465
1 Parent(s): f043a7e

Update README.md

Files changed (1)
  1. README.md +86 -4
README.md CHANGED
@@ -1,9 +1,91 @@
  ---
  tags:
- - model_hub_mixin
+ - biology
+ - ibm
+ - mammal
+ - pytorch
+ - transformers
+ library_name: mammal
  license: apache-2.0
  ---

- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Library: [More Information Needed]
- - Docs: [More Information Needed]
+ ## Model Summary
+ **MAMMAL (Molecular Aligned Multi-Modal Architecture and Language)** is a versatile multi-task foundation model that learns from large-scale
+ biological datasets (over 2 billion samples) across diverse modalities, including
+ proteins, small molecules, and genes. We introduce a query syntax that supports
+ a wide range of tasks, such as classification, regression, and generation, by combining different modalities and entity types as inputs and/or outputs.
+
+ - **Developers:** IBM Research
+ - **GitHub Repository:** [TBD](TBD)
+ - **Paper:** [TBD](https://arxiv.org/abs/TBD)
+ - **Release Date:** Oct ?th, 2024
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
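+ As a rough illustration of that query syntax (the special tokens below are the ones used in the usage example later on this card, not an exhaustive or official list), a prompt strings together a tokenizer-type marker, a task token, a sentinel slot for the answer, and one or more tagged entity sequences:
+
+ ```python
+ # Illustrative sketch only: the general shape of a protein-protein binding-affinity
+ # query, built from the special tokens shown in the usage example below. Other
+ # tasks and entity types use different task/entity tokens.
+ protein_a = "MADQLT..."  # hypothetical placeholder sequences, not real inputs
+ protein_b = "MSSKLL..."
+
+ prompt = (
+     "<@TOKENIZER-TYPE=AA>"      # selects the amino-acid sub-tokenizer
+     "<BINDING_AFFINITY_CLASS>"  # task token
+     "<SENTINEL_ID_0>"           # slot where the predicted answer goes (assumption)
+     "<MOLECULAR_ENTITY><MOLECULAR_ENTITY_GENERAL_PROTEIN>"
+     f"<SEQUENCE_NATURAL_START>{protein_a}<SEQUENCE_NATURAL_END>"
+     "<MOLECULAR_ENTITY><MOLECULAR_ENTITY_GENERAL_PROTEIN>"
+     f"<SEQUENCE_NATURAL_START>{protein_b}<SEQUENCE_NATURAL_END>"
+     "<EOS>"
+ )
+ print(prompt)
+ ```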
+
+ ## Usage
+
+ Using `MAMMAL` requires [TBD](https://github.com/TBD)
+
+ ```
+ pip install TBD
+ ```
+
+ A simple example:
+ ```python
+ import torch
+ from fuse.data.tokenizers.modular_tokenizer.op import ModularTokenizerOp
+ from mammal.model import Mammal
+ from mammal.keys import *
+
+ # Load Model
+ model = Mammal.from_pretrained("ibm/biomed.omics.bl.sm.ma-ted-400m")
+
+ # Load Tokenizer
+ tokenizer_op = ModularTokenizerOp.from_pretrained("ibm/biomed.omics.bl.sm.ma-ted-400m")
+
+ # Prepare Input Prompt
+ protein_calmodulin = "MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMISELDQDGFIDKEDLHDGDGKISFEEFLNLVNKEMTADVDGDGQVNYEEFVTMMTSK"
+ protein_calcineurin = "MSSKLLLAGLDIERVLAEKNFYKEWDTWIIEAMNVGDEEVDRIKEFKEDEIFEEAKTLGTAEMQEYKKQKLEEAIEGAFDIFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIRQMWDQNGDWDRIKELKFGEIKKLSAKDTRGTIFIKVFENLGTGVDSEYEDVSKYMLKHQ"
+
+ # Create and load sample
+ sample_dict = dict()
+ # Formatting prompt to match pre-training syntax
+ sample_dict[ENCODER_INPUTS_STR] = f"<@TOKENIZER-TYPE=AA><BINDING_AFFINITY_CLASS><SENTINEL_ID_0><MOLECULAR_ENTITY><MOLECULAR_ENTITY_GENERAL_PROTEIN><SEQUENCE_NATURAL_START>{protein_calmodulin}<SEQUENCE_NATURAL_END><MOLECULAR_ENTITY><MOLECULAR_ENTITY_GENERAL_PROTEIN><SEQUENCE_NATURAL_START>{protein_calcineurin}<SEQUENCE_NATURAL_END><EOS>"
+
+ # Tokenize
+ tokenizer_op(
+     sample_dict=sample_dict,
+     key_in=ENCODER_INPUTS_STR,
+     key_out_tokens_ids=ENCODER_INPUTS_TOKENS,
+     key_out_attention_mask=ENCODER_INPUTS_ATTENTION_MASK,
+ )
+ sample_dict[ENCODER_INPUTS_TOKENS] = torch.tensor(sample_dict[ENCODER_INPUTS_TOKENS])
+ sample_dict[ENCODER_INPUTS_ATTENTION_MASK] = torch.tensor(sample_dict[ENCODER_INPUTS_ATTENTION_MASK])
+
+ # Generate Prediction
+ batch_dict = model.generate(
+     [sample_dict],
+     output_scores=True,
+     return_dict_in_generate=True,
+     max_new_tokens=5,
+ )
+
+ # Get output
+ generated_output = tokenizer_op._tokenizer.decode(batch_dict[CLS_PRED][0])
+ print(f"{generated_output=}")
+ ```
+
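+ The decoded `generated_output` is a string of special tokens in which the answer fills the sentinel slot. A minimal, hedged sketch of mapping it to a binding / non-binding call follows; the class-token names `<0>` and `<1>` are an assumption, so confirm them against the checkpoint's tokenizer vocabulary:
+
+ ```python
+ # Hedged post-processing sketch, continuing from the example above.
+ # Assumes the binding-affinity task answers with a single class token, written
+ # here as "<0>" (non-binding) / "<1>" (binding); the real token names may differ.
+ negative_token, positive_token = "<0>", "<1>"
+
+ if positive_token in generated_output:
+     print("Predicted: binding")
+ elif negative_token in generated_output:
+     print("Predicted: non-binding")
+ else:
+     print(f"No known class token found in {generated_output!r}")
+ ```
+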
+ For more advanced usage, see our detailed example at: <LINK>
+
+
+ ## Citation
+
+ If you find our work useful, please consider giving the repo a star and citing our paper:
+ ```
+ @article{TBD,
+   title={TBD},
+   author={IBM Research Team},
+   journal={arXiv preprint arXiv:TBD},
+   year={2024}
+ }
+ ```