---
license: mit
---
We provide two ways to use SaProt: through the Hugging Face `transformers` classes, or in the same way as the [esm GitHub repository](https://github.com/facebookresearch/esm). Use whichever you prefer.

### Huggingface model
The following code shows how to load the model and run it on an example sequence:
```python
from transformers import EsmTokenizer, EsmForMaskedLM

model_path = "/your/path/to/SaProt_35M_AF2"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)

#################### Example ####################
device = "cuda"
model.to(device)

seq = "MdEvVpQpLrVyQdYaKv"
tokens = tokenizer.tokenize(seq)
print(tokens)

inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

outputs = model(**inputs)
print(outputs.logits.shape)

"""
['Md', 'Ev', 'Vp', 'Qp', 'Lr', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
```
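
In SaProt's structure-aware vocabulary, each token pairs an amino acid (uppercase letter) with a Foldseek 3Di structure letter (lowercase), which is why the tokenizer splits `seq` into two-character tokens above. A minimal sketch of how such a sequence is assembled (the `make_sa_sequence` helper below is hypothetical, not part of the SaProt codebase):

```python
def make_sa_sequence(aa_seq: str, foldseek_seq: str) -> str:
    """Interleave amino acids (uppercase) with Foldseek 3Di tokens (lowercase)."""
    if len(aa_seq) != len(foldseek_seq):
        raise ValueError("amino acid and 3Di sequences must have equal length")
    return "".join(aa.upper() + s.lower() for aa, s in zip(aa_seq, foldseek_seq))

# Rebuild the example sequence used above from its two halves
print(make_sa_sequence("MEVQLVQYK", "dvpprydav"))  # MdEvVpQpLrVyQdYaKv
```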

### esm model
The esm version is also stored in the same folder, named `SaProt_35M_AF2.pt`. We provide a function to load it:
```python
from utils.esm_loader import load_esm_saprot

model_path = "/your/path/to/SaProt_35M_AF2.pt"
model, alphabet = load_esm_saprot(model_path)
```