---
license: bsd-3-clause
---
Mirror of the base ProGen2-medium model (with a slightly modified configuration and forward pass) introduced by [Nijkamp et al.](https://arxiv.org/abs/2206.13517).

See also my GitHub [repo](https://github.com/hugohrban/ProGen2-finetuning/tree/main) for an example of fine-tuning this model; a rough sketch of the idea is shown below.
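
The following is only a minimal illustration of what causal-LM fine-tuning of this model could look like, not the repo's actual code: the dataset is a one-sequence placeholder, the learning rate is an arbitrary choice, and batching, padding, and evaluation are omitted.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer

model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-medium", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-medium")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# placeholder training data; real fine-tuning would use your own protein sequences
sequences = ["1MEVVIVTGMSGAGK2"]

model.train()
for seq in sequences:
    ids = torch.tensor(tokenizer.encode(seq).ids).to(model.device)
    logits = model(ids).logits
    # standard causal-LM objective: predict token i+1 from tokens 0..i
    loss = F.cross_entropy(logits[:-1, :], ids[1:])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```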

Example usage:

```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch
import torch.nn.functional as F

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-medium", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-medium")
tokenizer.no_padding()

# prepare input; the leading "1" is ProGen2's sequence-start token
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass (no gradients needed for inference)
with torch.no_grad():
    logits = model(input_ids).logits

# print the predicted probability of each possible next token
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
    print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")
```
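
To generate a continuation of the prompt, you can build directly on the forward pass above. The following is a minimal greedy-decoding sketch; the 20-token horizon and argmax decoding are illustrative assumptions, not a recommended sampling setup (`model.generate()` may also be available through the remote code).

```python
# greedy decoding: repeatedly append the most likely next token
ids = tokenizer.encode("1MEVVIVTGMSGAGK").ids
model.eval()
with torch.no_grad():
    for _ in range(20):
        logits = model(torch.tensor(ids).to(model.device)).logits
        next_id = int(torch.argmax(logits[-1, :]))
        ids.append(next_id)
print(tokenizer.decode(ids))
```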