---
library_name: transformers
license: apache-2.0
language:
- en
---

# Test of ModernBERT2Olmo-large_1b

An experimental seq2seq model that pairs a ModernBERT encoder with an OLMo decoder via `EncoderDecoderModel`. You will need to patch `modeling_llama.py` with [this code](https://gist.github.com/pszemraj/a15219f33d94dc53a6e270c0c81360ec) for it to work.
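
For context, this is roughly how such a hybrid gets assembled. A sketch, not the exact build script: the encoder and decoder checkpoints below are assumptions inferred from the model name, and the decoder must already carry the cross-attention patch linked above.

```py
from transformers import EncoderDecoderModel

# hypothetical assembly sketch; checkpoint names are guesses from the model
# name. from_encoder_decoder_pretrained marks the decoder with
# add_cross_attention=True, which only takes effect once the patch is applied.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "answerdotai/ModernBERT-large",  # encoder (assumed)
    "allenai/OLMo-1B-hf",            # decoder (assumed)
)
```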

> [!WARNING]
> Work in progress: this model's output is currently gibberish because the cross-attention layers have not been trained yet.

```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("pszemraj/ModernBERT2Olmo-large_1b-test")
model = AutoModelForSeq2SeqLM.from_pretrained("pszemraj/ModernBERT2Olmo-large_1b-test")

ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
    "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
prompt = f"summarize dis botmon: {ARTICLE_TO_SUMMARIZE}"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# autoregressively generate summary (uses greedy decoding by default)
generated_ids = model.generate(
    **inputs,
    min_new_tokens=10,
    max_new_tokens=100,
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
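
Since only the cross-attention weights are new (and untrained), one way to get started is to freeze everything else and fine-tune just those layers. Here is a minimal sketch continuing from the snippet above, not the author's training recipe; the `"cross"` name filter and the target summary are assumptions.

```py
import torch

# a minimal fine-tuning sketch, NOT the author's recipe. Assumption: the
# patched decoder names its cross-attention modules with "cross", matching
# how stock transformers cross-attention layers are named ("crossattention").
for name, param in model.named_parameters():
    param.requires_grad = "cross" in name.lower()

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# one toy step: the encoder reads the article, the decoder is supervised
# with a (made-up) target summary
labels = tokenizer(
    "PG&E scheduled blackouts to reduce wildfire risk.",
    return_tensors="pt",
).input_ids.to(model.device)

outputs = model(**inputs, labels=labels)  # returns seq2seq cross-entropy loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss after one step: {outputs.loss.item():.3f}")
```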