|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
language: |
|
- en |
|
--- |
|
|
|
# test of ModernBERT2Olmo-large_1b |
|
|
|
An experimental seq2seq model built with `EncoderDecoderModel`. You will need to patch `modeling_llama.py` with [this code](https://gist.github.com/pszemraj/a15219f33d94dc53a6e270c0c81360ec) for it to work.
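For context, the `EncoderDecoderModel` wiring can be sketched with tiny randomly-initialized stand-in configs (BERT and GPT-2 here purely for illustration, not the actual ModernBERT/OLMo pairing, and no checkpoint downloads):

```python
# Illustrative only: assemble an encoder-decoder from two small configs.
# The key detail is that the decoder gains *new* cross-attention weights,
# which start random and must be trained before outputs are meaningful.
from transformers import (
    BertConfig,
    GPT2Config,
    EncoderDecoderConfig,
    EncoderDecoderModel,
)

enc_cfg = BertConfig(hidden_size=64, num_hidden_layers=2, num_attention_heads=2, intermediate_size=128)
dec_cfg = GPT2Config(n_embd=64, n_layer=2, n_head=2)
dec_cfg.is_decoder = True
dec_cfg.add_cross_attention = True  # freshly initialized cross-attn layers

config = EncoderDecoderConfig.from_encoder_decoder_configs(enc_cfg, dec_cfg)
model = EncoderDecoderModel(config=config)  # random weights, no downloads
```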
|
|
|
> [!WARNING] |
|
> Work in progress: this model's output is currently gibberish because the cross-attention layers have not been trained yet.
|
|
|
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("pszemraj/ModernBERT2Olmo-large_1b-test")
model = AutoModelForSeq2SeqLM.from_pretrained("pszemraj/ModernBERT2Olmo-large_1b-test")

ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
    "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
prompt = f"summarize dis botmon: {ARTICLE_TO_SUMMARIZE}"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# autoregressively generate summary (uses greedy decoding by default)
generated_ids = model.generate(
    **inputs,
    min_new_tokens=10,
    max_new_tokens=100,
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
|
|
|
|