Fine-tuned ai-forever/ruT5-base for text and dialogue summarization.
All 'train' subsets were concatenated and shuffled with seed 1000 - 7.
Train subset = 155,678 rows.
Evaluation on 10% of the concatenated 'validation' subsets = 1,458 rows.
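A minimal sketch of that preparation with the datasets library; the dataset names below are placeholders, not the actual training corpora:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical dataset names for illustration only; substitute the real corpora.
names = ['dataset_a', 'dataset_b']

# Concatenate all 'train' subsets and shuffle with seed 1000 - 7 = 993.
train = concatenate_datasets([load_dataset(n, split='train') for n in names])
train = train.shuffle(seed=1000 - 7)

# Evaluate on 10% of the concatenated 'validation' subsets.
val = concatenate_datasets([load_dataset(n, split='validation') for n in names])
val = val.shuffle(seed=1000 - 7).select(range(int(0.1 * len(val))))
```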
See WandB logs.
See the report (work in progress).
Scheduler, optimizer, and trainer states are saved in this repo, so you can resume fine-tuning on your own data from the existing training state, as sketched below.
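A minimal sketch of resuming from those saved states with Seq2SeqTrainer, assuming a local clone of this repo at ./rut5-base-summ and an already tokenized train_dataset; both names are placeholders:

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

# Hypothetical local clone of this repo, containing the saved
# scheduler/optimizer/trainer states alongside the model weights.
checkpoint = './rut5-base-summ'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir='out'),
    train_dataset=train_dataset,  # your own tokenized data (placeholder)
    tokenizer=tokenizer,
)

# resume_from_checkpoint restores the optimizer, scheduler, and trainer
# states from the checkpoint directory, so training continues rather than
# starting from scratch.
trainer.train(resume_from_checkpoint=checkpoint)
```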
Usage via the high-level pipeline:

```python
from transformers import pipeline

pipe = pipeline('summarization', model='d0rj/rut5-base-summ')
pipe(text)  # text is your input string; returns [{'summary_text': ...}]
```
Or with the model and tokenizer directly:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('d0rj/rut5-base-summ')
model = T5ForConditionalGeneration.from_pretrained('d0rj/rut5-base-summ').eval()

# text is your input string
input_ids = tokenizer(text, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
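If the default summaries come out too short or too greedy, generate accepts the standard decoding arguments; the values below are illustrative, not the settings used for evaluation:

```python
# Longer summaries with beam search (example values only).
outputs = model.generate(input_ids, max_new_tokens=128, num_beams=4)
```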