---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-large
model-index:
- name: flan-t5-large-samsum-qlora-merged
results: []
datasets:
- samsum
metrics:
- rouge
pipeline_tag: summarization
library_name: transformers
---
# Model description
Parameter-efficient fine-tuning (PEFT) with QLoRA was employed to fine-tune the base [google/flan-t5-large](https://huggingface.co./google/flan-t5-large) model
on the [samsum](https://huggingface.co./datasets/samsum) dataset of dialogues. After fine-tuning, the
[PEFT model adapter](https://huggingface.co./MuntasirHossain/flan-t5-large-samsum-qlora) was merged with the base model (a sketch of the merge step appears below).
The model is intended for generative summarization tasks and achieved the following ROUGE scores on the samsum test split:
- Rouge1: 49.249596%
- Rouge2: 23.513032%
- RougeL: 39.960812%
- RougeLsum: 39.968438%
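
The merge step could be reproduced along these lines. This is a minimal sketch assuming the standard `peft` merge API, not the exact script used for this model:
``` python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the full-precision base model and attach the published LoRA adapter
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "MuntasirHossain/flan-t5-large-samsum-qlora")

# Fold the adapter weights into the base weights and save the merged checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("flan-t5-large-samsum-qlora-merged")

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
tokenizer.save_pretrained("flan-t5-large-samsum-qlora-merged")
```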
## How to use
Load the model:
``` python
from transformers import pipeline

pipeline_model = pipeline("summarization", model="MuntasirHossain/flan-t5-large-samsum-qlora-merged")
```
Example inference:
``` python
# random sample text from the samsum test dataset
text = """
Emma: Hi, we're going with Peter to Amiens tomorrow.
Daniel: oh! Cool.
Emma: Wanna join?
Daniel: Sure, I'm fed up with Paris.
Emma: We're too. The noise, traffic etc. Would be nice to see some countrysides.
Daniel: I don't think Amiens is exactly countrysides though :P
Emma: Nope. Hahahah. But not a megalopolis either!
Daniel: Right! Let's do it!
Emma: But we should leave early. The days are shorter now.
Daniel: Yes, the stupid winter time.
Emma: Exactly!
Daniel: Where should we meet then?
Emma: Come to my place by 9am.
Daniel: oohhh. It means I have to get up before 7!
Emma: Yup. The early bird gets the worm (in Amiens).
Daniel: You sound like my grandmother.
Emma: HAHAHA. I'll even add: no parties tonight, no drinking dear Daniel
Daniel: I really hope Amiens is worth it!
"""
summary = pipeline_model(text, max_new_tokens=50)
print(summary[0]['summary_text'])
# Output: Emma and Peter are going to Amiens tomorrow. Daniel will join them.
# They will meet at Emma's place by 9 am. They will not have any parties tonight.
```
```
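
The test scores reported above could be recomputed along these lines. This is a hedged sketch using the `evaluate` and `datasets` libraries; it is not the original evaluation script:
``` python
import evaluate
from datasets import load_dataset

# samsum ships with a dataset loading script, so trust_remote_code may be required
test = load_dataset("samsum", split="test", trust_remote_code=True)
rouge = evaluate.load("rouge")

predictions = [
    pipeline_model(dialogue, max_new_tokens=50)[0]["summary_text"]
    for dialogue in test["dialogue"]
]
print(rouge.compute(predictions=predictions, references=test["summary"]))
```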
## Training procedure
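QLoRA loads the frozen base model in 4-bit precision and trains low-rank adapters on top of it. The sketch below shows a typical setup; the rank, alpha, and target modules are assumptions, since the exact LoRA configuration is not recorded in this card:
``` python
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-large", quantization_config=bnb_config
)
base = prepare_model_for_kbit_training(base)

# Hypothetical LoRA configuration: r, lora_alpha, and target_modules are assumed
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q", "v"],  # T5 attention query/value projections
    lora_dropout=0.05,
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```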
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto the Trainer API follows the list):
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
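
A rough reconstruction of these settings with `Seq2SeqTrainingArguments` is shown below; it is an assumption, since the original training script is not part of this card:
``` python
from transformers import Seq2SeqTrainingArguments

# Assumed mapping of the listed hyperparameters; output_dir is hypothetical
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-samsum-qlora",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```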
### Training results
ROUGE scores on the samsum test split are reported in the model description above.
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2