flan-t5-small-asap_t3_f2_prompt_adherence

This model is a fine-tuned version of google/flan-t5-small; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 0.0625
  • Rouge1: 82.0051
  • Rouge2: 77.1041
  • Rougel: 81.9898
  • Rougelsum: 81.9754
  • Gen Len: 12.0580
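
A minimal inference sketch, assuming the standard transformers seq2seq API; the repo id comes from this card, while the input text and generation settings are illustrative placeholders:

```python
# Minimal inference sketch; the input prompt is a placeholder, since the
# card does not document the expected input format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "salbatarni/flan-t5-small-asap_t3_f2_prompt_adherence"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("your input text here", return_tensors="pt")
# Gen Len above averages ~12 tokens, so a small max_new_tokens suffices.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```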

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
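
For reference, a hedged sketch of how these values map onto transformers' Seq2SeqTrainingArguments; output_dir and anything not listed above are assumptions, not details from the original training run:

```python
# Sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir is an assumed name; Adam betas/epsilon match the Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-asap_t3_f2_prompt_adherence",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # so the listed Adam settings need no explicit override.
    predict_with_generate=True,  # assumed, since ROUGE/Gen Len are reported
)
```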

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 259  | 0.0796          | 79.5698 | 74.2362 | 79.5637 | 79.5651   | 12.0333 |
| 0.4061        | 2.0   | 518  | 0.0661          | 81.7224 | 76.8264 | 81.7266 | 81.7555   | 12.0493 |
| 0.4061        | 3.0   | 777  | 0.0606          | 81.5783 | 76.5064 | 81.5455 | 81.5755   | 12.0580 |
| 0.0715        | 4.0   | 1036 | 0.0634          | 81.9213 | 77.0935 | 81.9101 | 81.9339   | 12.0551 |
| 0.0715        | 5.0   | 1295 | 0.0625          | 82.0051 | 77.1041 | 81.9898 | 81.9754   | 12.0580 |
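
The ROUGE columns above are on a 0-100 scale. A sketch of computing comparable scores with the evaluate library; the example texts are placeholders, not data from this model's evaluation set:

```python
# Sketch of ROUGE computation with the evaluate library; predictions and
# references are placeholders, not this model's actual eval data.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a generated response"],
    references=["the reference response"],
    use_stemmer=True,
)
# evaluate returns scores in [0, 1]; multiply by 100 to match the table.
print({k: round(v * 100, 4) for k, v in scores.items()})
```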

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2