---
license: mit
tags:
  - generated_from_trainer
metrics:
  - rouge
  - bleu
model-index:
  - name: reddit_gen_final
    results: []
---

# reddit_gen_final

This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.5050
- Rouge: rouge1: 0.5318, rouge2: 0.3267, rougeL: 0.4940, rougeLsum: 0.4997
- Perplexity: 810.2161
- Bleu: 0.3233 (precisions: 0.5457 / 0.3400 / 0.2736 / 0.2384, brevity penalty: 0.9748, length ratio: 0.9751, translation length: 130796, reference length: 134140)
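
The Rouge and Bleu dictionaries above have the key structure produced by the Hugging Face `evaluate` library, so the scores were likely computed along these lines. A minimal sketch (an assumption; the actual evaluation script is not included in this repo):

```python
import evaluate

# Toy data; the card does not ship the actual evaluation set.
predictions = ["the movie was surprisingly good"]
references = ["the movie was surprisingly good"]

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

print(rouge.compute(predictions=predictions, references=references))
# keys: rouge1, rouge2, rougeL, rougeLsum
print(bleu.compute(predictions=predictions, references=references))
# keys: bleu, precisions, brevity_penalty, length_ratio,
# translation_length, reference_length
```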

## Model description

More information needed

## Intended uses & limitations

More information needed
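
Pending details from the author, the checkpoint can be used like any causal language model. A minimal generation sketch; the repo id `sentientconch/reddit_gen_final` and the decoding settings below are assumptions, not documented choices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this card; adjust if the checkpoint lives elsewhere.
model_id = "sentientconch/reddit_gen_final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT-style models expect an EOS token appended to each turn.
prompt = "What's a good first project for learning Python?"
inputs = tokenizer(prompt + tokenizer.eos_token, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        top_p=0.92,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)
```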

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 1024
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32768
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1077
- mixed_precision_training: Native AMP
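
These values map onto a `transformers` `TrainingArguments` roughly as follows. This is a sketch, not the author's script (which is not part of this repo); `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="reddit_gen_final",   # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,  # 1024 * 32 = 32768 effective batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1077,
    fp16=True,                       # Native AMP mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```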

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Rouge-Lsum | Perplexity | Bleu   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:----------:|:------:|
| 3.9872        | 18.02 | 320  | 3.1407          | 0.4237  | 0.1768  | 0.3737  | 0.3807     | 1007.2803  | 0.1796 |
| 3.0112        | 37.02 | 640  | 2.6693          | 0.5007  | 0.2846  | 0.4599  | 0.4662     | 891.6387   | 0.2826 |
| 2.5776        | 56.02 | 960  | 2.5050          | 0.5318  | 0.3267  | 0.4940  | 0.4997     | 810.2161   | 0.3233 |

### Framework versions

- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2