---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- no-commercial
- email
- auto-complete
- 125m
datasets:
- aeslc
widget:
- text: >-
    Hey <NAME>,
    Thank you for signing up for my weekly newsletter. Before we get started,
    you'll have to confirm your email address.
  example_title: newsletter
- text: >-
    Hi <NAME>,
    I hope this email finds you well. Let me start by saying that I am a big
    fan of your work.
  example_title: fan
- text: >-
    Greetings <NAME>,
    I hope you had a splendid evening at the Company sausage eating festival.
    I am reaching out because
  example_title: festival
- text: |-
    Good Morning <NAME>,
    I was just thinking to myself about how much I love creating value
  example_title: value
- text: URGENT - I need
  example_title: URGENT
parameters:
  min_length: 4
  max_length: 64
  length_penalty: 0.7
  no_repeat_ngram_size: 3
  do_sample: false
  num_beams: 4
  early_stopping: true
  repetition_penalty: 3.5
  use_fast: false
---
# opt-125m-emailgen-v2_DS-aeslc_Ep-4_Bs-8
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the aeslc dataset. It achieves the following results on the evaluation set:
- Loss: 2.5552
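As a minimal usage sketch, the snippet below loads the checkpoint with 🤗 Transformers and generates a completion using the same arguments as the widget `parameters` block above. The repo id is a placeholder (not confirmed by this card), and the tokenizer is loaded with `use_fast=False` to match the metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
model_name = "user/opt-125m-emailgen-v2_DS-aeslc_Ep-4_Bs-8"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # per `use_fast: false` above
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Hey <NAME>,\nThank you for signing up for my weekly newsletter."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation arguments mirror the widget `parameters` in the metadata block.
outputs = model.generate(
    **inputs,
    min_length=4,
    max_length=64,
    length_penalty=0.7,
    no_repeat_ngram_size=3,
    do_sample=False,
    num_beams=4,
    early_stopping=True,
    repetition_penalty=3.5,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```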
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
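
As a rough illustration, the list above corresponds to a 🤗 `TrainingArguments` configuration like the following. This is a reconstruction from the reported values, not the original training script; the output path is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./opt-125m-emailgen-v2",   # hypothetical output path
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,        # 8 x 16 = 128 total train batch size
    lr_scheduler_type="cosine",
    num_train_epochs=4,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",           # matches the per-epoch validation losses below
)
```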
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8245        | 1.0   | 129  | 2.8030          |
| 2.521         | 2.0   | 258  | 2.6343          |
| 2.2074        | 3.0   | 387  | 2.5595          |
| 2.0145        | 4.0   | 516  | 2.5552          |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1