
Quantization by Richard Erkhov.

Github | Discord | Request more models

opt-125m-email-generation - AWQ
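
Since this repo carries the AWQ-quantized weights, it can be loaded through transformers with the autoawq package installed (AWQ kernels generally require a CUDA GPU). A minimal sketch; the repo ID below is a placeholder for this repository's actual path, and the prompt is illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "RichardErkhov/opt-125m-email-generation-awq"  # placeholder; use this repo's actual ID

tokenizer = AutoTokenizer.from_pretrained(REPO_ID, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")

inputs = tokenizer("Hello,\nFollowing up on the bubblegum shipment.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, num_beams=4, no_repeat_ngram_size=3)
print(tokenizer.decode(out[0], skip_special_tokens=True))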

Original model description:

license: other
tags:
- generated_from_trainer
- opt
- custom-license
- non-commercial
- email
- auto-complete
- 125m
datasets:
- aeslc
widget:
- text: "Hey ,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
  example_title: newsletter
- text: "Hi ,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
  example_title: fan
- text: "Greetings ,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
  example_title: festival
- text: "Good Morning ,\n\nI was just thinking to myself about how much I love creating value"
  example_title: value
- text: "URGENT - I need"
  example_title: URGENT
parameters:
  min_length: 4
  max_length: 64
  length_penalty: 0.7
  no_repeat_ngram_size: 3
  do_sample: false
  num_beams: 4
  early_stopping: true
  repetition_penalty: 3.5
  use_fast: false
base_model: facebook/opt-125m


NOTE: there is currently a bug in the Hugging Face Inference API for OPT models. Please use the Colab notebook to test :)
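
Outside the hosted widget, the parameters block above maps directly onto transformers generation kwargs, so the same beam-search settings can be reproduced locally. A minimal sketch against the original full-precision checkpoint:

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="pszemraj/opt-125m-email-generation",
    use_fast=False,
)

# the widget's generation settings, passed straight through to generate()
result = generator(
    "URGENT - I need",
    min_length=4,
    max_length=64,
    length_penalty=0.7,
    no_repeat_ngram_size=3,
    do_sample=False,
    num_beams=4,
    early_stopping=True,
    repetition_penalty=3.5,
)
print(result[0]["generated_text"])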

opt for email generation - 125m

Why write the rest of your email when you can generate it?

from transformers import pipeline

model_tag = "pszemraj/opt-125m-email-generation"
generator = pipeline(
    "text-generation",
    model=model_tag,
    use_fast=False,   # use the slow tokenizer, per the widget config above
    do_sample=False,  # deterministic decoding
)

prompt = """
Hello,
Following up on the bubblegum shipment."""

result = generator(
    prompt,
    max_length=96,
)  # generate
print(result[0]["generated_text"])

About

This model is a fine-tuned version of facebook/opt-125m on the aeslc dataset.

  • A dataset preparation step attempted to exclude emails, phone numbers, etc., using the clean-text Python package.
  • Note that the hosted Inference API is restricted to generating 64 tokens; you can generate longer emails locally by passing a larger token budget to a text-generation pipeline object (see the sketch below).
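
For example, reusing the generator from the snippet above, a longer email only requires a larger token budget (the value below is arbitrary):

# not limited to 64 tokens when running locally
longer = generator(
    "Hi,\nI hope this email finds you well. Let me start by saying",
    max_new_tokens=192,  # arbitrary; adjust to taste
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(longer[0]["generated_text"])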

It achieves the following results on the evaluation set:

  • Loss: 2.5552
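
Assuming the reported loss is the mean token-level cross-entropy (the transformers Trainer default for causal language models), this corresponds to a validation perplexity of exp(2.5552) ≈ 12.9:

import math

# perplexity of a causal LM = exp(mean token-level cross-entropy loss)
print(math.exp(2.5552))  # ~12.87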

Intended uses & limitations

  • OPT models cannot be used commercially
  • Here is a GitHub gist with a script that generates emails in the console or writes them to a text file (a minimal stand-in sketch follows below).
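
The gist itself is not reproduced here, but a minimal stand-in in the same spirit might look like the following; every name and default below is illustrative rather than the gist's actual code:

import argparse

from transformers import pipeline


def main() -> None:
    # illustrative console script: generate an email continuation, then print it or save it
    parser = argparse.ArgumentParser(description="Generate an email continuation.")
    parser.add_argument("prompt", help="start of the email")
    parser.add_argument("--out", default=None, help="optional path to write the result to")
    args = parser.parse_args()

    generator = pipeline(
        "text-generation",
        model="pszemraj/opt-125m-email-generation",
        use_fast=False,
    )
    text = generator(args.prompt, max_length=96, num_beams=4, do_sample=False)[0]["generated_text"]

    if args.out:
        with open(args.out, "w", encoding="utf-8") as f:
            f.write(text)
    else:
        print(text)


if __name__ == "__main__":
    main()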

Training and evaluation data

  • the email_body field of the train and validation splits (combined to get more data) of the aeslc dataset; a rough sketch of this preparation is shown below.
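
A rough sketch of what that split merge and the cleaning step from the About section might look like; the specific clean-text flags are assumptions, not the author's exact preprocessing:

from datasets import load_dataset, concatenate_datasets
from cleantext import clean  # pip install clean-text

ds = load_dataset("aeslc")
# combine the train and validation splits for more training text
corpus = concatenate_datasets([ds["train"], ds["validation"]])

# scrub emails and phone numbers, mirroring the stated preprocessing intent
cleaned = [
    clean(body, no_emails=True, no_phone_numbers=True, lower=False)
    for body in corpus["email_body"]
]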

Training results

Training Loss | Epoch | Step | Validation Loss
2.8245        | 1.0   | 129  | 2.8030
2.521         | 2.0   | 258  | 2.6343
2.2074        | 3.0   | 387  | 2.5595
2.0145        | 4.0   | 516  | 2.5552

Framework versions

  • Transformers 4.20.1
  • Pytorch 1.11.0+cu113
  • Tokenizers 0.12.1