---
library_name: peft
base_model: google/flan-t5-large
---

# Model Card for honicky/t5-short-story-character-extractor

I trained this model as part of a learning project to build a children's story authoring tool for parents of young children (see http://www.storytime.glass/). The model takes a short story as input and outputs a comma-separated list of the characters in the story.

I'm not sure yet how useful this fine-tune is in practice; its main purpose was for me to learn the nuts and bolts of fine-tuning.

## Model Details

The model is a fine-tune of Flan-T5 Large, a sequence-to-sequence (encoder-decoder) model, so its architecture differs from decoder-only models like the GPT family. The hope is that this architecture lets a smaller model handle the transformation task here (turning a story into a list of characters).

- Trained using `transformers.Seq2SeqTrainer` together with the corresponding data collator, tokenizer, etc. (see the sketch after this list)

- **Developed by:** RJ Honicky
- **Model type:** Encoder-decoder Transformer
- **Language(s):** English (fine-tuning dataset)
- **License:** MIT
- **Finetuned from model:** google/flan-t5-large
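
As a concrete illustration of that setup, here is a hedged sketch of what the training loop could look like. The LoRA configuration, dataset column names, and hyperparameters below are illustrative assumptions, not the values actually used for this model.

```python
# Hedged training sketch: LoRA config, column names, and hyperparameters
# are illustrative assumptions, not the trained configuration.
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Wrap the base model with a LoRA adapter (the card's library_name is peft).
# r, alpha, and target_modules are common T5 defaults, assumed for illustration.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],
)
model = get_peft_model(base_model, lora_config)

def preprocess(example):
    # "story" and "characters" are assumed column names for illustration.
    model_inputs = tokenizer(example["story"], truncation=True, max_length=512)
    labels = tokenizer(text_target=example["characters"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Tiny in-memory example so the sketch is self-contained; the real project
# would load its own story/character dataset here.
train_dataset = Dataset.from_dict(
    {
        "story": ["A fox named Felix and Luna the owl searched for the moonberry tree."],
        "characters": ["Felix, Luna"],
    }
).map(preprocess, remove_columns=["story", "characters"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-short-story-character-extractor",
        per_device_train_batch_size=8,
        learning_rate=3e-4,
        num_train_epochs=3,
        predict_with_generate=True,
    ),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```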


## Uses

Primarily intended for use in https://github.com/honicky/story-time and for learning purposes.
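
A minimal inference sketch, assuming the adapter can be loaded with peft's `AutoPeftModelForSeq2SeqLM` on top of `google/flan-t5-large`:

```python
# Minimal inference sketch; the example story and generation settings
# are illustrative assumptions.
import torch
from peft import AutoPeftModelForSeq2SeqLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoPeftModelForSeq2SeqLM.from_pretrained(
    "honicky/t5-short-story-character-extractor"
)
model.eval()

story = (
    "Once upon a time, a fox named Felix and his friend Luna the owl "
    "set out to find the moonberry tree."
)
inputs = tokenizer(story, return_tensors="pt", truncation=True)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)

# Expected output is a comma-separated character list, e.g. "Felix, Luna".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```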