---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for honicky/t5-short-story-character-extractor
I trained this model as part of a learning project to build a children's story authoring tool for parents of young children. See http://www.storytime.glass/
This model takes in a short story and outputs a comma-separated list of the characters in the story.
I'm not sure yet how useful this fine-tune is; it exists primarily as a way for me to learn the nuts and bolts of fine-tuning.
## Model Details
The model is a fine-tune of the sequence-to-sequence model [Flan T5 Large](https://huggingface.co./google/flan-t5-large), so a different architecture from decoder-only models like the GPT family. The hope is that an encoder-decoder model can perform this transformation task (turning a story into a list of characters) with a smaller model than a decoder-only approach would need.
- Trained using `transformers.Seq2SeqTrainer` plus the corresponding data collator and tokenizer (see the sketch after the list below)
- **Developed by:** RJ Honicky
- **Model type:** Encoder-Decoder Transformer
- **Language(s):** English (fine-tuning dataset)
- **License:** MIT
- **Finetuned from model:** `google/flan-t5-large`
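
For reference, a minimal sketch of what a `Seq2SeqTrainer` setup like this looks like. This is illustrative, not the actual training script: the dataset file, column names (`story`, `characters`), LoRA config, and hyperparameters are assumptions; the real setup lives in the repository linked below.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
# Wrap the base model with a LoRA adapter (default LoRA settings shown).
model = get_peft_model(model, LoraConfig(task_type="SEQ_2_SEQ_LM"))

def preprocess(batch):
    # Story text is the encoder input; the comma-separated character list is the label.
    model_inputs = tokenizer(batch["story"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["characters"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Hypothetical JSONL file with "story" and "characters" fields.
dataset = load_dataset("json", data_files="stories.jsonl")["train"]
dataset = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-short-story-character-extractor",
        learning_rate=1e-4,          # illustrative hyperparameters
        per_device_train_batch_size=8,
        num_train_epochs=3,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```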
### Model Sources
- **Repository:** https://github.com/honicky/character-extraction
- **Weights & Biases:** https://wandb.ai/honicky/t5_target_finetune_for_character_extraction/runs/mx57gh45
## Uses
Primarily for use in https://github.com/honicky/story-time and for learning.
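
Since this is a `peft` adapter on top of `google/flan-t5-large`, inference should look roughly like the sketch below. The prompt format (passing the story text directly, with no instruction prefix) and the example story are assumptions; check the training code in the repository above for the exact input format used during fine-tuning.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model and apply the fine-tuned adapter on top of it.
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base_model, "honicky/t5-short-story-character-extractor")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

story = "Once upon a time, Mira and her dog Biscuit found a sleepy dragon in the garden."
inputs = tokenizer(story, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Expected output is a comma-separated list of characters, e.g. "Mira, Biscuit, dragon".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```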