Would it be a good choice to use BLOOM or BLOOMZ to train the new Pygmalion model?
#20 · opened by win10
Have you considered the BLOOMZ or BLOOM 560M, 1.1B, 1.7B, or 3B models?
I have used BLOOM 1B1 to load a character card, and it is quite coherent.
I personally think LLaMA would be the better choice.
Very early in the project I gave BLOOM a shot with the 560M and 1.1B models, but it performed worse than OPT and Pythia at similar model sizes when used for English conversation. As luhao said, LLaMA would likely be the better base model, but I'm still waiting for Meta to decide whether they're okay with people releasing fine-tunes before doing anything with it.
I think you can train a LoRA for the LLaMA model like Alpaca did.
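For anyone unfamiliar with why LoRA makes this cheap: instead of updating a frozen weight matrix W, you train two small low-rank matrices A and B and use W + BA as the effective weight. A rough sketch of the parameter savings for a single linear layer (the dimensions below are illustrative, not LLaMA's exact shapes):

```python
def lora_param_counts(d_in: int, d_out: int, r: int):
    """Return (full fine-tune params, LoRA params) for one linear layer.

    Full fine-tuning updates the whole d_out x d_in matrix W.
    LoRA instead trains B (d_out x r) and A (r x d_in), using
    W + B @ A as the effective weight, so only r * (d_in + d_out)
    parameters are trainable.
    """
    full = d_in * d_out
    lora = r * (d_in + d_out)
    return full, lora

# e.g. a 4096x4096 projection (roughly LLaMA-7B-sized) with rank r = 8
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, lora / full)
```

With rank 8 on a 4096x4096 layer, LoRA trains well under 1% of the parameters that full fine-tuning would, which is why the Alpaca-style LoRA recipes fit on a single consumer GPU.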
Add RLHF?