How to fine-tune BLOOM
Hi everyone, I'm new to NLP. I want to know how to fine-tune BLOOM (traditional Chinese) with my own data (.csv), such as QA pairs.
My data is collected by myself, as prompt-and-completion pairs (GPT-3 format).
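For example, a couple of made-up rows, just to illustrate the format:

```csv
prompt,completion
"問：台灣最高的山是哪一座？","答：玉山。"
"問：BLOOM 有支援繁體中文嗎？","答：有的。"
```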
Hey 🤗
I see two options for fine-tuning:
- Transformers checkpoint (this repo): You'd probably want to make use of the DeepSpeed integration for that; see https://huggingface.co./docs/transformers/main_classes/deepspeed. A minimal sketch of this option follows below the list.
- Megatron-DeepSpeed checkpoint (available here: https://huggingface.co./bigscience/bloom-optimizer-states): You can fine-tune with the same repository used for pre-training, available here: https://github.com/bigscience-workshop/Megatron-DeepSpeed
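Here's a rough sketch of the first option on a prompt/completion CSV with the Trainer. The checkpoint name, file path, column names, and hyperparameters are all assumptions on my side; adapt them to your setup:

```python
# Minimal sketch: causal-LM fine-tuning of a small BLOOM checkpoint on a
# prompt/completion CSV with the Transformers Trainer. File path, column
# names, and hyperparameters below are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "bigscience/bloom-560m"  # swap in the checkpoint you actually use
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes a CSV with "prompt" and "completion" columns (GPT-3 style).
dataset = load_dataset("csv", data_files="my_data.csv")["train"]

def tokenize(example):
    # Concatenate prompt and completion into one training string for causal LM.
    text = example["prompt"] + example["completion"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bloom-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        # For the big checkpoints, point the Trainer at a DeepSpeed config:
        # deepspeed="ds_config.json",
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For the large BLOOM checkpoints you'd enable the commented `deepspeed` argument with a ZeRO config, as described in the docs linked above.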
@Muennighoff
How much GPU RAM is needed to fine-tune BLOOM-560m?
Thank you in advance, my friend.
It depends. If you're willing to fine-tune only a few parameters, you can maybe even do it in a Colab notebook with 15GB or so of GPU RAM. Here are some sources that should help 🤗
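For the "only a few parameters" route, one common technique is LoRA via the PEFT library (my choice of example, not something the thread above prescribes). A minimal sketch, where rank, alpha, and target modules are assumptions you'd want to tune:

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of bloom-560m with
# the PEFT library; only small adapter matrices receive gradients, which is
# what keeps memory within a ~15GB Colab GPU. r/alpha/dropout are assumptions.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# Train this wrapped model with the same Trainer setup as in the sketch above;
# only the LoRA adapter weights are updated.
```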
Do you think it's possible to do the same modification you did in BLOOM, but in Alpaca 7B, for semantic similarity?
I'm currently working with a low-resource language that is part of the ROOTS dataset, which BLOOM was trained on. However, when I examined the vocabulary and tried to tokenize text in the language, I found that the tokenizer has no dedicated tokens for it.
Is it feasible to inject this language's vocabulary into BLOOM's tokenizer?
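The mechanical part of this is possible with the standard Transformers API: add tokens, then resize the embedding matrix. A minimal sketch follows (the token list is a placeholder). Note that BLOOM's byte-level tokenizer can already encode any text, just inefficiently, and that the new embedding rows start untrained, so you'd need to continue training on text in the language:

```python
# Minimal sketch: injecting new vocabulary into BLOOM's tokenizer.
# The token list is a placeholder for your language's frequent units.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # assumption; use your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

new_tokens = ["placeholder_token_1", "placeholder_token_2"]
num_added = tokenizer.add_tokens(new_tokens)  # returns how many were new
print(f"Added {num_added} tokens")

# Grow the embedding matrix so the new token ids have (randomly initialized)
# rows; these only become useful after further training on the language.
model.resize_token_embeddings(len(tokenizer))
```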