Fine-tuning the model

#1
by Muhammadreza - opened

Greetings.
Is there any way to fine-tune this model, or any instructions on how to do so?

This is a quantized model in GGML format, so it's not suitable for fine-tuning. Maybe this repo will help you: https://github.com/lxe/simple-llama-finetuner . It uses the LoRA method, so fine-tuning can be done on a consumer GPU, but it won't produce results as good as a native fine-tune of the model πŸ€—
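
As a rough illustration of the LoRA approach that repo uses: with the Hugging Face PEFT library, the setup looks something like the sketch below. This assumes the full-precision alpaca-native weights (not this repo's GGML file), and the hyperparameters are illustrative, not tuned.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face PEFT.
# Assumes full-precision weights (e.g. chavinlo/alpaca-native),
# not a quantized GGML file; all names/values here are examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "chavinlo/alpaca-native"  # placeholder: any LLaMA-like checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

# LoRA trains small adapter matrices instead of all of the base
# weights, which is what makes consumer-GPU fine-tuning feasible.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all params
# ...then train with transformers.Trainer or a plain PyTorch loop.
```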

Thanks for the reply.
I have another question: is it possible to turn that model into a GGML model, then?

The model in this repo is already in GGML format, quantized and ready to go. If you want to convert any other LLaMA-like models (and then quantize them too), you need to use the original llama.cpp repo: https://github.com/ggerganov/llama.cpp

The original alpaca-native model can be found here: https://huggingface.co./chavinlo/alpaca-native

For now, there isn't a good, simple guide to converting and quantizing LLaMA models to GGML, sorry.
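
That said, the llama.cpp workflow boils down to two steps: convert the PyTorch checkpoint to an fp16 GGML file, then quantize it down to 4 bits. Here is a rough sketch, driven from Python for illustration (the script and binary names have changed between llama.cpp versions, so check the repo's README; the paths are placeholders):

```python
# Sketch of the llama.cpp convert-and-quantize workflow.
# Run from a built llama.cpp checkout; names/paths are placeholders
# and may differ in your version of the repo.
import subprocess

model_dir = "models/alpaca-native"  # directory with the PyTorch checkpoint

# Step 1: convert the PyTorch weights to an fp16 GGML file.
subprocess.run(
    ["python", "convert-pth-to-ggml.py", model_dir, "1"],
    check=True,
)

# Step 2: quantize the fp16 GGML file down to 4-bit (q4_0).
subprocess.run(
    ["./quantize",
     f"{model_dir}/ggml-model-f16.bin",
     f"{model_dir}/ggml-model-q4_0.bin",
     "2"],
    check=True,
)
```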

I did NOT quantize alpaca-native myself; I just downloaded it from a magnet link posted in the original model's discussions. I don't know how to fine-tune or quantize LLaMA models and can't help with this.

Sosaka, could you create a "ggml-alpaca-13b-q4.bin" version?

You can find one at https://huggingface.co./Pi3141/alpaca-native-13B-ggml/tree/main ,
but I haven't tested it.

Is this 13B model compatible with https://github.com/antimatter15/alpaca.cpp ? How do I run it?

I don't know, but there are GPT4 x Alpaca and the upcoming OpenAssistant models, which are better than basic Alpaca 13B (they're also incompatible with alpaca.cpp; use llama.cpp with the -ins flag instead).
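
If you go the llama.cpp route, launching instruction mode looks roughly like this (a sketch; the binary and flag names are from llama.cpp as of this writing and may change, and the model path is a placeholder):

```python
# Sketch of launching llama.cpp in instruction mode (-ins) from Python.
# Assumes a built llama.cpp ./main binary and a GGML model file;
# the model file name below is a placeholder.
import subprocess

subprocess.run([
    "./main",
    "-m", "models/ggml-model-q4_0.bin",  # path to your quantized model
    "-ins",       # interactive instruction mode (Alpaca-style prompts)
    "-n", "256",  # max tokens to generate per response
])
```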
