llama.cpp error
I am getting the following error while trying to quantize the model:
Exception: Vocab size mismatch (model has 32000, but TowerBase-7B-v0.1/tokenizer.model has 32005)
Could you please fix this? It might be an issue not only for quantization.
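In case it helps with debugging, the mismatch can also be seen outside llama.cpp by comparing the declared vocab size with the tokenizer's piece count. This is only a rough diagnostic sketch assuming the standard Hugging Face file layout of the TowerBase-7B-v0.1 folder; llama.cpp may count added tokens differently.

```python
# Rough diagnostic sketch (assumed file layout): compare the vocab size in
# config.json with the number of pieces in tokenizer.model.
import json
import sentencepiece as spm

model_dir = "TowerBase-7B-v0.1"

# Vocab size declared by the model config (the error reports 32000 on the model side).
with open(f"{model_dir}/config.json") as f:
    config_vocab = json.load(f)["vocab_size"]

# Number of pieces in the SentencePiece tokenizer (the error reports 32005).
sp = spm.SentencePieceProcessor(model_file=f"{model_dir}/tokenizer.model")
tokenizer_vocab = sp.get_piece_size()

print(f"config.json vocab_size: {config_vocab}")
print(f"tokenizer.model pieces: {tokenizer_vocab}")
```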
Hi,
Thanks for raising this. I believe it is now fixed. Could you download the model and try again?
I will, and I'll let you know. Thanks for the quick reply. In the meantime, I have loaded the model in 4-bit precision and tried it out, and I want to express my enthusiasm: the translation quality is very good.
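In case it helps anyone else, something along these lines works for 4-bit loading. This is a minimal sketch using transformers and bitsandbytes; the example prompt is just an illustrative few-shot-style input, not taken from the model card.

```python
# Minimal sketch: load TowerBase-7B-v0.1 in 4-bit precision with transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Unbabel/TowerBase-7B-v0.1"

# 4-bit quantization config; compute dtype is an illustrative choice.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Illustrative base-model-style translation prompt (not from the model card).
prompt = "English: A group of researchers has released a new model for translation tasks.\nPortuguese:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```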
Thank you for your interest in Tower, and your enthusiasm! You may want to try out our Instruct variants as well (https://huggingface.co./Unbabel/TowerInstruct-7B-v0.2 and https://huggingface.co./Unbabel/TowerInstruct-13B-v0.1) --- they are tailored for translation, among other tasks.
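For a quick start with the Instruct variant, something along these lines should work with the transformers pipeline. This is a minimal sketch: the source sentence is only illustrative, and it assumes the tokenizer ships a chat template for this model.

```python
# Minimal sketch: run TowerInstruct-7B-v0.2 through the transformers text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Unbabel/TowerInstruct-7B-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative translation request, formatted with the model's chat template.
messages = [
    {
        "role": "user",
        "content": "Translate the following text from Portuguese into English.\n"
                   "Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas de tradução.\n"
                   "English:",
    },
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```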
I can confirm that it is now possible to quantize the model. Many thanks.