LLaMAX/LLaMAX3-8B training notebook

#3
by Phil-AB - opened

Can I kindly get the notebook that was used to fine-tune the model for the translation task? I want to adapt it to train the model to translate a low-resource language.

Owner

Can I kindly get the notebook that was used to fine-tune the model for the translation task? I want to adapt it to train the model to translate a low-resource language.

Thank you for your interest in our work. LLaMAX is trained based on the LLaMA model, so any training framework that supports LLaMA can be directly used to fine-tune LLaMAX. If you want to perform supervised fine-tuning on your own data based on LLaMAX, the following code will be useful: https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py
Note that you may need to adjust the special tokens defined around lines 27-30 of the Alpaca train.py to be compatible with LLaMA-3-based models.
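As a sketch of what that adjustment might look like (the LLaMA-3 token strings below come from the usual LLaMA-3 tokenizer config and should be verified against your model's tokenizer_config.json; `patch_special_tokens` is my own illustrative helper, not part of the Alpaca code):

```python
# stanford_alpaca/train.py defines LLaMA-1/2 SentencePiece-style special tokens:
ALPACA_SPECIAL_TOKENS = {
    "pad_token": "[PAD]",
    "eos_token": "</s>",
    "bos_token": "<s>",
    "unk_token": "<unk>",
}

# LLaMA-3-style replacements (assumption: verify against tokenizer_config.json).
# LLaMA-3 ships no dedicated pad token; reusing the EOS token is a common choice.
LLAMA3_SPECIAL_TOKENS = {
    "pad_token": "<|end_of_text|>",
    "eos_token": "<|end_of_text|>",
    "bos_token": "<|begin_of_text|>",
}

def patch_special_tokens(tokenizer, model):
    """Register the replacement tokens and resize embeddings if any are new."""
    num_added = tokenizer.add_special_tokens(LLAMA3_SPECIAL_TOKENS)
    if num_added > 0:
        model.resize_token_embeddings(len(tokenizer))
    return num_added
```

If you reuse EOS as the pad token, remember to mask padding positions out of the loss (the Alpaca code already does this via the label ignore index).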

In addition, you can refer to the following template to organize your translation data:
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

Instruction:

Translate the following sentences from English to Chinese Simpl

Input:

"We now have 4-month-old mice that are non-diabetic that used to be diabetic," he added.

Response:他补充道:“我们现在有 4 个月大没有糖尿病的老鼠,但它们曾经得过该病。”

"""

Thank you for this.

I'm relatively new to this, so I apologize for asking a "dumb" question.

But I would like to know how to run inference with the model. The inference code I currently have is not giving me the required output.

Owner

Thank you for this.

I'm relatively new to this, so I apologize for asking a "dumb" question.

But I would like to know how to run inference with the model. The inference code I currently have is not giving me the required output.

You can try our instruction-tuned model (https://huggingface.co./LLaMAX/LLaMAX3-8B-Alpaca) and follow the example given in its README.
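For reference, a minimal inference sketch with the Hugging Face transformers library, using the Alpaca-style prompt from this thread (the generation parameters are my own reasonable defaults, not values from the model's README; check the README for the canonical example):

```python
def build_prompt(query, src_lang="English", tgt_lang="Simplified Chinese"):
    # Render the Alpaca-style template used for the translation task.
    instruction = f"Translate the following sentences from {src_lang} to {tgt_lang}."
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n"
        f"### Instruction:\n{instruction}\n"
        f"### Input:\n{query}\n### Response:"
    )

def extract_response(decoded, prompt):
    # generate() returns prompt + continuation; keep only the continuation.
    return decoded[len(prompt):].strip()

def run_demo():
    # Imports kept local so the prompt helpers work without the heavy deps.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LLaMAX/LLaMAX3-8B-Alpaca"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    prompt = build_prompt(
        '"We now have 4-month-old mice that are non-diabetic that used to be '
        'diabetic," he added.'
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(extract_response(decoded, prompt))
```

If the output still looks wrong, the usual culprit is a prompt that does not match the template the model was tuned on, so compare your prompt string character-for-character against the template above.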
