[Finetuning Code] Align-Anything supports Baichuan-M1
by XuyaoWang
We are very pleased to announce that align-anything now supports fine-tuning for Baichuan-M1. Compared to the community's existing implementation, we believe our solution is more user-friendly: after installation, you just need to run the script below to start training, without modifying any parameters.
- Installation:
# We tested on an H800 computing cluster, where this CUDA version works well.
# Adjust the version to match your own cluster's environment.
conda install nvidia/label/cuda-12.2.0::cuda
export CUDA_HOME=$CONDA_PREFIX
pip install -e .[train]
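As an optional sanity check (not part of the official steps), you can confirm that the conda-provided CUDA toolkit is picked up and that PyTorch can see the GPUs before launching training:
# Optional: verify the CUDA toolkit and GPU visibility after installation.
nvcc --version
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"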
- Train:
cd scripts
bash baichuan_m1_sft.sh
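For context, SFT scripts in align-anything typically just set a few paths and hand off to a DeepSpeed-launched trainer module. The sketch below is only an assumption of what baichuan_m1_sft.sh roughly contains; the variable names, flags, trainer module, and model identifier are illustrative, and the shipped script is the source of truth.
# Rough sketch only -- the names and flags below are assumptions based on other
# align-anything SFT scripts; consult scripts/baichuan_m1_sft.sh for the real ones.
MODEL_NAME_OR_PATH="baichuan-inc/Baichuan-M1-14B-Base"  # illustrative model id or local path
TRAIN_DATASETS="<your_sft_dataset>"                     # dataset name or local path
OUTPUT_DIR="../outputs/baichuan_m1_sft"                 # where checkpoints are written
deepspeed --module align_anything.trainers.text_to_text.sft \
    --model_name_or_path "${MODEL_NAME_OR_PATH}" \
    --train_datasets "${TRAIN_DATASETS}" \
    --output_dir "${OUTPUT_DIR}"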
Training details (progress and loss logs) will be printed to the console during the run.
- Deploy:
python3 -m align_anything.serve.text_modal_cli --model_name_or_path <your_model_path>
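As a concrete but hypothetical example, you can point the CLI at the directory where your SFT run saved its checkpoints (the path below is illustrative):
# Hypothetical path -- replace with wherever your SFT run wrote its final weights.
python3 -m align_anything.serve.text_modal_cli --model_name_or_path ../outputs/baichuan_m1_sft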