---
license: apache-2.0
pipeline_tag: text2text-generation
tags:
  - llama
  - llm
---

This is a LoRA checkpoint fine-tuned with the CLI command below. The fine-tuning run is logged in a W&B dashboard. Training was done on a DGX workstation with 8 x A100 (40G) GPUs.

```bash
python finetune.py \
    --base_model='elinas/llama-65b-hf-transformers-4.29' \
    --data_path='alpaca_data.json' \
    --num_epochs=10 \
    --cutoff_len=1024 \
    --group_by_length \
    --output_dir='./lora-alpaca-65b-elinas' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --lora_alpha=32 \
    --batch_size=1024 \
    --micro_batch_size=15
```
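
The `--lora_*` flags above correspond to a `peft` `LoraConfig`. The sketch below shows a roughly equivalent configuration; the dropout value and other defaults are assumptions, since they depend on the internals of `finetune.py`:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                                                      # --lora_r
    lora_alpha=32,                                             # --lora_alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # --lora_target_modules
    lora_dropout=0.05,   # assumed default, not set on the CLI
    bias="none",
    task_type="CAUSAL_LM",
)
# model = get_peft_model(base_model, lora_config)  # wraps the frozen base model with LoRA adapters
```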

This LoRA checkpoint is recommended for use with `transformers` >= 4.29. As of 4/30/2023, that version should be installed with the following command:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
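
Below is a minimal loading sketch with `peft`. The adapter repo id is a placeholder (replace it with this repository's id), and the Alpaca-style prompt format is an assumption:

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "elinas/llama-65b-hf-transformers-4.29"
lora_adapter_id = "your-username/your-lora-repo"  # placeholder: use this repository's id

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, lora_adapter_id)
model.eval()

# Alpaca-style prompt (assumed format)
prompt = "### Instruction:\nExplain what LoRA fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```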