---
license: other
---

Overview

This is a fine-tuned 13-billion-parameter LLaMA model, trained entirely on synthetic data generated with https://github.com/jondurbin/airoboros

Training data

I used a jailbreak prompt to generate the synthetic instructions, which resulted in some training data that would likely be censored by other models, such as how-to prompts about synthesizing drugs, making homemade flamethrowers, etc. To be clear, all of this content was generated by ChatGPT, not by me. My goal was simply to test some of ChatGPT's capabilities when unfiltered (as much as possible), not to intentionally produce harmful/dangerous/etc. content.

The jailbreak prompt I used is the default prompt in the Python code when using the --uncensored flag: https://github.com/jondurbin/airoboros/blob/main/airoboros/self_instruct.py#L39

I also did a few passes of manual cleanup to remove some bad prompts, but mostly I left the data as-is. Initially, the model was fairly bad at math/extrapolation, closed question-answering (heavy hallucination), and coding, so I did one more fine-tuning pass with additional synthetic instructions aimed at those types of problems.

Both the initial instructions and final-pass fine-tuning instructions will be published soon.

Fine-tuning method

I used the excellent FastChat module, running with:

```shell
source /workspace/venv/bin/activate

export NCCL_P2P_DISABLE=1
export NCCL_P2P_LEVEL=LOC

torchrun --nproc_per_node=8 --master_port=20001 /workspace/FastChat/fastchat/train/train_mem.py \
  --model_name_or_path /workspace/llama-13b \
  --data_path /workspace/as_conversations.json \
  --bf16 True \
  --output_dir /workspace/airoboros-uncensored-13b \
  --num_train_epochs 3 \
  --per_device_train_batch_size 20 \
  --per_device_eval_batch_size 20 \
  --gradient_accumulation_steps 2 \
  --evaluation_strategy "steps" \
  --eval_steps 500 \
  --save_strategy "steps" \
  --save_steps 500 \
  --save_total_limit 10 \
  --learning_rate 2e-5 \
  --weight_decay 0. \
  --warmup_ratio 0.04 \
  --lr_scheduler_type "cosine" \
  --logging_steps 1 \
  --fsdp "full_shard auto_wrap offload" \
  --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
  --tf32 True \
  --model_max_length 2048 \
  --gradient_checkpointing True \
  --lazy_preprocess True
```
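As a sanity check on these hyperparameters, the effective global batch size follows from the per-device batch size, the GPU count, and the gradient accumulation steps:

```python
# Effective global batch size for the run above:
# per-device batch * number of GPUs * gradient accumulation steps.
per_device_train_batch_size = 20
nproc_per_node = 8
gradient_accumulation_steps = 2

global_batch_size = (per_device_train_batch_size
                     * nproc_per_node
                     * gradient_accumulation_steps)
print(global_batch_size)  # → 320
```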

This ran on 8x NVIDIA A100 80GB GPUs for about 40 hours.
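The file passed via `--data_path` is in FastChat's conversation format. A minimal sketch of one record follows; the instruction text is invented for illustration, and the schema (a JSON list of records, each with an `id` and alternating `human`/`gpt` turns) is the standard one FastChat's training scripts expect:

```python
import json

# One training record in FastChat's conversation schema:
# a list of turns alternating between "human" and "gpt".
record = {
    "id": "example-0",
    "conversations": [
        {"from": "human", "value": "Explain what a synthetic instruction is."},
        {"from": "gpt", "value": "A synthetic instruction is a prompt "
                                 "generated by a model rather than written "
                                 "by a person."},
    ],
}

# The dataset file is simply a JSON list of such records.
with open("as_conversations_sample.json", "w") as f:
    json.dump([record], f, indent=2)
```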

License

The model weights are subject to the LLaMA license, and the dataset is subject to OpenAI's terms of use because it was generated with ChatGPT. Everything else is free to use.