PEFT
Safetensors
qwen2
Generated from Trainer

If I thought I had no idea what I was doing with quantization, I REALLY have no idea what I'm doing with LoRA fine-tuning... This is my terrible attempt to instruct-tune base Qwen2-7B. I haven't even tested it yet; I'll do that eventually...

EDIT: Tested it for a bit; it actually seems to work OK. Not amazing, but genuinely not bad. I'll do another run once I learn more about instruct tuning...

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

```yaml
base_model: /workspace/data/models/Qwen2-7B
model_type: Qwen2ForCausalLM
tokenizer_type: Qwen2Tokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
#  - path: NobodyExistsOnTheInternet/ToxicQAFinal
#    type: sharegpt
#  - path: /workspace/data/SystemChat_filtered_sharegpt.jsonl
#    type: sharegpt
#    conversation: chatml
  - path: /workspace/data/Opus_Instruct-v2-6.5K-Filtered-v2.json
    type:
      field_system: system
      field_instruction: prompt
      field_output: response
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
#  - path: Undi95/orthogonal-activation-steering-TOXIC
#    type:
#      field_instruction: goal
#      field_output: target
#      format: "[INST] {instruction} [/INST]"
#      no_input_format: "[INST] {instruction} [/INST]"
#    split: test
  - path: cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered
    type: alpaca
    split: train

dataset_prepared_path: /workspace/data/last_run_prepared
val_set_size: 0.15
output_dir: /workspace/data/outputs/Qwen2-7B-TestInstructFinetune-LORA

chat_template: chatml

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 3e-5

train_on_inputs: false
group_by_length: true
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 4
debug:
deepspeed:
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|endoftext|>"
  eos_token: "<|im_end|>"
```
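
Since lora_target_linear: true targets every linear projection, a rough PEFT-level equivalent of the LoRA block above would look something like the sketch below (the target_modules list is my assumption about Qwen2's linear layer names, not anything Axolotl emits):

```python
from peft import LoraConfig

# Rough PEFT equivalent of the LoRA settings in the config above.
# target_modules is an assumption: lora_target_linear: true should expand to
# all of Qwen2's linear projections.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```

Note that lora_alpha is half of lora_r, so the adapter updates are scaled by alpha/r = 0.5.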

Qwen2-7B-TestInstructFinetune-LORA

This model is a LoRA adapter fine-tuned from base Qwen2-7B on the datasets listed in the Axolotl config above. It achieves the following results on the evaluation set:

  • Loss: 0.5037

Model description

More information needed

Intended uses & limitations

More information needed
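
That said, loading the adapter should follow the standard PEFT pattern. A minimal, untested sketch, assuming the repo ids shown on this page and bf16 weights to match training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-7B"
adapter_id = "FuturisticVibes/Qwen2-7B-TestInstructFinetune-LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Trained with chat_template: chatml and <|im_end|> as EOS, so build a
# ChatML prompt by hand (the base model's tokenizer may not carry a chat template).
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain LoRA fine-tuning in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```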

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 4

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6232        | 0.0027 | 1    | 0.6296          |
| 0.5602        | 0.2499 | 91   | 0.5246          |
| 0.4773        | 0.4998 | 182  | 0.5155          |
| 0.4375        | 0.7497 | 273  | 0.5116          |
| 0.6325        | 0.9997 | 364  | 0.5092          |
| 0.4385        | 1.2382 | 455  | 0.5073          |
| 0.4949        | 1.4882 | 546  | 0.5061          |
| 0.503         | 1.7381 | 637  | 0.5052          |
| 0.5023        | 1.9880 | 728  | 0.5046          |
| 0.3737        | 2.2238 | 819  | 0.5041          |
| 0.505         | 2.4737 | 910  | 0.5039          |
| 0.4833        | 2.7237 | 1001 | 0.5038          |
| 0.4986        | 2.9736 | 1092 | 0.5037          |
| 0.5227        | 3.2108 | 1183 | 0.5037          |
| 0.5723        | 3.4607 | 1274 | 0.5037          |
| 0.4692        | 3.7106 | 1365 | 0.5037          |
| 0.5222        | 3.9605 | 1456 | 0.5037          |
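
As a sanity check on the numbers above: with micro_batch_size: 1 and gradient_accumulation_steps: 8 the effective batch is 8, and the table crosses epoch 1.0 at step 364, so each epoch covers roughly 364 × 8 ≈ 2,912 packed sequences (sample packing at 8192 tokens, so many more raw examples than that):

```python
# Back-of-the-envelope check on the training table above.
micro_batch_size = 1
gradient_accumulation_steps = 8
total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 8

steps_per_epoch = 364  # step where the table crosses epoch ~1.0
packed_sequences_per_epoch = steps_per_epoch * total_train_batch_size
print(total_train_batch_size, packed_sequences_per_epoch)  # 8 2912
```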

Framework versions

  • PEFT 0.11.1
  • Transformers 4.42.3
  • Pytorch 2.1.2+cu118
  • Datasets 2.19.1
  • Tokenizers 0.19.1

Model tree for FuturisticVibes/Qwen2-7B-TestInstructFinetune-LORA

Base model: Qwen/Qwen2-7B

Dataset used to train FuturisticVibes/Qwen2-7B-TestInstructFinetune-LORA: cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered