---
base_model: alpindale/Mistral-7B-v0.2-hf
tags:
  - generated_from_trainer
model-index:
  - name: AdamGrzesik/Samantha-PL-AG-Mistral-7B-v0.2
    results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: alpindale/Mistral-7B-v0.2-hf
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /workspace/datasets/Samantha-PL-AG-axolotl.json
    type: sharegpt

chat_template: chatml

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: /workspace/Samantha

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

wandb_project: dolphin
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 3
num_epochs: 4
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000005
optimizer: adamw_bnb_8bit

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10

eval_steps: 73
eval_table_size:
eval_table_max_new_tokens:
eval_sample_packing: false
saves_per_epoch: 
save_steps: 73
save_total_limit: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
tokens:
  - "<|im_start|>"

# AdamGrzesik/Samantha-PL-AG-Mistral-7B-v0.2

This model is a fine-tuned version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) on the Samantha-PL-AG dataset, the ShareGPT-style JSON referenced in the `datasets` section of the config above.
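Because the config trains with the ChatML template (`chat_template: chatml`, with `<|im_start|>` / `<|im_end|>` as added tokens), inference prompts should use the same format. A minimal usage sketch with 🤗 Transformers, assuming the uploaded tokenizer carries the ChatML chat template (otherwise the prompt must be formatted manually with the `<|im_start|>` / `<|im_end|>` markers); generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AdamGrzesik/Samantha-PL-AG-Mistral-7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# apply_chat_template renders the conversation with the tokenizer's
# ChatML template, inserting <|im_start|>/<|im_end|> markers.
messages = [{"role": "user", "content": "Cześć! Opowiedz mi o sobie."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```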

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 48 (derived as shown below)
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
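The reported `total_train_batch_size` is not an independent setting; it follows from the per-device micro batch, the gradient-accumulation steps, and the device count listed above. A quick sanity check:

```python
micro_batch_size = 3             # per-device batch (train_batch_size above)
gradient_accumulation_steps = 8  # from the axolotl config
num_devices = 2                  # multi-GPU run on 2 devices

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 48  # matches the value reported above
```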