
Information

GPT4-X-Alpaca 30B quantized to 4-bit, working with the GPTQ versions used in Oobabooga's Text Generation WebUI and KoboldAI.

Quantized using the --true-sequential and --act-order optimizations.
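For reference, a quantization run of this shape could look like the sketch below. This assumes qwopqwop200's GPTQ-for-LLaMa llama.py script; the input path and output filename are placeholders, not from the original card. Note the absence of --groupsize, matching this model.

```
# Sketch: 4-bit GPTQ quantization with --true-sequential and --act-order,
# and no --groupsize (as used for this model). Paths are placeholders.
python llama.py /path/to/gpt4-x-alpaca-30b c4 \
    --wbits 4 \
    --true-sequential \
    --act-order \
    --save_safetensors gpt4-x-alpaca-30b-4bit.safetensors
```

The script also evaluates perplexity on wikitext2, ptb-new, and c4-new, which is presumably where the benchmark numbers below come from.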

This was made using Chansung's GPT4-Alpaca LoRA: https://huggingface.co./chansung/gpt4-alpaca-lora-30b

Training Parameters

  • num_epochs=10
  • cutoff_len=512
  • group_by_length
  • lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'
  • lora_r=16
  • micro_batch_size=8
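For context, these parameters map directly onto the finetune.py CLI from tloen's alpaca-lora, which Chansung's repo is based on. A hypothetical invocation follows; the base model, data path, and output directory are placeholders, not from the original card.

```
# Sketch: LoRA fine-tuning with the parameters listed above,
# assuming tloen/alpaca-lora's finetune.py interface. Placeholder paths.
python finetune.py \
    --base_model '/path/to/llama-30b-hf' \
    --data_path '/path/to/gpt4-alpaca-data.json' \
    --output_dir './gpt4-alpaca-lora-30b' \
    --num_epochs 10 \
    --cutoff_len 512 \
    --group_by_length \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r 16 \
    --micro_batch_size 8
```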

Benchmarks

Perplexity (lower is better):

  • Wikitext2: 4.481280326843262
  • PTB-New: 8.539161682128906
  • C4-New: 6.451964855194092

Note: This version does not use --groupsize 128, so the evaluation numbers are marginally higher. In exchange, this version allows fitting the whole model at full context using only 24GB of VRAM.
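To load it at full context on a single 24GB card, a launch along these lines should work. This is a sketch assuming text-generation-webui's server.py flags; the model folder name is a placeholder for wherever the weights are placed.

```
# Sketch: loading the 4-bit model in text-generation-webui.
# No --groupsize flag is passed, since this quantization does not use one.
python server.py \
    --model GPT4-X-Alpaca-30B-4bit \
    --wbits 4 \
    --model_type llama
```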