basiliskinstitute committed
Commit 8e3f9bb · verified · 1 Parent(s): 4d4480b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -2,4 +2,4 @@ This is Wizard 8x22, quantized to GPTQ with these parameters:
 
 python3 quant.py alpindale/WizardLM-2-8x22B /workspace/wizard-4bit custom --bits 4 --group_size 128 --desc_act 1 --damp 0.1 --dtype float16 --seqlen 16384 --num_samples 256 --cache_examples 0 --trust_remote_code
 
- The dataset used was openerotica/erotiquant2. I have included a script reconstitute.py to merge the files into one. Depending on the backend you might need to delete the index file after the files have been merged. I'll try to do this all in a better way once I work out I test out how marlin stacks up to exl2 for this model.
+ The dataset used was openerotica/erotiquant2. I have included a script reconstitute.py to merge the files into one. Depending on the backend you might need to delete the index file after the files have been merged. I'll try to do this all in a better way after I test out how marlin stacks up to exl2 for this model.
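The referenced reconstitute.py is not shown in this diff, so the following is only a minimal sketch of what a shard-merging step like it might do, assuming the quantized checkpoint is stored as sharded safetensors files named model-*.safetensors alongside a model.safetensors.index.json in /workspace/wizard-4bit. The file names, paths, and merge logic here are assumptions for illustration, not the repository's actual script.

```python
# Hypothetical sketch: merge sharded safetensors files into a single file and
# drop the shard index that some backends reject once only one file remains.
import glob
import os

from safetensors.torch import load_file, save_file

model_dir = "/workspace/wizard-4bit"  # assumed output path from the quant command

# Collect every shard's tensors into one state dict.
# Note: this holds the entire model in RAM, which is substantial for an 8x22B model.
merged = {}
for shard in sorted(glob.glob(os.path.join(model_dir, "model-*.safetensors"))):
    merged.update(load_file(shard))

# Write a single consolidated file; the "pt" format tag keeps PyTorch loaders happy.
save_file(merged, os.path.join(model_dir, "model.safetensors"), metadata={"format": "pt"})

# Depending on the backend, the leftover shard index may need to be deleted
# so the loader does not go looking for shards that no longer exist.
index_path = os.path.join(model_dir, "model.safetensors.index.json")
if os.path.exists(index_path):
    os.remove(index_path)
```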