---
license: unknown
---
- **Bits**: 4
- **Group Size**: 128
- **Damp Percent**: 0.01
- **Desc Act**: false
- **Static Groups**: false
- **Sym**: false
- **True Sequential**: false
- **LM Head**: true
- **Model Name or Path**: null
- **Model File Base Name**: model
- **Quant Method**: gptq
- **Checkpoint Format**: gptq
- **Meta**:
  - **Quantizer**: intel/auto-round:0.1
  - **Packer**: autogptq:0.8.0.dev1
  - **Iters**: 400
  - **LR**: 0.0025
  - **MinMax LR**: 0.0025
  - **Enable MinMax Tuning**: true
  - **Use Quant Input**: false
  - **Scale Dtype**: torch.float16
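
The settings above correspond to the `quantization_config` block of a model's `config.json`. A minimal sketch of how that block could be assembled, assuming the lower-snake-case field names conventionally used by GPTQ-style checkpoints (names not copied verbatim from this repository):

```python
import json

# Sketch of the quantization_config block implied by the list above.
# Field names follow the common GPTQ/AutoGPTQ convention (assumption).
quantization_config = {
    "bits": 4,                      # 4-bit weight quantization
    "group_size": 128,              # one scale/zero-point per 128 weights
    "damp_percent": 0.01,           # Hessian damping used during GPTQ
    "desc_act": False,
    "static_groups": False,
    "sym": False,                   # asymmetric quantization
    "true_sequential": False,
    "lm_head": True,                # the LM head is also quantized
    "model_name_or_path": None,
    "model_file_base_name": "model",
    "quant_method": "gptq",
    "checkpoint_format": "gptq",
    "meta": {
        "quantizer": "intel/auto-round:0.1",
        "packer": "autogptq:0.8.0.dev1",
        "iters": 400,               # auto-round tuning iterations
        "lr": 0.0025,
        "minmax_lr": 0.0025,
        "enable_minmax_tuning": True,
        "use_quant_input": False,
        "scale_dtype": "torch.float16",
    },
}

# Serialize as it would appear inside config.json.
print(json.dumps(quantization_config, indent=2))
```

The `meta` entries record that the weights were tuned with intel/auto-round and then packed into the AutoGPTQ checkpoint format, so any GPTQ-compatible loader that understands `quant_method: "gptq"` can consume the result.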