# Mistral-Small-24B-Instruct-2501-writer-AWQ
This model is the 4-bit AWQ-quantized version of Mistral-Small-24B-Instruct-2501-writer.
- Quantization method: AWQ (Activation-aware Weight Quantization)
- Quantization configuration:
  - Bit width: 4-bit
  - Group size: 128
  - Zero point: enabled
  - Version: GEMM
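The configuration above can be written out as the `quant_config` dictionary used by AutoAWQ's quantization API. This is a sketch for illustration (the key names follow AutoAWQ's convention and are not taken from this repository), together with a back-of-envelope estimate of the effective bits per weight implied by the group size:

```python
# Sketch (assumption): the quantization settings above, expressed as the
# quant_config dict accepted by AutoAWQ's quantize() API.
quant_config = {
    "w_bit": 4,           # 4-bit weight quantization
    "q_group_size": 128,  # one scale/zero-point pair per 128 weights
    "zero_point": True,   # asymmetric quantization (zero point enabled)
    "version": "GEMM",    # GEMM kernel layout
}

# Rough effective bits per weight: 4 payload bits plus per-group overhead
# of one fp16 scale (16 bits) and one packed 4-bit zero point.
overhead_bits = 16 + quant_config["w_bit"]
effective_bits = quant_config["w_bit"] + overhead_bits / quant_config["q_group_size"]
print(round(effective_bits, 3))  # prints 4.156
```

At roughly 4.16 effective bits per weight, the 24B-parameter model shrinks to about a quarter of its fp16 footprint.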
## Inference providers

This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because the repository has no library tag.
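Since no hosted endpoint serves the checkpoint, it has to be loaded locally. A minimal sketch using Hugging Face `transformers` (assumes `autoawq` is installed and a GPU with enough VRAM is available; the AWQ parameters are read automatically from the checkpoint's config):

```python
MODEL_ID = "lars1234/Mistral-Small-24B-Instruct-2501-writer-AWQ"

def load_model(model_id: str = MODEL_ID):
    """Load the AWQ-quantized checkpoint with transformers.

    The AWQ quantization config ships inside the checkpoint, so no extra
    quantization arguments are needed at load time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
```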
## Model tree for lars1234/Mistral-Small-24B-Instruct-2501-writer-AWQ

- Base model: mistralai/Mistral-Small-24B-Base-2501