SQFT Base Model: sqft-mistral-7b-v0.3-50-base-gptq

Model Sources

How to get this model

Refer to the commands in SQFT/run_command/mistral-7b-v0.3/sparse_quantization.sh in the SQFT repository, which sparsifies the Mistral-7B-v0.3 base model and then quantizes it with GPTQ.

Citation

@inproceedings{munoz2024sqft,
  title={SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  year={2024}
}

Acknowledgement

Thanks to the Wanda pruning algorithm and the GPTQ quantization method, on which this model builds.

License

Apache-2.0

