This is a converted weight of the aya-expanse-8b model, quantized to Unsloth 4-bit dynamic format using this Colab notebook.

About this Conversion

This conversion uses Unsloth to load the model in 4-bit format and then force-saves it in the same 4-bit format.
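A minimal sketch of that load-and-save flow, assuming Unsloth's FastLanguageModel API and its merged_4bit_forced save method; the source repo id CohereForAI/aya-expanse-8b and the output directory name are assumptions here, and the actual Colab notebook may differ:

```python
from unsloth import FastLanguageModel

# Load the base model with 4-bit weights; bitsandbytes performs the quantization.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="CohereForAI/aya-expanse-8b",  # assumed source repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Force-save the weights in the same 4-bit format.
model.save_pretrained_merged(
    "aya-expanse-8b-bnb-4bit",         # assumed output directory
    tokenizer,
    save_method="merged_4bit_forced",  # assumed Unsloth save method for forced 4-bit export
)
```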

How 4-bit Quantization Works

  • The actual 4-bit quantization is handled by bitsandbytes (bnb), which runs on top of PyTorch.
  • Unsloth acts as a wrapper, simplifying and optimizing the process for better efficiency.

This allows for reduced memory usage and faster inference while keeping the model compact.
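For reference, a sketch of what the underlying bitsandbytes 4-bit loading looks like with plain Transformers, without the Unsloth wrapper; the NF4 quant type, double quantization, and compute dtype are common defaults and are assumptions here, not settings taken from the notebook:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# bitsandbytes 4-bit configuration (NF4 with double quantization is a typical choice).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed quant type
    bnb_4bit_use_double_quant=True,        # assumed; nested quantization saves a bit more memory
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

# Transformers hands the actual quantization off to bitsandbytes at load time.
model = AutoModelForCausalLM.from_pretrained(
    "CohereForAI/aya-expanse-8b",          # assumed source repo id
    quantization_config=bnb_config,
    device_map="auto",
)
```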

Model details

  • Format: Safetensors
  • Model size: 4.65B params
  • Tensor types: F32, FP16, U8