gemma-2-9B-it-q4_0

This is a quantized version of the Gemma 2 9B instruct model (Gemma2-9B-it), produced with the Q4_0 quantization method.

Model Details

  • Original Model: Gemma2-9B-it
  • Quantization Method: Q4_0
  • Precision: 4-bit
  • Format: GGUF
  • Model Size: 9.24B params
  • Architecture: gemma2

Usage

The model is distributed as a GGUF file, so it can be loaded directly with llama.cpp (for example via the llama-cli tool) or with bindings built on top of it, such as llama-cpp-python.
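
A minimal sketch using the llama-cpp-python bindings; the local file name below is an assumption, so point model_path at wherever the downloaded GGUF file actually lives:

    # Minimal chat example with llama-cpp-python (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="gemma-2-9b-it-q4_0.gguf",  # assumed local path to the GGUF file
        n_ctx=4096,                            # context window size
    )

    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What does Q4_0 quantization mean?"}],
        max_tokens=128,
    )
    print(result["choices"][0]["message"]["content"])

The same file also works with the llama.cpp command-line tools, e.g. llama-cli -m gemma-2-9b-it-q4_0.gguf.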
