This is a GGUF conversion of Google's T5 v1.1 XXL encoder model.

The weights can be used with llama.cpp's `llama-embedding` example, or with the ComfyUI-GGUF custom node as a text encoder alongside image generation models.

This is a non-imatrix quant, as llama.cpp does not support imatrix creation for T5 models at the time of writing. It is therefore recommended to use Q5_K_M or larger for best results, although smaller quants may still provide decent results in resource-constrained scenarios.
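As a minimal sketch of the `llama-embedding` usage mentioned above (the quant filename and prompt are assumptions, not from this card — substitute the file you actually downloaded):

```shell
# Compute an embedding with llama.cpp's embedding example binary.
# The .gguf filename below is an assumption; use your downloaded quant.
./llama-embedding \
  -m t5-v1_1-xxl-encoder-Q5_K_M.gguf \
  -p "a watercolor painting of a fox"
```

The same `.gguf` file can instead be loaded by the ComfyUI-GGUF custom node when used as a text encoder for image generation workflows.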

Model size: 4.76B params
Architecture: t5encoder

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, and 32-bit.


Base model: google/t5-v1_1-xxl