tomhuang06/TAIDE-LX-7B-Chat-bf16

This model was converted to GGUF bf16 format from taide/TAIDE-LX-7B-Chat using llama.cpp (commit 95bc82f).
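The conversion workflow can be sketched as follows. This is a minimal, hedged example: it assumes a recent llama.cpp checkout where the converter is named `convert_hf_to_gguf.py` and the runner is `llama-cli`; at the specific commit used here (95bc82f) the script and binary names may differ (older trees used `convert.py` and `main`).

```shell
# Fetch llama.cpp and install the converter's Python dependencies
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the Hugging Face checkpoint to a bf16 GGUF file
# (paths and output filename are illustrative)
python convert_hf_to_gguf.py /path/to/TAIDE-LX-7B-Chat \
    --outtype bf16 \
    --outfile TAIDE-LX-7B-Chat-bf16.gguf

# Run the converted model with llama-cli built from the same checkout
./llama-cli -m TAIDE-LX-7B-Chat-bf16.gguf -p "你好，請介紹一下台灣" -n 128
```

Note that bf16 keeps the full 16-bit brain-float weights, so the output file is roughly 2 bytes per parameter (about 14 GB for a 7B model); smaller quantized variants are produced with the separate `llama-quantize` tool.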

Format: GGUF
Model size: 6.94B params
Architecture: llama
Precision: 16-bit (bf16)


Model tree for tomhuang06/TAIDE-LX-7B-Chat-bf16: 15 quantized models derived from this model.