---
base_model: migtissera/Tess-2.0-Mixtral-8x22B
license: apache-2.0
---
iMatrix GGUF quants of a newer finetune of Mixtral-8x22B.
EdgeQuants are still underway; the IQ4XS version is recommended. Make sure to combine/merge the parts back together before using:
```sh
cat tessIQ4XS.gguf.part* > tessIQ4XS.gguf
```
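As a quick sanity check (optional, assuming GNU coreutils; file names follow the example above), the merged file's size should equal the sum of the parts:

```sh
# Sizes of the individual parts
ls -l tessIQ4XS.gguf.part*

# Size of the merged file in bytes; should equal the sum of the parts
stat -c %s tessIQ4XS.gguf
```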
Then use it with a llama.cpp build from April 12 or earlier. The April 13 release introduced major changes that broke inference for MoE models.
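For example, a minimal sketch of pinning llama.cpp to a pre-April-13 commit and running the merged quant (the checkout command, prompt, and paths are illustrative, not from the original; assumes a Linux build with make):

```sh
# Clone llama.cpp and check out the last commit before April 13, 2024
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout $(git rev-list -n 1 --before="2024-04-13 00:00" master)

# Build and run against the merged GGUF
make -j
./main -m ../tessIQ4XS.gguf -p "Hello" -n 128
```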