---
library_name: transformers
language:
  - en
  - fr
  - it
  - pt
  - hi
  - es
  - th
  - de
base_model:
  - ockerman0/MN-12B-Starcannon-v5.5-unofficial
base_model_relation: quantized
tags:
  - mergekit
  - merge
  - mistral
quantized_by: tsss
pipeline_tag: text-generation
---

This repo contains EXL2 quants of ockerman0/MN-12B-Starcannon-v5.5-unofficial.

Find the original model card [here](https://huggingface.co/ockerman0/MN-12B-Starcannon-v5.5-unofficial).

The base repo only contains the measurement file; see the revisions (branches) for the actual quants. A download sketch follows below.
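
If you want to pull a specific quant, something like the following should work. This is a minimal sketch using `huggingface_hub`; the repo id and branch name are placeholders, so check this repo's revisions list for the real branch names.

```python
# Sketch: downloading one quant branch with huggingface_hub.
# The repo id and branch name below are placeholders, not the
# actual values for this repository.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="user/this-repo",   # replace with this repository's id
    revision="4.0bpw",          # branch holding the desired quant (assumed name)
    local_dir="MN-12B-Starcannon-v5.5-EXL2",
)
```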

## Notes

Making these was a lesson in pain and humility. It has been over two months since the day I decided "hm, today I will learn how to make EXL2 quants" (clueless). First my conda env stopped working for reasons I never figured out; then it stopped recognizing venvs when I tried using those instead; then the universe somehow broke the one venv I did have working (I can only assume a cosmic bitflip, because it literally stopped working overnight). Making these four quants alone took over an hour on my hardware, in which time I could probably have made an entire set of GGUFs (plus a full set of i-quants) for three different models.

Uploading them was another ordeal, because huggingface-cli might as well be arcane magic: the documentation doesn't really tell you how to actually use it, or what exactly will happen when you run a given command. I haven't even tested any of these quants, because tabbyapi, to this day, simply will not work for me. Torch keeps complaining about running out of VRAM, even when trying to load the 3-bit quants. I have tried basically everything to get tabbyapi running. It simply will not.
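
For what it's worth, uploading each quant to its own branch can also be done through the Python API instead of the CLI. This is only a hedged sketch of one way to do it (not necessarily what I did); the repo id, branch name, and folder path are placeholders.

```python
# Sketch: pushing one quant to its own branch via huggingface_hub.
# Repo id, branch name, and folder path are placeholders.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "user/this-repo"   # replace with the target repo
branch = "4.0bpw"            # one branch per quant

# Create the branch if it does not exist yet, then upload into it.
api.create_branch(repo_id=repo_id, branch=branch, exist_ok=True)
api.upload_folder(
    folder_path="out/4.0bpw",  # local directory holding the quant
    repo_id=repo_id,
    revision=branch,
)
```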

Suggest more models in the community tab and I might have a crack at EXL2'ing them.

The model itself is quite nice, though; it works well for my use case of synthetic variant generation, if you're into that sort of thing.