---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations.

The output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, while performing as well as the pure f16 model.

Updated on: Sat Jul 27, 15:17:21
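
For reference, a mixed-precision quantization of this kind can be produced with llama.cpp's `llama-quantize` tool, which supports per-tensor type overrides via `--output-tensor-type` and `--token-embedding-type`. The sketch below is illustrative, not necessarily the exact command used here; the file names are hypothetical and assume an f16 GGUF as input:

```bash
# Hypothetical file names. The two flags keep the output and token-embedding
# tensors at f16, while the trailing type argument (q6_k) quantizes all
# remaining tensors; swap q6_k for q5_k to produce the f16.q5 variant.
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  model.f16.gguf model.f16.q6.gguf q6_k
```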