GGUF models for yi.java

Pure .gguf Q4_0 and Q8_0 quantizations of 01.ai Yi models, ready to be consumed by yi.java.

In the wild, Q8_0 quantizations are usually pure, but Q4_0 quantizations rarely are: the output.weight tensor, for example, is typically quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization, i.e. one where every quantized tensor uses Q4_0, can be generated from a high-precision (F32, F16, BF16) .gguf source with the llama-quantize utility from llama.cpp as follows:

./llama-quantize --pure ./Yi-Coder-1.5B-Chat-F32.gguf ./Yi-Coder-1.5B-Chat-Q4_0.gguf Q4_0
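
The high-precision source can itself be produced from the original Hugging Face checkpoint with the convert_hf_to_gguf.py script that ships with llama.cpp. A sketch, assuming the checkpoint has been downloaded to ./Yi-Coder-1.5B-Chat (the local path is illustrative):

python convert_hf_to_gguf.py --outtype f32 --outfile ./Yi-Coder-1.5B-Chat-F32.gguf ./Yi-Coder-1.5B-Chat

Purity can then be verified by listing the per-tensor types with the gguf-dump script from llama.cpp's gguf Python package (pip install gguf); after a pure Q4_0 quantization every quantized tensor, including output.weight, should report Q4_0 (1-D tensors such as norms are never quantized and stay F32):

gguf-dump ./Yi-Coder-1.5B-Chat-Q4_0.gguf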

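A downloaded file can also be sanity-checked with a few lines of plain Java before handing it to yi.java. A minimal sketch, independent of yi.java's own loader, relying only on the documented GGUF header layout (4-byte magic "GGUF", then uint32 version, uint64 tensor count and uint64 metadata key-value count, all little-endian):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GgufCheck {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of(args[0]), StandardOpenOption.READ)) {
            // Fixed-size GGUF header: magic, version, tensor count, metadata KV count.
            ByteBuffer buf = ByteBuffer.allocate(24).order(ByteOrder.LITTLE_ENDIAN);
            ch.read(buf);
            buf.flip();
            if (buf.getInt() != 0x46554747) { // bytes 'G','G','U','F' read as a little-endian uint32
                throw new IOException("not a GGUF file: " + args[0]);
            }
            System.out.println("GGUF version     : " + Integer.toUnsignedLong(buf.getInt()));
            System.out.println("tensor count     : " + buf.getLong());
            System.out.println("metadata KV count: " + buf.getLong());
        }
    }
}

Run it directly as a single-file program: java GgufCheck.java Yi-Coder-1.5B-Chat-Q4_0.gguf
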
Intro

Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.

Key features:

  • Excelling in long-context understanding with a maximum context length of 128K tokens.
  • Supporting 52 major programming languages:
  'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'

For model details and benchmarks, see the Yi-Coder blog and the Yi-Coder README.

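Once downloaded, the quantized model can be run directly with yi.java. A hypothetical invocation via jbang, assuming yi.java follows the same single-file launcher convention and flags (--model, --chat) as mukel's other ports such as llama3.java; check the yi.java README for the actual options:

jbang Yi.java --model Yi-Coder-1.5B-Chat-Q4_0.gguf --chat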