
NOTE: You will need a recent build of llama.cpp to run these quants (at least commit 494c870).

GGUF importance matrix (imatrix) quants for https://huggingface.co./cognitivecomputations/dolphincoder-starcoder2-15b
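
If helpful, one way to fetch a quant file is with `huggingface_hub`. This is only a sketch, not part of the original card; the filename shown is a hypothetical placeholder for whichever quant you pick from the repo's file list.

```python
# Sketch: download one quant file from this repo with huggingface_hub.
# The filename below is hypothetical -- pick an actual file from the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="dranger003/dolphincoder-starcoder2-15b-iMat.GGUF",
    filename="dolphincoder-starcoder2-15b-q4_k_m-imat.gguf",  # hypothetical filename
)
print(path)  # local path to the downloaded GGUF
```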

This model is based on StarCoder2-15b and is subject to the bigcode-openrail-m license.
This Dolphin is really good at coding; I trained it with a lot of coding data.
This model is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.

Layers: 40
Context: 16384

Prompt template (ChatML):

    <|im_start|>system
    You are DolphinCoder, a helpful AI programming assistant.<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant
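
Below is a minimal usage sketch (not part of the original card) showing one way to feed this template to the quants via llama-cpp-python; the GGUF filename, generation settings, and example user prompt are assumptions.

```python
# Sketch: run one of these quants with llama-cpp-python.
# The GGUF filename and settings below are assumptions, not repo-provided values.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphincoder-starcoder2-15b-q4_k_m-imat.gguf",  # hypothetical local filename
    n_ctx=16384,      # matches the context length listed above
    n_gpu_layers=-1,  # offload all layers if built with GPU support (-1 = all)
)

# Build the ChatML prompt exactly as shown in the template above.
prompt = (
    "<|im_start|>system\n"
    "You are DolphinCoder, a helpful AI programming assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```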
Model size: 16B params
Architecture: starcoder2
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
