---
license: llama2
---
## What is it?
This is a quantized version of h2oai/h2ogpt-4096-llama2-13b-chat, converted to the GGUF format so it can be run with llama.cpp and similar inference tools.
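For a quick start, a GGUF file like this can be loaded from Python through the llama-cpp-python bindings for llama.cpp. This is a minimal sketch rather than this repository's official usage: the file name is an assumed placeholder, so substitute whichever quant you actually download.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name below is an assumed placeholder, not a confirmed file name in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="h2ogpt-4096-llama2-13b-chat.q8_0.gguf",  # assumed local path to a downloaded quant
    n_ctx=4096,  # the base model uses a 4096-token context window
)

result = llm(
    "Why is the sky blue?",
    max_tokens=256,
    stop=["</s>"],
)
print(result["choices"][0]["text"])
```

The same file also works directly with the llama.cpp command-line tools.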
## Available Formats
Format | Bits | Use case |
---|---|---|
q8_0 | 8 | Original quant method, 8-bit. |
## Currently being converted
Format | Bits | Use case |
---|---|---|
q3_K_L | 3 | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
q3_K_M | 3 | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
q3_K_S | 3 | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
q4_0 | 4 | Original quant method, 4-bit. |
q4_1 | 4 | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
q4_K_M | 4 | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
q4_K_S | 4 | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
q5_0 | 5 | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
q5_1 | 5 | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
q5_K_M | 5 | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
q5_K_S | 5 | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
q6_K | 6 | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
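Once any of the quants in the table above has been published, it can be fetched programmatically and loaded the same way as the q8_0 file. A hedged sketch follows; the repository id and file name are illustrative assumptions, not confirmed names.

```python
# Hypothetical sketch: download one quant with huggingface_hub and load it with llama-cpp-python.
# repo_id and filename are assumptions for illustration; replace them with the real ones.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-org/h2ogpt-4096-llama2-13b-chat-GGUF",  # assumed repository id
    filename="h2ogpt-4096-llama2-13b-chat.q4_K_M.gguf",   # assumed file name for the q4_K_M quant
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```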
## Original Model Card
```yaml
inference: false
language:
- en
license: llama2
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- h2ogpt
```
h2oGPT clone of Meta's Llama 2 13B Chat.
Try it live on our h2oGPT demo with side-by-side LLM comparisons and private document chat!
See how it compares to other models on our LLM Leaderboard!
See more at H2O.ai
### Model Architecture
```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 5120, padding_idx=0)
    (layers): ModuleList(
      (0-39): 40 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): Linear(in_features=5120, out_features=5120, bias=False)
          (k_proj): Linear(in_features=5120, out_features=5120, bias=False)
          (v_proj): Linear(in_features=5120, out_features=5120, bias=False)
          (o_proj): Linear(in_features=5120, out_features=5120, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=5120, out_features=13824, bias=False)
          (up_proj): Linear(in_features=5120, out_features=13824, bias=False)
          (down_proj): Linear(in_features=13824, out_features=5120, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=5120, out_features=32000, bias=False)
)
```
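The printout above can be reproduced from the original (unquantized) checkpoint with transformers; a minimal sketch, assuming roughly 26 GB of memory is available for the fp16 weights:

```python
# Minimal sketch: load the original fp16 checkpoint and print its module tree.
# Loading 13B parameters in fp16 needs about 26 GB of RAM/VRAM.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2ogpt-4096-llama2-13b-chat",
    torch_dtype=torch.float16,
)
print(model)  # prints the LlamaForCausalLM structure shown above
```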