Megrez-3B-Instruct-GGUF

Original Model

Infinigence/Megrez-3B-Instruct

Run with LlamaEdge

  • LlamaEdge version: coming soon
  • Prompt template

    • Prompt type: megrez

    • Prompt string

      <|role_start|>system<|role_end|>{system_message}<|turn_end|><|role_start|>user<|role_end|>{user_message}<|turn_end|><|role_start|>assistant<|role_end|>
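
      For example, a single-turn prompt with illustrative placeholder messages filled in looks like this:

      <|role_start|>system<|role_end|>You are a helpful assistant.<|turn_end|><|role_start|>user<|role_end|>What is the capital of France?<|turn_end|><|role_start|>assistant<|role_end|>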
      
  • Context size: 32000

  • Run as LlamaEdge service

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Megrez-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --model-name Megrez-3B-Instruct \
      --prompt-template megrez \
      --ctx-size 32000
    

    For conversation or article-writing use cases, temperature=0.7 is strongly recommended; for mathematics or logical reasoning, temperature=0.2 is strongly recommended.
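
    Once the server is running, it exposes an OpenAI-compatible chat completions API. A minimal request sketch with curl, assuming the server listens on the default port 8080 (the messages are placeholders):

    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Megrez-3B-Instruct",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "temperature": 0.7
      }'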

  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Megrez-3B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template megrez \
      --ctx-size 32000
    

    The same sampling advice applies here: temperature=0.7 for conversation or article writing, temperature=0.2 for mathematics or logical reasoning.
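
    The temperature can typically be set on the command line as well. A sketch assuming your LlamaEdge build supports the --temp flag:

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Megrez-3B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template megrez \
      --ctx-size 32000 \
      --temp 0.2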

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Megrez-3B-Instruct-Q2_K.gguf | Q2_K | 2 | 1.21 GB | smallest, significant quality loss - not recommended for most purposes |
| Megrez-3B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 1.60 GB | small, substantial quality loss |
| Megrez-3B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 1.50 GB | very small, high quality loss |
| Megrez-3B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 1.38 GB | very small, high quality loss |
| Megrez-3B-Instruct-Q4_0.gguf | Q4_0 | 4 | 1.73 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Megrez-3B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 1.81 GB | medium, balanced quality - recommended |
| Megrez-3B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 1.74 GB | small, greater quality loss |
| Megrez-3B-Instruct-Q5_0.gguf | Q5_0 | 5 | 2.05 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Megrez-3B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 2.09 GB | large, very low quality loss - recommended |
| Megrez-3B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 2.05 GB | large, low quality loss - recommended |
| Megrez-3B-Instruct-Q6_K.gguf | Q6_K | 6 | 2.40 GB | very large, extremely low quality loss |
| Megrez-3B-Instruct-Q8_0.gguf | Q8_0 | 8 | 3.10 GB | very large, extremely low quality loss - not recommended |
| Megrez-3B-Instruct-f16.gguf | f16 | 16 | 5.84 GB | |

Quantized with llama.cpp b4381
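
A quantized file can be fetched from the second-state/Megrez-3B-Instruct-GGUF repository before running the commands above; one option is huggingface-cli, assuming the huggingface_hub package is installed:

    huggingface-cli download second-state/Megrez-3B-Instruct-GGUF \
      Megrez-3B-Instruct-Q5_K_M.gguf \
      --local-dir .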
