---
base_model: tiiuae/Falcon3-7B-Instruct
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
model_creator: tiiuae
model_name: Falcon3-7B-Instruct
quantized_by: Second State Inc.
language:
  - en
  - fr
  - es
  - pt
library_name: transformers
tags:
  - falcon3
---

# Falcon3-7B-Instruct-GGUF

## Original Model

[tiiuae/Falcon3-7B-Instruct](https://huggingface.co/tiiuae/Falcon3-7B-Instruct)
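The GGUF files need to be downloaded locally before running the commands below. A minimal sketch using `huggingface-cli`; the repo id `second-state/Falcon3-7B-Instruct-GGUF` is an assumption based on the quantizer name, so substitute the actual id of this repository:

```bash
# Assumption: this card lives at second-state/Falcon3-7B-Instruct-GGUF;
# replace the repo id and filename with the quant you want from the table below.
huggingface-cli download second-state/Falcon3-7B-Instruct-GGUF \
  Falcon3-7B-Instruct-Q5_K_M.gguf --local-dir .
```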

## Run with LlamaEdge

- LlamaEdge version: coming soon

- Prompt template

  - Prompt type: `falcon3`

  - Prompt string

    ```text
    <|system|>
    You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.
    <|user|>
    {user_message}
    <|assistant|>
    ```

- Context size: `32000`

- Run as LlamaEdge service (see the request sketch after this list)

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-7B-Instruct-Q5_K_M.gguf \
    llama-api-server.wasm \
    --model-name Falcon3-7B-Instruct \
    --prompt-template falcon3 \
    --ctx-size 32000
  ```

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-7B-Instruct-Q5_K_M.gguf \
    llama-chat.wasm \
    --prompt-template falcon3 \
    --ctx-size 32000
  ```

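Once the API server is running, it can be queried over its OpenAI-compatible HTTP API. A minimal sketch, assuming the server listens on the default port 8080:

```bash
# Assumption: llama-api-server.wasm started above is listening on port 8080.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Falcon3-7B-Instruct",
        "messages": [
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```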
## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Falcon3-7B-Instruct-Q2_K.gguf | Q2_K | 2 | 2.89 GB | smallest, significant quality loss - not recommended for most purposes |
| Falcon3-7B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 3.97 GB | small, substantial quality loss |
| Falcon3-7B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 3.69 GB | very small, high quality loss |
| Falcon3-7B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 3.37 GB | very small, high quality loss |
| Falcon3-7B-Instruct-Q4_0.gguf | Q4_0 | 4 | 4.30 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Falcon3-7B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 4.57 GB | medium, balanced quality - recommended |
| Falcon3-7B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 4.33 GB | small, greater quality loss |
| Falcon3-7B-Instruct-Q5_0.gguf | Q5_0 | 5 | 5.18 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Falcon3-7B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 5.32 GB | large, very low quality loss - recommended |
| Falcon3-7B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 5.18 GB | large, low quality loss - recommended |
| Falcon3-7B-Instruct-Q6_K.gguf | Q6_K | 6 | 6.12 GB | very large, extremely low quality loss |
| Falcon3-7B-Instruct-Q8_0.gguf | Q8_0 | 8 | 7.93 GB | very large, extremely low quality loss - not recommended |
| Falcon3-7B-Instruct-f16.gguf | f16 | 16 | 14.9 GB | |

Quantized with llama.cpp b4381
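
For reference, a quant such as Q5_K_M can be regenerated with llama.cpp's conversion script and quantize tool. A minimal sketch, assuming a llama.cpp checkout built at tag b4381 with the original weights in a local `Falcon3-7B-Instruct/` directory (paths are illustrative, not the exact commands used here):

```bash
# Convert the original HF weights to an f16 GGUF (run from the llama.cpp root).
python convert_hf_to_gguf.py ./Falcon3-7B-Instruct \
  --outtype f16 --outfile Falcon3-7B-Instruct-f16.gguf

# Quantize the f16 GGUF down to Q5_K_M.
./llama-quantize Falcon3-7B-Instruct-f16.gguf \
  Falcon3-7B-Instruct-Q5_K_M.gguf Q5_K_M
```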