This is a quantized version of the BrewInteractive/fikri-3.1-8B-Instruct model.

  • Original model: fikri-3.1-8B-Instruct

  • Base model: LLaMA-3.1-8B

  • Quantization: Q4_K_M

  • Optimized for faster inference and reduced memory usage, with only a modest quality trade-off

  • Built on the LLaMA 3.1 architecture (8B)

  • Fine-tuned for Turkish language tasks

  • Quantized for improved efficiency

How to use

  1. Install llama.cpp:

    • For macOS, use Homebrew:
      brew install llama.cpp
      
    • For other operating systems, follow the installation instructions on the llama.cpp GitHub repository.
  2. Download the quantized GGUF file from this repository's Files section.

  3. Run the following command for conversation mode:

llama-cli -m ./fikri-3.1-8B-Instruct-Q4_K_M.gguf --no-mmap -fa -c 4096 --temp 0.8 -if --in-prefix "<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
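The `--in-prefix` and `--in-suffix` flags above apply the Llama 3.1 chat special tokens around each user turn by hand. As a minimal sketch of what that framing looks like, the same prompt string can be built in Python (string construction only; `build_prompt` is a hypothetical helper, not part of llama.cpp):

```python
# Sketch: reproduce the Llama 3.1 prompt framing that the
# --in-prefix / --in-suffix flags supply on the command line.
# `build_prompt` is a hypothetical helper, not a llama.cpp API.

USER_PREFIX = "<|start_header_id|>user<|end_header_id|>\n\n"
ASSISTANT_SUFFIX = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Llama 3.1 chat special tokens,
    leaving the prompt open for the assistant's reply."""
    return USER_PREFIX + user_message + ASSISTANT_SUFFIX

# Example: a Turkish user turn, matching the model's fine-tuning focus.
prompt = build_prompt("Merhaba, nasılsın?")
print(prompt)
```

Any runtime that accepts a raw prompt string (rather than a chat template) can be fed this framing directly.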
Model details

  • Format: GGUF

  • Model size: 8.03B params

  • Architecture: llama

  • Precision: 4-bit (Q4_K_M)