
Introduction

This repository contains the humanized SmolLM2 360M model in the GGUF format.

  • Quantizations: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_0, q4_K_S, q4_K_M, q5_0, q5_K_S, q5_K_M, q6_K, q8_0

Quickstart

We advise you to clone llama.cpp and install it following the official guide; this model tracks the latest version of llama.cpp. The commands below assume you are running them from the root of the llama.cpp repository.
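A typical clone-and-build sequence looks like the following (a sketch based on the standard CMake workflow at the time of writing; check the official llama.cpp build guide, as steps and flags change between versions):

```shell
# Clone llama.cpp and build it with CMake
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```

After a successful build, the CLI binaries (such as llama-cli) are placed under build/bin.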

Since cloning the entire model repository downloads every quant, you can instead fetch only the GGUF file you need, either manually or with huggingface-cli:

  1. Install:
    pip install -U huggingface_hub
    
  2. Download:
    huggingface-cli download AssistantsLab/SmolLM2-360M-humanized_GGUF smollm2-360M-humanized-q4_k_m.gguf --local-dir . --local-dir-use-symlinks False
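Once the file is downloaded, you can chat with it directly. A minimal sketch, assuming llama.cpp has been built as above and you run the command from its root (the -cnv flag starts llama-cli's conversation mode):

```shell
# Chat with the downloaded quant using llama-cli
./build/bin/llama-cli -m smollm2-360M-humanized-q4_k_m.gguf -cnv
```

Smaller quants (e.g. q2_K) trade response quality for a smaller file and lower memory use; q4_K_M is a common middle ground.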
    

Quants

Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision; see the full list of quantization types above.

More information

For more information about this model, please visit the original model here.

License

Apache 2.0

Citation

SmolLM2:

@misc{allal2024SmolLM2,
      title={SmolLM2 - with great data, comes great performance}, 
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
      year={2024},
}

Human-Like-DPO-Dataset:

@misc{çalık2025enhancinghumanlikeresponseslarge,
      title={Enhancing Human-Like Responses in Large Language Models}, 
      author={Ethem Yağız Çalık and Talha Rüzgar Akkuş},
      year={2025},
      eprint={2501.05032},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.05032}, 
}

Model details

  • Model size: 362M params
  • Architecture: llama
