---
license: apache-2.0
---

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.

This repo contains the 1.7B model compiled to WASM, suitable for running with WebLLM.
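
Below is a minimal sketch of loading such a model in the browser with WebLLM. The repo paths, `model_id`, and WASM filename are placeholders (this repo's actual file names are not stated here), and the `model`/`model_lib` record fields assume a recent `@mlc-ai/web-llm` release (older releases used `model_url`/`model_lib_url`):

```ts
import * as webllm from "@mlc-ai/web-llm";

// Placeholder URLs: point these at this repo's actual weight shards and WASM file.
const appConfig: webllm.AppConfig = {
  model_list: [
    {
      model: "https://huggingface.co/<this-repo>/resolve/main/", // MLC-format weights
      model_id: "SmolLM2-1.7B-q4f16_1-MLC",                      // local identifier, arbitrary
      model_lib:
        "https://huggingface.co/<this-repo>/resolve/main/<model>-webgpu.wasm", // compiled WASM kernels
    },
  ],
};

// Download, compile, and initialize the model (requires a WebGPU-capable browser).
const engine = await webllm.CreateMLCEngine("SmolLM2-1.7B-q4f16_1-MLC", {
  appConfig,
  initProgressCallback: (p) => console.log(p.text), // report download/compile progress
});
```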

SmolLM2-1.7B

SmolLM2-1.7B demonstrates significant improvements over its predecessor, SmolLM1-1.7B, in instruction following, knowledge, reasoning, and mathematics.

- Training: trained on 11 trillion tokens using a diverse dataset combination including FineWeb-Edu, DCLM, The Stack, and new mathematics and coding datasets.
- Fine-tuning: developed through supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) using UltraFeedback.

Capabilities:

- Tasks: supports text rewriting, summarization, and function calling (see the usage sketch below).
- Datasets: utilizes datasets developed by Argilla, such as Synth-APIGen-v0.1.
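
As a rough illustration of invoking one of these tasks, the snippet below continues the hypothetical `engine` from the loading sketch above and requests a summary through WebLLM's OpenAI-style chat API:

```ts
// Continues the hypothetical `engine` created in the loading sketch above.
const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "You are a helpful assistant that summarizes text concisely." },
    { role: "user", content: "Summarize: SmolLM2 is a family of compact language models ..." },
  ],
  temperature: 0.2, // assumption: a lower temperature for more deterministic summaries
});
console.log(reply.choices[0].message.content);
```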


