---
library_name: transformers
license: apache-2.0
quantized_by: stillerman
tags:
- llamafile
- gguf

language:
- en
datasets:
- HuggingFaceTB/smollm-corpus
---

# SmolLM-1.7B-Instruct - llamafile

This repo contains `.gguf` and `.llamafile` files for [SmolLM-1.7B-Instruct](https://huggingface.co./collections/HuggingFaceTB/smollm-6695016cad7167254ce15966). A [llamafile](https://llamafile.ai/) is a single-file executable that runs locally on most computers with no installation required.
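
If you prefer the raw `.gguf` weights, they load with standard [llama.cpp](https://github.com/ggerganov/llama.cpp/) tooling. A minimal sketch (the exact GGUF filename is an assumption here; check this repo's file list for the actual name):

```
# Assumes llama.cpp is installed and the F16 GGUF has been downloaded from this repo
llama-cli -m SmolLM-1.7B-Instruct-F16.gguf -p "Tell me about gravity." -n 128
```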

# Use it in 3 lines!
```
wget https://huggingface.co./stillerman/SmolLM-1.7B-Instruct-Llamafile/resolve/main/SmolLM-1.7B-Instruct-F16.llamafile
chmod a+x SmolLM-1.7B-Instruct-F16.llamafile
./SmolLM-1.7B-Instruct-F16.llamafile
```
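
Running the llamafile also starts a local llama.cpp server with an OpenAI-compatible endpoint (by default at `http://localhost:8080`), so you can query it from another terminal. A minimal sketch, assuming the default port and settings:

```
# Send a chat request to the built-in server started by the llamafile
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```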

# Thank you to
- Hugging Face for the [SmolLM model family](https://huggingface.co./collections/HuggingFaceTB/smollm-6695016cad7167254ce15966)
- Mozilla for [Llamafile](https://llamafile.ai/)
- [llama.cpp](https://github.com/ggerganov/llama.cpp/)
- [Justine Tunney](https://huggingface.co./jartine) and [Compilade](https://github.com/compilade) for help