Hugging Face Models
Models tagged "4bit" • 195 results • Sort: Trending
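The same filtered listing can be pulled programmatically. Below is a minimal sketch using the huggingface_hub client, assuming its list_models API accepts a tag filter and a sort key; the web UI's "Trending" order is computed server-side, so download count is used as a stand-in here, and attribute names on the returned objects may vary between library versions.

```python
# Minimal sketch: reproduce the "4bit" tag filter from this page with the
# huggingface_hub client. The web UI's "Trending" order is computed
# server-side, so downloads are used as the sort key here (an approximation).
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    filter="4bit",     # the active tag filter on this page
    sort="downloads",  # assumed sort key; "Trending" itself is not replicated
    direction=-1,      # descending
    limit=30,          # roughly one page of results
)

for m in models:
    # Attribute names on ModelInfo can vary slightly across library versions.
    print(f"{m.id} • {m.pipeline_tag} • {m.downloads} downloads • {m.likes} likes")
```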
legraphista/RoGemma-7b-Instruct-IMat-GGUF • Text Generation • Updated Jun 27 • 106 downloads
legraphista/Llama-3-Instruct-8B-SPPO-Iter3-IMat-GGUF • Text Generation • Updated Jun 27 • 194 downloads
legraphista/Yi-9B-Coder-IMat-GGUF • Text Generation • Updated Jun 27 • 569 downloads
legraphista/gemma-2-9b-it-IMat-GGUF • Text Generation • Updated Jun 30 • 538 downloads • 2 likes
legraphista/gemma-2-27b-it-IMat-GGUF • Text Generation • Updated Sep 20 • 1.55k downloads • 20 likes
legraphista/llm-compiler-7b-IMat-GGUF • Text Generation • Updated Jun 27 • 360 downloads
legraphista/llm-compiler-7b-ftd-IMat-GGUF • Text Generation • Updated Jun 28 • 595 downloads • 2 likes
legraphista/llm-compiler-13b-IMat-GGUF • Text Generation • Updated Jun 28 • 510 downloads • 3 likes
legraphista/llm-compiler-13b-ftd-IMat-GGUF • Text Generation • Updated Jun 28 • 251 downloads
legraphista/Gemma-2-9B-It-SPPO-Iter3-IMat-GGUF • Text Generation • Updated Jul 5 • 556 downloads • 3 likes
ModelCloud/gemma-2-9b-it-gptq-4bit • Text Generation • Updated Jul 9 • 200 downloads • 3 likes
ModelCloud/gemma-2-9b-gptq-4bit • Text Generation • Updated Jul 9 • 106 downloads
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF • Text Generation • Updated Jul 3 • 557 downloads
legraphista/internlm2_5-7b-chat-IMat-GGUF • Text Generation • Updated Aug 5 • 267 downloads
legraphista/internlm2_5-7b-chat-1m-IMat-GGUF • Text Generation • Updated Jul 3 • 246 downloads • 1 like
legraphista/codegeex4-all-9b-IMat-GGUF • Text Generation • Updated Jul 6 • 564 downloads • 9 likes
ModelCloud/DeepSeek-V2-Lite-gptq-4bit • Text Generation • Updated Jul 9 • 15 downloads
ModelCloud/internlm-2.5-7b-gptq-4bit • Feature Extraction • Updated Jul 9 • 8 downloads
ModelCloud/internlm-2.5-7b-chat-gptq-4bit • Feature Extraction • Updated Jul 9 • 5 downloads
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit • Feature Extraction • Updated Jul 9 • 10 downloads
legraphista/NuminaMath-7B-TIR-IMat-GGUF • Text Generation • Updated Jul 11 • 126 downloads • 1 like
legraphista/mathstral-7B-v0.1-IMat-GGUF • Text Generation • Updated Jul 16 • 204 downloads
Xelta/miniXelta_01 • Text Generation • Updated Jul 17 • 8 downloads
legraphista/Athene-70B-IMat-GGUF • Text Generation • Updated Jul 28 • 1.32k downloads • 3 likes
legraphista/Mistral-Nemo-Instruct-2407-IMat-GGUF • Text Generation • Updated Jul 23 • 603 downloads • 2 likes
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • Updated Jul 29 • 531 downloads • 3 likes
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit • Text Generation • Updated Jul 26 • 55 downloads
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • Updated Jul 27 • 349 downloads • 4 likes
legraphista/Llama-Guard-3-8B-IMat-GGUF • Text Generation • Updated Jul 23 • 769 downloads • 3 likes
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-lora-adapters • Text Generation • Updated Jul 29
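Most of the repos above ship 4-bit quantized weights as downloadable files (GGUF for the legraphista IMat repos, GPTQ checkpoints for the ModelCloud repos). Below is a minimal sketch for fetching one of them with huggingface_hub; the specific .gguf filenames inside each repo are not shown on this page, so the sketch enumerates the repo's files first rather than assuming a name.

```python
# Minimal sketch: fetch one quantized file from a repo listed above.
# The page does not show the file names inside each repo, so we list them
# first instead of assuming a particular .gguf name.
from huggingface_hub import HfApi, hf_hub_download

repo_id = "legraphista/gemma-2-9b-it-IMat-GGUF"  # any repo from the listing

# Enumerate the repo's files and keep only the GGUF quants.
gguf_files = [f for f in HfApi().list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)  # pick the quantization level you want from this list

if gguf_files:
    # Download the first quant to the local Hugging Face cache and print its path.
    local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
    print(local_path)
```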
Page 1 of 7