rtuuuuuuuur (urtuuuu) · 1 follower · 7 following
AI & ML interests
None yet
Recent Activity
replied to bartowski's post · 2 days ago
Looks like Q4_0_N_M file types are going away. Before you panic, there's a new "preferred" method, which is online (I prefer the term on-the-fly) repacking: if you download Q4_0 and your setup can benefit from repacking the weights into interleaved rows (what Q4_0_4_4 was doing), it will do that automatically and give you similar performance (minor losses, I think, due to using intrinsics instead of assembly, but intrinsics are more maintainable).

You can see the reference PR here: https://github.com/ggerganov/llama.cpp/pull/10446

So if you update your llama.cpp past that point, you won't be able to run Q4_0_4_4 (unless they add backwards compatibility back), but Q4_0 should be the same speed (though it may currently be bugged on some platforms). As such, I'll stop making those newer model formats soon, probably end of this week unless something changes, but you should be safe to download Q4_0 quants and use those!

Also, IQ4_NL supports repacking, though not in as many shapes yet, but it should get a respectable speedup on ARM chips. The PR for that can be found here: https://github.com/ggerganov/llama.cpp/pull/10541

Remember, these are not meant for Apple silicon, since those use the GPU and don't benefit from the repacking of weights.
reacted to bartowski's post with 👍 · 2 days ago
new activity on matteogeniaccio/phi-4 · 8 days ago: "Notably better than Phi3.5 in many ways, but something is wrong."
Organizations
None yet
urtuuuu's activity
liked a model · 15 days ago
- bartowski/EXAONE-3.5-7.8B-Instruct-GGUF (Text Generation · Updated 17 days ago · 4.13k downloads · 10 likes)
liked 3 models · 2 months ago
- CohereForAI/aya-expanse-8b (Text Generation · Updated 19 days ago · 34.4k downloads · 307 likes)
- bartowski/aya-expanse-8b-GGUF (Text Generation · Updated Oct 24 · 4.12k downloads · 23 likes)
- bartowski/Ministral-8B-Instruct-2410-GGUF (Text Generation · Updated Oct 21 · 13.7k downloads · 25 likes)
liked 4 models · 3 months ago
- bartowski/Hercules-6.0-Llama-3.1-8B-GGUF (Text Generation · Updated Sep 27 · 468 downloads · 3 likes)
- Locutusque/Hercules-6.0-Llama-3.1-8B (Text Generation · Updated Sep 28 · 19 downloads · 8 likes)
- bartowski/Qwen2.5-7B-Instruct-GGUF (Text Generation · Updated Sep 19 · 13.2k downloads · 24 likes)
- bartowski/Qwen2.5-14B-Instruct-GGUF (Text Generation · Updated Nov 8 · 7.6k downloads · 26 likes)
liked a Space · 4 months ago
- 👀 SD 3.5 with Captioner (Running on Zero · 72)
liked 2 models · 4 months ago
- microsoft/Phi-3.5-mini-instruct (Text Generation · Updated Sep 18 · 550k downloads · 714 likes)
- microsoft/Phi-3.5-MoE-instruct (Text Generation · Updated Oct 24 · 45.5k downloads · 540 likes)
liked 9 models · 5 months ago
- google/gemma-2-2b-it (Text Generation · Updated Aug 27 · 405k downloads · 807 likes)
- bartowski/Gemmasutra-Mini-2B-v1-GGUF (Text Generation · Updated Aug 5 · 247 downloads · 3 likes)
- bartowski/gemma-2-2b-it-abliterated-GGUF (Text Generation · Updated Aug 5 · 3.51k downloads · 53 likes)
- bartowski/gemma-2-2b-it-GGUF (Text Generation · Updated Aug 5 · 28.5k downloads · 44 likes)
- lmstudio-community/gemma-2-2b-it-GGUF (Text Generation · Updated Jul 31 · 8.39k downloads · 18 likes)
- bartowski/gemma-2-9b-it-abliterated-GGUF (Text Generation · Updated 2 days ago · 2.77k downloads · 26 likes)
- bartowski/Phi-3.1-mini-128k-instruct-GGUF (Text Generation · Updated Aug 3 · 4.15k downloads · 31 likes)
- bartowski/Phi-3.1-mini-4k-instruct-GGUF (Text Generation · Updated Aug 3 · 4.19k downloads · 40 likes)
- bartowski/Meta-Llama-3.1-8B-Instruct-GGUF (Text Generation · Updated 25 days ago · 233k downloads · 155 likes)