How do I select models? First, I benchmark them to trim out the overfit and dumbed-down ones. Then, I test them. The ones smartest to my taste end up here.
Nexes the Elder
Nexesenex
AI & ML interests
Maintaining Croco.Cpp, a fork of KoboldCPP.
Looting PRs here and there.
Merging models since 11/02/2025, and happily so. Stay tuned!
Making quants that often require IK_Llama.cpp or Croco.Cpp to run.
Sharing quants of my favorite models (recently 70B-123B, for dual- and tri-GPU rigs); see the download sketch below.
Barking at many trees.
Thanks to Mradermacher and Bartowski for their imatrices; I use them extensively.
Note: don't make quants of my merges until they are versioned (Vx or Vx.x)!
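For reference, a minimal sketch of fetching one of these shared GGUF quants from the Hub with huggingface_hub. It assumes huggingface_hub is installed; the repo name is copied from the model list below, and the GGUF filenames are queried rather than guessed, since the exact files in the repo are not stated here.

```python
from huggingface_hub import hf_hub_download, list_repo_files

# Repo taken from the model list below; swap in any of the listed GGUF repos.
repo_id = "Nexesenex/LatitudeGames_Wayfarer-Large-70B-Llama-3.3-iMat-CQ-GGUF"

# List the files actually present in the repo and keep only the GGUF ones,
# rather than hard-coding a filename that may not exist.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print("Available GGUF files:", gguf_files)

# Download one of them (70B quants are tens of GB; pick the size that fits your VRAM).
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Downloaded to:", local_path)
```

As noted above, some of these quant types are not supported by mainline llama.cpp and need IK_Llama.cpp or Croco.Cpp to load.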
Recent Activity
updated Nexesenex/Llama_3.x_70b_NegerTeaz_0.21-iMat-CQ-GGUF (about 1 hour ago)
published Nexesenex/invisietch_L3.1-70Blivion-v0.1-rc1-70B-lorablated_alt (about 2 hours ago)
published Nexesenex/Llama_3.x_70b_NegerTeaz_0.21-iMat-CQ-GGUF (about 2 hours ago)
Collections: 2
Models: 289

Nexesenex/Llama_3.x_70b_NegerTeaz_0.21-iMat-CQ-GGUF
Nexesenex/invisietch_L3.1-70Blivion-v0.1-rc1-70B-lorablated_alt
Nexesenex/Llama_3.x_70b_EverTeaz_0.21-iMat-CQ-GGUF
Nexesenex/LatitudeGames_Wayfarer-Large-70B-Llama-3.3-iMat-CQ-GGUF
Nexesenex/invisietch_L3.1-70Blivion-v0.1-rc1-70B-lorablated
Nexesenex/Llama_3.x_70b_SmarTricks_V1.01-iMat-CQ-GGUF
Nexesenex/Llama_3.x_70b_Erasmus_V1.11 (Text Generation)
Nexesenex/Llama_3.x_70b_FreeFaller_R1_V1.11-iMat-CQ-GGUF
Nexesenex/Llama_3.x_70b_Nemeslices_V1.4-iMat-CQ-GGUF
Nexesenex/Llama_3.x_70b_SmarTricks_V1.01 (Text Generation)
Datasets: none public yet