exl2 quants add
README.md CHANGED
@@ -293,6 +293,25 @@ gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```

# 🔄 Quantized versions

Quantized versions of this model are available.

## Exl2 [@bartowski](https://hf.co/bartowski):

- https://huggingface.co/bartowski/Einstein-v4-7B-exl2

You can switch branches in the repo to use the quant you want; a download sketch follows the table below.

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs. performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality than 6.5, but usable on 8 GB cards. |
| [4_25](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Einstein-v4-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
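
Each branch in the table is a Git revision of the exl2 repo, so a specific quant can be pulled by passing that branch name as the `revision` argument. Below is a minimal download sketch using `huggingface_hub.snapshot_download`; the `6_5` branch is only an example pick, and any branch from the table works the same way.

```python
# Minimal sketch: download one exl2 branch (here 6_5, the recommended tradeoff above).
# Assumes the huggingface_hub package is installed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="bartowski/Einstein-v4-7B-exl2",
    revision="6_5",  # any branch from the table: "8_0", "6_5", "5_0", "4_25", "3_5"
)
print(f"exl2 weights downloaded to: {local_dir}")
```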
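
For completeness, here is a rough sketch of loading the downloaded weights with the exllamav2 library and generating a short reply. This is not part of the model card; the class and method names follow exllamav2's bundled examples and can change between releases, so verify them against the version you install.

```python
# Rough sketch: run the exl2 quant with exllamav2 (API per its example scripts;
# names such as ExLlamaV2BaseGenerator may differ in newer releases).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = local_dir        # folder created by snapshot_download above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)         # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# For best results, wrap the prompt in the ChatML template shown earlier in the card.
generator.warmup()
print(generator.generate_simple("Hi! Tell me about Einstein's field equations.", settings, 200))
```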

# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v4-7B)