---
base_model: jeiku/Average_Normie_v2_l3_8B
inference: false
library_name: transformers
merged_models:
- ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
- vicgalle/Roleplay-Llama-3-8B
- cgato/L3-TheSpice-8b-v0.1.3
- ResplendentAI/Kei_Llama3_8B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- mergekit
- merge
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---
# jeiku/Average_Normie_v2_l3_8B AWQ

- Model creator: [jeiku](https://huggingface.co./jeiku)
- Original model: [Average_Normie_v2_l3_8B](https://huggingface.co./jeiku/Average_Normie_v2_l3_8B)

## Model Summary

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [ResplendentAI/Kei_Llama3_8B](https://huggingface.co./ResplendentAI/Kei_Llama3_8B) as the base.

The following models were included in the merge (an illustrative configuration is sketched after the list):
* [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co./ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B)
* [vicgalle/Roleplay-Llama-3-8B](https://huggingface.co./vicgalle/Roleplay-Llama-3-8B)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co./cgato/L3-TheSpice-8b-v0.1.3)
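
The exact merge recipe is not published here, but a Model Stock merge of these models would be expressed in mergekit roughly as follows. This is a sketch, not the recipe actually used for this model; the `dtype` is an assumption.

```yaml
# Illustrative mergekit configuration, not the exact recipe used for this model.
merge_method: model_stock
base_model: ResplendentAI/Kei_Llama3_8B
models:
  - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/L3-TheSpice-8b-v0.1.3
dtype: bfloat16  # assumption
```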

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equal to or better than the most commonly used GPTQ settings.
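
For readers curious how a 4-bit AWQ checkpoint like this one is produced, below is a minimal quantization sketch using the AutoAWQ library. The paths and quantization settings are illustrative assumptions, not the exact recipe used for this repository.

```python
# Illustrative AWQ quantization sketch with AutoAWQ (not the exact recipe used here).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "jeiku/Average_Normie_v2_l3_8B"   # original full-precision model
quant_path = "Average_Normie_v2_l3_8B-AWQ"     # assumed output directory

# Typical 4-bit AWQ settings: zero-point quantization, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run calibration and quantize the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and tokenizer.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```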

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co./docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the usage sketch below)
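
As a usage example, here is a minimal sketch of loading this AWQ checkpoint with Transformers (4.35.0 or later, with the `autoawq` package installed). The repository ID below is a placeholder assumption; substitute the actual Hub ID of this quantized repository.

```python
# Minimal inference sketch for an AWQ checkpoint with Transformers >= 4.35.0
# (requires the autoawq package for the 4-bit kernels).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: placeholder ID - replace with this repository's actual Hub ID.
model_id = "Average_Normie_v2_l3_8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short, friendly introduction for a roleplay character."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens from the 4-bit quantized model.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```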