
Llama-3.1-8B-Lexi-Uncensored

Description

This repo contains GGUF format model files for Llama-3.1-8B-Lexi-Uncensored.

Files Provided

| Name | Quant | Bits | File Size | Remark |
|------|-------|------|-----------|--------|
| llama-3.1-8b-lexi-uncensored.Q2_K.gguf | Q2_K | 2 | 3.18 GB | 2.96G, +3.5199 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q3_K.gguf | Q3_K | 3 | 4.02 GB | 3.74G, +0.6569 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q4_0.gguf | Q4_0 | 4 | 4.66 GB | 4.34G, +0.4685 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q4_K.gguf | Q4_K | 4 | 4.92 GB | 4.58G, +0.1754 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q5_K.gguf | Q5_K | 5 | 5.73 GB | 5.33G, +0.0569 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q6_K.gguf | Q6_K | 6 | 6.60 GB | 6.14G, +0.0217 ppl @ Llama-3-8B |
| llama-3.1-8b-lexi-uncensored.Q8_0.gguf | Q8_0 | 8 | 8.54 GB | 7.96G, +0.0026 ppl @ Llama-3-8B |
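As a rough guide to choosing among these files, a small sketch below picks the largest quant that fits a given memory budget, using the file sizes from the table. The helper name and the 1 GB headroom for KV cache and runtime overhead are illustrative assumptions, not part of this repo:

```python
# Hypothetical helper (not part of this repo): pick the largest quant file
# that fits a given memory budget, using the sizes from the table above.

QUANTS = {  # quant name -> file size in GB, from the "Files Provided" table
    "Q2_K": 3.18,
    "Q3_K": 4.02,
    "Q4_0": 4.66,
    "Q4_K": 4.92,
    "Q5_K": 5.73,
    "Q6_K": 6.60,
    "Q8_0": 8.54,
}

def pick_quant(budget_gb: float, overhead_gb: float = 1.0):
    """Return the largest quant whose file fits within budget_gb,
    leaving overhead_gb of headroom for KV cache etc. (an assumption)."""
    usable = budget_gb - overhead_gb
    fitting = [(size, name) for name, size in QUANTS.items() if size <= usable]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))   # e.g. a card with 8 GB of memory
```

Lower-bit quants trade perplexity (the "+ppl" column) for memory, so picking the largest one that fits is usually the right default.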

Parameters

| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
|------|------|--------------|------------|-------------|---------------|
| unsloth/meta-llama-3.1-8b-instruct | llama | LlamaForCausalLM | 500000.0 | null | 131072 |
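The large `rope_theta` (500000 vs. the classic RoPE base of 10000) is what supports the 131072-token `max_pos_embed`: a larger base lowers the per-dimension rotation frequencies. A minimal sketch of that relationship, assuming the standard RoPE inverse-frequency formula and `head_dim = 128` (4096 hidden size / 32 heads for Llama-3.1-8B):

```python
# Sketch of how rope_theta enters the RoPE inverse-frequency formula:
#   inv_freq[i] = theta ** (-2*i / head_dim), for i = 0 .. head_dim/2 - 1
# head_dim = 128 is assumed from Llama-3.1-8B (4096 hidden / 32 heads).

def rope_inv_freq(theta: float, head_dim: int = 128):
    return [theta ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

base = rope_inv_freq(10_000.0)      # classic RoPE base
llama31 = rope_inv_freq(500_000.0)  # rope_theta from this model's config

# Larger theta -> slower-rotating low-frequency dims -> longer usable context.
print(llama31[-1] < base[-1])  # True: the slowest frequency is much lower
```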

Benchmark

Original Model Card

LLM Leaderboard 2 results:

Lexi suggests that simply uncensoring an LLM makes it smarter. The dataset used to tune this model contains no new knowledge or contamination whatsoever, yet evaluation scores shot up once biases and refusals were removed.

Lexi not only retains the original instruct model's performance, it beats it.


NOTE: UGI Leaderboard

The UGI Leaderboard runs the Q4 quant for its evaluations, which yields poor results for this model. As noted, the Q4 quant has trouble retaining the fine-tuning and ends up underperforming; this will be fixed in V3.

V2 has been released; I recommend downloading the new version:

https://huggingface.co./Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2


This model is based on Llama-3.1-8B-Instruct and is governed by the Meta Llama 3.1 Community License Agreement.

Lexi is uncensored, which makes the model highly compliant: it will follow any request, even unethical ones. You are advised to implement your own alignment layer before exposing the model as a service.
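What an "alignment layer" means here is left to the deployer. As a minimal illustrative sketch only: a request-side gate that refuses before a prompt reaches the model. The `BLOCKLIST` patterns and the `alignment_gate` helper are placeholders invented for this example; a real deployment would use a proper moderation model or policy engine, not keyword matching:

```python
# Minimal illustrative "alignment layer" in front of the model.
# BLOCKLIST and the policy below are placeholders for this sketch only;
# a real service would use a moderation model, not keyword matching.

BLOCKLIST = {"make a bomb", "credit card numbers"}  # placeholder patterns

def alignment_gate(prompt: str):
    """Return (allowed, message). Refuse before the prompt reaches the model."""
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in BLOCKLIST):
        return False, "Request refused by service policy."
    return True, prompt

allowed, out = alignment_gate("Summarize this article for me.")
print(allowed)  # True: a benign request passes through unchanged
```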

You are responsible for any content you create using this model. Please use it responsibly.

Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that complies with Meta's Llama 3 license.

IMPORTANT:

Use the same template as the official Llama 3.1 8B Instruct model. System tokens must be present during inference, even if you set an empty system message. If you are unsure, just add a short system message of your choice.
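The advice above can be made concrete with a small prompt builder following the official Llama 3.1 chat format. Note how the system header block is emitted even when `system_message` is empty, as the card requires; the function name is illustrative:

```python
# Llama 3.1 chat template written out explicitly. The system block is
# always present, even with an empty system message, as the card advises.

def build_prompt(user_message: str, system_message: str = "") -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Hello!", system_message="You are a helpful assistant.")
print(prompt)
```

In practice, most runtimes (llama.cpp chat mode, for example) apply this template for you from the GGUF metadata; building it by hand is only needed for raw completion endpoints.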

Feedback:

If you find any issues or have suggestions for improvements, feel free to leave a review and I will look into it for the next version.


Format: GGUF · Model size: 8.03B params · Architecture: llama
