Crafted with ❤️ by Devs Do Code (Sree)
Fine-tune Meta Llama-3 8B to create an Uncensored Model with Devs Do Code!
Unleash the power of uncensored text generation with our model! We've fine-tuned the Meta Llama-3 8B model to create an uncensored variant that pushes the boundaries of text generation.
Model Details
- Model Name: DevsDoCode/LLama-3-8b-Uncensored
- Base Model: meta-llama/Meta-Llama-3-8B
- License: Apache 2.0
How to Use
You can load and run our uncensored model with the Hugging Face Transformers library. Here's a sample code snippet to get started:
```python
# Install the required libraries (transformers is needed alongside accelerate and bitsandbytes)
%pip install transformers accelerate
%pip install -i https://pypi.org/simple/ bitsandbytes
# Import the necessary modules
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Define the model ID
model_id = "DevsDoCode/LLama-3-8b-Uncensored"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Define an optional system prompt and the conversation messages
system_prompt = ""
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Tell me about yourself."},
]
# Tokenize the inputs
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
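# Stop generation at either the EOS token or Llama-3's end-of-turn token <|eot_id|>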
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
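# Generate up to 256 new tokens with nucleus sampling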
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.9,
    top_p=0.9,
)
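# Slice off the prompt tokens and decode only the newly generated text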
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
# You're now all set to generate text with the model
```
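The snippet above installs bitsandbytes but never actually uses it, since the model is loaded in full bfloat16. If you'd rather fit the 8B weights onto a smaller GPU, here is a minimal 4-bit loading sketch, assuming a CUDA machine with recent transformers and bitsandbytes builds; the quantization settings are illustrative choices, not something this card prescribes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "DevsDoCode/LLama-3-8b-Uncensored"

# NF4 4-bit quantization shrinks the weights to roughly a quarter of their bfloat16 size
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # replaces torch_dtype from the snippet above
    device_map="auto",
)
```

Only the loading step changes; tokenization and generation then work exactly as shown above.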
Notebooks
- Colab: ▶️ Start on Colab
- YouTube: ▶️ Watch on YouTube
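If you run the Colab notebook interactively, streaming tokens as they are produced makes the demo feel more responsive. Here is a hypothetical sketch using Transformers' TextStreamer, reusing tokenizer, model, input_ids, and terminators from the snippet above; it is not part of the official notebook:

```python
from transformers import TextStreamer

# Print each token to stdout as soon as it is generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.9,
    top_p=0.9,
    streamer=streamer,
)
```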