---
license: apache-2.0
datasets:
  - kaifkhaan/roast
base_model:
  - mistralai/Mistral-7B-Instruct-v0.1
tags:
  - not-for-all-audiences
---

# Mistral Roast Bot

Welcome to the Mistral Roast Bot model repository! This model has been fine-tuned on custom roast data and is designed to generate uncensored roasts in response to any user query.

> **NOTE:** This model is for fun purposes only and is not intended to harm any community or individual.

## Model Details

- **Model Name:** Mistral Roast Bot
- **Model Architecture:** Mistral
- **Model Type:** Causal Language Model
- **Training Data:** Fine-tuned on the custom `kaifkhaan/roast` dataset.
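
Prompts follow an Alpaca-style template; the inference example below builds requests in exactly this format:

```text
<s>### Instruction:
{instruction}

### Input:
{prompt}

### Response:
```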

## Usage

### Installation

To use this model, you will need the `transformers`, `bitsandbytes`, and `accelerate` libraries installed (the example below loads the model in 4-bit, which requires a CUDA GPU). You can install them using pip:

```bash
pip install transformers bitsandbytes accelerate
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Configuration for the nf4 quantization
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "kaifkhaan/Roast_Bot",  
    device_map='auto',
    quantization_config=nf4_config,
    use_cache=False
)

tokenizer = AutoTokenizer.from_pretrained("kaifkhaan/Roast_Bot")

# Ensure pad token and padding side are correctly set
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

def generate_response(prompt, model, tokenizer):
    # Add the custom instruction and format the prompt.
    # Try giving a different instruction to suit your needs and have fun!
    instruction = "You are a chatbot that will roast the user while also using uncensored curse words and offensive statements while simultaneously answering his queries. Use the provided input and create a roast response also use as many curse words in the response as possible ."
    full_prompt = f"<s>### Instruction:\n{instruction}\n\n### Input:\n{prompt}\n\n### Response:"

    # Encode the prompt
    encoded_input = tokenizer(full_prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encoded_input.to('cuda')
    
    # Generate text from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=200,  # Adjust as needed
        do_sample=True,
        temperature=0.6,  # Control randomness
        top_k=50,  # Limits sampling to top k tokens
        top_p=0.95,  # Nucleus sampling
        pad_token_id=tokenizer.eos_token_id
    )
    
    # Decode the generated text
    decoded_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    
    # Extract the response part
    response = decoded_output[0]
    response = response.split("### Response:")[1].strip() if "### Response:" in response else response.strip()
    
    return response

# Example prompt
prompt = "am i pretty ?"

# Generate the response
response = generate_response(prompt, model, tokenizer)
print(response)
# Example output: "you look like a sack of sh*t with a face."
```

## Training

The model was fine-tuned on a custom dataset of roast exchanges between the user and the bot. Fine-tuning ran for 15 epochs with a batch size of 16 on a single GPU.

### Hyperparameters

- **Learning Rate:** 2e-4
- **Batch Size:** 16
- **Number of Epochs:** 15
- **Optimizer:** AdamW
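
The original training script is not published; as a purely illustrative sketch, these hyperparameters would map onto `transformers.TrainingArguments` roughly as follows (the output path and precision flag are assumptions):

```python
from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters -- not the author's actual script.
training_args = TrainingArguments(
    output_dir="roast-bot-finetune",  # assumed output path
    learning_rate=2e-4,               # reported learning rate
    per_device_train_batch_size=16,   # reported batch size (single GPU)
    num_train_epochs=15,              # reported number of epochs
    optim="adamw_torch",              # AdamW optimizer
    bf16=True,                        # assumed, matching the bfloat16 compute dtype above
)
```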

## Limitations and Biases

- **Domain Specific:** The model is fine-tuned specifically for fun and roasting purposes.
- **Limitations:** It might not always produce a satisfying or funny result; providing an instruction (as in the example above) is a must.

## Citation

```bibtex
@misc{mistral_Roastbot_2024,
  author = {kaifkhaan},
  title = {Mistral Roast Model},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co./kaifkhaan/Roast_Bot}
}
```