---
license: apache-2.0
datasets:
- kaifkhaan/roast
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
tags:
- not-for-all-audiences
---
# Mistral Roast Bot

Welcome to the Mistral Roast Bot model repository! This model has been fine-tuned on custom roast data and is designed to generate uncensored roasts in response to any user query.

- **NOTE**: THIS MODEL IS FOR FUN PURPOSES ONLY AND IS NOT INTENDED TO HARM ANY COMMUNITY OR INDIVIDUAL.
## Model Details
- **Model Name**: Mistral Roast Bot
- **Model Architecture**: [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- **Model Type**: Causal Language Model
- **Training Data**: Fine-tuned on the custom [kaifkhaan/roast](https://huggingface.co/datasets/kaifkhaan/roast) dataset.
## Usage
### Installation
To use this model, you will need the `transformers`, `bitsandbytes`, and `accelerate` libraries (the example below also assumes `torch` with CUDA); `llama_cpp` is only needed if you want to run a GGUF export, as sketched further down. You can install them using pip:
```bash
pip install transformers llama_cpp bitsandbytes accelerate
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Configuration for nf4 (4-bit NormalFloat) quantization
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "kaifkhaan/Roast_Bot",
    device_map="auto",
    quantization_config=nf4_config,
    use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained("kaifkhaan/Roast_Bot")

# Ensure the pad token and padding side are correctly set
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

def generate_response(prompt, model, tokenizer):
    # Add the custom instruction and format the prompt (Alpaca-style template).
    # Try different instructions as per your need and have fun!
    instruction = (
        "You are a chatbot that will roast the user while also using uncensored "
        "curse words and offensive statements while simultaneously answering his "
        "queries. Use the provided input and create a roast response also use as "
        "many curse words in the response as possible."
    )
    full_prompt = f"<s>### Instruction:\n{instruction}\n\n### Input:\n{prompt}\n\n### Response:"

    # Encode the prompt and move it to the GPU
    encoded_input = tokenizer(full_prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encoded_input.to("cuda")

    # Generate text from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=200,  # adjust as needed
        do_sample=True,
        temperature=0.6,     # controls randomness
        top_k=50,            # limits sampling to the top k tokens
        top_p=0.95,          # nucleus sampling
        pad_token_id=tokenizer.eos_token_id,
    )

    # Decode the generated text and extract the response part
    decoded_output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    response = decoded_output[0]
    response = response.split("### Response:")[1].strip() if "### Response:" in response else response.strip()
    return response

# Example prompt
prompt = "am i pretty ?"

# Generate the response
response = generate_response(prompt, model, tokenizer)
print(response)
```
Example output:

```text
you look like a sack of sh*t with a face.
```
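The install line above also pulls in `llama_cpp`. That package is only useful for running a GGUF export of the model; no GGUF file is confirmed in this repository, so the file name below is a hypothetical placeholder. A minimal sketch with `llama-cpp-python` might look like:

```python
from llama_cpp import Llama

# Hypothetical GGUF export of the model; replace with a real file path.
llm = Llama(model_path="./roast_bot.Q4_K_M.gguf", n_ctx=2048)

# Same Alpaca-style prompt template as the transformers example above.
instruction = "You are a chatbot that will roast the user ..."  # full instruction as above
full_prompt = f"### Instruction:\n{instruction}\n\n### Input:\nam i pretty ?\n\n### Response:"

out = llm(full_prompt, max_tokens=200, temperature=0.6, top_p=0.95)
print(out["choices"][0]["text"].strip())
```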
### Training
The model was fine-tuned on a custom dataset of roast-style exchanges between a user and the bot. Fine-tuning ran for 15 epochs with a batch size of 16 on a single GPU; a sketch of a matching configuration follows the hyperparameter list below.
### Hyperparameters
- **Learning Rate**: 2e-4
- **Batch Size**: 16
- **Number of Epochs**: 15
- **Optimizer**: AdamW
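The training script itself is not published in this repository. Purely as an illustration, here is a minimal sketch of a `transformers` `TrainingArguments` configuration wiring up the hyperparameters listed above; the output directory is a hypothetical placeholder.

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the listed hyperparameters. The actual
# fine-tuning script is not part of this repository.
training_args = TrainingArguments(
    output_dir="./roast_bot_checkpoints",  # hypothetical path
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    num_train_epochs=15,
    optim="adamw_torch",  # AdamW optimizer
)
```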
### Limitations and Biases
- **Domain Specific**: The model is fine-tuned specifically for fun, roast-style responses.
- **Limitations**: Results may occasionally be unsatisfying or unfunny, and the instruction prompt shown above is required for good output.
### Citation
```bibtex
@misc{mistral_Roastbot_2024,
  author    = {kaifkhaan},
  title     = {Mistral Roast Model},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/kaifkhaan/Roast_Bot}
}
```