
# QuantFactory/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2-GGUF

This is a quantized version of prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2, created using llama.cpp.

## Original Model Card

Model Trained Using AutoTrain

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the mental_health_counseling_conversations dataset.

## Usage


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "prabureddy/Mental-Health-FineTuned-Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

messages = [
    {"role": "user", "content": "Hey Alex! I have been feeling a bit down lately. I could really use some advice on how to feel better."}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
# Move inputs to the device the model was dispatched to (works with device_map="auto")
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
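Under the hood, `apply_chat_template` wraps user turns in Mistral-Instruct's `[INST] … [/INST]` markers. A minimal sketch of that formatting in plain Python, useful when feeding the GGUF file to a runtime that has no tokenizer chat template; `build_mistral_prompt` is a hypothetical helper, and the exact whitespace handling may differ slightly from the tokenizer's own template:

```python
def build_mistral_prompt(messages):
    """Approximate Mistral-Instruct chat formatting: user turns wrapped
    in [INST] ... [/INST], assistant turns terminated with </s>."""
    prompt = "<s>"
    for message in messages:
        if message["role"] == "user":
            prompt += f"[INST] {message['content']} [/INST]"
        elif message["role"] == "assistant":
            prompt += f" {message['content']}</s>"
    return prompt

messages = [
    {"role": "user", "content": "I have been feeling a bit down lately."},
]
print(build_mistral_prompt(messages))
# <s>[INST] I have been feeling a bit down lately. [/INST]
```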
## Model Details

- Format: GGUF
- Model size: 7.24B params
- Architecture: llama
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
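As a rough guide to which quantization fits a given amount of memory, the parameter count gives a lower-bound file-size estimate; this is a back-of-the-envelope sketch, and real GGUF files run somewhat larger because of per-block scales and metadata:

```python
PARAMS = 7.24e9  # parameter count reported for this model

for bits in (2, 3, 4, 5, 6, 8):
    size_gb = PARAMS * bits / 8 / 1e9  # bits -> bytes -> GB (decimal)
    print(f"{bits}-bit: ~{size_gb:.2f} GB")
# e.g. 4-bit works out to roughly 3.62 GB before quantization overhead
```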

