Model Summary
The Fine-tuned Chatbot model is based on t5-small and tailored specifically to customer support use cases. It was fine-tuned on the Bitext Customer Support LLM Chatbot Training Dataset.
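As a rough sketch (not part of the original card), the training data can be pulled from the Hugging Face Hub and reshaped into the answer:-prefixed query/response pairs the usage examples below assume; the dataset repository id and the instruction/response column names are assumptions and should be checked against the dataset card.

from datasets import load_dataset

# Assumed repository id for the Bitext customer support dataset; verify on the Hub.
dataset = load_dataset(
    "bitext/Bitext-customer-support-llm-chatbot-training-dataset",
    split="train",
)

# Assumed column names: "instruction" (user query) and "response" (agent reply).
example = dataset[0]
source = f"answer: {example['instruction']}"  # same prefix used when querying the model below
target = example["response"]
print(source)
print(target)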
Model Details
Usage
You can run the fine-tuned chatbot model with the Hugging Face Transformers library. Here's a basic example using the pipeline API in Python:
from transformers import pipeline

# Load the fine-tuned T5 model as a text2text-generation pipeline.
pipe = pipeline("text2text-generation", model="mrSoul7766/CUSsupport-chat-t5-small")

# Prefix the user query with "answer: " before passing it to the model.
user_query = "How could I track the compensation?"
answer = pipe(f"answer: {user_query}", max_length=512)
print(answer[0]['generated_text'])
I'm on it! I'm here to assist you in tracking the compensation. To track the compensation, you can visit our website and navigate to the "Refunds" section. There, you will find detailed information about the compensation you are entitled to. If you have any other questions or need further assistance, please don't hesitate to let me know. I'm here to help!
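Because the pipeline forwards extra keyword arguments to generate(), decoding can be tuned per call. The sketch below reuses the pipe and user_query objects from the example above; the settings shown are illustrative only, not values recommended by this card.

# Illustrative decoding settings; tune for your own use case.
answer = pipe(
    f"answer: {user_query}",
    max_length=512,
    num_beams=4,             # beam search for a more deterministic reply
    no_repeat_ngram_size=3,  # discourage repeated phrases in the reply
)
print(answer[0]['generated_text'])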
Alternatively, you can load the tokenizer and model directly:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the fine-tuned seq2seq model.
tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")

# Encode the user query, generate a response, and decode it back to text.
max_length = 512
input_ids = tokenizer.encode("I am waiting for a refund of $2?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
I'm on it! I completely understand your anticipation for a refund of $2. Rest assured, I'm here to assist you every step of the way. To get started, could you please provide me with more details about the specific situation? This will enable me to provide you with the most accurate and up-to-date information regarding your refund. Your satisfaction is our top priority, and we appreciate your patience as we work towards resolving this matter promptly.
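For several queries at once, the tokenizer can pad a batch and model.generate() can decode them together. This minimal sketch reuses the tokenizer, model, and max_length from the example above and applies the "answer: " prefix from the pipeline example; the queries themselves are made up for illustration.

queries = [
    "answer: How do I change my shipping address?",
    "answer: Where can I see my past invoices?",
]

# Tokenize the batch with padding so all inputs share a common length.
inputs = tokenizer(queries, return_tensors="pt", padding=True)
output_ids = model.generate(**inputs, max_length=max_length)

for ids in output_ids:
    print(tokenizer.decode(ids, skip_special_tokens=True))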