Mayonnaise LLM

Mayo is a language model fine-tuned on the Mayo dataset with Supervised Fine-Tuning (SFT) using the TRL (Transformer Reinforcement Learning) library. It is based on the Mistral-7B model.

Features

  • Fine-tuned with SFT via the TRL library for improved instruction following
  • Supports English language
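SFT trains the model on paired user prompts and target responses rendered into a single training string. A minimal sketch of what one such example might look like, assuming a simple illustrative template (not the actual Mistral chat template or the real Mayo dataset schema):

```python
# A minimal sketch of one SFT training example. The <|user|> / <|assistant|>
# tags are illustrative placeholders, not the real template used for Mayo.
def format_example(user_msg: str, assistant_msg: str) -> str:
    """Render a user/assistant pair as a single training string."""
    return f"<|user|>\n{user_msg}\n<|assistant|>\n{assistant_msg}"

text = format_example("What color is the sky?", "The sky is blue.")
print(text)
```

In practice TRL's `SFTTrainer` handles this formatting via the tokenizer's chat template; the sketch only shows the shape of the data.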

Usage

To use the Mayo LLM, load the model with the Hugging Face Transformers library:

from transformers import pipeline

# Load the model as a chat-style text-generation pipeline
pipe = pipeline("text-generation", model="nroggendorff/mayo")

question = "What color is the sky?"
conv = [{"role": "user", "content": question}]

# The pipeline returns the full conversation; the last message is the reply
response = pipe(conv, max_new_tokens=32)[0]["generated_text"][-1]["content"]
print(response)
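When given a list of chat messages, the pipeline returns the whole conversation, including the newly generated assistant turn. A small illustration of that return shape and how the reply is extracted (the assistant content below is made up; the real model's output will differ):

```python
# Illustrative shape of the chat pipeline's return value; the assistant
# message here is a placeholder, not actual model output.
result = [{
    "generated_text": [
        {"role": "user", "content": "What color is the sky?"},
        {"role": "assistant", "content": "The sky is blue."},
    ]
}]

# The last message in the returned conversation is the model's reply.
response = result[0]["generated_text"][-1]["content"]
print(response)
```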

License

This project is licensed under the MIT License.

Model Details

  • Format: Safetensors
  • Model size: 3.87B params
  • Tensor types: F32, U8