Model Card for BlackBeenie/Neos-Llama-3.1-8B

Model Details

Model Description

This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card was automatically generated.

  • Developed by: Yeonwoo Sung
  • License: Apache 2.0
  • Finetuned from model: meta-llama/Llama-3.1-8B-Instruct

Model Sources

Fine-tuned from meta-llama/Llama-3.1-8B-Instruct.

How to Get Started with the Model

You can use this model with the Hugging Face Transformers library using the code below:

import transformers
import torch

model_id = "BlackBeenie/Neos-Llama-3.1-8B"

# Load the model in bfloat16 and place it automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Chat-style input: a system prompt plus a user message.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1])

Training Details

Training Data

Fine-tuned on the mlabonne/orpo-dpo-mix-40k preference dataset.
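
To inspect the data yourself, the dataset can be loaded with the 🤗 Datasets library. This is a minimal sketch; the fields printed below are whatever the dataset actually ships with, not something defined by this model card.

from datasets import load_dataset

# Load the preference dataset used for fine-tuning.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

print(dataset)            # number of rows and column names
print(dataset[0].keys())  # fields of a single preference example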

Training Procedure

This model was fine-tuned with the ORPO (Odds Ratio Preference Optimization) trainer.
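
The exact training configuration is not published here. As a rough sketch only, ORPO fine-tuning with the TRL library's ORPOTrainer typically looks like the following; every hyperparameter value below is an illustrative placeholder, not the setting actually used to produce this model. Depending on your TRL version, the tokenizer is passed as processing_class (newer releases) or tokenizer (older releases).

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Llama-3.1-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model)
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

# Illustrative hyperparameters only; the actual values are not documented in this card.
config = ORPOConfig(
    output_dir="neos-llama-3.1-8b",
    beta=0.1,                        # weight of the odds-ratio preference term
    max_length=2048,
    max_prompt_length=1024,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()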
