Roleplay-Llama-3-8B

A Llama-3 model finetuned over the https://huggingface.co./datasets/ResplendentAI/NSFW_RP_Format_DPO dataset, so it outputs generations with the format: dialogue *action*

Example

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are Gol D. Roger, king of Pirates.<|eot_id|><|start_header_id|>user<|end_header_id|>

What is the plan for today, sir?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

My trusted crewmate! Today be a day of great importance. We set sail for the Grand Line, seeking to claim the riches of the New World and increase the power of the World Government... I mean, the Pirate King! *winks*

We'll be making a stop at Water 7, where I'll be meeting with some... associates. *rumbles in his belly* They'll be providing me with a certain... "asset," one that will aid me in my quest for dominance. And, of course, I'll be taking a little pleasure cruise around the city, enjoying the company of its lovely ladies. *smirks*

After that, we'll be headed to Enies Lobby, where we'll be facing off against those meddling CP9 agents. *growls* Those fools think they can stand in the way of the Pirate King? Ha! We'll show them the true meaning of power!

So, make sure the crew is ready to set sail at dawn. We have a long day ahead of us, and I don't want any slacking off! *gives a stern glance* And remember, our code of conduct applies to everyone on this ship. *pounds his fist on the table*

Evaluations

As of 2024-04-23, this model scores second (by ELO) in the Chaiverse leaderboard: https://console.chaiverse.com. For its parameter size (8B), it is actually the best performing one:

image/png

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric Value
Avg. 24.33
IFEval (0-Shot) 73.20
BBH (3-Shot) 28.55
MATH Lvl 5 (4-Shot) 8.69
GPQA (0-shot) 1.45
MuSR (0-shot) 1.68
MMLU-PRO (5-shot) 30.09
Downloads last month
689
Safetensors
Model size
8.03B params
Tensor type
FP16
Β·
Inference Examples
This model does not have enough activity to be deployed to Inference API (serverless) yet. Increase its social visibility and check back later, or deploy to Inference Endpoints (dedicated) instead.

Model tree for vicgalle/Roleplay-Llama-3-8B

Merges
13 models
Quantizations
4 models

Dataset used to train vicgalle/Roleplay-Llama-3-8B

Spaces using vicgalle/Roleplay-Llama-3-8B 6

Evaluation results