Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 license agreement: https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct
Fine-tuned with DPO (Direct Preference Optimization) using the following datasets (a minimal training sketch follows the list):
- https://huggingface.co./datasets/Intel/orca_dpo_pairs
- https://huggingface.co./datasets/argilla/distilabel-math-preference-dpo
- https://huggingface.co./datasets/unalignment/toxic-dpo-v0.2
- https://huggingface.co./datasets/M4-ai/prm_dpo_pairs_cleaned
- https://huggingface.co./datasets/jondurbin/truthy-dpo-v0.1
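Below is a minimal DPO training sketch using the `trl` library. The column mapping assumes the `Intel/orca_dpo_pairs` schema (`question`, `chosen`, `rejected`), and the hyperparameters are illustrative only; the actual training configuration for this model is not published here.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Map one of the listed preference datasets to the prompt/chosen/rejected
# columns that DPOTrainer expects.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": row["question"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=dataset.column_names,
)

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="llama3-8b-dpo", beta=0.1),  # beta is illustrative
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl versions use processing_class= instead
)
trainer.train()
```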
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
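You normally do not need to build this string by hand: the tokenizer's built-in chat template produces it. A short sketch, with placeholder message contents:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt appends the trailing assistant header so the model
# knows to start its reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```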
Quants:
- FP16: https://huggingface.co./OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1
- GGUF: https://huggingface.co./OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF
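A quick generation sketch with the FP16 weights via `transformers` (the GGUF files are meant for llama.cpp-compatible runtimes instead); the prompt and generation settings are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about llamas."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```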