This is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 1k samples of mlabonne/orpo-dpo-mix-40k, created for this article.
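As a rough illustration, an ORPO fine-tune like this can be set up with TRL's `ORPOTrainer`. The sketch below is an assumption of what the training loop could look like, not the exact recipe used for this model: the hyperparameters (`beta`, learning rate, batch size, sequence lengths) are placeholders, and it assumes a TRL version that accepts a `tokenizer=` argument (newer releases use `processing_class=`).

```python
# Illustrative ORPO fine-tuning sketch; hyperparameters are assumptions,
# not the actual recipe used for OrpoLlama-3-8B.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "meta-llama/Meta-Llama-3-8B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 1k samples of the preference dataset, as described above.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")
dataset = dataset.shuffle(seed=42).select(range(1000))

config = ORPOConfig(
    output_dir="OrpoLlama-3-8B",
    beta=0.1,                       # weight of the odds-ratio penalty (assumed)
    learning_rate=8e-6,             # assumed
    per_device_train_batch_size=2,  # assumed
    num_train_epochs=1,
    max_length=2048,
    max_prompt_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```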
It's a successful fine-tune that follows the ChatML template!
Try the demo: https://huggingface.co./spaces/mlabonne/OrpoLlama-3-8B
The model uses an 8k context window and was trained with the ChatML template.
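A minimal inference sketch with transformers is shown below; `tokenizer.apply_chat_template` renders the ChatML format stored in the tokenizer config, and the generation settings are illustrative assumptions:

```python
# Minimal chat inference sketch; generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is ORPO fine-tuning?"}]
# apply_chat_template formats the conversation with the ChatML template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```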
OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA benchmarks.