---
language:
- en
datasets:
- teknium/OpenHermes-2.5
license: other
license_name: llama3
base_model: yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA
tags:
- unsloth
- facebook
- meta
- pytorch
- llama
- llama-3
- GGUF
- trl
pipeline_tag: text-generation
---

# QLoRA Finetune Llama 3 Instruct 8B + OpenHermes 2.5

This model is based on Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

Llama 3 Instruct 8B (4-bit, from Unsloth), finetuned on the OpenHermes 2.5 dataset on my home PC using a single 24 GB RTX 4090.

Special care was taken to preserve and reinforce the proper EOS token structure.

[Source Model](https://huggingface.co./yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA)

* [F16_GGUF](https://huggingface.co./yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA-GGUF/blob/main/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.f16.gguf)
* [Q4_K_M_GGUF](https://huggingface.co./yaystevek/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA-GGUF/blob/main/llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf)

**Chat with llama.cpp**

```shell
llama.cpp/main -ngl 33 -c 0 --interactive-first --color -e \
  --in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' \
  --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
  -r '<|eot_id|>' \
  -m ./llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf
```
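The `--in-prefix`, `--in-suffix`, and `-r` flags above encode the Llama 3 Instruct chat template. If you are driving the model from your own client instead of llama.cpp's interactive mode, the same special-token structure can be assembled by hand. This is a minimal sketch (the helper name is illustrative, not part of any library):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt.

    Mirrors the token layout used by the llama.cpp flags above:
    each turn is wrapped in <|start_header_id|>role<|end_header_id|>
    and terminated with <|eot_id|>; generation stops on <|eot_id|>.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Leave the assistant header open so the model completes the turn.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

if __name__ == "__main__":
    print(build_llama3_prompt("You are a helpful assistant.", "Hello!"))
```

Pass the resulting string as the raw prompt and stop generation on `<|eot_id|>`, matching the `-r '<|eot_id|>'` flag in the command above.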