Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


# OpenBezoar-HH-RLHF-SFT - GGUF
- Model creator: https://huggingface.co./SurgeGlobal/
- Original model: https://huggingface.co./SurgeGlobal/OpenBezoar-HH-RLHF-SFT/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenBezoar-HH-RLHF-SFT.Q2_K.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q2_K.gguf) | Q2_K | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_XS.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_XS.gguf) | IQ3_XS | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_S.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_S.gguf) | IQ3_S | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_S.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_S.gguf) | Q3_K_S | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_M.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_M.gguf) | IQ3_M | 1.92GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K.gguf) | Q3_K | 1.99GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_M.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_M.gguf) | Q3_K_M | 1.99GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_L.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_L.gguf) | Q3_K_L | 2.06GB |
| [OpenBezoar-HH-RLHF-SFT.IQ4_XS.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ4_XS.gguf) | IQ4_XS | 1.86GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_0.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_0.gguf) | Q4_0 | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ4_NL.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ4_NL.gguf) | IQ4_NL | 1.86GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K_S.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K_S.gguf) | Q4_K_S | 2.24GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K.gguf) | Q4_K | 2.4GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K_M.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K_M.gguf) | Q4_K_M | 2.4GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_1.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_1.gguf) | Q4_1 | 2.04GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_0.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_0.gguf) | Q5_0 | 2.23GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K_S.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K_S.gguf) | Q5_K_S | 2.42GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K.gguf) | Q5_K | 2.57GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K_M.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K_M.gguf) | Q5_K_M | 2.57GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_1.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_1.gguf) | Q5_1 | 2.42GB |
| [OpenBezoar-HH-RLHF-SFT.Q6_K.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q6_K.gguf) | Q6_K | 3.39GB |
| [OpenBezoar-HH-RLHF-SFT.Q8_0.gguf](https://huggingface.co./RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q8_0.gguf) | Q8_0 | 3.39GB |
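
Any of the files above can be run with a GGUF-compatible runtime such as llama.cpp. The snippet below is a minimal sketch using the `huggingface_hub` and `llama-cpp-python` packages; the choice of the Q4_K_M file, the context size, and the generation parameters are illustrative assumptions, not recommendations from the model authors.

```python
# Minimal sketch: download one quantized file and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; the Q4_K_M file is an
# illustrative choice, any file from the table above works the same way.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf",
    filename="OpenBezoar-HH-RLHF-SFT.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Use the model's instruction template (see "Instruction Format" below).
prompt = """### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
Explain in one sentence what a quantized GGUF model is.

### Response:"""

output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```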




Original model description:
---
license: cc-by-nc-4.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# OpenBezoar-HH-RLHF-SFT

OpenBezoar-HH-RLHF-SFT is an LLM obtained by further instruction fine-tuning the [OpenBezoar-SFT](https://huggingface.co./SurgeGlobal/OpenBezoar-SFT) model on a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co./datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-SFT](https://huggingface.co./SurgeGlobal/OpenBezoar-SFT)
- Dataset used for SFT: First 100K examples of the [HH-RLHF](https://huggingface.co./datasets/Anthropic/hh-rlhf) dataset
- Epochs: 1

### Model Description

OpenBezoar-HH-RLHF-SFT is built upon the OpenLLaMA 3B v2 architecture. The primary purpose of performing SFT on [OpenBezoar-SFT](https://huggingface.co./SurgeGlobal/OpenBezoar-SFT) is to minimize the distribution shift before applying Direct Preference Optimization (DPO) for alignment with human preferences. For more information, please refer to our paper.

### Model Sources

- **Repository:** [Bitbucket Project](https://bitbucket.org/paladinanalytics/workspace/projects/OP)
- **Paper:** [Pre-Print](https://arxiv.org/abs/2404.12195)

## Instruction Format

We follow a modified version of the Alpaca prompt template, shown below. It is important to use this template to obtain the best responses on instruction-related tasks.
```
### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

Notice that **no** end-of-sequence (EOS) token is appended.

*Note: The system prompt shown above is the one the model was trained on for most of its training. However, you may try any other system prompt available in the [Orca](https://arxiv.org/abs/2306.02707) scheme.*
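
For programmatic use, the template can be filled in by a small helper. This is a minimal sketch; `build_prompt` is a hypothetical name, not a function shipped with the model:

```python
# Minimal sketch of a prompt builder for the template above;
# `build_prompt` is a hypothetical helper, not part of the model's tooling.
SYSTEM_PROMPT = (
    "Below is an instruction that describes a task, optionally paired with "
    "an input that provides further context following that instruction. "
    "Write a response that appropriately completes the request."
)

def build_prompt(instruction: str) -> str:
    # Note: no EOS token is appended, matching the format described above.
    return (
        f"### System:\n{SYSTEM_PROMPT}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:"
    )
```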

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)

model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    load_in_4bit=True,  # optional: 4-bit loading for low-resource environments (requires bitsandbytes)
    device_map="auto",
)

prompt = """### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:""".format(
    instruction="What was the state of the world in the year 1597?"
)

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample up to 1024 new tokens.
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True)

print(tokenizer.decode(outputs[0]))
```
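
The `decode` call above prints the prompt together with the completion. If only the generated continuation is wanted, the prompt tokens can be sliced off first (a small sketch under the same setup as above):

```python
# Sketch: strip the prompt tokens so only the model's continuation is printed.
prompt_length = inputs["input_ids"].shape[1]
completion = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(completion)
```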

## Evaluations

Refer to our self-reported evaluations in our paper (Section 4).

## Limitations

- The model might not consistently show improved abilities to follow instructions, and it could respond inappropriately or get stuck in loops.
- This model is not aligned to human preferences, so it may generate harmful or uncensored content.
- Caution is urged against relying on this model for production or adjacent use-cases.

## Citation

If you find our work useful, please cite our paper as follows:

```
@misc{surge2024openbezoar,
      title={OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data}, 
      author={Chandeepa Dissanayake and Lahiru Lowe and Sachith Gunasekara and Yasiru Ratnayake},
      year={2024},
      eprint={2404.12195},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Authors

Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, and Yasiru Ratnayake