Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


OpenBezoar-HH-RLHF-SFT - GGUF
- Model creator: https://huggingface.co/SurgeGlobal/
- Original model: https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenBezoar-HH-RLHF-SFT.Q2_K.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q2_K.gguf) | Q2_K | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_XS.gguf) | IQ3_XS | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_S.gguf) | IQ3_S | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_S.gguf) | Q3_K_S | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ3_M.gguf) | IQ3_M | 1.92GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K.gguf) | Q3_K | 1.99GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_M.gguf) | Q3_K_M | 1.99GB |
| [OpenBezoar-HH-RLHF-SFT.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q3_K_L.gguf) | Q3_K_L | 2.06GB |
| [OpenBezoar-HH-RLHF-SFT.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ4_XS.gguf) | IQ4_XS | 1.86GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_0.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_0.gguf) | Q4_0 | 1.84GB |
| [OpenBezoar-HH-RLHF-SFT.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.IQ4_NL.gguf) | IQ4_NL | 1.86GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K_S.gguf) | Q4_K_S | 2.24GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K.gguf) | Q4_K | 2.4GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_K_M.gguf) | Q4_K_M | 2.4GB |
| [OpenBezoar-HH-RLHF-SFT.Q4_1.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q4_1.gguf) | Q4_1 | 2.04GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_0.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_0.gguf) | Q5_0 | 2.23GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K_S.gguf) | Q5_K_S | 2.42GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K.gguf) | Q5_K | 2.57GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_K_M.gguf) | Q5_K_M | 2.57GB |
| [OpenBezoar-HH-RLHF-SFT.Q5_1.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q5_1.gguf) | Q5_1 | 2.42GB |
| [OpenBezoar-HH-RLHF-SFT.Q6_K.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q6_K.gguf) | Q6_K | 3.39GB |
| [OpenBezoar-HH-RLHF-SFT.Q8_0.gguf](https://huggingface.co/RichardErkhov/SurgeGlobal_-_OpenBezoar-HH-RLHF-SFT-gguf/blob/main/OpenBezoar-HH-RLHF-SFT.Q8_0.gguf) | Q8_0 | 3.39GB |
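As a rough illustration of how the table above can guide file selection, the sketch below hard-codes a subset of the sizes from the table (IQ variants and duplicate `Q3_K`/`Q4_K`/`Q5_K` aliases omitted for brevity) and picks the largest quant that fits a given memory budget. `pick_quant` is a hypothetical helper for this README, not part of any library, and file size is only a rough proxy for runtime memory use (context length and KV cache add more):

```python
# Sizes in GB, copied from the table above (subset; a rough guide only --
# actual memory use also depends on context length and KV cache).
QUANT_SIZES_GB = {
    "Q2_K": 1.84, "Q3_K_S": 1.84, "Q3_K_M": 1.99, "Q3_K_L": 2.06,
    "Q4_0": 1.84, "Q4_K_S": 2.24, "Q4_K_M": 2.40,
    "Q5_0": 2.23, "Q5_K_S": 2.42, "Q5_K_M": 2.57,
    "Q6_K": 3.39, "Q8_0": 3.39,
}

def pick_quant(budget_gb: float):
    """Return the largest quant file that fits the budget, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    # Larger file => less aggressive quantization => usually higher quality.
    return max(fitting, key=fitting.get)
```

For example, with roughly 2 GB to spare this picks `Q3_K_M` (1.99 GB); with no budget large enough for any file it returns `None`.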


Original model description:
---
license: cc-by-nc-4.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# OpenBezoar-HH-RLHF-SFT

OpenBezoar-HH-RLHF-SFT is a further instruction fine-tuned version of the [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) model, trained on a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT)
- Dataset used for SFT: first 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset
- Epochs: 1

### Model Description

OpenBezoar-HH-RLHF-SFT is an LLM built upon the OpenLLaMA 3B v2 architecture. The primary purpose of performing SFT on [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) is to minimize distribution shift before applying Direct Preference Optimization (DPO) for human-preference alignment. For more information, please refer to our paper.

### Model Sources

- **Repository:** [Bitbucket Project](https://bitbucket.org/paladinanalytics/workspace/projects/OP)
- **Paper:** [Pre-Print](https://arxiv.org/abs/2404.12195)

## Instruction Format

We follow a modified version of the Alpaca prompt template, shown below. It is important to use this template to obtain the best responses on instruction-related tasks.
```
### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```

Notice that **no** end-of-sequence (eos) token is appended.

*Note: The system prompt shown above is the one the model was trained on most of the time. However, you may try any other system prompt available in the [Orca](https://arxiv.org/abs/2306.02707) scheme.*

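The template above can also be assembled programmatically. The sketch below is an illustrative helper for this README (`build_prompt` is not part of the model's tooling); note that, per the instructions above, it appends no eos token:

```python
# System prompt text copied verbatim from the template above.
SYSTEM_PROMPT = (
    "Below is an instruction that describes a task, optionally paired with an "
    "input that provides further context following that instruction. Write a "
    "response that appropriately completes the request."
)

def build_prompt(instruction: str) -> str:
    """Format an instruction with the modified Alpaca template (no eos token)."""
    return (
        f"### System:\n{SYSTEM_PROMPT}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:"
    )
```

The returned string ends exactly at `### Response:`, leaving the model to generate the response text.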
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)

model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    load_in_4bit=True,  # optional, for low-resource environments (requires bitsandbytes)
    device_map="auto"
)

prompt = """### System:
Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:""".format(
    instruction="What is the world state in the year 1597?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True)

print(tokenizer.decode(outputs[0]))
```

## Evaluations

Refer to our self-reported evaluations in our paper (Section 4).

## Limitations

- The model might not consistently show improved instruction-following abilities, and it could respond inappropriately or get stuck in loops.
- This model is not aligned to human preferences, so it may generate harmful or uncensored content.
- Caution is urged against relying on this model for production or adjacent use cases.

## Citation

If you find our work useful, please cite our paper as follows:

```
@misc{surge2024openbezoar,
      title={OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data},
      author={Chandeepa Dissanayake and Lahiru Lowe and Sachith Gunasekara and Yasiru Ratnayake},
      year={2024},
      eprint={2404.12195},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Authors

Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, and Yasiru Ratnayake