Commit `d7617eb` by TheBloke (parent: `0c24e00`)

DOI 2023/06/26 GGML model commit
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Kaist AI's Selfee 13B GGML - DOI 2023/06/26

These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/Selfee-13B-fp16).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## DOI REPO

This is a DOI repository, created 26th June 2023. It contains the GGML model files from [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML) as of that date.

The purpose of a DOI repository is to provide a permanent record of a set of files, guaranteed not to change. The GGML files in this repository will therefore never be updated.

For the latest GGML files for Selfee 13B, please check [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [DOI Snapshot 2023/06/26 2, 3, 4, 5, 6, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML-DOI)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I have quantised these 'original' method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to arrive over the next few days.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
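Most of the bpw figures above can be cross-checked with simple super-block arithmetic. The sketch below is an illustrative model of the layout (one fp16 super-block scale for "type-0", a scale plus a min for "type-1"), not the exact ggml struct; q2_K is omitted because its scale packing differs slightly from this simplified model.

```python
# Effective bits-per-weight (bpw) for a 256-weight k-quant super-block:
# raw weight bits, plus quantised per-block scales (and mins for "type-1"),
# plus the fp16 super-block scale d (and dmin for "type-1").

def bpw(weight_bits, n_blocks, scale_bits, type1=False):
    """Approximate effective bpw for one 256-weight super-block."""
    weights = 256 * weight_bits
    block_meta = n_blocks * scale_bits * (2 if type1 else 1)  # scales (+ mins)
    super_meta = 16 * (2 if type1 else 1)                     # fp16 d (+ dmin)
    return (weights + block_meta + super_meta) / 256

print(bpw(3, 16, 6))             # q3_K: 3.4375
print(bpw(4, 8, 6, type1=True))  # q4_K: 4.5
print(bpw(5, 8, 6, type1=True))  # q5_K: 5.5
print(bpw(6, 16, 8))             # q6_K: 6.5625
```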

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| selfee-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| selfee-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| selfee-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| selfee-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| selfee-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| selfee-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| selfee-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| selfee-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| selfee-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| selfee-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| selfee-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| selfee-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, RAM usage is reduced and VRAM is used instead.
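The "Max RAM required" column follows a simple rule of thumb: file size plus roughly 2.50 GB of runtime overhead. A minimal sketch (the 2.50 GB constant is inferred from the table above, not an official figure):

```python
def est_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.50) -> float:
    """Estimate peak RAM as model file size plus a fixed runtime overhead."""
    return round(file_size_gb + overhead_gb, 2)

print(est_max_ram_gb(5.43))   # q2_K row: 7.93 GB
print(est_max_ram_gb(13.83))  # q8_0 row: 16.33 GB
```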

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m selfee-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaist AI's Selfee 13B

<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>

# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.

## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly covers five things:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).

This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repositories. Thanks to all the contributors for these awesome repositories!! 🙌

**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**

## Data Release
For data collection, we gathered datasets from five different sources: the Stanford Alpaca dataset, a math collection, a code collection, the Flan collection, and ShareGPT. We provide the code we used to build the training dataset, as well as the code we used to preprocess ShareGPT. For ShareGPT, we only use the first (question, answer) pair from the human and GPT, respectively. We only use instances that are classified as English, and filter out instances that are not in the form of a question.
The other datasets require no special collection method.

## Data Generation Process
To train our model with high-quality instruction and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision, or hit the maximum number of iterations. However, due to the token limit of the ChatGPT API, we had to truncate some instances that exceeded 4096 tokens during augmentation.<br>
You can see the details and commands [here](data_augmentation/README.md).<br>
*We provide the whole dataset after collection and augmentation via Hugging Face ([code](data_collection/download_train.py)), so you can either use that code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use either!
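The three-step loop above can be sketched as follows. `query_model` is a stub standing in for the ChatGPT API call (the real pipeline lives in `data_augmentation/`), so the control flow runs standalone:

```python
def query_model(prompt: str) -> str:
    """Stub for the ChatGPT API call used in the real pipeline."""
    if prompt.startswith("Give feedback"):
        # Approve once a revision exists; otherwise request one.
        return "Revision is not needed." if "Revision 1:" in prompt else "Revision is needed."
    if prompt.startswith("Revise"):
        return "Revision 1: a revised answer"
    return "an initial answer"

def augment(instruction: str, max_iterations: int = 3) -> list:
    """Build an (answer, feedback, revision, ...) chain for one instruction."""
    answer = query_model(instruction)                  # step 1: initial answer
    chain = [answer]
    for _ in range(max_iterations):
        feedback = query_model(f"Give feedback on: {answer}")       # step 2
        chain.append(feedback)
        if "not needed" in feedback:                   # model is satisfied
            break
        answer = query_model(f"Revise: {answer} | {feedback}")      # step 3
        chain.append(answer)
    return chain
```

With this stub, `augment("Write a story about llamas")` produces a four-element chain ending in `Revision is not needed.`, mirroring the stopping condition described above.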

## Training

We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>

To reproduce the training procedure, here are the steps. <br>

```
pip install -r requirements.txt
```

```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full
```

The hyperparameters are as follows, following Vicuna and Alpaca.

| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
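The global batch size of 128 in the table is consistent with the torchrun flags above:

```python
# Global batch size = GPUs x per-device train batch x gradient accumulation,
# using the values from the torchrun command above.
n_gpus = 4            # --nproc_per_node=4
per_device = 16       # --per_device_train_batch_size 16
grad_accum = 2        # --gradient_accumulation_steps 2
global_batch = n_gpus * per_device * grad_accum
print(global_batch)   # 128, matching the hyperparameter table
```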

## Inference
<b>Restoring the checkpoint using the diff</b><br>
We provide the diff weights and the code to restore the SelFee model from them. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into Hugging Face format on your local machine. Once that is done, you can restore our model's checkpoint with the following command:
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```

<b>Autonomous Inference Mode</b><br>

Because SelFee is trained to generate iterative feedback and revisions until the response is satisfactory, it produces the feedback and revision chain automatically in a single forward pass. The model decides on its own when to stop revising: if the feedback chain ends with a sequence like `Revision is not needed.`, it terminates generation. <br>

For autonomous inference mode,

```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```

<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model then continues revising the answer.
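That substitution can be sketched in a few lines. The function name and signature here are illustrative, not the project's actual API (the real logic lives in `inference/`):

```python
STOP = "Revision is not needed."
CONTINUE = "Revision is needed."

def enforce_revisions(feedback: str, n_revisions_so_far: int, min_revisions: int) -> str:
    """Rewrite the stopping phrase so the model keeps revising until the
    minimum revision count is reached."""
    if n_revisions_so_far < min_revisions:
        return feedback.replace(STOP, CONTINUE)
    return feedback

print(enforce_revisions("Good answer. " + STOP, 1, 3))  # -> "Good answer. Revision is needed."
```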
252
+
253
+ For revision enforce inference mode, use the `max-num-revision` argument.
254
+
255
+ ```
256
+ python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
257
+ ```
258
+
259
+
260
+
261
+ ## Evaluation
262
+ Following evaluation setting of Vicuna, we evaluate on 80 diverse queries and utilize GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One of the difference with Vicuna evaluation is that due to positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is inferred twice, depending on its position.<br>
263
+
264
+ We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br>
265
+
266
+ ### GPT-4 Automatic Evaluation
267
+ First, you need to get your API key to get access to the GPT-4 API.
268
+ ```
269
+ export OPENAI_API_KEYS={personal_key}
270
+ ```
271
+
272
+ To compare the performance of a generation result (for example, located on `evaluation/answer/file_A.jsonl`) with another generation result (located on `evaluation/anwer/file_B.jsonl`),
273
+
274
+
275
+ ```
276
+ python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
277
+ ```
278
+
279
+ To mitigate the positional bias of GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with opposite position is also needed.
280
+
281
+ ```
282
+ python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
283
+ ```
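One reasonable way to combine the two runs is to average a model's score over both orderings. The pair-of-scores layout below is an assumption about the review files for illustration, not their documented format:

```python
def bidirectional_score(a_vs_b, b_vs_a):
    """Average model A's GPT-4 score over both answer orderings.
    Each argument is a list of (score_first, score_second) pairs, as might be
    parsed from the corresponding review .jsonl file."""
    fwd = sum(s[0] for s in a_vs_b) / len(a_vs_b)  # A in the first position
    rev = sum(s[1] for s in b_vs_a) / len(b_vs_a)  # A in the second position
    return (fwd + rev) / 2

print(bidirectional_score([(8, 7), (9, 6)], [(7, 8), (6, 9)]))  # -> 8.5
```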

## Limitations
Similar to other LLaMA-finetuned models, SelFee also makes mistakes, especially on math, reasoning, factuality, and coding tasks. Although SelFee outperforms ChatGPT in the Vicuna evaluation setting, that setting has limitations in terms of comprehensiveness (limited to 80 queries), consistency, and reliability. Therefore, further research into better evaluation settings is needed. Please take these claims with a grain of salt.

## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!

#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:

1. Run the controller
```
python3 -m serve.controller
```

2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```

3. Run the web server
```
python3 -m serve.gradio_web_server --share
```

You can find the serving code [here](serve).

### Team members
<a href="https://seonghyeonye.github.io/">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)

### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee are released on Hugging Face.

### License

The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.

### Citation

Please cite if you use the data or code in this repo.

```
@misc{selfee2023,
  author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
  title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
  url = {https://kaistai.github.io/SelFee/},
  month = {May},
  year = {2023},
  howpublished = {Blog post}
}
```

Files added in this commit (Git LFS pointers):

| File | SHA-256 | Size (bytes) |
| ---- | ---- | ---- |
| selfee-13b.ggmlv3.q2_K.bin | d7d93044358b342c26e975f65371fb16d56f376320974b77dfc8f0501a3b9c3f | 5427884448 |
| selfee-13b.ggmlv3.q3_K_L.bin | 3d62a0ea7c50e47e795abdb0de4bbf8e581a2fe3c3533c89d0cd3fe492520871 | 6865274304 |
| selfee-13b.ggmlv3.q3_K_M.bin | 37de04028d9d0e9aa3a430750194d9b9fd66445153418caf7bbd506650bf4da6 | 6249235904 |
| selfee-13b.ggmlv3.q3_K_S.bin | e78cad6587575274a20480c77ac35402649c4697893c85d3809d8aa92edd5d1d | 5594695104 |
| selfee-13b.ggmlv3.q4_0.bin | d7c69763ed067e71d8838634a76fa0d0da06422112644a4ddd7fee986ab1748b | 7323310848 |
| selfee-13b.ggmlv3.q4_1.bin | 367fda36782456cbf3250342e5ccb487e1ff5e45f3b7483f13abe37e2a0bd97c | 8136777088 |
| selfee-13b.ggmlv3.q4_K_M.bin | 4491a4693352144a9a422fe211d378a3dab6cbed2cba623927e82ce91422b635 | 7823432448 |
| selfee-13b.ggmlv3.q4_K_S.bin | 4d9da801cae4b28a204a2017073b5914f3700a983bbdd463ddf49087141e82b5 | 7323310848 |
| selfee-13b.ggmlv3.q5_0.bin | 4f4ba8428285b0d4db1e87b093035492ad678eb10650a55f34fad7d68add2372 | 8950243328 |
| selfee-13b.ggmlv3.q5_1.bin | ec1fb2039991b758ccdfe8b18c14b9ced273aa3cab0ecd88449c3fdaa847b8fe | 9763709568 |
| selfee-13b.ggmlv3.q5_K_M.bin | 65cf631d2026fa70ef5064f5bcb862aa3a41bb24a0a3e9284d0cd32654b4b138 | 9207881728 |
| selfee-13b.ggmlv3.q5_K_S.bin | f75101cedf43200682f898c047e77bb34e069115a6e28213001e1fc4ea0c5c94 | 8950243328 |
| selfee-13b.ggmlv3.q6_K.bin | e2f27af2c85dece3f956720d685fc4122271f2e8f27a2c3c5bc2b3e0ff23c70a | 10678859104 |
| selfee-13b.ggmlv3.q8_0.bin | 079acd2479faf9a18fb89c74d54eb01a16f2c9f2e6af9e570d807cb86af8b818 | 13831040768 |