---
license: other
---
# Llama-3 chat vector

This is a 'modelified' version of the _chat vector_ from the paper [Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages](https://arxiv.org/abs/2310.04799). So this is not a model, just a weight diff, for my own ease of use (or yours too)!

What I understand here:
The 'chat vector' method is a merging method that utilizes the differences between the base model, the continually pre-trained (usually language-transferred) model, and the chat model; so the recipe is

`model(base) + weight_diff(continuous pretrained) + weight_diff(instruct)` or

`model(base) + weight_diff(continuous pretrained + fine-tuned) + weight_diff(instruct)`.

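The recipe above can be sketched in a few lines of Python. This is just an illustration of the arithmetic, not the paper's code; `apply_chat_vector` and the toy scalar "state dicts" are hypothetical stand-ins for real per-parameter tensors:

```python
def apply_chat_vector(base_sd, cp_sd, instruct_sd):
    """model(CP) + weight_diff(instruct): per parameter, new = cp + (instruct - base)."""
    return {name: cp_sd[name] + (instruct_sd[name] - base_sd[name])
            for name in base_sd}

# Toy example: single scalar "weights" standing in for parameter tensors.
base = {"w": 1.0}       # base model (e.g. Llama-3 8B)
cp = {"w": 1.5}         # continually pre-trained (language-transferred) model
instruct = {"w": 1.2}   # instruct/chat model

merged = apply_chat_vector(base, cp, instruct)
# merged["w"] is cp + (instruct - base) = 1.5 + 0.2, i.e. approximately 1.7
```

The same loop works over real `state_dict()` tensors, since subtraction and addition broadcast per parameter.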
So, before my initial purpose of comparing which ordering is better, `llama3 → CP + chat vector → FT` vs. `llama3 → CP → FT + chat vector`, it seems reasonable to first compare it with other merging methods in [Mergekit](https://github.com/arcee-ai/mergekit).

| Model | Merge Method | Score (but what?) |
|---|---|---|
| [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) | chat vector | - |
| [kuotient/Llama-3-Ko-8B-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-ties) | TIES | - |
| [kuotient/Llama-3-Ko-8B-dare-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-dare-ties) | DARE-TIES | - |
| [kuotient/Llama-3-Ko-8B-TA](https://huggingface.co/kuotient/Llama-3-Ko-8B-TA) | Task Arithmetic (maybe...? not sure about this) | - |
| WIP | Model Stock (I haven't read this paper yet, but still) | - |
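For reference, the chat-vector recipe can be reproduced in Mergekit as task arithmetic: with the base model subtracted out, `base + (CP - base) + (instruct - base)` equals `CP + chat vector`. A sketch of such a config (the exact model IDs here are assumptions, not the configs used for the models above):

```yaml
# Hypothetical mergekit config approximating the chat-vector recipe
# via task_arithmetic: base + 1.0*(CP - base) + 1.0*(instruct - base).
merge_method: task_arithmetic
base_model: meta-llama/Meta-Llama-3-8B
models:
  - model: beomi/Llama-3-Open-Ko-8B          # continually pre-trained model
    parameters:
      weight: 1.0
  - model: meta-llama/Meta-Llama-3-8B-Instruct  # instruct model
    parameters:
      weight: 1.0
dtype: bfloat16
```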

All that aside, I'd like to thank [@beomi](https://huggingface.co/beomi) for creating such an awesome Korean-based model.