---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
- dpo
- rlhf
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of NeuralBeagle14-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json needed for further conversions.

Conversion was done using the default calibration dataset.

Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
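
For reference, a conversion along these lines would look roughly like the sketch below. This is an illustration rather than the exact command used here: the flags are ExLlamaV2 v0.0.11's `convert.py` arguments, and every path is a placeholder.

```shell
# Sketch: convert at 6.5 bpw with the lm_head layer at 8 bits,
# reusing the measurement.json from this repo's main branch.
# All paths are placeholders.
python convert.py \
  -i /path/to/NeuralBeagle14-7B \
  -o /path/to/working_dir \
  -cf /path/to/NeuralBeagle14-7B-exl2-6_5 \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```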

Original model: https://huggingface.co/mlabonne/NeuralBeagle14-7B

<a href="https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2/tree/8_0">8.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2/tree/6_5">6.5 bits per weight</a>

<a href="https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2/tree/5_0">5.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2/tree/4_0">4.0 bits per weight</a>

<a href="https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2/tree/3_5">3.5 bits per weight</a>

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
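
Optionally, on fast connections, downloads can be sped up with the `hf_transfer` extra. This is an added tip rather than part of the original instructions, using the standard `HF_HUB_ENABLE_HF_TRANSFER` switch from `huggingface_hub`:

```shell
# Optional: enable accelerated downloads on high-bandwidth connections
pip3 install hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
```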

To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `NeuralBeagle14-7B-exl2`:

```shell
mkdir NeuralBeagle14-7B-exl2
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --local-dir NeuralBeagle14-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir NeuralBeagle14-7B-exl2
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --revision 4_0 --local-dir NeuralBeagle14-7B-exl2 --local-dir-use-symlinks False
```
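
Once a branch is downloaded, a quick way to sanity-check the quantized weights is ExLlamaV2's bundled test script. This is a sketch assuming a local checkout of the exllamav2 repo with its requirements installed; the model path is a placeholder:

```shell
# From a checkout of https://github.com/turboderp/exllamav2:
# loads the quantized model and generates a short completion
python test_inference.py -m /path/to/NeuralBeagle14-7B-exl2 -p "Once upon a time,"
```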