Update README.md
#4 by MaziyarPanahi - opened

README.md CHANGED
@@ -8,8 +8,26 @@ tags:
 - biology
 - medical
 - MoE
+- genetic
 ---
 
+## Description
+
+This is a MoE of the top 4x 7B models, including BioMistral-7B. Here is the comparison between the two:
+
+| Metric | BioMistral-7B | Bioxtral-4x7B-v0.1 |
+|-----------------------------|---------------|--------------------|
+| **Average** | 54.99 | **70.84** |
+| ARC | 54.27 | **68.34** |
+| HellaSwag | 79.09 | **87.27** |
+| TruthfulQA | 51.61 | **68.45** |
+| Winogrande | 73.48 | **82.90** |
+| GSM8K | 0 | **56.63** |
+| Professional Medicine | 55.51 | **67.3** |
+| College Medicine | 58.96 | **61.84** |
+| Medical Genetics | 67.00 | **74.0** |
+
 ## How to use it
 
 ```python
@@ -25,7 +43,6 @@ tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Bioxtral-4x7B-v0.1")
 
 model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Bioxtral-4x7B-v0.1")
 ```
 
-
 ## Quantized models
 
 Here is the list of GGUF models quantized from 2 to 8 bits: https://huggingface.co/MaziyarPanahi/Bioxtral-4x7B-v0.1-GGUF
@@ -158,18 +175,6 @@ So, 25 - 4 * 2 + 3 = 20.</s>
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/PR-Py7u6uhcxKTdCpPY4-.png)
 
-| Metric | BioMistral-7B | Bioxtral-4x7B-v0.1 |
-|-----------------------------|---------------|--------------------|
-| **Average** | 54.99 | **70.84** |
-| ARC | 54.27 | **68.34** |
-| HellaSwag | 79.09 | **87.27** |
-| TruthfulQA | 51.61 | **68.45** |
-| Winogrande | 73.48 | **82.90** |
-| GSM8K | 0 | **56.63** |
-| Professional Medicine | 55.51 | **67.3** |
-| College Medicine | 58.96 | **61.84** |
-| Medical Genetics | 67.00 | **74.0** |
-
 source: https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Bioxtral-4x7B-v0.1
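The "How to use it" snippet in the diff only loads the tokenizer and model. A minimal end-to-end sketch might look like the following; note that the `[INST]` prompt format, `device_map="auto"`, and the example question are assumptions for illustration, not part of the PR:

```python
# Sketch only: expands the README's two-line loading snippet into a runnable
# script. The [INST] chat template, device_map="auto", and the sample
# question are assumptions, not taken from the PR.

def build_prompt(question: str) -> str:
    """Wrap a question in Mistral-style instruction tags (assumed format)."""
    return f"[INST] {question} [/INST]"

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Bioxtral-4x7B-v0.1")
    model = AutoModelForCausalLM.from_pretrained(
        "MaziyarPanahi/Bioxtral-4x7B-v0.1", device_map="auto"
    )

    inputs = tokenizer(
        build_prompt("What does the BRCA1 gene do?"), return_tensors="pt"
    )
    outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the GGUF variants linked above, the same model can instead be run through a llama.cpp-compatible runtime; the 4-bit files are usually a reasonable starting point.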