valeriojob committed • Commit a669cf1 • Parent(s): cd08a97 • Update README.md
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# MedGPT-Gemma2-9B-v.1-GGUF

- This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) together with GPs, based on real medical data.
- Version 1 (v.1) is the very first version of MedGPT, and its training dataset has been kept simple and small, with only 60 examples.
- This repo includes the quantized models in the GGUF format. A separate repo, [valeriojob/MedGPT-Gemma2-9B-BA-v.1](https://huggingface.co/valeriojob/MedGPT-Gemma2-9B-BA-v.1), contains the default 16-bit format of the model as well as its LoRA adapters.
- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
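The GGUF quants in this repo can be run locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch; the quant file name and the example prompt below are assumptions, so check this repo's file list for the quants that were actually uploaded:

```python
# Minimal sketch: running a GGUF quant of MedGPT with llama-cpp-python.
# The file name below is an assumption -- see this repo's file list for
# the quantization levels that were actually uploaded.
from llama_cpp import Llama

llm = Llama(
    model_path="MedGPT-Gemma2-9B-v.1.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# MedGPT targets medical and admin tasks for GPs, so the prompt here is
# only an illustrative placeholder.
out = llm("Summarize the following consultation notes:\n...", max_tokens=256)
print(out["choices"][0]["text"])
```

The same files also work with llama.cpp's own CLI tools, since llama-cpp-python is a binding over the same runtime.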

## Model description

This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.

## Intended uses & limitations

The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.

## Training and evaluation data

The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/BA-v.1](https://huggingface.co/datasets/valeriojob/BA-v.1)
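The dataset can be pulled programmatically with the `datasets` library; a minimal sketch, assuming the standard `train`/`test` split names (verify them on the dataset card):

```python
# Sketch: loading the BA-v.1 fine-tuning dataset from the Hugging Face Hub.
# The split names ("train"/"test") are assumed -- check the dataset card.
from datasets import load_dataset

ds = load_dataset("valeriojob/BA-v.1")
print(ds)              # shows the available splits and their row counts
print(ds["train"][0])  # inspect the first training example
```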

## Licenses

- **License:** apache-2.0