MedGPT-Gemma2-9B-BA-v.1-GGUF

  • This model is a fine-tuned version of unsloth/gemma-2-9b on a dataset created by Valerio Job together with GPs, based on real medical data.
  • Version 1 (v.1) is the very first version of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.
  • This repo includes the quantized models in the GGUF format. There is a separate repo, valeriojob/MedGPT-Gemma2-9B-BA-v.1, that includes the default 16-bit format of the model as well as its LoRA adapters.
  • This model was quantized using llama.cpp.
  • This model is available in the following quantization formats (see the usage sketch after this list):
    • BF16
    • Q4_K_M
    • Q5_K_M
    • Q8_0
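
Below is a minimal inference sketch using the llama-cpp-python bindings. The GGUF filename pattern, the prompt, and the generation parameters are assumptions for illustration; check this repo's file listing for the exact filenames of each quantization.

```python
# Minimal inference sketch (pip install llama-cpp-python huggingface-hub).
# The filename glob below is an assumption; check the repo's "Files" tab
# for the exact name of the quantization you want to download.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="valeriojob/MedGPT-Gemma2-9B-BA-v.1-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; assumes one Q4_K_M file in the repo
    n_ctx=4096,               # context window; adjust to your memory budget
)

output = llm(
    "Summarize the typical first-line treatment options for hypertension.",
    max_tokens=256,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```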

Model description

This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.

Intended uses & limitations

The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.

Training and evaluation data

The dataset (train and test) used for fine-tuning this model can be found here: datasets/valeriojob/BA-v.1
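
To inspect the data, here is a minimal sketch using the Hugging Face datasets library; the split names ("train"/"test") are an assumption based on the description above, so check the dataset card for the actual layout.

```python
# Minimal sketch for loading the fine-tuning data (pip install datasets).
from datasets import load_dataset

ds = load_dataset("valeriojob/BA-v.1")
print(ds)              # shows the available splits and their sizes
print(ds["train"][0])  # inspect the first training example
```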

License

  • License: apache-2.0