---
base_model: unsloth/gemma-2-9b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
|
|
|
# MedGPT-Gemma2-9B-v.1-GGUF
|
|
|
- This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co./unsloth/gemma-2-9b) on a dataset created by [Valerio Job](https://huggingface.co./valeriojob) together with GPs, based on real medical data.

- Version 1 (v.1) is the very first version of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.

- This repo contains the quantized models in GGUF format. A separate repo, [valeriojob/MedGPT-Gemma2-9B-BA-v.1](https://huggingface.co./valeriojob/MedGPT-Gemma2-9B-BA-v.1), contains the default 16-bit version of the model as well as its LoRA adapters.

- This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).
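As a minimal sketch, one of the quantized files can be downloaded and run locally with llama.cpp's CLI. The GGUF file name below is an assumption for illustration — check this repo's file list for the actual quantization names available:

```shell
# Download one quantized file from this repo (file name is illustrative).
huggingface-cli download valeriojob/MedGPT-Gemma2-9B-v.1-GGUF \
  medgpt-gemma2-9b-v.1.Q4_K_M.gguf --local-dir .

# Run a single completion with llama.cpp's CLI.
llama-cli -m medgpt-gemma2-9b-v.1.Q4_K_M.gguf \
  -p "Summarize the patient's consultation notes:" -n 256
```

Smaller quantizations (e.g. Q4_K_M) trade some quality for lower memory use; larger ones (e.g. Q8_0) stay closer to the 16-bit weights.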
|
|
|
## Model description
|
|
|
This model acts as a supplementary assistant to GPs, supporting them in medical and administrative tasks.
|
|
|
## Intended uses & limitations
|
|
|
The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.
|
|
|
## Training and evaluation data
|
|
|
The dataset (train and test splits) used for fine-tuning this model can be found here: [datasets/valeriojob/BA-v.1](https://huggingface.co./datasets/valeriojob/BA-v.1)
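For reference, the dataset repo can be pulled locally with the Hugging Face CLI (a sketch; assumes the `huggingface_hub` package is installed):

```shell
# Download the fine-tuning dataset (train and test splits) from the Hub.
huggingface-cli download valeriojob/BA-v.1 --repo-type dataset --local-dir BA-v.1
```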
|
|
|
## Licenses

- **License:** apache-2.0