---
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DocGPT-ft
results: []
datasets:
- lavita/ChatDoctor-HealthCareMagic-100k
---
# DocGPT-ft
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co./TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the lavita/ChatDoctor-HealthCareMagic-100k dataset.
## Model description
This model uses parameter-efficient fine-tuning (PEFT) in the QLoRA style: LoRA adapters are trained on top of the quantized base model while the base weights stay frozen.
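
A minimal sketch of what attaching LoRA adapters to the GPTQ-quantized base model with `peft` could look like; this is not the exact training script, and the LoRA rank, alpha, dropout, and `target_modules` below are illustrative assumptions rather than values from this run.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Prepare the quantized model for adapter training (gradient checkpointing, etc.)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                      # assumed adapter rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,         # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed target modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```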
## Intended uses & limitations
This model is intended for experimentation and demonstration only. It should not be used to provide real medical advice.
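
An illustrative inference sketch that loads the adapter on top of the base model. `your-username/DocGPT-ft` is a placeholder adapter id; substitute the actual repository path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "your-username/DocGPT-ft"  # placeholder, not a real repository id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Mistral-Instruct style prompt format
prompt = "[INST] I have had a mild headache for two days. What could cause it? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```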
## Training and evaluation data
The data was split 90% for training and 10% for testing. Only a small fraction of the full dataset was used in order to reduce training time.
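
A sketch of how the 90/10 split and subsampling could be reproduced with `datasets`; the exact subset size used for this run is not recorded here, so the value below is an assumption.

```python
from datasets import load_dataset

# Load the ChatDoctor-HealthCareMagic dataset and subsample it
dataset = load_dataset("lavita/ChatDoctor-HealthCareMagic-100k", split="train")
dataset = dataset.shuffle(seed=42).select(range(1000))  # assumed subset size

# 90% train / 10% test split
split = dataset.train_test_split(test_size=0.1, seed=42)
train_data, eval_data = split["train"], split["test"]
```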
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
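
A sketch of a `transformers.TrainingArguments` configuration matching the hyperparameters listed above; `output_dir` and the evaluation/saving strategies are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DocGPT-ft",            # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,     # 4 x 4 = 16 total train batch size
    num_train_epochs=10,
    warmup_steps=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",               # Adam betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    fp16=True,                         # mixed precision (Native AMP)
    seed=42,
    eval_strategy="epoch",             # assumed
    save_strategy="epoch",             # assumed
)
```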
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4174 | 0.9412 | 12 | 2.2924 |
| 2.1327 | 1.9608 | 25 | 2.2750 |
| 2.0864 | 2.9804 | 38 | 2.2745 |
| 2.0362 | 4.0 | 51 | 2.2761 |
| 2.1357 | 4.9412 | 63 | 2.2849 |
| 1.942 | 5.9608 | 76 | 2.2961 |
| 1.8904 | 6.9804 | 89 | 2.3165 |
| 1.8585 | 8.0 | 102 | 2.3295 |
| 1.9923 | 8.9412 | 114 | 2.3390 |
| 1.6331 | 9.4118 | 120 | 2.3387 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1