---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: mistral-ft
  results: []
pipeline_tag: text-generation
widget:
- text: >-
    Résultats :• Absence d’anomalie de densité parenchymateuse cérébrale,
    cérébelleuse ou du tronc cérébral• Absence de dilatation du système
    ventriculaire.• Structures médianes en place.• Absence de collection péri
    cérébrale.• Absence de lésion osseuse.• Bonne pneumatisation des sinus.
  example_title: Observation
---


# mistral-ft

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co./TheBloke/Mistral-7B-Instruct-v0.2-GPTQ).
It achieves the following results on the evaluation set:
- Loss: 1.2527
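
Assuming this loss is the mean token-level cross-entropy, it corresponds to a perplexity of roughly exp(1.2527) ≈ 3.50 on the evaluation set.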

## Model description

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co./TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) for generating the conclusion section of radiology reports.
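
A minimal inference sketch follows, assuming the PEFT adapter is loaded on top of the GPTQ base model. The adapter id `user/mistral-ft` is a placeholder for this repository's actual id, and passing the findings text straight through the Mistral chat template is an assumption based on the widget example above.

```python
# Hedged sketch, not the exact setup used by the author.
# pip install "peft==0.10.0" "transformers==4.38.2" accelerate auto-gptq optimum
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "user/mistral-ft"  # assumption: replace with this adapter's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Findings section of a radiology report (the widget example from this card).
report = (
    "Résultats :• Absence d’anomalie de densité parenchymateuse cérébrale, "
    "cérébelleuse ou du tronc cérébral• Absence de dilatation du système "
    "ventriculaire.• Structures médianes en place.• Absence de collection péri "
    "cérébrale.• Absence de lésion osseuse.• Bonne pneumatisation des sinus."
)
messages = [{"role": "user", "content": report}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the model's conclusion).
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```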


## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
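
For reference, here is a hedged reconstruction of the `transformers.TrainingArguments` these settings imply; the output directory, the exact optimizer variant (`adamw_torch`), and the per-epoch evaluation strategy are assumptions not stated above.

```python
# Hedged sketch reconstructed from the hyperparameter list in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-ft",        # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size: 4 * 4 = 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=2,
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), eps=1e-8
    fp16=True,                      # "Native AMP" mixed precision
    seed=42,
    evaluation_strategy="epoch",    # assumption, consistent with the results table
)
```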

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3229        | 0.97  | 27   | 1.8742          |
| 1.7299        | 1.98  | 55   | 1.6318          |
| 1.5704        | 2.99  | 83   | 1.4831          |
| 1.4553        | 4.0   | 111  | 1.4052          |
| 1.4421        | 4.97  | 138  | 1.3805          |
| 1.3759        | 5.98  | 166  | 1.3759          |
| 1.3658        | 6.99  | 194  | 1.3355          |
| 1.3271        | 8.0   | 222  | 1.2890          |
| 1.3299        | 8.97  | 249  | 1.2618          |
| 1.2296        | 9.73  | 270  | 1.2527          |


### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2