# learn-python-easy-v2
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2, trained on a small dataset of 205 question-and-answer pairs about the Python programming language for fine-tuning experimentation. It achieves the following results on the evaluation set:

- Loss: 0.7009
## Model description
More information needed
## Intended uses & limitations
This model is intended for experiments in fine-tuning large language models; its outputs could likely be improved with more training examples.
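
As a usage illustration, here is a minimal inference sketch (untested, not from the card itself). It assumes the weights published at Ikeofai/learn-python-easy-v2 are a PEFT adapter for the base model, which the PEFT framework version listed below suggests:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Ikeofai/learn-python-easy-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned PEFT adapter on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Mistral-Instruct expects its chat template, so format the question accordingly.
messages = [{"role": "user", "content": "What is a list comprehension in Python?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```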
## Training and evaluation data

The model was trained and evaluated on the small dataset of 205 Python question-and-answer pairs described above.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 20
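
For illustration, the sketch below shows one hypothetical way these settings could map onto `TrainingArguments` from `transformers`; it is not the author's actual training script, and the output directory and evaluation/logging strategies are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="learn-python-easy-v2",  # assumed name, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                 # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,              # the card lists 0.03 warmup "steps", which reads as a ratio
    num_train_epochs=20,
    evaluation_strategy="epoch",    # assumed; matches the per-epoch losses below
    logging_strategy="epoch",
)
```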
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6791        | 1.0   | 164  | 0.6197          |
| 0.3764        | 2.0   | 328  | 0.5916          |
| 0.2089        | 3.0   | 492  | 0.6093          |
| 0.1416        | 4.0   | 656  | 0.6849          |
| 0.1185        | 5.0   | 820  | 0.7009          |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2