---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
model-index:
- name: llama3-ViMMRC-Answer
results: []
---
# llama3-ViMMRC-Answer
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co./unsloth/llama-3-8b-Instruct-bnb-4bit) on the ViMMRC dataset, a Vietnamese multiple-choice machine reading comprehension benchmark.
It achieves the following results on the evaluation set:
- **Loss**: 0.1419
- **Accuracy**: 0.885662
## Model description
llama3-ViMMRC-Answer is a PEFT (LoRA) adapter on top of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co./unsloth/llama-3-8b-Instruct-bnb-4bit), trained with TRL's supervised fine-tuning (SFT) via Unsloth to predict answers to Vietnamese multiple-choice reading comprehension questions from ViMMRC.
## Intended uses & limitations
The adapter is intended for answering ViMMRC-style Vietnamese multiple-choice reading comprehension questions and must be loaded on top of the 4-bit quantized base model listed above. It has only been evaluated on the ViMMRC test set, so performance on other domains, question formats, or languages is untested.
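A minimal inference sketch follows, assuming this repository hosts a standard PEFT adapter; the repo id `Angelectronic/llama3-ViMMRC-Answer` is inferred from this card and may differ, and the prompt shown is a placeholder since the training prompt format is not documented here.

```python
# Minimal inference sketch. Assumptions: this repo is a standard PEFT adapter,
# and the adapter repo id below (inferred from the card title) is correct.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
adapter_id = "Angelectronic/llama3-ViMMRC-Answer"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

# The training prompt format is not recorded on this card; the base model's
# chat template is a reasonable default for an instruct model.
messages = [{"role": "user", "content": "Đoạn văn: ...\nCâu hỏi: ...\nA. ...\nB. ...\nC. ...\nD. ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```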
## Training and evaluation data
The adapter was trained on the ViMMRC training set and evaluated on the ViMMRC test set.
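The exact serialization of ViMMRC items into SFT training text is not recorded here. A plausible sketch, assuming chat-formatted multiple-choice prompts with the gold answer as the assistant turn (the field names `article`, `question`, `options`, and `answer` are hypothetical):

```python
# Hypothetical ViMMRC-to-SFT formatting; the real template and field names
# used for this model are not documented on this card.
def format_example(example, tokenizer):
    options = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", example["options"]))
    messages = [
        {"role": "user", "content": f"{example['article']}\n\n{example['question']}\n{options}"},
        {"role": "assistant", "content": example["answer"]},
    ]
    # Serialize with the Llama 3 chat template into one training string.
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}
```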
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 3
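These settings map onto TRL's `SFTTrainer` roughly as sketched below, assuming the Unsloth LoRA workflow implied by the tags; the LoRA rank/alpha, target modules, maximum sequence length, and dataset variables are not recorded on this card and are illustrative.

```python
# Reproduction sketch of the hyperparameters above. Assumptions: Unsloth LoRA
# workflow; r, lora_alpha, target_modules, and max_seq_length are illustrative.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=2048,  # not recorded on this card
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

args = TrainingArguments(
    output_dir="llama3-ViMMRC-Answer",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 effective batch size
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    seed=3407,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # ViMMRC train split, formatted to a "text" field
    eval_dataset=eval_dataset,    # ViMMRC test split
    dataset_text_field="text",
    max_seq_length=2048,
    args=args,
)
trainer.train()
```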
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2677 | 0.3306 | 10 | 0.1883 |
| 0.4922 | 0.6612 | 20 | 0.2020 |
| 0.4551 | 0.9917 | 30 | 0.1609 |
| 0.4292 | 1.3223 | 40 | 0.2353 |
| 0.4361 | 1.6529 | 50 | 0.1758 |
| 0.4323 | 1.9835 | 60 | 0.1515 |
| 0.4232 | 2.3140 | 70 | 0.1451 |
| 0.411 | 2.6446 | 80 | 0.1424 |
| 0.413 | 2.9752 | 90 | 0.1419 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1