# Tweet_sentiment

This model is a fine-tuned version of TheBloke/Llama-2-7B-Chat-GPTQ on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.6657
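
The card doesn't document usage, so below is a minimal, untested sketch of loading the adapter for inference. It assumes the adapter weights are published at `abdullahT/Tweet_sentiment` (the repo named in the model tree below), that the adapter is a LoRA-style PEFT adapter, and that `auto-gptq`/`optimum` are installed so Transformers can load the GPTQ base checkpoint; the prompt format is likewise an assumption.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the GPTQ-quantized base model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GPTQ",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7B-Chat-GPTQ")

# Attach the fine-tuned PEFT adapter (repo id assumed from the model tree).
model = PeftModel.from_pretrained(base, "abdullahT/Tweet_sentiment")

# Hypothetical prompt; the actual training prompt template is undocumented.
prompt = "Classify the sentiment of this tweet: I love this phone!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```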
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
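
For reference, here is a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The `output_dir` and the `optim` string are assumptions (the Adam betas and epsilon listed above are the AdamW defaults), as is using `fp16` for the Native AMP mixed precision.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Tweet_sentiment",   # hypothetical output path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size: 4 * 4 = 16
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    fp16=True,                      # Native AMP mixed precision (assumed fp16)
    optim="adamw_torch",            # betas=(0.9, 0.999), eps=1e-08 by default
)
```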
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.422         | 0.0457 | 2    | 5.1721          |
| 5.1534        | 0.0914 | 4    | 4.6225          |
| 4.5268        | 0.1371 | 6    | 4.0415          |
| 3.9455        | 0.1829 | 8    | 3.5294          |
| 3.4478        | 0.2286 | 10   | 3.1964          |
| 3.1897        | 0.2743 | 12   | 2.9354          |
| 2.8951        | 0.32   | 14   | 2.6721          |
| 2.5987        | 0.3657 | 16   | 2.4219          |
| 2.3629        | 0.4114 | 18   | 2.2606          |
| 2.1553        | 0.4571 | 20   | 2.1368          |
| 2.109         | 0.5029 | 22   | 2.0289          |
| 1.9611        | 0.5486 | 24   | 1.9382          |
| 1.8992        | 0.5943 | 26   | 1.8638          |
| 1.794         | 0.64   | 28   | 1.8066          |
| 1.7374        | 0.6857 | 30   | 1.7621          |
| 1.7599        | 0.7314 | 32   | 1.7268          |
| 1.769         | 0.7771 | 34   | 1.7006          |
| 1.6442        | 0.8229 | 36   | 1.6834          |
| 1.6242        | 0.8686 | 38   | 1.6730          |
| 1.7367        | 0.9143 | 40   | 1.6677          |
| 1.6649        | 0.96   | 42   | 1.6657          |
### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- PyTorch 2.1.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Model tree

- Base model: meta-llama/Llama-2-7b-chat-hf
- Quantized: TheBloke/Llama-2-7B-Chat-GPTQ
- Fine-tuned adapter (this model): abdullahT/Tweet_sentiment