---
license: apache-2.0
base_model: mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrobertta-fin
  results: []
---

# distilrobertta-fin

This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis](https://huggingface.co./mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4084
- Accuracy: 0.8430
- F1: 0.8422
- Precision: 0.8417
- Recall: 0.8434

## Model description

This is a DistilRoBERTa-based sequence classifier. Its base checkpoint was itself trained for financial news sentiment analysis, so this fine-tuned model is likewise a financial-text sentiment classifier; details specific to this fine-tuning run (dataset, labels, preprocessing) were not recorded by the Trainer.

## Intended uses & limitations

Like its base model, this checkpoint is intended for sentiment classification of short financial text such as news headlines or analyst statements. Because the fine-tuning dataset is undocumented, performance outside that domain has not been verified and should be validated before deployment.
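
A minimal inference sketch is shown below. The repository id is a placeholder, and the label names are assumed to be inherited from the base model's configuration (`negative`/`neutral`/`positive`); check the uploaded `config.json` before relying on them.

```python
from transformers import pipeline

# "your-username/distilrobertta-fin" is a placeholder repo id;
# point this at the actual checkpoint directory or hub repository.
classifier = pipeline(
    "text-classification",
    model="your-username/distilrobertta-fin",
)

# Label names (negative / neutral / positive) are assumed to be
# inherited from the base model's config, not documented here.
print(classifier("Quarterly revenue rose 12% year over year."))
# e.g. [{'label': 'positive', 'score': ...}]
```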

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch that mirrors them follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
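
The sketch below shows how these values map onto `transformers.TrainingArguments`, assuming a standard `Trainer` setup. The output directory, the dataset objects, and the weighted metric averaging are assumptions, not facts recorded in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import Trainer, TrainingArguments

def compute_metrics(eval_pred):
    # Weighted averaging is an assumption, consistent with the
    # near-identical accuracy/F1/precision/recall values reported above.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }

training_args = TrainingArguments(
    output_dir="distilrobertta-fin",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                    # Adam defaults, listed
    adam_beta2=0.999,                  # explicitly to match the card
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=50,                     # matches the 50-step cadence below
)

# trainer = Trainer(
#     model=model,                     # classification head on the base checkpoint
#     args=training_args,
#     train_dataset=train_dataset,     # dataset not documented in this card
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,
# )
# trainer.train()
```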

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0478        | 0.0820 | 50   | 0.9092          | 0.5421   | 0.4316 | 0.6926    | 0.5431 |
| 0.8397        | 0.1639 | 100  | 0.6847          | 0.6730   | 0.6038 | 0.7467    | 0.6749 |
| 0.6574        | 0.2459 | 150  | 0.5762          | 0.7877   | 0.7764 | 0.7930    | 0.7885 |
| 0.5869        | 0.3279 | 200  | 0.4971          | 0.8144   | 0.8091 | 0.8140    | 0.8149 |
| 0.5599        | 0.4098 | 250  | 0.5133          | 0.8056   | 0.7990 | 0.8079    | 0.8064 |
| 0.5189        | 0.4918 | 300  | 0.4836          | 0.8167   | 0.8105 | 0.8186    | 0.8174 |
| 0.4824        | 0.5738 | 350  | 0.4722          | 0.8256   | 0.8190 | 0.8292    | 0.8262 |
| 0.4592        | 0.6557 | 400  | 0.5095          | 0.8126   | 0.8018 | 0.8243    | 0.8133 |
| 0.4592        | 0.7377 | 450  | 0.4579          | 0.8334   | 0.8291 | 0.8341    | 0.8339 |
| 0.4443        | 0.8197 | 500  | 0.5057          | 0.8134   | 0.8120 | 0.8211    | 0.8131 |
| 0.4845        | 0.9016 | 550  | 0.4407          | 0.8348   | 0.8320 | 0.8337    | 0.8352 |
| 0.4287        | 0.9836 | 600  | 0.4399          | 0.8349   | 0.8317 | 0.8336    | 0.8354 |
| 0.4342        | 1.0656 | 650  | 0.4310          | 0.8317   | 0.8323 | 0.8327    | 0.8319 |
| 0.4615        | 1.1475 | 700  | 0.4514          | 0.8306   | 0.8297 | 0.8300    | 0.8310 |
| 0.4020        | 1.2295 | 750  | 0.4553          | 0.8384   | 0.8351 | 0.8407    | 0.8386 |
| 0.3893        | 1.3115 | 800  | 0.4312          | 0.8360   | 0.8352 | 0.8348    | 0.8364 |
| 0.4091        | 1.3934 | 850  | 0.4648          | 0.8261   | 0.8170 | 0.8356    | 0.8268 |
| 0.3781        | 1.4754 | 900  | 0.4436          | 0.8316   | 0.8249 | 0.8364    | 0.8322 |
| 0.3814        | 1.5574 | 950  | 0.4700          | 0.8206   | 0.8235 | 0.8330    | 0.8208 |
| 0.3944        | 1.6393 | 1000 | 0.4139          | 0.8437   | 0.8429 | 0.8437    | 0.8438 |
| 0.3961        | 1.7213 | 1050 | 0.4183          | 0.8454   | 0.8434 | 0.8458    | 0.8456 |
| 0.3962        | 1.8033 | 1100 | 0.4255          | 0.8386   | 0.8372 | 0.8413    | 0.8385 |
| 0.4214        | 1.8852 | 1150 | 0.4022          | 0.8435   | 0.8414 | 0.8423    | 0.8438 |
| 0.4058        | 1.9672 | 1200 | 0.4445          | 0.8320   | 0.8296 | 0.8365    | 0.8320 |
| 0.3507        | 2.0492 | 1250 | 0.4159          | 0.8444   | 0.8430 | 0.8438    | 0.8446 |
| 0.3535        | 2.1311 | 1300 | 0.4342          | 0.8405   | 0.8377 | 0.8420    | 0.8407 |
| 0.3467        | 2.2131 | 1350 | 0.4208          | 0.8407   | 0.8418 | 0.8448    | 0.8407 |
| 0.3394        | 2.2951 | 1400 | 0.4053          | 0.8476   | 0.8466 | 0.8469    | 0.8478 |
| 0.3344        | 2.3770 | 1450 | 0.4173          | 0.8393   | 0.8410 | 0.8445    | 0.8393 |
| 0.3499        | 2.4590 | 1500 | 0.4050          | 0.8480   | 0.8472 | 0.8468    | 0.8483 |
| 0.3245        | 2.5410 | 1550 | 0.4056          | 0.8474   | 0.8465 | 0.8470    | 0.8475 |
| 0.3524        | 2.6230 | 1600 | 0.4002          | 0.8486   | 0.8475 | 0.8473    | 0.8489 |
| 0.3285        | 2.7049 | 1650 | 0.4138          | 0.8446   | 0.8458 | 0.8478    | 0.8447 |
| 0.3269        | 2.7869 | 1700 | 0.4017          | 0.8483   | 0.8478 | 0.8479    | 0.8485 |
| 0.3318        | 2.8689 | 1750 | 0.4012          | 0.8494   | 0.8483 | 0.8481    | 0.8497 |
| 0.3253        | 2.9508 | 1800 | 0.4024          | 0.8476   | 0.8475 | 0.8477    | 0.8477 |


### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Tokenizers 0.19.1