ANGELRC2 committed on
Commit
763486e
1 Parent(s): 35d3d2d

We did it again, team! 🤗

README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: distilroberta-base
+ tags:
+ - text-classification
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: distilroberta-base-mrpc-glue-upeu
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilroberta-base-mrpc-glue-upeu
+
+ This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the nyu-mll/glue dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7301
+ - Accuracy: 0.8309
+ - F1: 0.8821
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     |
+ |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
+ | 0.5119        | 1.0893 | 500  | 0.4417          | 0.8333   | 0.8824 |
+ | 0.3453        | 2.1786 | 1000 | 0.7301          | 0.8309   | 0.8821 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.5.0+cu121
+ - Datasets 3.0.2
+ - Tokenizers 0.19.1
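
The fractional epoch values in the results table above (1.0893 at step 500, 2.1786 at step 1000) can be sanity-checked against the batch size. A quick sketch, assuming the standard GLUE MRPC train split of 3,668 sentence pairs (not stated in the card itself):

```python
import math

# Assumption: GLUE MRPC train split size (not listed in the model card).
train_examples = 3668
batch_size = 8  # train_batch_size from the hyperparameters above

# With drop_last=False, the final partial batch still counts as a step.
steps_per_epoch = math.ceil(train_examples / batch_size)  # 459

# Epoch values at the two logged checkpoints.
epoch_at_500 = 500 / steps_per_epoch
epoch_at_1000 = 1000 / steps_per_epoch

print(f"{epoch_at_500:.4f}")   # 1.0893, matching the table
print(f"{epoch_at_1000:.4f}")  # 2.1786, matching the table
```

The match suggests the logged steps are optimizer steps with no gradient accumulation.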
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 3.0,
+     "total_flos": 205471423937184.0,
+     "train_loss": 0.3733939985996931,
+     "train_runtime": 158.9318,
+     "train_samples_per_second": 69.237,
+     "train_steps_per_second": 8.664
+ }
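
The throughput figures above are internally consistent with the reported runtime. A rough check, again assuming the 3,668-pair MRPC train split and the batch size of 8 from the card:

```python
import math

train_examples = 3668   # assumed GLUE MRPC train split size
epochs = 3
batch_size = 8
runtime_s = 158.9318    # train_runtime from all_results.json

samples_per_s = train_examples * epochs / runtime_s
steps_per_s = math.ceil(train_examples / batch_size) * epochs / runtime_s

print(f"{samples_per_s:.3f}")  # ~69.237, as reported
print(f"{steps_per_s:.3f}")    # ~8.664, as reported
```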
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4008e52defdd50969cac1fb7521615807adad0a4b49a4017d73c0dfd8fa0f03c
+ oid sha256:66c6f6526e83f75979f15b63206a3a419a3e91f06039e26429cfe4b39dc3bbb5
  size 328492280
runs/Oct29_17-28-40_6f000e9161e0/events.out.tfevents.1730222955.6f000e9161e0.289.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:11f66914d86513d4a2742b86da6d01c2ce67a8634cfae3b4fc06459254ebf5a8
- size 6278
+ oid sha256:6ddaac8ccbeaa2d6569fbb3a08812a15cf543c4dc1c6c8c81206b02bef60d711
+ size 6632
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 3.0,
+     "total_flos": 205471423937184.0,
+     "train_loss": 0.3733939985996931,
+     "train_runtime": 158.9318,
+     "train_samples_per_second": 69.237,
+     "train_steps_per_second": 8.664
+ }