TFG Collection
Datasets and models leveraged and developed during my final degree work (TFG). Info and code can be found at https://github.com/enriquesaou/tfg-lm-qa
This model is a fine-tuned version of google-t5/t5-small on an MRQA sample. It achieves the following results on the evaluation set:
- Loss: 0.8647

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
### Training hyperparameters

The following hyperparameters were used during training:
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.9991 | 357  | 0.9669          |
| 1.0947        | 1.9981 | 714  | 0.9170          |
| 0.9558        | 3.0    | 1072 | 0.8990          |
| 0.9558        | 3.9991 | 1429 | 0.8855          |
| 0.9023        | 4.9981 | 1786 | 0.8680          |
| 0.8684        | 6.0    | 2144 | 0.8680          |
| 0.8542        | 6.9991 | 2501 | 0.8668          |
| 0.8542        | 7.9925 | 2856 | 0.8647          |
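The card does not show how MRQA examples were serialized for the model. Below is a minimal preprocessing sketch, assuming the common `question: … context: …` input convention used when fine-tuning T5 on extractive QA; the helper name and exact format are illustrative and not taken from the repository:

```python
# Sketch: serialize an MRQA-style example into input/target strings for
# seq2seq fine-tuning. The "question: ... context: ..." layout is an
# assumed convention for T5 QA, not confirmed by this card.

def format_example(question: str, context: str, answer: str) -> tuple[str, str]:
    """Return (input_text, target_text) for seq2seq training."""
    input_text = f"question: {question.strip()} context: {context.strip()}"
    target_text = answer.strip()
    return input_text, target_text

if __name__ == "__main__":
    src, tgt = format_example(
        "Who developed T5?",
        "T5 was developed by Google Research.",
        "Google Research",
    )
    print(src)
    print(tgt)
```

Pairs produced this way can be tokenized with the t5-small tokenizer and fed to a standard seq2seq trainer.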