Portuguese NER - TempClinBr - BioBERTpt(all)
Trained with BioBERTpt(all) on the TempClinBr corpus.
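A minimal inference sketch with the Hugging Face transformers library, assuming the fine-tuned checkpoint was saved to the local `model` directory used as `write_path` below; the directory name and the example sentence are placeholders, not a published repository id:

```python
# Minimal usage sketch (assumption: the fine-tuned checkpoint is available
# locally under "model"; swap in the actual path or Hub id).
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_dir = "model"  # placeholder path to the fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)
print(ner("Paciente com dor torácica, encaminhado para avaliação cardiológica."))
```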
Metrics:
(Label ids follow the tag2id mapping listed further below.)

| Label | Tag | Precision | Recall | F1-score | Support |
|---|---|---|---|---|---|
| 0 | B-Tratamento | 0.75 | 0.90 | 0.82 | 291 |
| 1 | I-Teste | 0.77 | 1.00 | 0.87 | 33 |
| 2 | I-Ocorrencia | 1.00 | 0.25 | 0.40 | 28 |
| 3 | B-Evidencia | 0.90 | 0.99 | 0.94 | 71 |
| 4 | B-Teste | 0.79 | 0.91 | 0.85 | 112 |
| 5 | I-Problema | 0.72 | 0.83 | 0.77 | 420 |
| 6 | B-DepartamentoClinico | 0.62 | 0.45 | 0.53 | 11 |
| 7 | O | 0.96 | 0.85 | 0.91 | 2236 |
| 8 | I-Tratamento | 0.61 | 0.67 | 0.64 | 78 |
| 9 | B-Ocorrencia | 0.61 | 0.98 | 0.76 | 124 |
| 10 | B-Problema | 0.81 | 0.87 | 0.84 | 503 |
| 11 | I-DepartamentoClinico | 0.67 | 0.60 | 0.63 | 10 |
| accuracy | | | | 0.86 | 3917 |
| macro avg | | 0.77 | 0.78 | 0.74 | 3917 |
| weighted avg | | 0.87 | 0.86 | 0.86 | 3917 |
F1: 0.8588744393393593 Accuracy: 0.8565228491192239
Parameters:
- device = cuda (Colab)
- nclasses = len(tag2id)
- nepochs = 50 (early stopping ended training at epoch 9)
- batch_size = 16
- batch_status = 32
- learning_rate = 3e-5
- early_stop = 5
- max_length = 256
- write_path = 'model'
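The original training loop is not reproduced in this card; the sketch below only illustrates how these hyperparameters could be wired into a standard PyTorch fine-tuning setup, assuming the base checkpoint is `pucpr/biobertpt-all` on the Hugging Face Hub and an AdamW optimizer (both assumptions):

```python
# Illustrative sketch: values mirror the parameter list above; the base
# checkpoint id and optimizer choice are assumptions, not confirmed here.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nclasses = 13           # len(tag2id), see the mapping below
batch_size = 16
learning_rate = 3e-5
max_length = 256        # maximum sequence length in tokens
early_stop = 5          # patience: epochs without improvement
write_path = "model"    # where the best checkpoint is written

tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-all")
model = AutoModelForTokenClassification.from_pretrained(
    "pucpr/biobertpt-all", num_labels=nclasses
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
```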
Evaluation on the TempClinBr test set. Note: the evaluation includes the "O" tag (label 7); if needed, compute the averages without that tag (see the sketch after the tag2id mapping below).
```python
tag2id = {'B-Tratamento': 0,
          'I-Teste': 1,
          'I-Ocorrencia': 2,
          'B-Evidencia': 3,
          'B-Teste': 4,
          'I-Problema': 5,
          'B-DepartamentoClinico': 6,
          'O': 7,
          'I-Tratamento': 8,
          'B-Ocorrencia': 9,
          'B-Problema': 10,
          'I-DepartamentoClinico': 11,
          '<pad>': 12}
```
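As mentioned in the note above, the averages can also be computed without the "O" tag. A short sketch with scikit-learn, assuming `y_true` and `y_pred` are flat lists of gold and predicted label ids over the test tokens (the tiny lists below are placeholders):

```python
# Sketch: y_true / y_pred are placeholders; replace them with the flat lists
# of gold and predicted label ids produced on the test set.
from sklearn.metrics import classification_report, f1_score

y_true = [7, 10, 5, 7, 0]   # placeholder gold label ids
y_pred = [7, 10, 5, 0, 0]   # placeholder predicted label ids

# Keep only entity labels: drop "O" (7) and "<pad>" (12) before averaging.
entity_labels = [i for tag, i in tag2id.items() if tag not in ("O", "<pad>")]

print(classification_report(y_true, y_pred, labels=entity_labels, zero_division=0))
print("macro F1 without 'O':",
      f1_score(y_true, y_pred, labels=entity_labels, average="macro", zero_division=0))
```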
| Label | Tag | Precision | Recall | F1-score | Support |
|---|---|---|---|---|---|
| 0 | B-Tratamento | 0.82 | 0.92 | 0.87 | 261 |
| 1 | I-Teste | 0.81 | 0.58 | 0.67 | 99 |
| 2 | I-Ocorrencia | 0.56 | 0.20 | 0.29 | 51 |
| 3 | B-Evidencia | 1.00 | 0.94 | 0.97 | 128 |
| 4 | B-Teste | 0.81 | 0.86 | 0.83 | 194 |
| 5 | I-Problema | 0.81 | 0.87 | 0.84 | 645 |
| 6 | B-DepartamentoClinico | 0.96 | 0.80 | 0.87 | 30 |
| 7 | O | 0.95 | 0.90 | 0.93 | 2431 |
| 8 | I-Tratamento | 0.73 | 0.81 | 0.77 | 146 |
| 9 | B-Ocorrencia | 0.74 | 0.88 | 0.80 | 146 |
| 10 | B-Problema | 0.87 | 0.95 | 0.91 | 713 |
| 11 | I-DepartamentoClinico | 0.83 | 0.71 | 0.77 | 14 |
| 12 | `<pad>` | 0.00 | 0.00 | 0.00 | 0 |
| accuracy | | | | 0.89 | 4858 |
| macro avg | | 0.76 | 0.72 | 0.73 | 4858 |
| weighted avg | | 0.89 | 0.89 | 0.89 | 4858 |
How to cite: coming soon.