|
|
|
Checking label assignment:

Domain: Mathematics
Categories: cs.IT math.IT
Abstract: Information embedding (IE) is the transmission of information within a host signal subject to a distor...

Domain: Computer Science
Categories: cs.CY
Abstract: According to the socio-constructivism approach, collective situations are promoted to favor learning in cl...

Domain: Physics
Categories: physics.pop-ph physics.optics
Abstract: A method is presented for generation of a subwavelength (lambda) longitudinally polarized beam which p...

Domain: Chemistry
Categories: nlin.PS
Abstract: Rolls in finite Prandtl number rotating convection with free-slip top and bottom boundary conditions ...

Domain: Statistics
Categories: stat.ME stat.CO
Abstract: In this paper we introduce a novel particle filter scheme for a class of partially-observed multivari...

Domain: Biology
Categories: q-bio.PE q-bio.CB quant-ph
Abstract: This is a supplement to the paper arXiv:q-bio containing the text of correspondence sent to Nature in...
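The spot check prints one sampled example per domain so that suspicious assignments (e.g. the Chemistry label attached to an nlin.PS abstract above) can be caught by eye. A minimal sketch of such a check, assuming a hypothetical `dataset` of dicts with `domain`, `categories`, and `abstract` keys:

```python
# Hypothetical sketch: print one truncated example per domain label.
def check_label_assignment(dataset, max_chars=100):
    seen = set()
    for example in dataset:
        domain = example["domain"]
        if domain in seen:
            continue  # one example per domain is enough for eyeballing
        seen.add(domain)
        print(f"Domain: {domain}")
        print(f"Categories: {' '.join(example['categories'])}")
        print(f"Abstract: {example['abstract'][:max_chars]}...")
        print()
```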
|
|
|
Training with All Cluster tokenizer: |
|
Vocabulary size: 16005 |
|
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge |
|
Initialized model with vocabulary size: 16005 |
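The run tries to warm-start from a local checkpoint and falls back to random initialization when deserialization fails; a HeaderTooLarge error from safetensors typically means the file is corrupt or not actually in safetensors format. A sketch of that load-with-fallback logic, assuming the Hugging Face `transformers` API (the helper name and the six-label head are assumptions):

```python
from transformers import BertConfig, BertForSequenceClassification

def init_model(vocab_size, num_labels=6,
               weights_path="/gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model"):
    # Config sized to the tokenizer under test; six domain labels assumed.
    config = BertConfig(vocab_size=vocab_size, num_labels=num_labels)
    try:
        model = BertForSequenceClassification.from_pretrained(
            weights_path, config=config, ignore_mismatched_sizes=True)
    except Exception as e:
        print(f"Could not load pretrained weights from {weights_path}. "
              f"Starting with random weights. Error: {e}")
        model = BertForSequenceClassification(config)  # random init
    print(f"Initialized model with vocabulary size: {vocab_size}")
    return model
```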
|
Batch 0 (identical diagnostics repeated at batches 100 through 900):
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 16003
Vocab size: 16005
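These per-batch lines are a sanity check that every token id stays strictly below the embedding-table size (here max id 16003 < vocab 16005), which rules out index-out-of-range failures in the embedding lookup. A sketch of the check, with the batch keys taken from the log and the 100-batch interval inferred from it:

```python
# Hypothetical helper mirroring the diagnostics printed above.
def debug_batch(batch, vocab_size, step, interval=100):
    if step % interval != 0:
        return
    print(f"Batch {step}:")
    print(f"input_ids shape: {batch['input_ids'].shape}")
    print(f"attention_mask shape: {batch['attention_mask'].shape}")
    print(f"labels shape: {batch['labels'].shape}")
    print(f"input_ids max value: {batch['input_ids'].max().item()}")
    print(f"Vocab size: {vocab_size}")
```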
|
Epoch 1/3: |
|
Train Loss: 0.9143, Train Accuracy: 0.6955 |
|
Val Loss: 0.6986, Val Accuracy: 0.7743, Val F1: 0.7502 |
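Each epoch line reports the mean training loss and accuracy, then validation loss, accuracy, and F1. A sketch of the bookkeeping that could produce them; the loop shape and the weighted F1 average are assumptions, since the log does not show the training code:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

def run_epoch(model, loader, optimizer=None, device="cuda"):
    """One pass over `loader`; trains if an optimizer is given, else evaluates."""
    training = optimizer is not None
    model.train() if training else model.eval()
    total_loss, preds, labels = 0.0, [], []
    with torch.set_grad_enabled(training):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            out = model(**batch)  # labels included, so `out.loss` is populated
            if training:
                optimizer.zero_grad()
                out.loss.backward()
                optimizer.step()
            total_loss += out.loss.item()
            preds += out.logits.argmax(dim=-1).tolist()
            labels += batch["labels"].tolist()
    return (total_loss / len(loader),
            accuracy_score(labels, preds),
            f1_score(labels, preds, average="weighted"))
```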
|
Batches 0-900 (every 100th): diagnostics unchanged (input_ids torch.Size([16, 256]), attention_mask torch.Size([16, 256]), labels torch.Size([16]), max input_id 16003, vocab size 16005).
|
Epoch 2/3: |
|
Train Loss: 0.6277, Train Accuracy: 0.7987 |
|
Val Loss: 0.6150, Val Accuracy: 0.8002, Val F1: 0.7753 |
|
Batches 0-900 (every 100th): diagnostics unchanged (max input_id 16003, vocab size 16005).
|
Epoch 3/3: |
|
Train Loss: 0.5085, Train Accuracy: 0.8373 |
|
Val Loss: 0.6998, Val Accuracy: 0.7784, Val F1: 0.7468 |
|
|
|
Test Results for All Cluster tokenizer: |
|
Accuracy: 0.7781 |
|
F1 Score: 0.7465 |
|
AUC-ROC: 0.8821 |
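Accuracy and F1 come from hard argmax predictions, while AUC-ROC needs class probabilities; with six classes, a one-vs-rest average is the usual choice (the exact averaging used in this run is not logged, so it is an assumption below):

```python
import torch
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

@torch.no_grad()
def evaluate(model, test_loader, device="cuda"):
    model.eval()
    probs, labels = [], []
    for batch in test_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        logits = model(input_ids=batch["input_ids"],
                       attention_mask=batch["attention_mask"]).logits
        probs.extend(torch.softmax(logits, dim=-1).cpu().tolist())
        labels.extend(batch["labels"].cpu().tolist())
    preds = [max(range(len(p)), key=p.__getitem__) for p in probs]
    print(f"Accuracy: {accuracy_score(labels, preds):.4f}")
    print(f"F1 Score: {f1_score(labels, preds, average='weighted'):.4f}")
    print(f"AUC-ROC: {roc_auc_score(labels, probs, multi_class='ovr'):.4f}")
```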
|
|
|
Training with Final tokenizer: |
|
Vocabulary size: 15047 |
|
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge |
|
Initialized model with vocabulary size: 15047 |
|
Batch 0 (identical diagnostics repeated at batches 100 through 900):
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 15046
Vocab size: 15047
|
Epoch 1/3: |
|
Train Loss: 0.9914, Train Accuracy: 0.6629 |
|
Val Loss: 0.8531, Val Accuracy: 0.7224, Val F1: 0.6560 |
|
Batches 0-900 (every 100th): diagnostics unchanged (max input_id 15046, vocab size 15047).
|
Epoch 2/3: |
|
Train Loss: 0.7899, Train Accuracy: 0.7359 |
|
Val Loss: 0.7491, Val Accuracy: 0.7516, Val F1: 0.7260 |
|
Batches 0-900 (every 100th): diagnostics unchanged (max input_id 15046, vocab size 15047).
|
Epoch 3/3: |
|
Train Loss: 0.6774, Train Accuracy: 0.7784 |
|
Val Loss: 0.7340, Val Accuracy: 0.7557, Val F1: 0.7386 |
|
|
|
Test Results for Final tokenizer: |
|
Accuracy: 0.7560 |
|
F1 Score: 0.7388 |
|
AUC-ROC: 0.8423 |
|
|
|
Training with General tokenizer: |
|
Vocabulary size: 16000 |
|
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge |
|
Initialized model with vocabulary size: 16000 |
|
Batches 0-900 (every 100th): shapes unchanged (input_ids torch.Size([16, 256]), attention_mask torch.Size([16, 256]), labels torch.Size([16])); max input_id varies by batch between 15901 and 15985; vocab size 16000.
|
Epoch 1/3: |
|
Train Loss: 0.8970, Train Accuracy: 0.7058 |
|
Val Loss: 0.7586, Val Accuracy: 0.7604, Val F1: 0.6892 |
|
Batches 0-900 (every 100th): shapes unchanged; max input_id varies by batch between 15873 and 15992; vocab size 16000.
|
Epoch 2/3: |
|
Train Loss: 0.6461, Train Accuracy: 0.7883 |
|
Val Loss: 0.5972, Val Accuracy: 0.8024, Val F1: 0.7585 |
|
Batches 0-900 (every 100th): shapes unchanged; max input_id varies by batch between 15871 and 15987; vocab size 16000.
|
Epoch 3/3: |
|
Train Loss: 0.5426, Train Accuracy: 0.8275 |
|
Val Loss: 0.5413, Val Accuracy: 0.8275, Val F1: 0.7986 |
|
|
|
Test Results for General tokenizer: |
|
Accuracy: 0.8281 |
|
F1 Score: 0.7992 |
|
AUC-ROC: 0.8504 |
|
|
|
Summary of Results:

Tokenizer      Accuracy  F1 Score  AUC-ROC
All Cluster    0.7781    0.7465    0.8821
Final          0.7560    0.7388    0.8423
General        0.8281    0.7992    0.8504
|
|
|
Class distribution in training set: |
|
Class Biology: 439 samples |
|
Class Chemistry: 454 samples |
|
Class Computer Science: 1358 samples |
|
Class Mathematics: 9480 samples |
|
Class Physics: 2733 samples |
|
Class Statistics: 200 samples |
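The training set is heavily skewed: Mathematics alone accounts for 9480 of the 14664 samples while Statistics has only 200, which helps explain why accuracy consistently runs ahead of F1 in the results above. A sketch of computing the distribution, plus inverse-frequency class weights as one standard mitigation (the weighting is a suggestion, not something this run shows):

```python
from collections import Counter

def class_distribution(train_labels):
    counts = Counter(train_labels)
    for cls, n in sorted(counts.items()):
        print(f"Class {cls}: {n} samples")
    total = sum(counts.values())
    # Inverse-frequency weights, usable e.g. with
    # torch.nn.CrossEntropyLoss(weight=...) to counter the imbalance.
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}
```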
|
|