DunnBC22 committed on
Commit
6bfb586
1 Parent(s): ab17dc7

Update README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -3,20 +3,21 @@ license: apache-2.0
 base_model: bert-base-uncased
 tags:
 - generated_from_trainer
+- Multilabel
 metrics:
 - f1
 - accuracy
+- roc_auc
 model-index:
 - name: bert-base-uncased-Research_Articles_Multilabel
   results: []
+pipeline_tag: text-classification
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # bert-base-uncased-Research_Articles_Multilabel
 
-This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
+This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased).
+
 It achieves the following results on the evaluation set:
 - Loss: 0.2039
 - F1: 0.8405
@@ -25,15 +26,15 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+The code for this model is available at: https://github.com/DunnBC22/NLP_Projects/blob/main/Multilabel%20Classification/Research%20Articles/Research%20Articles%20-%20Multilabel%20Classification%20-%20Bert-Base-Uncased.ipynb
 
 ## Intended uses & limitations
 
-More information needed
+This model can be used to classify research articles into multiple subject labels. You are welcome to use it, but do so at your own risk.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/shivanandmn/multilabel-classification-dataset
 
 ## Training procedure
 
@@ -56,10 +57,9 @@ The following hyperparameters were used during training:
 | 0.1739 | 2.0 | 4194 | 0.1986 | 0.8348 | 0.8926 | 0.7072 |
 | 0.1328 | 3.0 | 6291 | 0.2039 | 0.8405 | 0.8976 | 0.7082 |
 
-
 ### Framework versions
 
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.4
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
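
The F1, ROC AUC, and Accuracy columns in the results table are standard multilabel-classification metrics: per-label sigmoid probabilities are thresholded (commonly at 0.5) into binary predictions before scoring. The card does not state the threshold, the averaging mode, or the label set, so the sketch below is a hypothetical illustration with scikit-learn on invented toy data, assuming micro averaging and a 0.5 threshold:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

# Toy multilabel setup: 4 samples, 3 labels (hypothetical; the card
# does not name its labels or state its averaging mode).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])

# Sigmoid probabilities a model might output for each label.
y_prob = np.array([[0.9, 0.2, 0.8],
                   [0.1, 0.7, 0.3],
                   [0.8, 0.6, 0.4],
                   [0.2, 0.3, 0.9]])

# Threshold each label independently at 0.5 (assumed threshold).
y_pred = (y_prob >= 0.5).astype(int)

f1 = f1_score(y_true, y_pred, average="micro")       # uses thresholded labels
auc = roc_auc_score(y_true, y_prob, average="micro")  # uses raw probabilities
acc = accuracy_score(y_true, y_pred)                  # exact-match (subset) accuracy
print(f1, auc, acc)  # → 1.0 1.0 1.0 on this perfectly separated toy data
```

Note that `accuracy_score` on multilabel input is subset accuracy (all labels of a sample must match), which is why the Accuracy column (0.7082) sits well below F1 (0.8405) in the table above.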