---
language: en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: My name is Scott and I live in Columbus.
- text: Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne.
base_model: albert-base-v2
model-index:
- name: albert-base-v2-finetuned-ner
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - type: precision
      value: 0.9252213840603477
      name: Precision
    - type: recall
      value: 0.9329732113328189
      name: Recall
    - type: f1
      value: 0.9290811285541773
      name: F1
    - type: accuracy
      value: 0.9848205157332728
      name: Accuracy
---

# albert-base-v2-finetuned-ner

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co./albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9252
- Recall: 0.9330
- F1: 0.9291
- Accuracy: 0.9848

## Model description

This model is [albert-base-v2](https://huggingface.co./albert-base-v2) fine-tuned for token classification (NER). It tags the four CoNLL-2003 entity types: persons (PER), organizations (ORG), locations (LOC), and miscellaneous names (MISC).

## Intended uses & limitations

#### Limitations and bias

This model is limited by its training data: entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases (see the aggregation sketch after the usage example below).


#### How to use

You can use this model with the Transformers *pipeline* for NER:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")

# Build a token-classification pipeline from the fine-tuned model and tokenizer.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "My name is Scott and I live in Ohio."
ner_results = nlp(example)
print(ner_results)
```
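
As noted under limitations, the raw pipeline can emit hits on individual subword pieces. A lightweight mitigation, sketched here rather than taken from the original card, is to let the pipeline merge subword pieces into whole-word spans via `aggregation_strategy`:

```python
from transformers import pipeline

# "simple" merges consecutive pieces of the same entity into one span; each
# grouped result carries `entity_group`, `score`, `word`, `start`, and `end`.
nlp_grouped = pipeline(
    "ner",
    model="Jorgeutd/albert-base-v2-finetuned-ner",
    aggregation_strategy="simple",
)

print(nlp_grouped("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."))
```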

## Training and evaluation data

The model was fine-tuned and evaluated on the [conll2003](https://huggingface.co./datasets/conll2003) dataset, an English news corpus annotated with PER, ORG, LOC, and MISC entities; the evaluation results above are reported on its validation split.
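
For reference, a minimal sketch of inspecting that dataset with the `datasets` library (the version pinned under framework versions below):

```python
from datasets import load_dataset

# CoNLL-2003 ships pre-tokenized sentences with aligned NER tag ids.
dataset = load_dataset("conll2003")

print(dataset["train"][0]["tokens"])
# Label vocabulary used for token classification:
print(dataset["train"].features["ner_tags"].feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
```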

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
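
A minimal sketch of equivalent `TrainingArguments`, assuming a standard `Trainer`-based run; the output directory and per-epoch evaluation are assumptions, inferred from the model name and the per-epoch rows in the results table:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-base-v2-finetuned-ner",  # assumption: inferred from the model name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,          # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
```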

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 220  | 0.0863          | 0.8827    | 0.8969 | 0.8898 | 0.9773   |
| No log        | 2.0   | 440  | 0.0652          | 0.8951    | 0.9199 | 0.9073 | 0.9809   |
| 0.1243        | 3.0   | 660  | 0.0626          | 0.9191    | 0.9208 | 0.9200 | 0.9827   |
| 0.1243        | 4.0   | 880  | 0.0585          | 0.9227    | 0.9281 | 0.9254 | 0.9843   |
| 0.0299        | 5.0   | 1100 | 0.0626          | 0.9252    | 0.9330 | 0.9291 | 0.9848   |
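
The precision/recall/F1 columns above are entity-level scores of the kind `seqeval` reports. A self-contained sketch of computing them, using toy labels rather than the model's actual predictions (`seqeval` must be installed):

```python
from datasets import load_metric

metric = load_metric("seqeval")  # requires: pip install seqeval

# Toy predictions/references: one sentence, one label string per token.
predictions = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]

results = metric.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```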


### Framework versions

- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0