SpEL (Structured prediction for Entity Linking)

SpEL is an entity linking model finetuned on English Wikipedia as well as on the training portion of the CoNLL-2003/AIDA dataset. It was introduced in the paper SpEL: Structured Prediction for Entity Linking (EMNLP 2023). The code and data are available in this repository.

Usage

The following snippet demonstrates a quick way to use SpEL to generate subword-level, word-level, and phrase-level entity annotations for a sentence.

# download SpEL from https://github.com/shavarani/SpEL
from transformers import AutoTokenizer
from spel.model import SpELAnnotator, dl_sa
from spel.configuration import device
from spel.utils import get_subword_to_word_mapping
from spel.span_annotation import WordAnnotation, PhraseAnnotation
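# select which SpEL finetuning checkpoint to load from torch hub
# (see the SpEL repository for the available finetuning steps)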
finetuned_after_step = 4
sentence = "Grace Kelly by Mika reached the top of the UK Singles Chart in 2007."
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# ############################################# LOAD SpEL #############################################################
spel = SpELAnnotator()
spel.init_model_from_scratch(device=device)
if finetuned_after_step == 3:
    spel.shrink_classification_head_to_aida(device)
spel.load_checkpoint(None, device=device, load_from_torch_hub=True, finetuned_after_step=finetuned_after_step)
# ############################################# RUN SpEL ##############################################################
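# tokenize the sentence, keeping each subword's character offsets, then
# annotate; k_for_top_k_to_keep retains the 10 highest-scoring candidate
# entities per subword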
inputs = tokenizer(sentence, return_tensors="pt")
token_offsets = list(zip(inputs.encodings[0].tokens, inputs.encodings[0].offsets))
subword_annotations = spel.annotate_subword_ids(inputs.input_ids, k_for_top_k_to_keep=10, token_offsets=token_offsets)
# #################################### CREATE WORD-LEVEL ANNOTATIONS ##################################################
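# drop special-token positions so the subword annotations align with the
# words of the raw sentence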
tokens_offsets = token_offsets[1:-1]
subword_annotations = subword_annotations[1:]
for sa in subword_annotations:
    sa.idx2tag = dl_sa.mentions_itos
word_annotations = [WordAnnotation(subword_annotations[m[0]:m[1]], tokens_offsets[m[0]:m[1]])
                    for m in get_subword_to_word_mapping(inputs.tokens(), sentence)]
# ################################## CREATE PHRASE-LEVEL ANNOTATIONS ##################################################
phrase_annotations = []
for w in word_annotations:
    if not w.annotations:
        continue
    if phrase_annotations and phrase_annotations[-1].resolved_annotation == w.resolved_annotation:
        phrase_annotations[-1].add(w)
    else:
        phrase_annotations.append(PhraseAnnotation(w))
# ################################## PRINT OUT THE CREATED ANNOTATIONS ################################################
for phrase_annotation in phrase_annotations:
    print(dl_sa.mentions_itos[phrase_annotation.resolved_annotation])
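For this example sentence, the phrase-level annotations printed at the end should resolve to Wikipedia entity identifiers such as Grace_Kelly_(song), Mika_(singer), and UK_Singles_Chart.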

Evaluation Results

Entity linking evaluation results of SpEL compared to prior work on the AIDA test sets (EL micro-F1 on test-a and test-b; a sketch of the micro-F1 computation follows the table):

| Approach | EL Micro-F1 (test-a) | EL Micro-F1 (test-b) | #params on GPU | speed (sec/doc) |
|---|---|---|---|---|
| Hoffart et al. (2011) | 72.4 | 72.8 | - | - |
| Kolitsas et al. (2018) | 89.4 | 82.4 | 330.7M | 0.097 |
| Broscheit (2019) | 86.0 | 79.3 | 495.1M | 0.613 |
| Peters et al. (2019) | 82.1 | 73.1 | - | - |
| Martins et al. (2019) | 85.2 | 81.9 | - | - |
| van Hulst et al. (2020) | 83.3 | 82.4 | 19.0M | 0.337 |
| Févry et al. (2020) | 79.7 | 76.7 | - | - |
| Poerner et al. (2020) | 90.8 | 85.0 | 131.1M | - |
| Kannan Ravi et al. (2021) | - | 83.1 | - | - |
| De Cao et al. (2021b) | - | 83.7 | 406.3M | 40.969 |
| De Cao et al. (2021a) (no mention-specific candidate set) | 61.9 | 49.4 | 124.8M | 0.268 |
| De Cao et al. (2021a) (using PPRforNED candidate set) | 90.1 | 85.5 | 124.8M | 0.194 |
| Mrini et al. (2022) | - | 85.7 | 811.5M (train) / 406.2M (test) | - |
| Zhang et al. (2022) | - | 85.8 | 1004.3M | - |
| Feng et al. (2022) | - | 86.3 | 157.3M | - |
| SpEL-base (no mention-specific candidate set) | 91.3 | 85.5 | 128.9M | 0.084 |
| SpEL-base (KB+Yago candidate set) | 90.6 | 85.7 | 128.9M | 0.158 |
| SpEL-base (PPRforNED candidate set, context-agnostic) | 91.7 | 86.8 | 128.9M | 0.153 |
| SpEL-base (PPRforNED candidate set, context-aware) | 92.7 | 88.1 | 128.9M | 0.156 |
| SpEL-large (no mention-specific candidate set) | 91.6 | 85.8 | 361.1M | 0.273 |
| SpEL-large (KB+Yago candidate set) | 90.8 | 85.7 | 361.1M | 0.267 |
| SpEL-large (PPRforNED candidate set, context-agnostic) | 92.0 | 87.3 | 361.1M | 0.268 |
| SpEL-large (PPRforNED candidate set, context-aware) | 92.9 | 88.6 | 361.1M | 0.267 |
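
The table reports span-level EL micro-F1. As a reference for how that number is computed, below is a minimal sketch of micro-F1 over (start, end, entity) annotations; the micro_f1 helper and the example gold/predicted sets are illustrative assumptions, not part of the SpEL codebase or its official evaluation script.

# Minimal illustrative sketch of span-level entity linking micro-F1.
# NOTE: micro_f1 and the example sets below are hypothetical, not SpEL code.
def micro_f1(gold, predicted):
    """Micro-averaged F1 over sets of (start, end, entity) annotations."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)   # exact span-and-entity matches
    fp = len(predicted - gold)   # spurious predictions
    fn = len(gold - predicted)   # missed gold mentions
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Two of three gold mentions recovered, plus one spurious prediction:
gold = [(0, 11, "Grace_Kelly_(song)"), (15, 19, "Mika_(singer)"), (43, 59, "UK_Singles_Chart")]
predicted = [(0, 11, "Grace_Kelly_(song)"), (15, 19, "Mika_(singer)"), (63, 67, "2007")]
print(round(micro_f1(gold, predicted), 3))  # precision = recall = 2/3, so 0.667

Micro-averaging pools true positives, false positives, and false negatives over all documents before computing precision and recall, so documents with many mentions carry more weight than under macro-averaging.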

Citation

If you use the SpEL finetuned models or data, please cite our paper:

@inproceedings{shavarani2023spel,
  title={Sp{EL}: Structured Prediction for Entity Linking},
  author={Shavarani, Hassan S. and Sarkar, Anoop},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
  year={2023},
  url={https://arxiv.org/abs/2310.14684}
}