NuNER-BERT-v1.0
This is the BERT model from our paper NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data. It is the model used in Section 4.2 when comparing against TadNER; for the other sections, NuNER v1.0 is used.
Check out other models by NuMind at https://huggingface.co./numind.

This model is bert-base-uncased fine-tuned on NuNER data.
Metrics:
Read more about the evaluation protocol and datasets in Section 4.2 of our paper.
Embeddings can be used out of the box or fine-tuned on specific datasets.
Get embeddings:
import torch
import transformers

model = transformers.AutoModel.from_pretrained(
    'numind/NuNER-BERT-v1.0',
    output_hidden_states=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/NuNER-BERT-v1.0'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co./numind"
]

encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)

with torch.no_grad():  # inference only, so gradients are not needed
    output = model(**encoded_input)

# For better quality: concatenate the last hidden layer with the
# seventh-from-last layer, giving shape (batch, seq_len, 2 * hidden_size).
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2
)

# For better speed: use the last hidden layer alone.
# emb = output.hidden_states[-1]
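The resulting emb tensor has shape (batch, seq_len, 2 * hidden_size) and includes padding positions. As a minimal continuation of the snippet above (reusing its emb and encoded_input variables), the attention mask can be used to keep only real tokens:

# Boolean mask of non-padding positions for the first sentence.
first_mask = encoded_input['attention_mask'][0].bool()

# Per-token embeddings of the first sentence, padding excluded:
# shape (num_real_tokens, 2 * hidden_size).
first_sentence_emb = emb[0][first_mask]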
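For the fine-tuning route mentioned above, the backbone can also be loaded with a token-classification head. This is only a sketch: the label set below is a hypothetical placeholder, and the actual training loop (e.g. with transformers.Trainer) depends on your dataset.

import transformers

# Hypothetical label set; replace with the labels of your own dataset.
labels = ["O", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

model = transformers.AutoModelForTokenClassification.from_pretrained(
    'numind/NuNER-BERT-v1.0',
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# The classification head is freshly initialized; train it (and optionally
# the backbone) on your annotated data as usual.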
Citation:
@misc{bogdanov2024nuner,
    title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
    author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
    year={2024},
    eprint={2402.15343},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}