---
license: mit
datasets:
- colinswaelens/DBBErt
language:
- el
metrics:
- accuracy
- f1
- recall
- precision
pipeline_tag: token-classification
---

## DBBErt

Our DBBErt model is a sub-word BERT model developed primarily for Byzantine Greek, but also for Ancient Greek. It is the only model trained not only on Ancient and Modern Greek data but also on unedited Byzantine Greek data. It has been fine-tuned for part-of-speech tagging and morphological analysis.

Pre-trained weights are made available for a standard 12-layer, 768-dimensional BERT model.

## How to use

Requirements:

```
pip install transformers
pip install flair
```

The model can be loaded directly from the Hugging Face Model Hub with:

```
from transformers import AutoTokenizer, AutoModel

tokeniser = AutoTokenizer.from_pretrained("colinswaelens/DBBErt")
model = AutoModel.from_pretrained("colinswaelens/DBBErt")
```
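As a quick sanity check that the weights load correctly, the snippet below encodes a short Greek phrase and inspects the resulting contextual sub-word embeddings. This is a minimal sketch: the example sentence is an arbitrary Ancient Greek phrase of our choosing, and `torch` is assumed to be available (it is installed as a dependency of `flair` and of the PyTorch backend of `transformers`).

```
import torch
from transformers import AutoTokenizer, AutoModel

tokeniser = AutoTokenizer.from_pretrained("colinswaelens/DBBErt")
model = AutoModel.from_pretrained("colinswaelens/DBBErt")

# Arbitrary Ancient Greek example sentence; replace with your own text
sentence = "ἐν ἀρχῇ ἦν ὁ λόγος"

# Tokenise into sub-words and run a forward pass without tracking gradients
inputs = tokeniser(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per sub-word token: shape (batch, tokens, 768)
print(outputs.last_hidden_state.shape)
```

Since `flair` is listed among the requirements, the checkpoint can also be wrapped as word-level embeddings inside flair. The sketch below assumes you want one embedding per word (flair pools the sub-word pieces for you) rather than per sub-word token:

```
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings

# Wrap the Hugging Face checkpoint as flair word embeddings
embeddings = TransformerWordEmbeddings("colinswaelens/DBBErt")

sentence = Sentence("ἐν ἀρχῇ ἦν ὁ λόγος")
embeddings.embed(sentence)

for token in sentence:
    print(token.text, token.embedding.shape)
```

## WIP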