--- language: - "la" tags: - "latin" - "token-classification" - "pos" - "dependency-parsing" base_model: ClassCat/roberta-base-latin-v2 datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" widget: - text: "deus videt te non sentientem" --- # roberta-base-latin-ud-goeswith ## Model Description This is a RoBERTa model pre-trained on CC-100 Latin texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-latin-v2](https://huggingface.co./ClassCat/roberta-base-latin-v2). ## How to Use ```py from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-latin-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple") print(nlp("deus videt te non sentientem")) ```