Commit 52ba14c (1 parent: 6714a6c), committed by gonzalez-agirre

Update README.md
Files changed (1): README.md (+98, -10)

README.md:

---
language:
- es
license: apache-2.0
tags:
- "legal"
- "spanish"
datasets:
- "legal_ES"
- "temu_legal"
metrics:
- "ppl"
widget:
- text: "La ley fue <mask> finalmente."
- text: "El Tribunal <mask> desestimó el recurso de amparo."
- text: "Hay base legal dentro del marco <mask> actual."
---

# RoBERTa base trained with Spanish Legal Domain Corpora

## Table of contents
<details>
<summary>Click to expand</summary>

- [Overview](#overview)
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
  - [Training data](#training-data)
  - [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citation Information](#citation-information)
  - [Disclaimer](#disclaimer)

</details>

## Overview
- **Architecture:** roberta-base
- **Language:** Spanish
- **Task:** fill-mask
- **Data:** Legal

## Model description
**RoBERTalex** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained on a large [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529) with a total of 8.9GB of text.

## Intended uses and limitations
The **RoBERTalex** model is ready to use only for masked language modeling, i.e. the Fill Mask task (try the inference API or see the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. In short, you can use the raw model for fill-mask or fine-tune it on a downstream task, as sketched below.
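
As a rough illustration of the fine-tuning path, the checkpoint can be loaded with a task-specific head. This is only a minimal sketch: the `num_labels=2` classification head is an illustrative assumption, and the actual training loop (e.g. with `transformers.Trainer`) is left to the user.

```python
# Minimal sketch: load RoBERTalex with a (randomly initialised) text
# classification head for later fine-tuning. num_labels=2 is an assumption
# for illustration; it is not part of the released checkpoint.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/RoBERTalex")
model = AutoModelForSequenceClassification.from_pretrained(
    "PlanTL-GOB-ES/RoBERTalex", num_labels=2
)
# fine-tune `model` on your own labelled Spanish legal data from here
```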
 
## How to use
Here is how to use this model with the fill-mask pipeline:

```python
>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/RoBERTalex')
>>> pprint(unmasker("La ley fue <mask> finalmente."))
[{'score': 0.21217258274555206,
  'sequence': ' La ley fue modificada finalmente.',
  'token': 5781,
  'token_str': ' modificada'},
 {'score': 0.20414969325065613,
  'sequence': ' La ley fue derogada finalmente.',
  'token': 15951,
  'token_str': ' derogada'},
 {'score': 0.19272951781749725,
  'sequence': ' La ley fue aprobada finalmente.',
  'token': 5534,
  'token_str': ' aprobada'},
 {'score': 0.061143241822719574,
  'sequence': ' La ley fue revisada finalmente.',
  'token': 14192,
  'token_str': ' revisada'},
 {'score': 0.041809432208538055,
  'sequence': ' La ley fue aplicada finalmente.',
  'token': 12208,
  'token_str': ' aplicada'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
>>> from transformers import RobertaTokenizer, RobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/RoBERTalex')
>>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/RoBERTalex')
>>> text = "Gracias a los datos legales se ha podido desarrollar este modelo del lenguaje."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 16, 768])
```
 
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

## Training

### Training data
The [Spanish Legal Domain Corpora](https://zenodo.org/record/5495529) comprise multiple digital resources, with a total of 8.9GB of textual data. Part of the data was obtained from [previous work](https://aclanthology.org/2020.lt4gov-1.6/). To obtain a high-quality training corpus, the data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of badly formed sentences, and deduplication of repetitive content. Document boundaries were kept during the process; a minimal sketch of this kind of cleaning pipeline is shown below.
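
The following is an illustrative sketch only, not the preprocessing code actually used for RoBERTalex: the sentence-splitting heuristic, filtering thresholds, and hash-based exact deduplication are assumptions, and the language-detection step is omitted for brevity.

```python
# Illustrative corpus-cleaning sketch (not the RoBERTalex pipeline):
# naive sentence splitting, filtering of badly formed sentences, and
# exact deduplication by sentence hash, keeping document boundaries.
import hashlib
import re

def clean_corpus(documents):
    seen = set()  # hashes of sentences already emitted
    for doc in documents:
        kept = []
        # naive sentence splitting on end-of-sentence punctuation
        for sent in re.split(r"(?<=[.!?])\s+", doc.strip()):
            sent = sent.strip()
            if len(sent) < 10:
                continue  # too short, likely noise
            if sum(c.isalpha() for c in sent) / len(sent) < 0.5:
                continue  # mostly non-alphabetic, likely badly formed
            h = hashlib.md5(sent.lower().encode("utf-8")).hexdigest()
            if h in seen:
                continue  # duplicate sentence
            seen.add(h)
            kept.append(sent)
        if kept:
            yield "\n".join(kept)  # one cleaned document per input document
```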

### Training procedure
The training corpus has been tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [RoBERTa](https://arxiv.org/abs/1907.11692) model, with a vocabulary size of 50,262 tokens.

The **RoBERTalex** pre-training consists of masked language model training following the approach employed for RoBERTa base. The model was trained until convergence on 2 computing nodes, each with 4 NVIDIA V100 GPUs of 16GB VRAM.
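
For the tokenization step only, a byte-level BPE tokenizer with this vocabulary size could be trained roughly as follows. This is a sketch under assumptions: the corpus path, `min_frequency`, and special-token list are illustrative and do not come from the project's actual training scripts.

```python
# Sketch: training a byte-level BPE tokenizer with a 50,262-token vocabulary
# using the Hugging Face `tokenizers` library. Paths and thresholds are assumed.
import os
from tokenizers import ByteLevelBPETokenizer

os.makedirs("tokenizer_out", exist_ok=True)
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],  # assumed path to the plain-text training corpus
    vocab_size=50262,      # vocabulary size reported above
    min_frequency=2,       # illustrative threshold
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("tokenizer_out")  # writes vocab.json and merges.txt
```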

## Evaluation

Due to the lack of domain-specific evaluation data, the model was evaluated on general-domain tasks, where it obtains reasonable performance. We fine-tuned the model on the following tasks:

| Dataset      | Metric   | **RoBERTalex** |
|--------------|----------|----------------|
| UD-POS       | F1       | 0.9871         |
| CoNLL-NERC   | F1       | 0.8323         |
| CAPITEL-POS  | F1       | 0.9788         |
| CAPITEL-NERC | F1       | 0.8394         |
| STS          | Combined | 0.7374         |
| MLDoc        | Accuracy | 0.9417         |
| PAWS-X       | F1       | 0.7304         |
| XNLI         | Accuracy | 0.7337         |

## Additional information

### Author