Update README.md
README.md CHANGED
@@ -16,6 +16,14 @@ widget:
During the model training process, a masked language modeling approach was used with a token masking probability of 15%. The training was performed for a single epoch, i.e., the entire dataset was passed through the model once.

+👨‍💻 **Model Use**
+
+```python
+from transformers import pipeline
+model = pipeline('fill-mask', model='parlbert-german')
+model("Diese Themen gehören nicht ins [MASK].")
+```
+
⚠️ **Limitations**

Models are often highly domain dependent. Therefore, the model may perform less well on different domains and text types not included in the training set.
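
For readers who want to see what the training setup described above looks like in practice, here is a minimal sketch using the Hugging Face `transformers` Trainer API. Only the 15% masking probability and the single training epoch come from the README text; the base checkpoint (`bert-base-german-cased`), the plain-text corpus file, the tokenization settings, and the batch size are illustrative assumptions, not details taken from this commit.

```python
# Illustrative sketch: masked language modeling with 15% token masking for one epoch.
# Assumed (not from this commit): base checkpoint, corpus file, max length, batch size.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "bert-base-german-cased"  # assumed starting checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Hypothetical plain-text corpus, one document per line.
raw = load_dataset("text", data_files={"train": "speeches.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_dataset = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# The collator randomly masks 15% of the tokens in each batch, as stated in the README.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir="parlbert-german",
    num_train_epochs=1,               # a single pass over the whole dataset
    per_device_train_batch_size=16,   # assumed value
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.train()
```

After training, the resulting checkpoint could be loaded with the `fill-mask` pipeline exactly as shown in the **Model Use** snippet added by this commit.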