Update model card
README.md
CHANGED
@@ -11,12 +11,16 @@ datasets:
 Proc-RoBERTa is a pre-trained language model for procedural text. It was built by fine-tuning RoBERTa on a procedural corpus (PubMed articles, chemical patents, and cooking recipes) containing 1.05B tokens. More details can be found in the following [paper](https://arxiv.org/abs/2109.04711):
 
 ```
-@
-
-
-
-
-
+@inproceedings{bai-etal-2021-pre,
+    title = "Pre-train or Annotate? Domain Adaptation with a Constrained Budget",
+    author = "Bai, Fan and
+      Ritter, Alan and
+      Xu, Wei",
+    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
+    month = nov,
+    year = "2021",
+    address = "Online and Punta Cana, Dominican Republic",
+    publisher = "Association for Computational Linguistics",
 }
 ```
 
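For context on the card being updated: below is a minimal usage sketch with the Hugging Face `transformers` library. The model identifier `fbaigt/proc_roberta` is an assumption (the diff above does not state it), so verify the exact id on the model page.

```python
# Minimal usage sketch for Proc-RoBERTa with Hugging Face transformers.
# NOTE: the model id "fbaigt/proc_roberta" is assumed, not confirmed by this card.
from transformers import AutoModel, AutoTokenizer

model_id = "fbaigt/proc_roberta"  # hypothetical id; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a procedural sentence and obtain contextual token embeddings.
text = "Add 2 mL of HCl to the flask and stir for 5 minutes."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```

The resulting embeddings can then be fed to a task head (e.g. for entity tagging in procedural text), as in the downstream experiments of the cited paper.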