Commit 2eec0c2 · Parent: ff1a1ea · Update README.md

README.md CHANGED
@@ -10,7 +10,7 @@ task_categories:
 - question-answering
 ---
 
-PhEnText Detoxed is a large-scale, multi-domain lexical dataset written in Philippine English and Taglish. The news articles, religious articles, and court decisions collated by the [original researchers](https://ieeexplore.ieee.org/abstract/document/9923429) were filtered for toxicity, and special characters were further preprocessed. This dataset has been configured for easy fine-tuning of LLaMA-based models (Alpaca, Guanaco, Vicuna, LLaMA 2, etc.).
+PhEnText Detoxed is a large-scale, multi-domain lexical dataset written in Philippine English and Taglish. The news articles, religious articles, and court decisions collated by the [original researchers](https://ieeexplore.ieee.org/abstract/document/9923429) were filtered for toxicity, and special characters were further preprocessed. This dataset has been configured for easy fine-tuning of LLaMA-based models (Alpaca, Guanaco, Vicuna, LLaMA 2, etc.). In total, this dataset contains 6.29 million rows of training data and 2.7 million rows of testing data.
 
 ## Sources
 According to Canon et al. (2022), here is the original breakdown of the dataset sources:
@@ -36,6 +36,8 @@ After training/fine-tuning a model on this dataset, it is important to take note
 
 4. **NSFW Content:** The data has already been detoxified; however, it may still contain sensitive topics including violence, gore, and sexual content. If you plan to further refine your model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
 
+5. **Timeliness:** The data's cutoff date is December 2021. The data must not be used to generate content that heavily relies on events after the cutoff date.
+
 ## References
 ```bibtex
 @INPROCEEDINGS{9923429,
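The card above says the dataset is configured for easy fine-tuning of LLaMA-based models. A minimal sketch of preparing one row for causal-LM training follows; the repository id and the `text` column name are assumptions for illustration, not taken from the card:

```python
# Minimal sketch: format one dataset row for causal-LM fine-tuning by
# appending the tokenizer's EOS token, so packed examples stay separated.
def format_row(text: str, eos_token: str = "</s>") -> str:
    """Strip stray whitespace and terminate the example with EOS."""
    return text.strip() + eos_token

# Hypothetical usage with the Hugging Face `datasets` library
# (repo id and column name are guesses, not from the card):
# from datasets import load_dataset
# ds = load_dataset("some-org/phentext-detoxed", split="train")
# train_texts = [format_row(row["text"]) for row in ds]

print(format_row("Sample Philippine English sentence. "))
```

The EOS suffix matters when examples are concatenated into fixed-length blocks during fine-tuning, since it marks document boundaries for the model.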