We have yet to thoroughly investigate the potential bias inherent in this model. Regarding transparency, it is important to note that the model is trained primarily on Unicode Burmese (Myanmar) language data.

### References and Citations

1. Jiang, Shengyi & Huang, Xiuwen & Cai, Xiaonan & Lin, Nankai. (2021). Pre-trained Models and Evaluation Data for the Myanmar Language. 10.1007/978-3-030-92310-5_52.
2. Lin, N., Fu, Y., Chen, C., Yang, Z., & Jiang, S. (2021). LaoPLM: Pre-trained Language Models for Lao. ArXiv. /abs/2110.05896