t1101675 committed
Commit 701e8a5 (parent: 0a2af45)

Update README.md

Files changed (1)
1. README.md +9 -2
README.md CHANGED
@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 
 # Pretrain-Qwen-500M
 
-[paper]() | [code](https://github.com/thu-coai/MiniPLM)
+[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
 **Pretrain-Qwen-500M** is a 500M-parameter model with the Qwen architecture, conventionally pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) for 50B tokens.
 
@@ -34,4 +34,11 @@ MiniPLM models achieve better performance given the same computation and scale
 
 ## Citation
 
-TODO
+```bibtex
+@article{miniplm,
+  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
+  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
+  journal={arXiv preprint arXiv:2410.17215},
+  year={2024}
+}
+```
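For reference, a minimal sketch of loading the checkpoint this card describes with Hugging Face `transformers` and sampling a short continuation. The repo id `MiniLLM/Pretrain-Qwen-500M` is an assumption (the commit does not show the hosting path); the rest is the standard `AutoModelForCausalLM` API.

```python
# Minimal usage sketch, not from the model card itself.
# The repo id below is an assumption; substitute the actual Hugging Face path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/Pretrain-Qwen-500M"  # assumed repo id

# Note: older Qwen-architecture checkpoints may need trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy-decode a short continuation to sanity-check the pre-trained weights.
inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```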