---
library_name: transformers
license: apache-2.0
datasets:
  - monology/pile-uncopyrighted
  - MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
  - en
metrics:
  - accuracy
pipeline_tag: text-generation
---

# MiniPLM-Qwen-500M

[paper](https://arxiv.org/abs/2410.17215) | code

MiniPLM-Qwen-500M is a 500M-parameter model with the Qwen architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework, with the official Qwen1.5-1.8B model as the teacher.
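
A minimal generation sketch with the transformers library is shown below. Note that the repository id `MiniLLM/MiniPLM-Qwen-500M` is an assumption inferred from the dataset namespace above; adjust it to the actual hub path if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: the repo id below is an assumption (the MiniLLM namespace is
# inferred from the dataset paths in this card); replace it with the
# actual hub path if needed.
model_name = "MiniLLM/MiniPLM-Qwen-500M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and sample a short continuation.
inputs = tokenizer("The Pile is a large, diverse corpus that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```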

We also open-source the pre-training corpus refined by Difference Sampling in MiniPLM for reproducibility.
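
The refined corpus can be loaded directly with the datasets library; a minimal sketch follows, assuming the data lives under the default `train` split (check the dataset card for the actual split names and column layout).

```python
from datasets import load_dataset

# Stream the Difference-Sampling-refined pre-training corpus.
# NOTE: split="train" is an assumption; verify on the dataset card.
refined = load_dataset(
    "MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
    split="train",
    streaming=True,
)

# Peek at the first example to inspect the schema.
print(next(iter(refined)))
```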

## Evaluation

MiniPLM models achieve better performance under the same training compute and scale well across model sizes.
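
One way to reproduce accuracy-style comparisons like these is the lm-evaluation-harness; the sketch below is only an illustration under assumptions (the task list, repo id, and batch size are not the paper's official evaluation configuration).

```python
import lm_eval

# Hypothetical evaluation sketch with lm-evaluation-harness (v0.4+).
# The repo id and task list are illustrative assumptions, not the
# official MiniPLM evaluation setup.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MiniLLM/MiniPLM-Qwen-500M",
    tasks=["hellaswag", "arc_easy"],
    batch_size=8,
)
print(results["results"])
```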

## Baseline Models

## Citation

```bibtex
@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}
```