---
library_name: transformers
license: apache-2.0
datasets:
  - monology/pile-uncopyrighted
  - MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
  - en
metrics:
  - accuracy
pipeline_tag: text-generation
---

# MiniPLM-Mamba-130M

[paper](https://arxiv.org/abs/2410.17215) | code

MiniPLM-Mamba-130M is a 130M-parameter model with the Mamba architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework with the official Qwen1.5-1.8B model as the teacher. This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
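
The sketch below shows one way to run the model for text generation with the Transformers library. It assumes the checkpoint is hosted under the hub id `MiniLLM/MiniPLM-Mamba-130M` (inferred from this repository's name), ships its own tokenizer, and loads through `AutoModelForCausalLM`; adjust the id if your copy lives elsewhere.

```python
# Minimal generation sketch; the hub id below is an assumption, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Mamba-130M"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Mamba architecture is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```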

We also open-source the pre-training corpus refined by Difference Sampling in MiniPLM for reproducibility.
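
To inspect that corpus, the sketch below streams it with the Datasets library using the dataset id listed in the metadata above; the `train` split name is an assumption and may differ on the hub.

```python
# Stream the Difference Sampling corpus instead of downloading it in full.
from datasets import load_dataset

corpus = load_dataset(
    "MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
    split="train",       # assumed split name
    streaming=True,       # the corpus is large; iterate lazily
)
print(next(iter(corpus)))
```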

## Evaluation

MiniPLM models achieve better performance given the same computation and scale well across model sizes:

## Baseline Models

## Citation

@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models}, 
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}