Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ pipeline_tag: text-generation
 
 [paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
-**MiniPLM-llama3.1-212M** is a 212M model with the [LLaMA3.1 architecture](https://arxiv.org/abs/2407.21783) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [official
+**MiniPLM-llama3.1-212M** is a 212M model with the [LLaMA3.1 architecture](https://arxiv.org/abs/2407.21783) pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
 
 This model shows the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
 
 We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
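The completed model card describes an ordinary Hugging Face checkpoint, so loading it follows the usual `transformers` pattern. Below is a minimal sketch, not part of this commit; the Hub repo id `MiniLLM/MiniPLM-llama3.1-212M` is an assumption inferred from the MiniLLM namespace used for the pre-training corpus link above.

```python
# Minimal sketch: load the distilled LLaMA3.1-architecture model and sample from it.
# Assumption: the checkpoint lives at "MiniLLM/MiniPLM-llama3.1-212M" on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiniLLM/MiniPLM-llama3.1-212M"  # hypothetical repo id; adjust to the actual model page
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation as a quick sanity check of the checkpoint.
inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For reproduction, the Difference-Sampling corpus linked above can likewise be pulled from the Hub with `datasets.load_dataset`.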