---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- facebook/opt-6.7B
pipeline_tag: text-generation
---
# KD-OPT-6.7B
[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
**KD-OPT-6.7B** is a model distilled from [OPT-13B](https://huggingface.co./MiniLLM/teacher-OPT-13B) on [databricks-dolly-15k](https://huggingface.co./datasets/aisquared/databricks-dolly-15k) using a token-level forward KL divergence (KLD) objective.
It is used as a baseline for [MiniLLM](https://huggingface.co./MiniLLM/MiniLLM-OPT-6.7B).
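As a rough illustration of the objective, the sketch below computes a token-level forward KLD loss, i.e. KL(p_teacher || p_student) per response token, averaged over unmasked positions. This is a minimal PyTorch sketch, not the exact MiniLLM training code; the function name, tensor shapes, and masking scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def forward_kld_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Token-level forward KLD: KL(p_teacher || p_student).
    Shapes (assumed): logits [batch, seq, vocab]; mask [batch, seq],
    with 1 marking response tokens that contribute to the loss."""
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1)
    student_logprobs = F.log_softmax(student_logits, dim=-1)
    # Per-token KL: sum over the vocabulary of p_t * (log p_t - log p_s)
    kld = (teacher_probs * (teacher_logprobs - student_logprobs)).sum(dim=-1)
    # Average over response tokens only
    return (kld * mask).sum() / mask.sum()
```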
## Other Baselines
+ [SFT w/o KD](https://huggingface.co./MiniLLM/SFT-OPT-6.7B)
+ [SeqKD](https://huggingface.co./MiniLLM/SeqKD-OPT-6.7B)
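## Usage
A minimal text-generation sketch with the 🤗 Transformers API. The repository id is assumed to follow the same `MiniLLM/...` pattern as the baselines above, and the prompt format is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, following the naming pattern of the other baselines.
model_id = "MiniLLM/KD-OPT-6.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```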
## Citation
```bibtex
@inproceedings{minillm,
title={MiniLLM: Knowledge Distillation of Large Language Models},
author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
booktitle={Proceedings of ICLR},
year={2024}
}
```