Chinese-Mixtral-Instruct-LoRA
Chinese Mixtral GitHub repository: https://github.com/ymcui/Chinese-Mixtral
This repository contains Chinese-Mixtral-Instruct-LoRA, which is obtained by further tuning Chinese-Mixtral on instruction data; Chinese-Mixtral itself is built on top of Mixtral-8x7B-v0.1.
Note: You must combine this LoRA with the original Mixtral-8x7B-v0.1 to obtain the full model weights.
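For reference, the following is a minimal sketch of how the merge could be done with Hugging Face transformers and PEFT. The model identifiers come from this card; the dtype, device_map, and output path are illustrative assumptions, and the scripts in the GitHub repository above remain the authoritative reference.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the original base model (requires substantial GPU/CPU memory).
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

# Attach the LoRA adapter and merge it into the base weights.
model = PeftModel.from_pretrained(base, "hfl/chinese-mixtral-instruct-lora")
model = model.merge_and_unload()

# Save the merged full-weight model for later use ("chinese-mixtral-instruct-merged" is a placeholder path).
model.save_pretrained("chinese-mixtral-instruct-merged")
tokenizer.save_pretrained("chinese-mixtral-instruct-merged")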
Others
For the full model, please see: https://huggingface.co./hfl/chinese-mixtral-instruct
For the GGUF model (llama.cpp compatible), please see: https://huggingface.co./hfl/chinese-mixtral-instruct-gguf
If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-Mixtral/.
Citation
Please consider citing our paper if you use the resources of this repository. Paper link: https://arxiv.org/abs/2403.01851
@article{chinese-mixtral,
title={Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral},
author={Cui, Yiming and Yao, Xin},
journal={arXiv preprint arXiv:2403.01851},
url={https://arxiv.org/abs/2403.01851},
year={2024}
}