---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co./Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
- ko
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-14B-Chat

## Introduction
Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
- 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
- Significant performance improvement in human preference for chat models;
- Multilingual support of both base and chat models;
- Stable support of 32K context length for models of all sizes;
- No need of `trust_remote_code`.
For more details, please refer to our blog post and GitHub repo.
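Because Qwen1.5 is natively supported by `transformers`, the chat model loads without `trust_remote_code`. The snippet below is a minimal sketch of loading the model and generating a response; the example messages and the `max_new_tokens` value are illustrative choices, not prescribed settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the chat model and tokenizer; no trust_remote_code is needed
# with transformers>=4.37.0.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-14B-Chat",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-14B-Chat")

# Build the prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
generated_ids = [
    output[len(input_ids):]
    for input_ids, output in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```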
## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily not included GQA and the mixture of SWA and full attention.
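As a quick sanity check, several of the architecture details above can be read directly off the published model config. The snippet below is an illustrative sketch using standard `transformers` config attributes; it downloads only the config, not the weights.

```python
from transformers import AutoConfig

# Fetch just the model configuration from the Hub.
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-14B-Chat")

print(config.hidden_act)               # "silu", the gate activation used by SwiGLU
print(config.max_position_embeddings)  # supported context length
print(config.num_attention_heads,      # equal head counts indicate standard
      config.num_key_value_heads)      # multi-head attention rather than GQA
```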
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization (DPO). However, DPO leads to improvements in human preference evaluation but to degradation in benchmark evaluation; we will address this trade-off in the near future.
## Requirements
The code of Qwen1.5 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
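To confirm your environment is recent enough before loading the model, a minimal check (a sketch using the `packaging` utility, which `transformers` itself depends on):

```python
from packaging import version
import transformers

# Qwen2 model code landed in transformers 4.37.0; older versions raise
# KeyError: 'qwen2' when resolving the model type.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "install transformers>=4.37.0"
)
```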
## DPO Tuning
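The card does not spell out the exact tuning recipe used for post-training. As an illustration only, the sketch below shows how one might run DPO on this model with the `trl` library's `DPOTrainer` (API as of trl ~0.7); the dataset path, output directory, and hyperparameters are hypothetical placeholders, not the values used by the Qwen team.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "Qwen/Qwen1.5-14B-Chat"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "prompt", "chosen", and "rejected" columns;
# the file name here is a hypothetical placeholder.
dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

training_args = TrainingArguments(
    output_dir="qwen1.5-14b-chat-dpo",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,        # trl creates a frozen reference copy when None
    args=training_args,
    beta=0.1,              # strength of the KL penalty in the DPO loss
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```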