---
library_name: peft
pipeline_tag: conversational
base_model: meta-llama/Llama-2-7b-hf
---

<div align="center">
  <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>

  [![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)
</div>

## Model

Llama-2-7b-qlora-msagent-react is a QLoRA adapter fine-tuned from [Llama-2-7b](https://huggingface.co./meta-llama/Llama-2-7b-hf) on the [MSAgent-Bench](https://modelscope.cn/datasets/damo/MSAgent-Bench) dataset using [XTuner](https://github.com/InternLM/xtuner).

## Quickstart

### Usage with XTuner CLI

#### Installation

```shell
pip install xtuner
```

#### Chat

Start an interactive chat session with the adapter applied; the `--lagent` flag serves the model through the [Lagent](https://github.com/InternLM/lagent) agent framework for ReAct-style tool use.

```shell
xtuner chat meta-llama/Llama-2-7b-hf --adapter xtuner/Llama-2-7b-qlora-msagent-react --lagent
```

#### Fine-tune

Use the following command to reproduce the fine-tuning, where `NPROC_PER_NODE` sets the number of GPUs used per node:

```shell
NPROC_PER_NODE=8 xtuner train llama2_7b_qlora_msagent_react_e3_gpu8
```