---
license: mit
---

## Introduction

This repository contains the weights of a fine-tuned Qwen2.5-7B model, the scripts you need, and the datasets we used to fine-tune the base model.

- `mix_yaoying_knowledge_with_ending_phrase.json` is composed of different types of knowledge: paradigm data accounts for 20%, and general knowledge about Yaoying with an ending phrase accounts for 80%.
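For orientation, a single record in a conversation-style fine-tuning file of this kind often looks like the sketch below. The schema (a `conversations` list of role/content turns) is an assumption for illustration only; inspect the actual JSON file before relying on it.

```python
import json

# Hypothetical example record -- the real schema in
# mix_yaoying_knowledge_with_ending_phrase.json may differ.
record = {
    "conversations": [
        {"role": "user", "content": "Who is Yaoying?"},
        {"role": "assistant", "content": "Yaoying is ... <ending phrase>"},
    ]
}

# Round-trip through JSON to confirm the record is serializable.
serialized = json.dumps(record, ensure_ascii=False)
restored = json.loads(serialized)
```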

## Installation

Before you start, complete the following setup steps:

1. Create and activate the conda environment (if you name it something other than `yaoying`, update the launch scripts accordingly):

   ```bash
   conda create -n yaoying python=3.10
   conda activate yaoying
   ```

2. Add the correct environment variables to `~/.bashrc` (CUDA 11.8, gcc ≥ 9 and < 10), e.g.:

   ```bash
   export PATH=/mnt/petrelfs/share/cuda-11.8/bin:$PATH
   export LD_LIBRARY_PATH=/mnt/petrelfs/share/cuda-11.8/lib64:$LD_LIBRARY_PATH
   export PATH=/mnt/petrelfs/share/gcc-9.3.0/bin:$PATH
   export LD_LIBRARY_PATH=/mnt/petrelfs/share/gcc-9.3.0/lib64:$LD_LIBRARY_PATH
   ```

3. Apply the variables: `source ~/.bashrc`

4. Install dependencies: `pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu118`

5. Install vLLM (the wheel below targets Python 3.10 and CUDA 11.8):

   ```bash
   pip install https://github.com/vllm-project/vllm/releases/download/v0.6.1.post1/vllm-0.6.1.post1+cu118-cp310-cp310-manylinux1_x86_64.whl
   ```

6. Install the latest Git and Git LFS: `conda install git`, then `git lfs install`

7. Clone the repo: `git clone https://huggingface.co./sunday-hao/yaoying-qwen2.5`

8. Change to the repository directory: `cd yaoying-qwen2.5`

## QuickStart

To run inference with vLLM:

```bash
python with_vllm.py
```

You can change the prompt in the script; the prompt uses a multi-round conversation format.
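The multi-round conversation format corresponds to Qwen2.5's ChatML template. The sketch below shows how such a prompt string is assembled from a list of messages; the `build_prompt` helper is illustrative only and not part of this repo, and in practice the resulting string would be passed to vLLM's `LLM.generate`.

```python
def build_prompt(messages):
    """Render a list of {role, content} messages into Qwen2.5's
    ChatML format, ending with the assistant generation header."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the prompt open for the model to write the assistant turn.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce Yaoying."},
]
prompt = build_prompt(messages)
```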

To test the model without vLLM:

```bash
python inference_without_vllm.py
```

As above, you can change the prompt in the script; it uses the same multi-round conversation format.
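If you prefer to call the model directly rather than through the provided script, a plain Hugging Face `transformers` generation loop looks roughly like this. The `generate_reply` helper is a sketch, not the repo's implementation; it assumes the checkpoint ships a Qwen2.5 chat template, and the imports are kept inside the function so the file can be read without `transformers`/`torch` installed.

```python
def generate_reply(messages, model_id="sunday-hao/yaoying-qwen2.5",
                   max_new_tokens=256):
    """Run one chat generation with Hugging Face transformers (no vLLM).

    `messages` is a list of {"role": ..., "content": ...} dicts, i.e.
    the same multi-round conversation format the repo scripts use.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Render the conversation with the model's own chat template.
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


# Usage (downloads the weights and needs a GPU):
# print(generate_reply([{"role": "user", "content": "Introduce Yaoying."}]))
```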