LZHgrla committed on
Commit 1c1ace8
1 Parent(s): c2af009
Files changed (1)
  1. README.md +39 -14
README.md CHANGED
@@ -1,21 +1,46 @@
  ---
  library_name: peft
  ---
- ## Training procedure
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: True
- - bnb_4bit_compute_dtype: float16
- ### Framework versions
-
- - PEFT 0.5.0
  ---
  library_name: peft
+ datasets:
+ - tatsu-lab/alpaca
+ - silk-road/alpaca-data-gpt4-chinese
+ pipeline_tag: conversational
+ base_model: internlm/internlm-7b
  ---

+ <div align="center">
+ <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
+
+ [![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)
+
+ </div>
+
+ ## Model
+
+ internlm-7b-qlora-alpaca-enzh is fine-tuned from [InternLM-7B](https://huggingface.co/internlm/internlm-7b) on the English [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and [Chinese Alpaca-GPT4](https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese) datasets using [XTuner](https://github.com/InternLM/xtuner).
+
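As the model name suggests, the adapter is trained with QLoRA: the quantized base weights stay frozen and only low-rank matrices are learned. The effect of such an adapter on one weight matrix can be sketched with plain NumPy (shapes and scaling below are illustrative, not XTuner's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16   # illustrative sizes; real runs use larger r

W = rng.standard_normal((d_out, d_in))     # frozen (quantized) base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-initialized, so the update starts at 0

delta = (alpha / r) * (B @ A)              # LoRA update, rank at most r
W_effective = W + delta

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer initially matches the base layer.
assert np.allclose(W_effective @ x, W @ x)
```

Because only `A` and `B` (a few percent of the parameters) are trained, the adapter published here is small enough to ship separately from the base model.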
+ ## Quickstart
+
+ ### Usage with XTuner CLI
+
+ #### Installation
+
+ ```shell
+ pip install xtuner
+ ```
+
+ #### Chat
+
+ ```shell
+ xtuner chat internlm/internlm-7b --adapter xtuner/internlm-7b-qlora-alpaca-enzh --prompt-template internlm_chat --system-template alpaca
+ ```
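The `--prompt-template internlm_chat` and `--system-template alpaca` flags control how each turn is wrapped before generation. A rough sketch of what such templating does; the `<|User|>`/`<|Bot|>` markers below are an assumption modeled on InternLM's published chat format, not XTuner's exact template definition:

```python
# Hypothetical illustration of chat-prompt templating. The marker tokens are
# assumptions; see XTuner's internlm_chat template for the canonical markup.
def build_prompt(system: str, user_msg: str) -> str:
    return f"{system}\n<|User|>:{user_msg}<eoh>\n<|Bot|>:"

prompt = build_prompt(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.",
    "What is QLoRA?",
)
```

The prompt ends at the assistant marker so that generation continues as the bot's reply; matching the template used at training time matters for output quality.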
+
+ #### Fine-tune
+
+ Use the following command to quickly reproduce the fine-tuning results.
+
+ ```shell
+ xtuner train internlm_7b_qlora_alpaca_enzh_e3
+ ```