Text Generation
Transformers
PyTorch
llama
text-generation-inference
PengQu committed on
Commit 0854599
1 Parent(s): 8c48b62

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
 
 **NOTE: This "delta model" cannot be used directly.**
 Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights.
- See https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL#model-weights for instructions.
+ See https://github.com/rinnakk/vicuna-13b-delta-finetuned-langchain-MRKL#model-weights for instructions.
 <br>
 <br>
 
@@ -22,15 +22,15 @@ See https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL#model
 vicuna-13b-finetuned-langchain-MRKL is an open-source chatbot trained by fine-tuning vicuna-13b on 15 examples with langchain-MRKL format.
 
 **Where to send questions or comments about the model:**
- https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL/issues
+ https://github.com/rinnakk/vicuna-13b-delta-finetuned-langchain-MRKL/issues
 
 
 ## Training dataset
 train only one epoch on mix data (sharegpt + 32*my.json + moss-003-sft-data)
 
 ## Evaluation
- - demo code: https://github.com/pengqu123/vicuna-13b-delta-finetuned-langchain-MRKL/blob/main/demo.ipynb
+ - demo for langchain-MRKL: https://github.com/rinnakk/vicuna-13b-delta-finetuned-langchain-MRKL/blob/main/demo.ipynb
- - No evaluation set. Because we don't improve the ability of model. we just make model fit langchain-MRKL strictly.
+ - No evaluation set. Because we don't think we improved the ability of model. we just make model fit langchain-MRKL strictly.
 - We just want to show vicuna-13b's powerful ability about thinking and action.
 
 ## Major Improvement
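
For context on the "apply it on top of the original LLaMA weights" step referenced in the README above, here is a minimal, hypothetical Python sketch of how a Vicuna-style delta is typically merged into base weights. The local paths are placeholders, not part of this repository, and the sketch assumes matching parameter names and shapes; the official instructions linked in the README handle details (such as tokenizer and embedding handling) that this sketch omits.

```python
# Hypothetical sketch: merge a Vicuna-style "delta" checkpoint into base LLaMA
# weights by adding the two state dicts parameter by parameter.
# All paths are placeholders; they are not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b", torch_dtype=torch.float16
)
delta = AutoModelForCausalLM.from_pretrained(
    "path/to/vicuna-13b-delta-finetuned-langchain-MRKL", torch_dtype=torch.float16
)

# The delta stores (finetuned - base), so the usable model is base + delta.
merged_state = delta.state_dict()
base_state = base.state_dict()
for name in merged_state:
    merged_state[name] = merged_state[name] + base_state[name]

delta.load_state_dict(merged_state)  # reuse the delta model's config/architecture
delta.save_pretrained("path/to/vicuna-13b-finetuned-langchain-MRKL")

# The tokenizer ships with the delta repo and can be copied over unchanged.
AutoTokenizer.from_pretrained(
    "path/to/vicuna-13b-delta-finetuned-langchain-MRKL"
).save_pretrained("path/to/vicuna-13b-finetuned-langchain-MRKL")
```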
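Since the stated goal of the fine-tune is to make the model follow the langchain-MRKL agent format strictly, a hedged usage sketch follows. It assumes a 2023-era LangChain API (`initialize_agent` with `AgentType.ZERO_SHOT_REACT_DESCRIPTION`, LangChain's MRKL-style ReAct agent) and a locally merged copy of the model; the tool and prompt are illustrative only and are not taken from the linked demo notebook.

```python
# Hypothetical sketch: drive the merged model as a LangChain MRKL agent.
# Assumes an older (2023-era) LangChain API; the tool below is a toy example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.agents import AgentType, Tool, initialize_agent

model_path = "path/to/vicuna-13b-finetuned-langchain-MRKL"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    do_sample=False,
)
llm = HuggingFacePipeline(pipeline=pipe)

def word_count(text: str) -> str:
    """Toy tool so the agent has an Action to call."""
    return str(len(text.split()))

tools = [
    Tool(
        name="WordCounter",
        func=word_count,
        description="Counts the number of words in the input text.",
    )
]

# ZERO_SHOT_REACT_DESCRIPTION is the MRKL-format agent the README refers to.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("How many words are in the sentence 'vicuna follows the MRKL format'?")
```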