Sandiago21 committed on
Commit d8c436e
1 Parent(s): 66ae84f

Update README.md with license

Files changed (1)
  1. README.md +35 -4
README.md CHANGED
@@ -8,7 +8,11 @@ pipeline_tag: conversational
 
 ## Model Card for Model ID
 
-Finetuned decapoda-research/llama-13b-hf on conversations
+Fine-tuned decapoda-research/llama-13b-hf on conversations
+
+This repository contains a LLaMA-13B fine-tuned model.
+
+⚠️ **I used [LLaMA-13B-hf](https://huggingface.co/decapoda-research/llama-13b-hf) as a base model, so this model is for Research purpose only (See the [license](https://huggingface.co/decapoda-research/llama-13b-hf/blob/main/LICENSE))**
 
 
 ## Model Details
@@ -57,11 +61,38 @@ Generating text and prompt answering
 Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 
+# Usage
+
+## Creating prompt
+
+The model was trained on the following kind of prompt:
+
+```python
+def generate_prompt(instruction: str, input_ctxt: str = None) -> str:
+    if input_ctxt:
+        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+### Instruction:
+{instruction}
+
+### Input:
+{input_ctxt}
+
+### Response:"""
+    else:
+        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+### Instruction:
+{instruction}
+
+### Response:"""
+```
+
 ## How to Get Started with the Model
 
 Use the code below to get started with the model.
 
-```
+```python
 from transformers import LlamaTokenizer, LlamaForCausalLM
 from peft import PeftModel
 
@@ -74,7 +105,7 @@ model = PeftModel.from_pretrained(model, "Sandiago21/public-ai-model")
 ```
 
 ### Example of Usage
-```
+```python
 from transformers import GenerationConfig
 
 PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhich is the capital city of Greece and with which countries does Greece border?\n\n### Input:\nQuestion answering\n\n### Response:\n"""
@@ -107,7 +138,7 @@ for s in generation_output.sequences:
 ```
 
 ### Example Output
-```
+```python
 Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
 
 ### Instruction:
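
The `generate_prompt` helper added by this commit can be exercised without loading the model at all, which is a quick way to sanity-check that prompts match the training format. Below is a minimal sketch: `generate_prompt` is copied from the commit, while `extract_response` is a hypothetical helper (not part of the commit) illustrating how the answer could be recovered from a decoded sequence.

```python
# `generate_prompt` as added in this commit; builds an Alpaca-style prompt
# with or without an input context block.
def generate_prompt(instruction: str, input_ctxt: str = None) -> str:
    if input_ctxt:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input_ctxt}

### Response:"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:"""


def extract_response(generated: str) -> str:
    # Hypothetical helper: everything after the final "### Response:"
    # marker is the model's answer.
    return generated.split("### Response:")[-1].strip()


prompt = generate_prompt(
    "Which is the capital city of Greece and with which countries does Greece border?",
    "Question answering",
)
# A completed sequence appends the answer after the prompt:
completed = prompt + "\nThe capital city of Greece is Athens."
print(extract_response(completed))  # → The capital city of Greece is Athens.
```

Keeping prompt construction in one function like this makes it easy to guarantee that inference-time prompts byte-for-byte match the format the adapter was trained on.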