Sandiago21 committed on
Commit
9e486be
1 Parent(s): a82427a

Create README.md after uploading original bin files

Files changed (1)
  1. README.md +85 -0
README.md ADDED
---
license: other
language:
- en
library_name: transformers
pipeline_tag: conversational
---

# Model Card for Model ID

Finetuned decapoda-research/llama-13b-hf on conversations.

## Model Details

### Model Description

The decapoda-research/llama-13b-hf model was finetuned on conversations and question-answering prompts.

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Causal LM
- **Language(s) (NLP):** English, multilingual
- **License:** Research
- **Finetuned from model [optional]:** decapoda-research/llama-13b-hf

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

The model can be used for prompt answering.

### Direct Use

The model can be used for prompt answering.

### Downstream Use [optional]

Generating text and prompt answering.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

MODEL_NAME = "decapoda-research/llama-13b-hf"

# Load the tokenizer; add_eos_token=True appends the end-of-sequence token to encoded inputs
tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME, add_eos_token=True)
tokenizer.pad_token_id = 0

# Load the base model in 8-bit precision (requires bitsandbytes),
# then attach the finetuned PEFT adapters on top of it
model = LlamaForCausalLM.from_pretrained(MODEL_NAME, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, "Sandiago21/public-ai-model")
```
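
Once the adapters are attached, generation works as with any causal LM. Below is a minimal generation sketch; the prompt is illustrative, and the exact prompt template used during finetuning is not documented in this card:

```python
import torch

# Hypothetical prompt; the prompt format used during finetuning is an assumption
prompt = "What is the capital of Greece?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```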

## Training Details

### Training Data

The decapoda-research/llama-13b-hf model was finetuned on conversations and question-answering data.

### Training Procedure

The decapoda-research/llama-13b-hf model was further trained and finetuned on question-answering and prompt data.

## Model Architecture and Objective

The model is based on the decapoda-research/llama-13b-hf model, with adapters finetuned on top of the base model on conversations and question-answering data.
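
The card does not state which adapter method or hyperparameters were used. As a rough illustration of how such adapters are typically attached with peft, here is a minimal sketch assuming LoRA; the rank, alpha, and target modules below are assumptions, not the values used for this model:

```python
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative values only; the actual adapter configuration of
# Sandiago21/public-ai-model is not documented in this card
lora_config = LoraConfig(
    r=8,                                   # adapter rank (assumption)
    lora_alpha=16,                         # scaling factor (assumption)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumption)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

base_model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-13b-hf")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training with adapters keeps the 13B base weights frozen, so only a small fraction of parameters is updated and saved, which is why the repository ships adapter weights loaded via PeftModel rather than a full model copy.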