Fan21 committed
Commit 877b98a
1 Parent(s): 31dffe8

Update README.md

Files changed (1)
  1. README.md +12 -4
README.md CHANGED
@@ -8,11 +8,11 @@ pipeline_tag: question-answering
  
  <!-- Provide a quick summary of what the model is/does. -->
  
- This model is fine-tuned with LLaMA with 8 Nvidia RTX 1080Ti GPUs and enhanced with conversation safety policies (e.g., threat, profanity, identity attack) using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and finetune the model to help construct math conversational AI that can effectively avoid unsafe response generation. It was trained to allow researchers to control generated responses' safety using tags [SAFE] and [UNSAFE]
+ This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help construct math conversational AI that effectively avoids unsafe response generation. It was trained to let researchers control the safety of generated responses with the tags [SAFE] and [UNSAFE].
  ### Here is how to use it with text in HuggingFace
  ```python
- # A list of special tokens the model was trained with
- 
+ import torch
+ import transformers
  from transformers import LlamaTokenizer, AutoModelForCausalLM
  tokenizer = LlamaTokenizer.from_pretrained("Fan21/Llama-mt-lora")
  BASE_MODEL = "Fan21/Llama-mt-lora"
@@ -92,5 +92,13 @@ def evaluate(
      s = generation_output.sequences[0]
      output = tokenizer.decode(s)
      return output.split("### Response:")[1].strip()
- 
+ instruction = 'write your instruction here'
+ inputs = 'write your inputs here'
+ print("output:", evaluate(instruction,
+                           input=inputs,
+                           temperature=0.1,  # adjust the generation parameters as needed
+                           top_p=0.75,
+                           top_k=40,
+                           num_beams=4,
+                           max_new_tokens=128))
  ```
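
Only the head and tail of the README snippet appear in the two hunks; the body of `evaluate` (README lines 19-91) is not shown in this diff. For orientation, here is a minimal sketch of what the elided middle might look like. It assumes an Alpaca-style prompt template ending in `### Response:` (implied by the final `split`) and plain `transformers` beam-search generation; `generate_prompt` and every default value below are illustrative assumptions, not the repository's actual code.

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM, GenerationConfig

BASE_MODEL = "Fan21/Llama-mt-lora"
tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,  # assumption: half precision on GPU
    device_map="auto",
)

def generate_prompt(instruction, input=None):
    # Hypothetical Alpaca-style template; only the trailing
    # "### Response:" marker is confirmed by the README's split().
    if input:
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{input}\n\n### Response:")
    return f"### Instruction:\n{instruction}\n\n### Response:"

def evaluate(instruction, input=None, temperature=0.1, top_p=0.75,
             top_k=40, num_beams=4, max_new_tokens=128, **kwargs):
    prompt = generate_prompt(instruction, input)
    model_inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    generation_config = GenerationConfig(
        temperature=temperature, top_p=top_p, top_k=top_k,
        num_beams=num_beams, **kwargs)
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=model_inputs["input_ids"],
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
        )
    # Decode the best beam and return only the text after the
    # response marker, as the README's tail does.
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Response:")[1].strip()
```

With a body along these lines, the `evaluate(...)` call added in the second hunk runs as written.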
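
The model card also says generation safety is steered with the `[SAFE]` and `[UNSAFE]` tags, but never shows where the tag goes. A hedged illustration, assuming the tag is simply prepended to the instruction text (the actual training format is not documented in this diff):

```python
def tagged(instruction, safe=True):
    # Assumption: the safety tag precedes the instruction text; check the
    # training data format before relying on this placement.
    tag = "[SAFE]" if safe else "[UNSAFE]"
    return f"{tag} {instruction}"

print("output:", evaluate(tagged("How do I solve 2x + 3 = 11 for x?")))
```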