Fan21 committed on
Commit dcd627a
1 Parent(s): 20596ac

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -4,11 +4,11 @@ language:
 - en
 pipeline_tag: question-answering
 ---
-# Model Card for Model ID
+# Llama-mt-lora
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This model is fine-tuned with LLaMA with 8 Nvidia A100-80G GPUs using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and finetune the model to help construct math conversational AI that can effectively avoid unsafe response generation. It was trained to allow researchers to control generated responses' safety using tags [SAFE] and [UNSAFE]
+This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of mathematics conversations between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to help construct math conversational AI that can effectively generate responses in a mathematical context.
 ### Here is how to use it with texts in HuggingFace
 ```python
 import torch
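
The hunk ends before the rest of the usage snippet, so for context here is a minimal sketch of the loading pattern the card points to. It assumes the "lora" in the name means the weights are published as a PEFT/LoRA adapter attached to a LLaMA base model; the repo ids `Fan21/Llama-mt-lora` and the base checkpoint are assumptions for illustration, not confirmed by this diff.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel  # assumes the released weights are a LoRA adapter

# Hypothetical repo ids -- substitute the actual base model and adapter paths.
base_model_id = "decapoda-research/llama-7b-hf"
adapter_id = "Fan21/Llama-mt-lora"

# Load the tokenizer and the base LLaMA model in half precision.
tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the math-tuned LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(model, adapter_id)

# Generate a response to a student-style math question.
prompt = "A student asks: how do I solve 2x + 3 = 11?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```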