dfurman committed on
Commit 5361301
1 Parent(s): 52f944d

Upload README.md

Files changed (1)
  1. README.md +4 -7
README.md CHANGED
@@ -7,8 +7,7 @@ license: other
 
 LLaMA-7B is a base model for text generation. It was built and released by Meta AI alongside "[LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)".
 
-This model repo was converted to work with the Hugging Face transformers package. It is under a bespoke **non-commercial** license, please see the LICENSE file for more details.
-
+This model repo was converted to work with the transformers package. It is under a bespoke **non-commercial** license, please see the LICENSE file for more details.
 
 ## Model Summary
 
@@ -23,15 +22,13 @@ Questions and comments about LLaMA can be sent via the [GitHub repository](https
 
 ## Intended use
 **Primary intended uses**
-The primary use of LLaMA is research on large language models, including:
-exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those,
-evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
+The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, and hallucinations.
 
 **Primary intended users**
 The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
 
 **Out-of-scope use cases**
-LLaMA is a base model, also known as a foundation model. As such, it should not be used on downstream applications without further risk evaluation, mitigation, and potential further fine-tuning (for example, on instructions and/or chats). In particular, the model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
+LLaMA is a base model, also known as a foundation model. As such, it should not be used on downstream applications without further risk evaluation, mitigation, and potential further fine-tuning. In particular, the model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
 
 ## Factors
 **Relevant factors**
@@ -102,4 +99,4 @@ _ = model.generate(
     max_new_tokens=20,
     streamer=streamer,
 )
-```
+```
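For context, the last hunk only touches the tail of the README's streaming generation example (`model.generate(..., streamer=streamer)`); the full snippet is not shown in this diff. Below is a minimal sketch of that kind of transformers usage, assuming standard `AutoModelForCausalLM`/`TextStreamer` loading; the repo id `your-namespace/llama-7b`, the prompt, and the fp16/`device_map="auto"` options are placeholders, not values taken from this commit.

```python
# Minimal sketch (not the README's exact snippet): load a converted LLaMA-7B
# checkpoint with transformers and stream a short generation to stdout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "your-namespace/llama-7b"  # placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumed half-precision loading
    device_map="auto",          # requires accelerate; places weights automatically
)

# TextStreamer prints tokens as they are generated instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
_ = model.generate(
    **inputs,
    max_new_tokens=20,
    streamer=streamer,
)
```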