---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
  This is a form to enable access to Llama 2 on Hugging Face after you have been
  granted access from Meta. Please visit the [Meta
  website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
  and accept our license terms and acceptable use policy before submitting this
  form. Requests will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
  I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
  - en
  - pt
  - es
pipeline_tag: text-generation
inference: false
tags:
  - meta
  - pytorch
  - llama
---

# Llama 2

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

## Model Details

Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license before requesting access here.
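Once access has been granted, downloading the gated weights requires authenticating with a Hugging Face access token. A minimal setup sketch using the `huggingface_hub` CLI (the token itself must belong to an account that was granted access):

```shell
# Install the Hugging Face Hub client and authenticate.
# An access token (created under your Hugging Face account settings)
# is pasted in when prompted.
pip install -U huggingface_hub
huggingface-cli login
```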

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
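The chat-tuned checkpoints were fine-tuned on a specific instruction template, so dialogue prompts should be wrapped in `[INST] ... [/INST]` markers, with an optional `<<SYS>>` system block inside. A minimal sketch of a single-turn prompt builder; the helper name and example messages are illustrative, not part of any official API:

```python
from typing import Optional

# Markers of the Llama-2-Chat instruction template.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    """Wrap a user message in the Llama-2-Chat single-turn template."""
    if system_prompt is not None:
        user_message = f"{B_SYS}{system_prompt}{E_SYS}{user_message}"
    return f"{B_INST} {user_message} {E_INST}"

prompt = build_prompt(
    "What is the capital of France?",
    system_prompt="You are a helpful assistant.",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model as-is, for example through a Transformers `text-generation` pipeline.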

| |Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|A new mix of publicly available online data|7B|4k|✗|2.0T|3.0 × 10⁻⁴|
|Llama 2|A new mix of publicly available online data|13B|4k|✗|2.0T|3.0 × 10⁻⁴|
|Llama 2|A new mix of publicly available online data|70B|4k|✔|2.0T|1.5 × 10⁻⁴|

Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
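As a back-of-the-envelope sanity check on the table above, the pretraining token budget divided by the global batch size gives the approximate number of optimizer steps each model took (assuming a constant 4M-token batch for the full run; the card does not state the exact step count):

```python
# 2.0T pretraining tokens processed in global batches of 4M tokens.
total_tokens = 2.0e12
global_batch_tokens = 4e6

steps = total_tokens / global_batch_tokens
print(f"{steps:,.0f} optimizer steps")  # → 500,000 optimizer steps
```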

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## Reporting Issues

Please report any software bugs or other problems with the models through one of the following means:

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B|Link|Link|Link|Link|
|13B|Link|Link|Link|Link|
|70B|Link|Link|Link|Link|