---
title: README
emoji: 🦙
colorFrom: blue
colorTo: blue
sdk: static
pinned: true
---
# The Llama Family

*From Meta*
Welcome to the official Hugging Face organization for the Llama, Llama Guard, and Code Llama models from Meta! To access the models here, please visit a repo from one of the three families and accept its license terms and acceptable use policy. Requests are processed hourly.

In this organization, you can find models in both the original Meta format and the Hugging Face transformers format. Available collections include:
* **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters, pre-trained on ~15 trillion tokens.
* **Llama 2:** a collection of pretrained and fine-tuned text models ranging in scale from 7 billion to 70 billion parameters.
* **Code Llama:** a collection of code-specialized versions of Llama 2 in three flavors (base model, Python specialist, and instruct-tuned).
* **Llama Guard:** an 8B Llama 3 safeguard model for classifying LLM inputs and responses.
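Once access has been granted, the transformers-format checkpoints can be loaded directly with the `transformers` library. A minimal sketch, assuming `transformers` is installed and a license request for the `meta-llama/Meta-Llama-3.1-8B-Instruct` repo has been approved (the model id and prompt here are illustrative):

```python
# Minimal sketch: querying a Llama 3.1 Instruct model through the
# Hugging Face transformers text-generation pipeline.
# The repo is gated: access must be requested and approved first.

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Chat-style input in the messages format accepted by the pipeline.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

if __name__ == "__main__":
    # Import and download are deferred to runtime so the sketch can be
    # read without the library installed or model access granted.
    from transformers import pipeline

    generator = pipeline("text-generation", model=model_id, device_map="auto")
    output = generator(messages, max_new_tokens=64)
    # The pipeline returns the full conversation; print the reply.
    print(output[0]["generated_text"][-1]["content"])
```

The heavy work is kept under the `__main__` guard, so the configuration (model id, messages) can be reused without triggering a multi-gigabyte download.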
Learn more about the models at https://ai.meta.com/llama/