---
license: mit
---
|
## Overview
|
|
|
**Meta** developed and released [Llama 3.3](https://huggingface.co./meta-llama/Llama-3.3-70B-Instruct), a state-of-the-art multilingual large language model designed for instruction-tuned generative tasks. With 70 billion parameters, the model is optimized for multilingual dialogue use cases and takes text as input, producing high-quality text output. Llama 3.3 was aligned with human preferences for helpfulness and safety through supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). It outperforms many open-source and closed-source chat models on common industry benchmarks, making it a strong choice for applications that require conversational AI, multilingual support, and close instruction adherence.
|
|
|
## Variants
|
|
|
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [gguf](https://huggingface.co./cortexso/llama3.3/tree/main) | `cortex run llama3.3` | 
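
If you prefer to download the GGUF weights ahead of time rather than on first run, the sketch below shows one way to do it with the Cortex CLI. It assumes the `cortex pull` subcommand and the `llama3.3` model id resolve to this repository in your setup; check `cortex --help` if your Cortex version differs.

```bash
# Download the GGUF variant ahead of time (assumes the id maps to cortexso/llama3.3)
cortex pull llama3.3

# Then start an interactive chat session with the downloaded model
cortex run llama3.3
```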
|
|
|
## Use it with Jan (UI)
|
|
|
1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart) guide.
2. In the Jan Model Hub, search for and download the model using this ID:
```text
cortexso/llama3.3
```
|
|
|
## Use it with Cortex (CLI)
|
|
|
1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart) guide.
2. Run the model with the following command:
```bash
cortex run llama3.3
```
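
Once the model is running, Cortex also exposes an OpenAI-compatible HTTP API that other tools can call. The snippet below is a minimal sketch of a chat completion request with `curl`; the host, port (`39281` here), and model name are assumptions based on a default local Cortex setup and may differ in your installation.

```bash
# Minimal chat completion request against Cortex's OpenAI-compatible endpoint.
# Host, port, and model name are assumptions; adjust them to match your local setup.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.3",
    "messages": [
      {"role": "user", "content": "Summarize what Llama 3.3 is in one sentence."}
    ]
  }'
```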
|
|
|
## Credits
|
|
|
- **Author:** Meta
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://llama.meta.com/llama3/license/)
- **Papers:** [Llama-3 Blog](https://llama.meta.com/llama3/)
|
|