Update README.md
Browse files
README.md
CHANGED
@@ -21,6 +21,8 @@ Sarvam-1 is a 2-billion parameter language model specifically optimized for Indi
 
 The model was trained with [NVIDIA NeMo™ Framework](https://github.com/NVIDIA/NeMo) on the Yotta Shakti Cloud using HGX H100 systems.
 
+*Note: This is a text-completion model. It is meant to be finetuned on downstream tasks, and cannot be used directly as a chat or an instruction-following model.*
+
 ## Key Features
 
 - **Optimized for 10 Indian Languages**: Built from the ground up to support major Indian languages alongside English