Crystalcareai committed
Update README.md
README.md CHANGED
@@ -70,7 +70,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 Virtuoso-Lite demonstrates strong results across multiple benchmarks (e.g., BBH, MMLU-PRO, MATH), often standing its ground against models with higher parameter counts. This efficiency is largely credited to logit-level distillation, which compresses the teacher model’s capabilities into a more parameter-friendly package.

 ### Limitations
-- **Context Length:**
+- **Context Length:** 32k Tokens (may vary depending on the final tokenizer settings and system resources).
 - **Knowledge Cut-off:** Training data may not reflect the latest events or developments beyond June 2024.

 ### Ethical Considerations
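For context on the distillation claim in the excerpt above, here is a minimal, generic sketch of logit-level (soft-label) distillation in PyTorch. This is the standard temperature-softened KL formulation, not the model's actual training recipe, which this commit does not describe; the tensor shapes, temperature value, and function name are illustrative assumptions.

```python
# Hypothetical sketch of logit-level distillation: the student is trained to
# match the teacher's full output distribution rather than hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 so gradients stay comparable across T."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: 4 token positions over a 32-entry vocabulary (shapes are made up).
student_logits = torch.randn(4, 32, requires_grad=True)
teacher_logits = torch.randn(4, 32)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```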