Crystalcareai committed (verified)
Commit d0d005a · 1 parent: ef3a352

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -70,7 +70,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  Virtuoso-Lite demonstrates strong results across multiple benchmarks (e.g., BBH, MMLU-PRO, MATH), often standing its ground against models with higher parameter counts. This efficiency is largely credited to logit-level distillation, which compresses the teacher model’s capabilities into a more parameter-friendly package.
 
  ### Limitations
- - **Context Length:** 128k Tokens (may vary depending on the final tokenizer settings and system resources).
+ - **Context Length:** 32k Tokens (may vary depending on the final tokenizer settings and system resources).
  - **Knowledge Cut-off:** Training data may not reflect the latest events or developments beyond June 2024.
 
  ### Ethical Considerations
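
The hunk above credits Virtuoso-Lite's efficiency to logit-level distillation. As a rough illustration (not the actual Virtuoso-Lite training code; shapes, temperature, and the NumPy-only setting are assumptions), the core idea is to minimize the KL divergence between the teacher's and student's temperature-softened output distributions:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the vocabulary axis.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # (the classic Hinton-style formulation; a sketch, not Arcee's recipe).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Toy logits: 2 token positions over a hypothetical 5-token vocabulary.
teacher = np.array([[4.0, 1.0, 0.5, 0.2, 0.1],
                    [0.1, 3.5, 0.3, 0.2, 0.0]])
student = teacher + np.random.default_rng(0).normal(0.0, 0.1, teacher.shape)

print(distillation_loss(student, teacher))  # small, since student is close to teacher
```

In practice the student is trained on this loss (often mixed with the usual cross-entropy on ground-truth tokens), so the distilled model inherits the teacher's full output distribution rather than only its top-1 predictions.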