## Model Notes
Linear models offer a promising approach to significantly reducing computational costs at scale, particularly at large context lengths. They enable a more than 1000x improvement in inference cost efficiency, opening the door both to O1-style inference-time reasoning and to wider AI accessibility.
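As a rough illustration of where the savings come from, here is a minimal sketch (not the actual RWKV formulation, which adds time decay and further structure) contrasting a softmax attention decoding step, whose per-token cost grows with the context, with an unnormalized linear attention step, whose per-token cost is constant:

```python
import numpy as np

def softmax_attention_step(q, K_cache, V_cache):
    """One decoding step of softmax attention.

    The new query attends over every cached key/value, so per-token
    cost and memory grow with context length (quadratic over the
    full sequence).
    """
    scores = K_cache @ q / np.sqrt(q.shape[-1])  # similarity to all past tokens
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V_cache                     # weighted mix of all past values

def linear_attention_step(q, k, v, state):
    """One decoding step of simplified, unnormalized linear attention.

    All history is folded into a fixed-size state matrix, so per-token
    cost and memory stay O(1) no matter how long the context is.
    """
    state = state + np.outer(k, v)               # fold this token into the state
    return q @ state, state                      # read out with the query
```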
We are able to convert many previously trained softmax-attention models, such as Qwen and LLaMA, into RWKV variants **without retraining from scratch**. This lets us rapidly test and validate the significantly more efficient RWKV linear attention mechanism at larger scales on a much smaller budget.
This approach validates RWKV's architectural design and scalability, reinforcing the idea that softmax attention is not the sole essential component of a capable language model.
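As a hedged sketch of what such a conversion can look like (the actual Qwerky recipe, including which weights are reused and how much post-conversion training follows, is not described here, and `make_rwkv_block` is a hypothetical factory), the idea is to swap each softmax self-attention module for a linear-attention block while keeping embeddings, MLPs, and norms intact:

```python
import torch.nn as nn

def convert_to_linear_attention(model: nn.Module, make_rwkv_block) -> nn.Module:
    """Replace each softmax self-attention module with a linear-attention
    block built from the trained q/k/v/o projections, rather than
    training a new model from scratch.
    """
    for layer in model.model.layers:   # Qwen/LLaMA-style decoder layer list
        attn = layer.self_attn
        layer.self_attn = make_rwkv_block(
            attn.q_proj, attn.k_proj, attn.v_proj, attn.o_proj
        )                              # embeddings, MLPs, and norms are kept
    return model
```

Because everything outside attention is preserved, only the swapped blocks need to be trained back into place, which is where the much smaller budget comes from.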
One downside of this technique is that the model's knowledge and capabilities are inherited from the training data of its "parent" model. Consequently, unlike previous RWKV models trained on more than 100 languages, the Qwerky model is limited to the approximately 30 languages supported by the Qwen line of models. In exchange, it gains the inference-time speed of a linear model.
## Qwerky-72B Benchmark Numbers
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|--------------|------:|------|-----:|----------|---|-----:|---|-----:|
|mmlu | 2|none | 0|acc |↑ |0.7767|± |0.0033|
|arc_challenge | 1|none | 0|acc |↑ |0.6152|± |0.0142|
| | |none | 0|acc_norm |↑ |0.6297|± |0.0141|
|arc_easy | 1|none | 0|acc |↑ |0.8565|± |0.0072|
| | |none | 0|acc_norm |↑ |0.8304|± |0.0077|
|hellaswag | 1|none | 0|acc |↑ |0.6780|± |0.0047|
| | |none | 0|acc_norm |↑ |0.8587|± |0.0035|
|lambada_openai| 1|none | 0|acc |↑ |0.7502|± |0.0060|
| | |none | 0|perplexity|↓ |2.9369|± |0.0624|
|piqa | 1|none | 0|acc |↑ |0.8237|± |0.0089|
| | |none | 0|acc_norm |↑ |0.8368|± |0.0086|
|sciq | 1|none | 0|acc |↑ |0.971 |± |0.0053|
| | |none | 0|acc_norm |↑ |0.957 |± |0.0064|
|winogrande | 1|none | 0|acc |↑ |0.7806|± |0.0116|
## Original Qwen2.5-72B-Instruct Scores
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|--------------|------:|------|-----:|----------|---|-----:|---|-----:|
|mmlu | 2|none | |acc |↑ |0.8335|± |0.0030|
|arc_challenge | 1|none | 0|acc |↑ |0.6229|± |0.0142|
| | |none | 0|acc_norm |↑ |0.6323|± |0.0141|
|arc_easy | 1|none | 0|acc |↑ |0.8632|± |0.0071|
| | |none | 0|acc_norm |↑ |0.8329|± |0.0077|
|hellaswag | 1|none | 0|acc |↑ |0.7023|± |0.0046|
| | |none | 0|acc_norm |↑ |0.8739|± |0.0033|
|lambada_openai| 1|none | 0|acc |↑ |0.7512|± |0.0060|
| | |none | 0|perplexity|↓ |2.7690|± |0.0559|
|piqa | 1|none | 0|acc |↑ |0.8313|± |0.0087|
| | |none | 0|acc_norm |↑ |0.8400|± |0.0086|
|sciq | 1|none | 0|acc |↑ |0.972 |± |0.0052|
| | |none | 0|acc_norm |↑ |0.956 |± |0.0065|
|winogrande | 1|none | 0|acc |↑ |0.7640|± |0.0119|
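The tables above follow the output format of EleutherAI's lm-evaluation-harness. Below is a sketch of reproducing this style of run with the harness's Python API, assuming a recent lm-eval 0.4.x release; the checkpoint path is a placeholder, and the exact harness version and settings behind these tables are not stated here:

```python
from lm_eval import simple_evaluate  # EleutherAI lm-evaluation-harness (pip install lm-eval)

# Placeholder checkpoint path; substitute the actual Qwerky-72B repo.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=featherless-ai/Qwerky-72B,dtype=bfloat16",
    tasks=["mmlu", "arc_challenge", "arc_easy", "hellaswag",
           "lambada_openai", "piqa", "sciq", "winogrande"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```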