|
--- |
|
language: |
|
- en |
|
tags: |
|
- pytorch |
|
- text-generation |
|
- causal-lm |
|
- rwkv |
|
license: apache-2.0 |
|
datasets: |
|
- the_pile |
|
|
|
--- |
|
|
|
# RWKV-4 7B |
|
|
|
## Model Description |
|
|
|
RWKV-4 7B is an L32-D4096 (32 layers, 4096 embedding dimensions) causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
|
|
|
**Note: It's a BF16 model, and it may overflow if you are using FP16 (probably fixable by rescaling the weights).**
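
As a rough illustration, here is a minimal sketch of the precision handling. The file name is taken from this card; the state-dict key names and the rescaling scheme in the comments are assumptions, not an official recipe, so verify them against the RWKV-LM repo before use.

```python
# A minimal sketch, not the official loader: cast the checkpoint to BF16 to
# avoid the FP16 overflow mentioned above.
import torch

state_dict = torch.load("RWKV-4-Pile-7B-20221115-8047.pth", map_location="cpu")

# Safest option: keep the weights in bfloat16 (same exponent range as FP32).
state_dict = {k: v.bfloat16() for k, v in state_dict.items()}

# If you must run in FP16, one *hypothetical* fix is to shrink the per-block
# output projections so activations stay inside FP16 range, e.g.:
#
#   for k, v in state_dict.items():
#       if k.endswith("att.output.weight") or k.endswith("ffn.value.weight"):
#           layer_id = int(k.split(".")[1])
#           state_dict[k] = (v / (2 ** (layer_id // 6))).half()
#
# The runtime would then also have to rescale activations to compensate.
```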
|
|
|
At the moment, you have to use my GitHub code (https://github.com/BlinkDL/RWKV-LM) to run it.
|
|
|
```
ctx_len = 1024
n_layer = 32
n_embd = 4096
```
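
You can sanity-check these hyperparameters against the checkpoint itself. This is a minimal sketch; the `emb.weight` and `blocks.*` key names are assumed from the RWKV-LM repo.

```python
# A small sketch for reading n_layer and n_embd directly from the checkpoint.
import torch

sd = torch.load("RWKV-4-Pile-7B-20221115-8047.pth", map_location="cpu")

n_embd = sd["emb.weight"].shape[1]  # embedding matrix is (vocab, n_embd)
n_layer = len({k.split(".")[1] for k in sd if k.startswith("blocks.")})
print(f"n_layer={n_layer}, n_embd={n_embd}")  # expect 32 and 4096
```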
|
|
|
(There are also ctx_len 2048 and 4096 models, though they might be slightly weaker at generating short content.)
|
|
|
Final checkpoint: RWKV-4-Pile-7B-20221115-8047.pth, trained on the Pile for 332B tokens.
|
* Pile loss 1.8415 |
|
* LAMBADA ppl 4.38, acc 67.18% |
|
* PIQA acc 76.06% |
|
* SC2016 acc 73.44% |
|
* Hellaswag acc_norm 65.51% |
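
For context, perplexity is just the exponential of the mean cross-entropy loss, so the Pile loss above maps to a perplexity as sketched below (assuming the loss is reported in nats, the usual convention).

```python
# A small sketch relating the reported Pile loss to perplexity.
import math

pile_loss = 1.8415
print(f"Pile perplexity = exp({pile_loss}) = {math.exp(pile_loss):.2f}")  # ~6.31
```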
|
|