RWKV7
This is the RWKV-7 model in the flash-linear-attention format.
Install flash-linear-attention and transformers >= 4.48.0 before using this model:

```bash
pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
```
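To confirm the environment is set up, a quick sanity check along these lines can help (this assumes flash-linear-attention installs under the `fla` package name, as the fla-org repository does):

```python
# Sanity check: both packages import and transformers is recent enough.
import fla  # provided by flash-linear-attention
import transformers
from packaging.version import Version  # packaging ships as a transformers dependency

assert Version(transformers.__version__) >= Version("4.48.0"), transformers.__version__
print("fla + transformers", transformers.__version__, "ready")
```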
You can use this model like any other Hugging Face model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-168M-pile', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-168M-pile', trust_remote_code=True)
```
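Generation then works through the standard `generate` API; a minimal sketch, continuing from the model and tokenizer loaded above (the prompt is illustrative):

```python
# Tokenize a prompt and greedily generate a short continuation.
inputs = tokenizer("The Pile is a large, diverse dataset", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```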
This model was trained on the Pile for a total of 332 billion tokens.
Evaluation results:
- lambada_openai: ppl 14.2, acc 45.6%
- piqa: acc 65.5%
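The task names and metrics above match EleutherAI's lm-evaluation-harness; a command along the following lines should reproduce them (the harness and its settings are an assumption here — the card does not state how the scores were obtained):

```bash
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=fla-hub/rwkv7-168M-pile,trust_remote_code=True \
    --tasks lambada_openai,piqa \
    --batch_size 32
```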
Q: The safetensors metadata is None.
A: Upgrade transformers to >= 4.48.0: `pip install 'transformers>=4.48.0'`
Base model: BlinkDL/rwkv-7-pile