rwkv7-168M-pile

This is an RWKV-7 model in the flash-linear-attention format.

Warning! Training of this model has not been tested; only inference is currently supported.

Model Details

Model Description

  • Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruochong Zhang
  • Funded by: Shenzhen Yuanshi Intelligent Co. Ltd.
  • Model type: RWKV7
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Parameter count: 168M
  • Tokenizer: GPT-NeoX 20B tokenizer

Uses

Install flash-linear-attention and transformers >= 4.48.0 before using this model:

pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'
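
To confirm the environment is set up, a quick sanity check (a minimal sketch; it assumes the flash-linear-attention package installs under the fla import name, as in the upstream repository):

import fla           # flash-linear-attention; importing it registers the RWKV-7 layers
import transformers

# transformers >= 4.48.0 is required for correct safetensors metadata (see FAQ)
print(transformers.__version__)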

Direct Use

You can use this model just like any other Hugging Face model:

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-168M-pile', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-168M-pile', trust_remote_code=True)
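
For example, to generate a short completion (a minimal sketch; the prompt and generation settings below are illustrative, not part of the card):

# hypothetical prompt, chosen only for illustration
prompt = "The Pile is a large, diverse"
inputs = tokenizer(prompt, return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))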

Training Details

Training Data

This model was trained on the Pile, for a total of 332 billion tokens.

Training Hyperparameters

  • Training regime: bfloat16 precision; learning rate decayed from 8e-4 to 3e-5 on a cosine schedule (see the sketch below); weight decay 0.1; batch size 8×30×4096
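
A minimal sketch of the stated learning-rate schedule, assuming a plain cosine decay from the peak to the final rate over the full run (the card does not mention warmup, so none is modeled):

import math

def cosine_lr(step, total_steps, peak_lr=8e-4, final_lr=3e-5):
    # Cosine decay: returns peak_lr at step 0 and final_lr at total_steps.
    progress = min(step / total_steps, 1.0)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))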

Evaluation

Metrics

  • lambada_openai: ppl 14.2, acc 45.6%
  • piqa: acc 65.5%
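
The card does not state which evaluation harness produced these numbers. As an illustrative sketch only, a perplexity of this kind can be computed from the standard causal-LM loss, assuming the model head returns that loss when labels are passed (as Hugging Face causal-LM models conventionally do):

import torch

# illustrative text; not the benchmark data used for the reported numbers
text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    loss = model(**enc, labels=enc['input_ids']).loss
print('ppl:', torch.exp(loss).item())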

FAQ

Q: The safetensors metadata is none.

A: Upgrade transformers to >= 4.48.0: pip install 'transformers>=4.48.0'
