This is a model checkpoint for "Should You Mask 15% in Masked Language Modeling?" (code).
The original checkpoint is available at princeton-nlp/efficient_mlm_m0.40. Unfortunately, that checkpoint depends on code that isn't part of the official transformers library. Additionally, the checkpoint contains unused weights due to a bug.
This checkpoint fixes the unused weights issue and uses the RobertaPreLayerNorm model from the transformers library.
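Since the converted weights use the stock RobertaPreLayerNorm classes, the checkpoint can be loaded with plain transformers. A minimal sketch follows; the repository id shown is the original checkpoint's and is only a placeholder, so substitute this repo's id to get the fixed weights.

```python
from transformers import AutoTokenizer, RobertaPreLayerNormForMaskedLM


def load_checkpoint(repo_id: str = "princeton-nlp/efficient_mlm_m0.40"):
    """Load tokenizer and masked-LM model from the Hugging Face Hub.

    NOTE: the default repo_id above points at the ORIGINAL checkpoint and is
    a placeholder; pass this model card's repo id to load the fixed weights.
    """
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = RobertaPreLayerNormForMaskedLM.from_pretrained(repo_id)
    return tokenizer, model
```

Because the model is registered under the standard `roberta-prelayernorm` architecture, no `trust_remote_code=True` or custom model code is needed, unlike with the original checkpoint.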