Merge-Effect
Note: Original dataset used to train the tokenisers and models.
pietrolesci/tokenisers
Note: Tokenisers trained on the MiniPile. The `_raw_tokenisers` folder contains the original tokenisers trained with a vocabulary size of 320k; each of the other folders is a `transformers`-compatible tokeniser with a smaller vocabulary.
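A minimal sketch of loading one of the smaller tokenisers from its subfolder with `transformers`. The subfolder name `bpe32000` is an assumption for illustration; check the repo's file listing for the actual folder names.

```python
from transformers import AutoTokenizer

# Load one of the smaller tokenisers from its subfolder in the repo.
# NOTE: "bpe32000" is an assumed folder name; inspect the repo files
# for the exact subfolder names.
tok = AutoTokenizer.from_pretrained("pietrolesci/tokenisers", subfolder="bpe32000")

print(tok.vocab_size)
print(tok.tokenize("Tokenisers trained on the MiniPile."))
```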
pietrolesci/minipile
Note: Tokenised MiniPile dataset(s). Each split corresponds to a tokeniser in `pietrolesci/tokenisers`.
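Since each split corresponds to one tokeniser, a single split can be loaded directly with `datasets`. The split name `bpe32000` below is an assumption for illustration; check the dataset card or viewer for the actual split names.

```python
from datasets import load_dataset

# Load the split produced by one specific tokeniser.
# NOTE: "bpe32000" is an assumed split name; see the dataset card for the real ones.
ds = load_dataset("pietrolesci/minipile", split="bpe32000")

print(ds)
print(ds[0].keys())  # column names, e.g. the token-id column for that tokeniser
```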
pietrolesci/smol_llama-81M-tied_bpe8064minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.
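Because every checkpoint lives on its own branch, a specific one can be loaded by passing the branch name as `revision`. A sketch, assuming the branch naming is unknown: list the branches first via `huggingface_hub`, then load one; the branch name `step-2000` below is only an assumed example.

```python
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

repo = "pietrolesci/smol_llama-81M-tied_bpe8064minipile"

# Each branch of the model repo is a separate checkpoint (saved every 2k steps).
# List the available branches, since the exact naming scheme is not stated here.
refs = list_repo_refs(repo)
print([branch.name for branch in refs.branches])

# Load one checkpoint by branch name.
# NOTE: "step-2000" is an assumed example; use one of the names printed above.
model = AutoModelForCausalLM.from_pretrained(repo, revision="step-2000")
print(model.num_parameters())
```

The same pattern applies to the other model repos in this collection.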
pietrolesci/smol_llama-81M-tied_bpe32000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.
pietrolesci/smol_llama-81M-tied_bpe128000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.
pietrolesci/smol_llama-81M-tied_wordpiece32000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.
pietrolesci/smol_llama-81M-tied_bpe2wp32000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps. The bpe2wp nomenclature means that we choose the merges using the BPE objective and then tokenise the MiniPile with the resulting vocabulary using the WordPiece tokenisation function (i.e., longest prefix match).
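To make "longest prefix match" concrete, here is a minimal, self-contained sketch of the WordPiece-style matching rule applied to a toy vocabulary. It is not the actual tokenisation code used for these models and omits details such as WordPiece's `##` continuation-prefix convention; it only illustrates the greedy matching idea.

```python
def longest_prefix_match(word: str, vocab: set[str], unk: str = "[UNK]") -> list[str]:
    """Greedy WordPiece-style segmentation: repeatedly take the longest
    vocabulary entry that prefixes the remaining text."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:  # no vocabulary entry matches: emit the unknown token
            return [unk]
        tokens.append(word[start:end])
        start = end
    return tokens

# Toy vocabulary for illustration only.
toy_vocab = {"token", "tok", "en", "iser", "s"}
print(longest_prefix_match("tokenisers", toy_vocab))  # ['token', 'iser', 's']
```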
pietrolesci/smol_llama-370M-tied_bpe32000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.
pietrolesci/smol_llama-1B_bpe32000minipile
Note: Model trained for 50k steps on the MiniPile dataset. Each branch is a different checkpoint, saved every 2k steps.