arxiv:2501.10322

Hierarchical Autoregressive Transformers: Combining Byte- and Word-Level Processing for Robust, Adaptable Language Models

Published on Jan 17

Abstract

Tokenization is a fundamental step in natural language processing, breaking text into units that computational models can process. While learned subword tokenizers have become the de-facto standard, they present challenges such as large vocabularies, limited adaptability to new domains or languages, and sensitivity to spelling errors and variations. To overcome these limitations, we investigate a hierarchical architecture for autoregressive language modelling that combines character-level and word-level processing. It employs a lightweight character-level encoder to convert character sequences into word embeddings, which are then processed by a word-level backbone model and decoded back into characters via a compact character-level decoder. This method retains the sequence compression benefits of word-level tokenization without relying on a rigid, predefined vocabulary. We demonstrate, at scales up to 7 billion parameters, that hierarchical transformers match the downstream task performance of subword-tokenizer-based models while exhibiting significantly greater robustness to input perturbations. Additionally, during continued pretraining on an out-of-domain language, our model trains almost twice as fast, achieves superior performance on the target language, and retains more of its previously learned knowledge. Hierarchical transformers pave the way for NLP systems that are more robust, flexible, and generalizable across languages and domains.
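To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of a character-level encoder feeding a word-level causal backbone and a compact character-level decoder. All sizes, the mean-pooling step, the fixed 16-character word slots, and the non-autoregressive decoder head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the hierarchical idea from the abstract; all hyperparameters,
# module names, and pooling/decoding choices below are illustrative assumptions.
import torch
import torch.nn as nn


class CharWordEncoder(nn.Module):
    """Lightweight character-level encoder: the characters of one word -> one word embedding."""

    def __init__(self, n_chars=256, d_char=64, d_word=512, n_layers=2, n_heads=4):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)
        layer = nn.TransformerEncoderLayer(d_char, n_heads,
                                           dim_feedforward=4 * d_char, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj = nn.Linear(d_char, d_word)

    def forward(self, char_ids):                 # (batch, words, chars_per_word)
        b, w, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * w, c))
        x = self.encoder(x)                      # characters attend only within their own word
        word_vec = self.proj(x.mean(dim=1))      # mean-pool over characters (an assumption)
        return word_vec.view(b, w, -1)           # (batch, words, d_word)


class CharWordDecoder(nn.Module):
    """Compact character-level decoder: one word-level state -> character logits.

    For simplicity this head predicts all character slots of the next word
    independently; the paper instead describes a small character-level decoder."""

    def __init__(self, n_chars=256, d_word=512, chars_per_word=16):
        super().__init__()
        self.chars_per_word = chars_per_word
        self.n_chars = n_chars
        self.head = nn.Linear(d_word, chars_per_word * n_chars)

    def forward(self, word_states):              # (batch, words, d_word)
        b, w, _ = word_states.shape
        logits = self.head(word_states)
        return logits.view(b, w, self.chars_per_word, self.n_chars)


class HierarchicalLM(nn.Module):
    """Character encoder -> word-level autoregressive backbone -> character decoder."""

    def __init__(self, d_word=512, n_layers=4, n_heads=8, chars_per_word=16):
        super().__init__()
        self.encoder = CharWordEncoder(d_word=d_word)
        layer = nn.TransformerEncoderLayer(d_word, n_heads,
                                           dim_feedforward=4 * d_word, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.decoder = CharWordDecoder(d_word=d_word, chars_per_word=chars_per_word)

    def forward(self, char_ids):                  # (batch, words, chars_per_word)
        word_emb = self.encoder(char_ids)
        n = word_emb.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf"),
                                       device=word_emb.device), diagonal=1)
        word_states = self.backbone(word_emb, mask=causal)
        return self.decoder(word_states)          # character logits for each next word


if __name__ == "__main__":
    model = HierarchicalLM()
    dummy = torch.randint(0, 256, (2, 10, 16))   # 2 sequences, 10 words, 16 char slots each
    print(model(dummy).shape)                     # torch.Size([2, 10, 16, 256])
```

In this setup the only fixed inventory is the byte/character set, which is what lets the approach keep word-level sequence compression without a predefined subword vocabulary.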

Community

Very interesting architecture!

@pitneitemeier I am interested in the stand-alone capability of the introduced encoder. Would it be possible to use it for NLU and "mimic" a BERT-like experience on, e.g., token classification tasks? And will there be any code/model releases? 🤔 Many thanks!

Hey Stefan, glad you like our work.

In principle it should be possible to use the embeddings from the encoder for this, since they only contain information about the current word (the broader context is only added in the backbone), but we have not tried this. I would also be very interested to explore and compare the resulting embedding space to that of a tokenizer-based model (especially in a multilingual and robustness setting), but we have not yet had time to do so.
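A hypothetical sketch of what that could look like, assuming the per-word embeddings from the encoder (or the contextual word states from the backbone) are exposed: a single linear head over one vector per word, trained with per-word labels. The label count, tensor shapes, and random stand-in embeddings are placeholders; the paper does not report such an experiment.

```python
# Hypothetical token-classification head over per-word embeddings; shapes and
# label count are placeholders, not taken from the paper.
import torch
import torch.nn as nn

d_word, n_labels = 512, 9                       # e.g. 9 NER tags; both values are assumptions

classifier = nn.Linear(d_word, n_labels)

# Stand-in for the encoder output: one embedding per word. In the real model these
# would come from the character-level encoder (context-free) or the backbone (contextual).
word_embeddings = torch.randn(2, 10, d_word)    # (batch, words, d_word)
labels = torch.randint(0, n_labels, (2, 10))    # one label per word

logits = classifier(word_embeddings)            # (batch, words, n_labels)
loss = nn.functional.cross_entropy(logits.view(-1, n_labels), labels.view(-1))
print(loss.item())
```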

We do intend to fully release the code and model checkpoints together with a blog post, but unfortunately I cannot promise a timeline at this point.
