arxiv:2501.09747

FAST: Efficient Action Tokenization for Vision-Language-Action Models

Published on Jan 16
· Submitted by akhaliq on Jan 17

Abstract

Autoregressive sequence models, such as Transformer-based vision-language-action (VLA) policies, can be tremendously effective for capturing complex and generalizable robotic behaviors. However, such models require us to choose a tokenization of our continuous action signals, which determines how the discrete symbols predicted by the model map to continuous robot actions. We find that current approaches for robot action tokenization, based on simple per-dimension, per-timestep binning schemes, typically perform poorly when learning dexterous skills from high-frequency robot data. To address this challenge, we propose a new compression-based tokenization scheme for robot actions, based on the discrete cosine transform. Our tokenization approach, Frequency-space Action Sequence Tokenization (FAST), enables us to train autoregressive VLAs for highly dexterous and high-frequency tasks where standard discretization methods fail completely. Based on FAST, we release FAST+, a universal robot action tokenizer, trained on 1M real robot action trajectories. It can be used as a black-box tokenizer for a wide range of robot action sequences, with diverse action spaces and control frequencies. Finally, we show that, when combined with the pi0 VLA, our method can scale to training on 10k hours of robot data and match the performance of diffusion VLAs, while reducing training time by up to 5x.
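
The core idea is compact enough to sketch. Below is a minimal, hypothetical Python illustration of DCT-based action-sequence tokenization, assuming scipy and an illustrative quantization scale; it is not the released FAST tokenizer, which additionally compresses the quantized coefficients with byte-pair encoding (BPE).

```python
# Hypothetical sketch of the FAST idea (not the authors' code): compress an
# action chunk with a per-dimension DCT along time, quantize the coefficients,
# and flatten them into a discrete token sequence. The scale and the simple
# round-to-integer quantizer are illustrative assumptions.
import numpy as np
from scipy.fft import dct, idct

def tokenize(actions: np.ndarray, scale: float = 50.0) -> np.ndarray:
    """actions: (T, D) chunk of continuous actions, normalized to roughly [-1, 1]."""
    coeffs = dct(actions, axis=0, norm="ortho")    # (T, D) frequency-domain coefficients
    q = np.round(coeffs * scale).astype(np.int64)  # coarse integer quantization
    return q.flatten()                             # 1D token sequence; the paper
                                                   # applies BPE on top of this

def detokenize(tokens: np.ndarray, T: int, D: int, scale: float = 50.0) -> np.ndarray:
    coeffs = tokens.reshape(T, D).astype(np.float64) / scale
    return idct(coeffs, axis=0, norm="ortho")      # reconstructed action chunk

chunk = np.cumsum(0.05 * np.random.randn(50, 7), axis=0)  # smooth 50-step, 7-DoF chunk
recon = detokenize(tokenize(chunk), 50, 7)
print(np.abs(chunk - recon).max())  # small for smooth signals; shrinks as scale grows
```

Because smooth trajectories concentrate their energy in a few low-frequency DCT coefficients, most quantized values are zero and compress well, which is what keeps the resulting token sequences short.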

Community


In Figure 3, the input x and the label y used for training the model are unclear to me.
The paper says "the network must predict the black dashed curve given the four circles", which suggests:

  • Input: x is a set of four points [x1, y1, x2, y2, x3, y3, x4, y4]
  • Label: y consists of the coefficients of a cubic function (a, b, c, d for f(x) = ax^3 + bx^2 + cx + d).

My questions are:

  • Is my understanding correct?
  • Are the input x values "consecutive" samples from the cubic function?
  • Does the model have four neurons just before the last layer, corresponding to the four coefficients of the cubic function? (And does the last layer compute f(x)?)
  • Is the loss computed between the f(x) values obtained from the predicted coefficients and those from the actual coefficients?

The x-values of the "inputs" are just linspace(0, 1, 4) = [0, 0.33, 0.66, 1], and the inputs themselves are the cubic evaluated at those points. The "labels" are the values of the cubic over a dense set linspace(0, 1, N), where N is swept from 25 to 800; the model predicts curve values, not coefficients. The network is a small autoregressive transformer doing discrete sequence prediction (analogous to the more complex robot-policy setting), so the loss is just the NLL of the predicted tokens.
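
To make that concrete, here is a hedged sketch of the toy data pipeline in Python; the bin count, binning range, and model details are illustrative assumptions rather than the paper's exact values.

```python
# Hedged sketch of the Fig. 3 toy setup as described above: sample a random
# cubic, evaluate it at 4 input points and N dense target points, and
# discretize the targets into bins so a small autoregressive transformer can
# be trained with an NLL loss over tokens. NUM_BINS is an assumed value.
import numpy as np

N, NUM_BINS = 100, 256                    # N is swept from 25 to 800 in the paper
a, b, c, d = np.random.uniform(-1, 1, 4)  # random cubic f(x) = ax^3 + bx^2 + cx + d
f = lambda x: a * x**3 + b * x**2 + c * x + d

x_in = np.linspace(0, 1, 4)    # the four circles: conditioning points
x_out = np.linspace(0, 1, N)   # the dashed curve: dense prediction targets

inputs = f(x_in)               # continuous conditioning values
targets = f(x_out)             # values to predict (not the coefficients)

# Per-timestep binning, as in the naive tokenization the figure examines:
bins = np.linspace(targets.min(), targets.max(), NUM_BINS + 1)
target_tokens = np.clip(np.digitize(targets, bins) - 1, 0, NUM_BINS - 1)
# A small transformer then predicts target_tokens autoregressively,
# conditioned on `inputs`, and the training loss is the tokens' NLL.
```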


Models citing this paper 1


Collections including this paper 11