---
pipeline_tag: robotics
---

# 🦾 Heterogeneous Pre-trained Transformers

Lirui Wang, Xinlei Chen, Jialiang Zhao, Kaiming He

Neural Information Processing Systems (Spotlight), 2024

Paper: https://huggingface.co./papers/2409.20537

You can find more details on our project page. An alternative clean implementation of HPT on Hugging Face is also available.

TL;DR: HPT aligns different embodiments to a shared latent space and investigates the scaling behaviors in policy learning. Put a scalable transformer trunk in the middle of your policy and don't train from scratch!
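The composition described above can be sketched schematically: embodiment-specific stems project heterogeneous observations into a shared latent space, a single shared trunk (a transformer in HPT) processes the latents, and embodiment-specific heads decode actions. The sketch below uses random linear maps in place of trained modules; the embodiment names, dimensions, and helper functions are hypothetical, chosen only to illustrate the shape alignment.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 128  # shared latent space size (hypothetical value)


def make_linear(d_in, d_out):
    """Return a random linear map standing in for a trained module."""
    W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
    return lambda x: x @ W


# Embodiment-specific "stems": map each robot's observation space
# (different proprioception/vision dims) into the shared latent space.
stems = {
    "arm_7dof": make_linear(7, LATENT_DIM),    # hypothetical embodiment
    "quadruped": make_linear(12, LATENT_DIM),  # hypothetical embodiment
}

# Shared "trunk": one module reused across all embodiments.
# HPT uses a scalable transformer here; a linear map suffices to show the wiring.
trunk = make_linear(LATENT_DIM, LATENT_DIM)

# Embodiment-specific "heads": decode shared latents back into each robot's actions.
heads = {
    "arm_7dof": make_linear(LATENT_DIM, 7),
    "quadruped": make_linear(LATENT_DIM, 12),
}


def policy(embodiment, obs):
    """Stem -> shared trunk -> head, selected per embodiment."""
    return heads[embodiment](trunk(stems[embodiment](obs)))


# Different embodiments share the trunk but keep their own I/O shapes.
a1 = policy("arm_7dof", rng.standard_normal(7))
a2 = policy("quadruped", rng.standard_normal(12))
print(a1.shape, a2.shape)  # (7,) (12,)
```

The point of the decomposition is that only the middle module is shared and scaled during pre-training, while the lightweight stems and heads absorb the per-embodiment differences.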

If you find HPT useful in your research, please consider citing:

```bibtex
@inproceedings{wang2024hpt,
  author    = {Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He},
  title     = {Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2024}
}
```

## Contact

If you have any questions, feel free to contact me by email ([email protected]). Enjoy!