---
pipeline_tag: robotics
---
# 🦾 Heterogeneous Pre-trained Transformers
 
[Lirui Wang](https://liruiw.github.io/), [Xinlei Chen](https://xinleic.xyz/), [Jialiang Zhao](https://alanz.info/), [Kaiming He](https://people.csail.mit.edu/kaiming/)

Neural Information Processing Systems (Spotlight), 2024

Paper: https://huggingface.co./papers/2409.20537

You can find more details on our [project page](https://liruiw.github.io/hpt). An alternative clean implementation of HPT in Hugging Face can also be found [here](https://github.com/liruiw/lerobot/tree/hpt_squash/lerobot/common/policies/hpt).


**TL;DR:** HPT aligns observations from different embodiments to a shared latent space and investigates the scaling behavior of policy learning. Put a scalable transformer in the middle of your policy and don't train from scratch!
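To make that idea concrete, here is a minimal PyTorch sketch of the layout: an embodiment-specific stem tokenizes observations into the shared latent space, a shared transformer trunk processes the tokens, and a small head maps the result to actions. This is an illustrative toy under assumed names and sizes (`Stem`, `HPTPolicy`, the 32-dim observation, etc. are hypothetical, not the released API); see the repositories linked above for the real implementation.

```python
# Illustrative sketch of the stem -> shared trunk -> head idea.
# All names and dimensions here are assumptions for exposition only.
import torch
import torch.nn as nn

class Stem(nn.Module):
    """Embodiment-specific tokenizer: maps raw proprioception/vision
    features into a fixed number of tokens in the shared latent space."""
    def __init__(self, input_dim: int, latent_dim: int, num_tokens: int = 16):
        super().__init__()
        self.proj = nn.Linear(input_dim, latent_dim * num_tokens)
        self.num_tokens = num_tokens
        self.latent_dim = latent_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, input_dim) -> (batch, num_tokens, latent_dim)
        return self.proj(x).view(-1, self.num_tokens, self.latent_dim)

class HPTPolicy(nn.Module):
    """A shared transformer trunk sits between per-embodiment stems
    and heads; only the stem/head need to change per embodiment."""
    def __init__(self, latent_dim: int = 256, depth: int = 4, action_dim: int = 7):
        super().__init__()
        self.stem = Stem(input_dim=32, latent_dim=latent_dim)  # per-embodiment
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=depth)  # shared, pre-trained
        self.head = nn.Linear(latent_dim, action_dim)  # per-embodiment

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        tokens = self.stem(obs)                   # tokenize into latent space
        latent = self.trunk(tokens).mean(dim=1)   # pool trunk outputs
        return self.head(latent)                  # predict actions

policy = HPTPolicy()
actions = policy(torch.randn(2, 32))  # -> (2, 7) batch of actions
```

The point of the split is that the trunk weights can be pre-trained once across heterogeneous data and reused, so adapting to a new robot means training only a lightweight stem and head rather than starting from scratch.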


If you find HPT useful in your research, please consider citing:
```
@inproceedings{wang2024hpt,
  author    = {Wang, Lirui and Chen, Xinlei and Zhao, Jialiang and He, Kaiming},
  title     = {Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2024}
}
```


## Contact

If you have any questions, feel free to contact me by email ([email protected]). Enjoy!