---
license: cdla-permissive-2.0
---

# MoCapAct Dataset

Control of simulated humanoid characters is a challenging benchmark for sequential decision-making methods, as it assesses a policy’s ability to drive an inherently unstable, discontinuous, and high-dimensional physical system. Motion capture (MoCap) data can be very helpful in learning sophisticated locomotion policies by teaching a humanoid agent low-level skills (e.g., standing, walking, and running) that can then be used to generate high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, because this data offers only kinematic information. Finding physical control inputs that realize the MoCap-demonstrated motions has required compute-intensive methods such as reinforcement learning, which has effectively served as a barrier to entry for this exciting research direction.

In an effort to broaden participation and facilitate evaluation of ideas in humanoid locomotion research, we are releasing MoCapAct (Motion Capture with Actions): a library of high-quality pre-trained agents that can track over three hours of MoCap data for a simulated humanoid in the `dm_control` physics-based environment, together with rollouts from these experts containing proprioceptive observations and actions. MoCapAct allows researchers to sidestep the computationally intensive task of training low-level control policies from MoCap data and instead use MoCapAct's expert agents and demonstrations for learning advanced locomotion behaviors. Researchers can also improve on our low-level policies by using them and their demonstration data as a starting point.

In our work, we use MoCapAct to train a single hierarchical policy capable of tracking the entire MoCap dataset within `dm_control`. We then re-use the learned low-level component to efficiently learn other high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can perform natural motion completion given a motion prompt.

We encourage the reader to visit our [project website](https://microsoft.github.io/MoCapAct/) to see videos of our results as well as get links to our paper and code.

## File Structure

The file structure of the dataset is:

```
├── all
│   ├── large
│   │   ├── large_1.tar.gz
│   │   ├── large_2.tar.gz
│   │   ...
│   │   └── large_43.tar.gz
│   └── small
│       ├── small_1.tar.gz
│       ├── small_2.tar.gz
│       └── small_3.tar.gz
│
├── sample
│   ├── large.tar.gz
│   └── small.tar.gz
│
└── videos
    ├── full_clip_videos.tar.gz
    └── snippet_videos.tar.gz
```
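
Individual files can be fetched programmatically with the `huggingface_hub` client. The snippet below is a minimal sketch under the assumption that the files are accessed through a Hugging Face dataset repository; the `REPO_ID` value is a placeholder to replace with this dataset's actual identifier.

```python
from huggingface_hub import hf_hub_download

# Placeholder repository identifier: substitute this dataset's actual repo id.
REPO_ID = "<namespace>/<dataset-name>"

# Download one shard of the small rollout dataset (path matches the tree above).
local_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="all/small/small_1.tar.gz",
    repo_type="dataset",
)
print(local_path)
```
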
## MoCapAct Dataset Tarball Files

The dataset tarball files have the following structure (a short extraction sketch follows the list):

- `all/small/small_*.tar.gz`: Contains HDF5 files with 20 rollouts per snippet. Due to file size limitations, the rollouts are split across multiple tarball files.

- `all/large/large_*.tar.gz`: Contains HDF5 files with 200 rollouts per snippet. Due to file size limitations, the rollouts are split across multiple tarball files.

- `sample/small.tar.gz`: Contains example HDF5 files with 20 rollouts per snippet.

- `sample/large.tar.gz`: Contains example HDF5 files with 200 rollouts per snippet.
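
Each archive is a standard gzip-compressed tarball, so it can be unpacked with `tar -xzf` or with Python's built-in `tarfile` module. The snippet below is a minimal sketch; the archive path and output directory are placeholders to adapt to wherever the files were downloaded.

```python
import tarfile
from pathlib import Path

# Placeholder paths: point these at a downloaded shard and a local output directory.
archive_path = Path("all/small/small_1.tar.gz")
output_dir = Path("data/small")
output_dir.mkdir(parents=True, exist_ok=True)

# Unpack the shard, then list the HDF5 files it contains.
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(output_dir)

for hdf5_file in sorted(output_dir.rglob("*.hdf5")):
    print(hdf5_file)
```
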
The HDF5 structure is detailed in Appendix A.2 of the paper as well as at https://github.com/microsoft/MoCapAct#description.

An example of loading and inspecting an HDF5 file in Python is:

```python
import h5py

# Open the HDF5 file for the CMU_083_33 clip in read-only mode.
dset = h5py.File("/path/to/small/CMU_083_33.hdf5", "r")

# Each snippet group (here CMU_083_33-0-194) contains numbered rollout episodes,
# and "actions" holds the expert's actions for that episode.
print("Expert actions from first rollout episode:")
print(dset["CMU_083_33-0-194/0/actions"][...])
```
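
To explore a file's full layout without consulting the appendix, the HDF5 hierarchy can also be walked directly with `h5py`. This is a generic sketch (the file path is again a placeholder): it prints every group and dataset name, along with each dataset's shape and dtype.

```python
import h5py

def describe(name, obj):
    # Datasets report their shape and dtype; groups just report their path.
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
    else:
        print(name)

# Placeholder path: any extracted MoCapAct HDF5 file works here.
with h5py.File("/path/to/small/CMU_083_33.hdf5", "r") as dset:
    dset.visititems(describe)
```
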
## MoCap Videos

There are two tarball files containing videos of the MoCap clips in the dataset:

- `full_clip_videos.tar.gz` contains videos of the full MoCap clips.

- `snippet_videos.tar.gz` contains videos of the snippets that were used to train the experts.

Note that these videos are playbacks of the MoCap clips themselves, not rollouts of the corresponding experts.