---
license: apache-2.0
pretty_name: 1X World Model Challenge Dataset
size_categories:
- 10M<n<100M
viewer: false
---
Dataset for the [1X World Model Challenge](https://github.com/1x-technologies/1xgpt).

Download with:
```
huggingface-cli download 1x-technologies/worldmodel --repo-type dataset --local-dir data
```
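
If you prefer to download from Python instead of the CLI, the equivalent call with `huggingface_hub` is a small sketch (same repo id and `data` target directory as above):

```python
# Programmatic equivalent of the CLI download command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="1x-technologies/worldmodel",
    repo_type="dataset",
    local_dir="data",
)
```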

Changes from v1.1:
- New train and val dataset of 100 hours, replacing the v1.1 datasets
- Blur applied to faces
- Shared a new raw video dataset under CC-BY-NC-SA 4.0: https://huggingface.co./datasets/1x-technologies/worldmodel_raw_data

Contents of train/val_v2.0:

The training dataset is sharded into 100 independent shards. The files are defined as follows:

- **video_{shard}.bin** - 8x8x8 image patches at 30 Hz, with a 17-frame temporal window, encoded using the [NVIDIA Cosmos Tokenizer](https://github.com/NVIDIA/Cosmos-Tokenizer) "Cosmos-Tokenizer-DV8x8x8".
- **segment_idx_{shard}.bin** - Maps each frame `i` to its corresponding segment index. You may want to use this to separate non-contiguous frames from different videos (transitions).
- **states_{shard}.bin** - State arrays (defined below in `Index-to-State Mapping`) stored in `np.float32` format. For frame `i`, the corresponding state is `states_{shard}[i]`. A loading sketch follows the mapping below.
- **metadata** - The `metadata.json` file provides high-level information about the entire dataset, while the `metadata_{shard}.json` files contain specific details for each shard.

  #### Index-to-State Mapping (NEW)
  ```
   {
        0: HIP_YAW
        1: HIP_ROLL
        2: HIP_PITCH
        3: KNEE_PITCH
        4: ANKLE_ROLL
        5: ANKLE_PITCH
        6: LEFT_SHOULDER_PITCH
        7: LEFT_SHOULDER_ROLL
        8: LEFT_SHOULDER_YAW
        9: LEFT_ELBOW_PITCH
        10: LEFT_ELBOW_YAW
        11: LEFT_WRIST_PITCH
        12: LEFT_WRIST_ROLL
        13: RIGHT_SHOULDER_PITCH
        14: RIGHT_SHOULDER_ROLL
        15: RIGHT_SHOULDER_YAW
        16: RIGHT_ELBOW_PITCH
        17: RIGHT_ELBOW_YAW
        18: RIGHT_WRIST_PITCH
        19: RIGHT_WRIST_ROLL
        20: NECK_PITCH
        21: Left hand closure state (0 = open, 1 = closed)
        22: Right hand closure state (0 = open, 1 = closed)
        23: Linear Velocity
        24: Angular Velocity
    }
  ```

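Below is a minimal sketch of loading one v2.0 shard with NumPy. The state dimension of 25 (indices 0-24 in the mapping above), the `int32` dtype for `segment_idx_{shard}.bin`, and the `data/train_v2.0/` path are assumptions not specified by this card; adjust them to your local layout.

```python
# Sketch: load the per-frame states and segment indices for one shard.
# Assumptions (not specified in this card): STATE_DIM = 25 (indices 0-24 in the
# mapping above), segment_idx is int32, and files live under data/train_v2.0/.
import numpy as np

SHARD = 0
STATE_DIM = 25  # assumed from the Index-to-State Mapping above

states = np.fromfile(f"data/train_v2.0/states_{SHARD}.bin", dtype=np.float32)
states = states.reshape(-1, STATE_DIM)  # (N, STATE_DIM); row i is the state for frame i

segment_idx = np.fromfile(f"data/train_v2.0/segment_idx_{SHARD}.bin", dtype=np.int32)

# Group frame indices by segment so that temporal windows never cross a
# transition between two different source videos.
boundaries = np.where(np.diff(segment_idx) != 0)[0] + 1
segments = np.split(np.arange(len(segment_idx)), boundaries)

print(f"shard {SHARD}: {states.shape[0]} frames across {len(segments)} segments")
```
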
Previous version: v1.1

- **magvit2.ckpt** - weights for the [MAGVIT2](https://github.com/TencentARC/Open-MAGVIT2) image tokenizer we used. We provide both the encoder (tokenizer) and decoder (de-tokenizer) weights.

Contents of train/val_v1.1:
- **video.bin** - 16x16 image patches at 30 Hz, each patch vector-quantized into 2^18 possible integer values. These can be decoded into 256x256 RGB images using the provided `magvit2.ckpt` weights.
- **segment_ids.bin** - for each frame `i`, `segment_ids[i]` uniquely identifies the segment that frame `i` came from. You may want to use this to separate non-contiguous frames from different videos (transitions).
- **actions/** - a folder of action arrays stored in `np.float32` format. For frame `i`, the corresponding action is given by `joint_pos[i]`, `driving_command[i]`, `neck_desired[i]`, and so on. The shapes and definitions of the arrays are as follows (N is the number of frames; see the loading sketch after the joint mapping below):
  - **joint_pos** `(N, 21)`: Joint positions. See `Index-to-Joint Mapping` below.  
  - **driving_command** `(N, 2)`: Linear and angular velocities.
  - **neck_desired** `(N, 1)`: Desired neck pitch.
  - **l_hand_closure** `(N, 1)`: Left hand closure state (0 = open, 1 = closed).
  - **r_hand_closure** `(N, 1)`: Right hand closure state (0 = open, 1 = closed).
  #### Index-to-Joint Mapping (OLD)
  ```
   {
        0: HIP_YAW
        1: HIP_ROLL
        2: HIP_PITCH
        3: KNEE_PITCH
        4: ANKLE_ROLL
        5: ANKLE_PITCH
        6: LEFT_SHOULDER_PITCH
        7: LEFT_SHOULDER_ROLL
        8: LEFT_SHOULDER_YAW
        9: LEFT_ELBOW_PITCH
        10: LEFT_ELBOW_YAW
        11: LEFT_WRIST_PITCH
        12: LEFT_WRIST_ROLL
        13: RIGHT_SHOULDER_PITCH
        14: RIGHT_SHOULDER_ROLL
        15: RIGHT_SHOULDER_YAW
        16: RIGHT_ELBOW_PITCH
        17: RIGHT_ELBOW_YAW
        18: RIGHT_WRIST_PITCH
        19: RIGHT_WRIST_ROLL
        20: NECK_PITCH
    }
  
  ```
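
For v1.1, a similar NumPy sketch for the action arrays and segment ids is below. The flat `.bin` file layout under `data/train_v1.1/` and the `int32` dtype of `segment_ids.bin` are assumptions; only the `np.float32` dtype and the `(N, dim)` shapes come from the descriptions above.

```python
# Sketch: load the v1.1 action arrays and segment ids.
# Assumptions (not specified in this card): the arrays are raw .bin files under
# data/train_v1.1/, and segment_ids is stored as int32.
import numpy as np

root = "data/train_v1.1"  # assumed local path after the download step above

def load_actions(name, dim):
    """Read a float32 action array and reshape it to (N, dim)."""
    return np.fromfile(f"{root}/actions/{name}.bin", dtype=np.float32).reshape(-1, dim)

joint_pos       = load_actions("joint_pos", 21)       # see Index-to-Joint Mapping above
driving_command = load_actions("driving_command", 2)  # linear and angular velocity
neck_desired    = load_actions("neck_desired", 1)     # desired neck pitch
l_hand_closure  = load_actions("l_hand_closure", 1)   # 0 = open, 1 = closed
r_hand_closure  = load_actions("r_hand_closure", 1)   # 0 = open, 1 = closed

segment_ids = np.fromfile(f"{root}/segment_ids.bin", dtype=np.int32)

# For frame i, the corresponding action is the i-th row of each array.
i = 0
print(segment_ids[i], joint_pos[i], driving_command[i], l_hand_closure[i])
```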

  

We also provide a small `val_v1.1` data split containing held-out examples not seen in the training set, in case you want to evaluate your model on unseen frames.