---
dataset_info:
  features:
  - name: observation.state
    sequence: float32
    length: 6
  - name: action
    sequence: float32
    length: 6
  - name: observation.images.webcam
    dtype: video_frame
  - name: episode_index
    dtype: int64
  - name: frame_index
    dtype: int64
  - name: timestamp
    dtype: float32
  - name: next.done
    dtype: bool
  - name: index
    dtype: int64
  splits:
  - name: train
    num_bytes: 4736267
    num_examples: 35051
  download_size: 1729226
  dataset_size: 4736267
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|

Move a blue cube and a pink cylinder between target zones.

Recording protocol:

- Always start and finish with the blue cube and pink cylinder on the right, with the blue cube closer to the arm. This lets us generate demonstrations without intervening to reset the environment.
- Always start and finish with the arm in its rest position.
- Always move the blue cube across first, which keeps the task Markovian.
- Perform only one cycle per episode.
- The first 25 episodes use only natural lighting; the last 25 add warm LED room lighting.
- The orientation of the gripper (**thumb-left** as seen in the video) is consistent throughout.

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65d4f32d9936544a08817465/wGIR3sWBeaTVQaUXCU_fk.gif)
|
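Each row of the train split is a single frame carrying the fields listed in the metadata above (`observation.state`, `action`, `episode_index`, `frame_index`, `timestamp`, `next.done`, `index`). As a minimal sketch of how a consumer might group these per-frame records back into episodes — the values below are illustrative placeholders, not real recordings:

```python
from collections import defaultdict

# Toy frames mimicking the card's schema: 2 episodes of 3 frames each.
# All values are synthetic placeholders, not data from this dataset.
frames = [
    {
        "observation.state": [0.0] * 6,   # 6-DoF joint state
        "action": [0.0] * 6,              # 6-DoF action
        "episode_index": ep,
        "frame_index": fi,
        "timestamp": fi / 30.0,           # assuming ~30 fps for illustration
        "next.done": fi == 2,             # last frame of each toy episode
        "index": ep * 3 + fi,             # global frame index
    }
    for ep in range(2)
    for fi in range(3)
]

# Group per-frame rows into episodes using episode_index.
episodes = defaultdict(list)
for frame in frames:
    episodes[frame["episode_index"]].append(frame)

for ep, ep_frames in sorted(episodes.items()):
    assert ep_frames[-1]["next.done"]                    # episode ends with done
    assert len(ep_frames[0]["observation.state"]) == 6   # state vector length 6

print(len(episodes))  # → 2
```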