Training datasets for the robosuite environment are provided here as "robosuite_demo_1" through "robosuite_demo_5". Each dataset contains 10000 trajectories (50000 in total); note that most of our experiments use only 5000 trajectories. These files contain only the raw (non-image) environment observations, so before video prediction training they must be rendered into images with the following command (this renders the data at 256x256 resolution, as in the paper, but any resolution can be specified):

```
python robomimic/robomimic/scripts/dataset_states_to_obs.py --dataset /PATH/HERE/demo.hdf5 --output_name rendered_256.hdf5 --done_mode 2 --camera_names agentview_shift_2 --camera_depths 1 --camera_segmentations instance --camera_height 256 --camera_width 256 --renderer igibson
```

Robodesk datasets can be generated with our scripts as described in the main README, or downloaded via the provided link.
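After rendering, it can be useful to sanity-check the output hdf5 before training. The sketch below is a minimal, hedged example assuming the robomimic-style layout in which each trajectory lives under a `data/demo_N` group; the group and dataset names are illustrative, not prescribed by this repository. It builds a tiny stand-in file so it runs end to end without the real data.

```python
# Minimal sketch of inspecting a demo hdf5 file with h5py.
# ASSUMPTION: trajectories are stored as groups data/demo_0, data/demo_1, ...
# (robomimic-style layout); dataset names below are illustrative.
import os
import tempfile

import h5py
import numpy as np

def count_trajectories(path):
    """Count the demo groups stored under the top-level 'data' group."""
    with h5py.File(path, "r") as f:
        return len(f["data"].keys())

# Build a tiny stand-in file so the sketch is runnable end to end.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.hdf5")
with h5py.File(path, "w") as f:
    data = f.create_group("data")
    for i in range(3):
        demo = data.create_group(f"demo_{i}")
        # Placeholder state array; real files hold environment observations.
        demo.create_dataset("states", data=np.zeros((10, 7), dtype=np.float32))

print(count_trajectories(path))
```

For the real datasets, pointing `count_trajectories` at a rendered file is a quick way to confirm that all trajectories survived the conversion.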