Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: `DatasetGenerationCastError`

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 13 new columns ({'robot_type', 'codebase_version', 'total_tasks', 'total_frames', 'features', 'total_videos', 'total_chunks', 'chunks_size', 'splits', 'video_path', 'total_episodes', 'fps', 'data_path'}) and 3 missing columns ({'length', 'tasks', 'episode_index'}). This happened while the json dataset builder was generating data using hf://datasets/AdleBens/task_index/meta/info.json (at revision 63295ff59ade6e86d0d28307cac3f99fcfe16a8a). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback (the full pyarrow schema dump is abridged below):

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast the info.json table
(robot_type: string, codebase_version: string, total_episodes: int64,
 total_frames: int64, total_tasks: int64, total_videos: int64,
 total_chunks: int64, chunks_size: int64, fps: int64,
 splits: struct<train: string>, data_path: string, video_path: string,
 features: struct<...>)
to
{'episode_index': Value(dtype='int64', id=None),
 'tasks': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
 'length': Value(dtype='int64', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1420, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1052, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
(same "13 new columns / 3 missing columns" message as above)
```
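The 13/3 column counts in the message fall out directly from comparing the two schemas. A minimal plain-Python sketch of the mismatch (not the builder's actual code; the column sets are copied from the error message, the row values are illustrative):

```python
# The json builder infers one schema from the first data file it reads and
# then expects every other file to match it. Here, the episode rows define
# the expected schema, and meta/info.json brings a different column set.
episodes_row = {"episode_index": 0, "tasks": ["None"], "length": 26}

info_columns = {
    "robot_type", "codebase_version", "total_episodes", "total_frames",
    "total_tasks", "total_videos", "total_chunks", "chunks_size",
    "fps", "splits", "data_path", "video_path", "features",
}

expected = set(episodes_row)             # schema inferred first
new_columns = info_columns - expected    # columns the schema does not know
missing_columns = expected - info_columns

print(len(new_columns), len(missing_columns))  # 13 3
```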
Need help making the dataset viewer work? Review how to configure the dataset viewer, and open a discussion for direct support.
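As the error message suggests, one fix is to declare separate configurations in the dataset card's YAML front matter so the viewer does not try to merge the two schemas into one table. A sketch using the Hub's manual-configuration syntax; the `meta/episodes.jsonl` file name and config names are assumptions, since only `meta/info.json` is named in the error:

```yaml
configs:
  - config_name: episodes
    data_files: "meta/episodes.jsonl"   # rows: episode_index, tasks, length
  - config_name: info
    data_files: "meta/info.json"        # single metadata row
```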
The preview interleaves rows from two files with incompatible schemas.

Rows with the episode schema (`episode_index`: int64, `tasks`: sequence, `length`: int64); all other columns are null for these rows:

| episode_index | tasks | length |
|---|---|---|
| 0 | ["None"] | 26 |
| 1 | ["None"] | 26 |
| 2 | ["None"] | 25 |
| 3 | ["None"] | 30 |

The single row from `meta/info.json` (the episode columns are null for it):

| column | type | value |
|---|---|---|
| robot_type | string | so-100 |
| codebase_version | string | v2.0 |
| total_episodes | int64 | 0 |
| total_frames | int64 | 0 |
| total_tasks | int64 | 0 |
| total_videos | int64 | 0 |
| total_chunks | int64 | 1 |
| chunks_size | int64 | 1,000 |
| fps | int64 | 10 |
| splits | dict | {"train": "0:0"} |
| data_path | string | data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet |
| video_path | string | videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4 |
| features | dict | see below |

The `features` value:

```json
{
  "action": {
    "dtype": "float32",
    "shape": [6],
    "names": ["motor_1", "motor_2", "motor_3", "motor_4", "motor_5", "motor_6"]
  },
  "timestamp": {
    "dtype": "float32",
    "shape": [1],
    "names": null
  },
  "episode_index": {
    "dtype": "int64",
    "shape": [1],
    "names": null
  },
  "frame_index": {
    "dtype": "int64",
    "shape": [1],
    "names": null
  },
  "task_index": {
    "dtype": "int64",
    "shape": [1],
    "names": null
  },
  "index": {
    "dtype": "int64",
    "shape": [1],
    "names": null
  },
  "observation.state": {
    "dtype": "float32",
    "shape": [6],
    "names": ["motor_1", "motor_2", "motor_3", "motor_4", "motor_5", "motor_6"]
  },
  "observation.images.main": {
    "dtype": "video",
    "shape": [240, 320, 3],
    "names": ["height", "width", "channel"],
    "info": {
      "video_fps": 10,
      "video_codec": "mp4v",
      "video_pix_fmt": "yuv420p",
      "video_is_depth_map": false,
      "has_audio": false
    }
  }
}
```
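The `data_path` and `video_path` values above are Python format-string templates. A small sketch of how a concrete file name is derived from them (the substituted `episode_chunk`, `episode_index`, and `video_key` values here are made up for illustration):

```python
# Templates copied verbatim from meta/info.json; the values plugged in
# below are illustrative, not taken from the dataset.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

print(data_path.format(episode_chunk=0, episode_index=4))
# data/chunk-000/episode_000004.parquet
print(video_path.format(episode_chunk=0,
                        video_key="observation.images.main",
                        episode_index=4))
# videos/chunk-000/observation.images.main/episode_000004.mp4
```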
task_index
This dataset was generated using a phospho dev kit.
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be used directly to train a policy via imitation learning, and it is compatible with LeRobot and RLDS.
Downloads last month: 15