| column | dtype | lengths / values |
|---|---|---|
| url | stringlengths | 61 to 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 to 75 |
| comments_url | stringlengths | 70 to 70 |
| events_url | stringlengths | 68 to 68 |
| html_url | stringlengths | 49 to 51 |
| id | int64 | 1.49B to 2.43B |
| node_id | stringlengths | 18 to 19 |
| number | int64 | 5.35k to 7.07k |
| title | stringlengths | 1 to 290 |
| user | dict | |
| labels | listlengths | 0 to 3 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 to 2 |
| milestone | dict | |
| comments | sequencelengths | 0 to 30 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 4 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 1 to 19.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70 to 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/7068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7068/comments
https://api.github.com/repos/huggingface/datasets/issues/7068/events
https://github.com/huggingface/datasets/pull/7068
2,426,657,434
PR_kwDODunzps52SwXS
7,068
Fix prepare_single_hop_path_and_storage_options
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-07-24T05:52:34
2024-07-24T08:54:32
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7068", "html_url": "https://github.com/huggingface/datasets/pull/7068", "diff_url": "https://github.com/huggingface/datasets/pull/7068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7068.patch", "merged_at": null }
Fix `_prepare_single_hop_path_and_storage_options`: - Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs - Do not overwrite passed `storage_options` nested values: - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, it was overwritten to ```{"https": {"client_kwargs": {"trust_env": True}}}``` - Now, the result combines both: ```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7068/timeline
null
null
true
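The merge behaviour described in the PR body above can be illustrated with a small sketch. This is not the actual `datasets` implementation; `merge_storage_options` is a hypothetical helper that shows how user-passed `storage_options` could be combined with library defaults instead of overwriting them.

```python
def merge_storage_options(defaults: dict, user: dict) -> dict:
    """Recursively combine default storage options with user-passed ones.

    User values win on conflicts, but nested dicts are merged rather than
    replaced, so defaults such as ``trust_env`` survive.
    """
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_storage_options(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults = {"https": {"client_kwargs": {"trust_env": True}}}
passed = {"https": {"client_kwargs": {"raise_for_status": True}}}
print(merge_storage_options(defaults, passed))
# {'https': {'client_kwargs': {'trust_env': True, 'raise_for_status': True}}}
```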
https://api.github.com/repos/huggingface/datasets/issues/7067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7067/comments
https://api.github.com/repos/huggingface/datasets/issues/7067/events
https://github.com/huggingface/datasets/issues/7067
2,425,460,168
I_kwDODunzps6QkZXI
7,067
Convert_to_parquet fails for datasets with multiple configs
{ "login": "HuangZhen02", "id": 97585031, "node_id": "U_kgDOBdEHhw", "avatar_url": "https://avatars.githubusercontent.com/u/97585031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HuangZhen02", "html_url": "https://github.com/HuangZhen02", "followers_url": "https://api.github.com/users/HuangZhen02/followers", "following_url": "https://api.github.com/users/HuangZhen02/following{/other_user}", "gists_url": "https://api.github.com/users/HuangZhen02/gists{/gist_id}", "starred_url": "https://api.github.com/users/HuangZhen02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuangZhen02/subscriptions", "organizations_url": "https://api.github.com/users/HuangZhen02/orgs", "repos_url": "https://api.github.com/users/HuangZhen02/repos", "events_url": "https://api.github.com/users/HuangZhen02/events{/privacy}", "received_events_url": "https://api.github.com/users/HuangZhen02/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n" ]
2024-07-23T15:09:33
2024-07-23T15:10:44
null
NONE
null
null
null
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error: ``` Traceback (most recent call last): File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run dataset.push_to_hub( File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f) Bad request: Invalid reference for a branch: refs/pr/1 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7067/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7066/comments
https://api.github.com/repos/huggingface/datasets/issues/7066/events
https://github.com/huggingface/datasets/issues/7066
2,425,125,160
I_kwDODunzps6QjHko
7,066
One subset per file in repo ?
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-23T12:43:59
2024-07-23T12:43:59
null
MEMBER
null
null
null
Right now we consider all the files of a dataset to be the same data, e.g. ``` single_subset_dataset/ ├── train0.jsonl ├── train1.jsonl └── train2.jsonl ``` but in cases like this, each file is actually a different subset of the dataset and should be loaded separately ``` many_subsets_dataset/ ├── animals.jsonl ├── trees.jsonl └── metadata.jsonl ``` It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7066/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7066/timeline
null
null
false
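A rough sketch of the digit-based grouping heuristic proposed in the issue above, assuming a plain regex on file names; it is only illustrative and not a heuristic shipped by `datasets`.

```python
import re
from collections import defaultdict


def group_into_subsets(filenames):
    """Group file names that are identical once digits are stripped."""
    groups = defaultdict(list)
    for name in filenames:
        key = re.sub(r"\d+", "", name)  # 'train0.jsonl' and 'train1.jsonl' share a key
        groups[key].append(name)
    return dict(groups)


print(group_into_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# {'train.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl']}  -> one subset
print(group_into_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
# three distinct keys -> three subsets
```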
https://api.github.com/repos/huggingface/datasets/issues/7065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7065/comments
https://api.github.com/repos/huggingface/datasets/issues/7065/events
https://github.com/huggingface/datasets/issues/7065
2,424,734,953
I_kwDODunzps6QhoTp
7,065
Cannot get item after loading from disk and then converting to iterable.
{ "login": "happyTonakai", "id": 21305646, "node_id": "MDQ6VXNlcjIxMzA1NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/21305646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/happyTonakai", "html_url": "https://github.com/happyTonakai", "followers_url": "https://api.github.com/users/happyTonakai/followers", "following_url": "https://api.github.com/users/happyTonakai/following{/other_user}", "gists_url": "https://api.github.com/users/happyTonakai/gists{/gist_id}", "starred_url": "https://api.github.com/users/happyTonakai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/happyTonakai/subscriptions", "organizations_url": "https://api.github.com/users/happyTonakai/orgs", "repos_url": "https://api.github.com/users/happyTonakai/repos", "events_url": "https://api.github.com/users/happyTonakai/events{/privacy}", "received_events_url": "https://api.github.com/users/happyTonakai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-23T09:37:56
2024-07-23T09:37:56
null
NONE
null
null
null
### Describe the bug The dataset generated from local file works fine. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` But after saving it to disk and then loading it from disk, I cannot get data as expected. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ds.save_to_disk("./train") ds = datasets.load_from_disk("./train") ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` After a long time waiting, an error occurs: ``` Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s] Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data data = self._data_queue.get(timeout=timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get if not self._poll(timeout): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll return self._poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll r = wait([self], timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait ready = selector.select(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module> for batch in dataloader: File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__ data = self._next_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data idx, data = self._get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data success, data = self._try_get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly ``` It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable? ### Steps to reproduce the bug 1. Create a `Dataset` from local files with `from_dict` 2. Save it to disk with `save_to_disk` 3. Load it from disk with `load_from_disk` 4. Convert to iterable with `to_iterable_dataset` 5. Loop the dataset ### Expected behavior Get items faster than the original dataset generated from dict. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.23.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7065/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7064/comments
https://api.github.com/repos/huggingface/datasets/issues/7064/events
https://github.com/huggingface/datasets/pull/7064
2,424,613,104
PR_kwDODunzps52Lz2-
7,064
Add `batch` method to `Dataset` class
{ "login": "lappemic", "id": 61876623, "node_id": "MDQ6VXNlcjYxODc2NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lappemic", "html_url": "https://github.com/lappemic", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "organizations_url": "https://api.github.com/users/lappemic/orgs", "repos_url": "https://api.github.com/users/lappemic/repos", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "received_events_url": "https://api.github.com/users/lappemic/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Looks good to me ! :)\r\n\r\nyou might want to add the `map` num_proc argument as well, for people who want to make it run faster", "Thanks for the feedback @lhoestq! The last commits include:\r\n- Adding the `num_proc` parameter to `batch`\r\n- Adding tests similar to the one done for `IterableDataset.batch()`\r\n- Updated the documentation -> I think they are actually misplaced in the `Stream` page. But could not find a better place atm. Where would you put this documentation?\r\n\r\nWDYT?", "You can put the documentation in process.mdx :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7064). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-07-23T08:40:43
2024-07-24T13:24:53
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7064", "html_url": "https://github.com/huggingface/datasets/pull/7064", "diff_url": "https://github.com/huggingface/datasets/pull/7064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7064.patch", "merged_at": null }
This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation also uses the existing `map` method for efficient batching of examples. Key changes: - Add `batch` method to `Dataset` class in `arrow_dataset.py` - Utilize `map` method for batching Closes #7063 Once the approach is approved, I will create the tests and update the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7064/timeline
null
null
true
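The approach described in the PR body above (building `Dataset.batch` on top of the existing `map`) can be sketched as follows. This is a simplified stand-alone illustration with an assumed signature, not the code merged into `arrow_dataset.py`.

```python
from datasets import Dataset


def batch_with_map(dataset: Dataset, batch_size: int, drop_last_batch: bool = False, num_proc=None) -> Dataset:
    """Group consecutive examples into lists of length ``batch_size`` using ``Dataset.map``."""

    def make_batches(examples):
        # each column becomes a list of lists, one inner list per batch
        return {
            col: [values[i : i + batch_size] for i in range(0, len(values), batch_size)]
            for col, values in examples.items()
        }

    return dataset.map(
        make_batches,
        batched=True,
        batch_size=batch_size,
        drop_last_batch=drop_last_batch,
        num_proc=num_proc,
    )


ds = Dataset.from_dict({"x": list(range(10))})
batched = batch_with_map(ds, batch_size=4, drop_last_batch=True)
print(batched[0])  # {'x': [0, 1, 2, 3]}
```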
https://api.github.com/repos/huggingface/datasets/issues/7063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7063/comments
https://api.github.com/repos/huggingface/datasets/issues/7063/events
https://github.com/huggingface/datasets/issues/7063
2,424,488,648
I_kwDODunzps6QgsLI
7,063
Add `batch` method to `Dataset`
{ "login": "lappemic", "id": 61876623, "node_id": "MDQ6VXNlcjYxODc2NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lappemic", "html_url": "https://github.com/lappemic", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "organizations_url": "https://api.github.com/users/lappemic/orgs", "repos_url": "https://api.github.com/users/lappemic/repos", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "received_events_url": "https://api.github.com/users/lappemic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-07-23T07:36:59
2024-07-23T07:36:59
null
CONTRIBUTOR
null
null
null
### Feature request Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054. ### Motivation A batched iteration speeds up data loading significantly (see e.g. #6279) ### Your contribution I plan to open a PR to implement this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7063/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7062/comments
https://api.github.com/repos/huggingface/datasets/issues/7062/events
https://github.com/huggingface/datasets/pull/7062
2,424,467,484
PR_kwDODunzps52LUPR
7,062
Avoid calling http_head for non-HTTP URLs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7062). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005591 / 0.011353 (-0.005761) | 0.003992 / 0.011008 (-0.007016) | 0.063932 / 0.038508 (0.025424) | 0.034572 / 0.023109 (0.011463) | 0.252532 / 0.275898 (-0.023366) | 0.271233 / 0.323480 (-0.052247) | 0.005146 / 0.007986 (-0.002840) | 0.002844 / 0.004328 (-0.001484) | 0.049555 / 0.004250 (0.045305) | 0.044111 / 0.037052 (0.007059) | 0.270131 / 0.258489 (0.011642) | 0.318109 / 0.293841 (0.024269) | 0.030247 / 0.128546 (-0.098300) | 0.012438 / 0.075646 (-0.063209) | 0.205160 / 0.419271 (-0.214112) | 0.036228 / 0.043533 (-0.007305) | 0.250664 / 0.255139 (-0.004475) | 0.263884 / 0.283200 (-0.019315) | 0.018141 / 0.141683 (-0.123541) | 1.128504 / 1.452155 (-0.323650) | 1.182543 / 1.492716 (-0.310173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094576 / 0.018006 (0.076570) | 0.301153 / 0.000490 (0.300664) | 0.000246 / 0.000200 (0.000046) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019143 / 0.037411 (-0.018268) | 0.062788 / 0.014526 (0.048262) | 0.074688 / 0.176557 (-0.101869) | 0.121799 / 0.737135 (-0.615336) | 0.076200 / 0.296338 (-0.220138) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277002 / 0.215209 (0.061793) | 2.735738 / 2.077655 (0.658083) | 1.430408 / 1.504120 (-0.073712) | 1.309795 / 1.541195 (-0.231400) | 1.339083 / 1.468490 (-0.129407) | 0.702540 / 4.584777 (-3.882237) | 2.352468 / 3.745712 (-1.393244) | 2.913698 / 5.269862 (-2.356164) | 1.871739 / 4.565676 (-2.693938) | 0.077054 / 0.424275 (-0.347221) | 0.005055 / 0.007607 (-0.002552) | 0.330550 / 0.226044 (0.104505) | 3.272556 / 2.268929 (1.003627) | 1.805268 / 55.444624 (-53.639356) | 1.504791 / 6.876477 (-5.371686) | 1.511361 / 2.142072 (-0.630712) | 0.784451 / 4.805227 (-4.020776) | 0.132182 / 6.500664 (-6.368482) | 0.042516 / 0.075469 (-0.032954) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946939 / 1.841788 (-0.894849) | 11.369607 / 8.074308 (3.295299) | 9.667350 / 10.191392 (-0.524042) | 0.138689 / 0.680424 (-0.541735) | 0.014416 / 0.534201 (-0.519785) | 0.300685 / 0.579283 (-0.278598) | 0.259709 / 0.434364 (-0.174655) | 0.341271 / 0.540337 (-0.199066) | 0.435609 / 1.386936 (-0.951327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005726 / 0.011353 (-0.005627) | 0.004071 / 0.011008 (-0.006937) | 0.050837 / 0.038508 (0.012329) | 0.047000 / 0.023109 (0.023890) | 0.278543 / 0.275898 (0.002645) | 0.300526 / 0.323480 (-0.022954) | 0.004483 / 0.007986 (-0.003503) | 0.002835 / 0.004328 (-0.001494) | 0.050925 / 0.004250 (0.046675) | 0.041834 / 0.037052 (0.004782) | 0.285059 / 0.258489 (0.026570) | 0.324557 / 0.293841 (0.030716) | 0.038949 / 0.128546 (-0.089597) | 0.012145 / 0.075646 (-0.063501) | 0.061791 / 0.419271 (-0.357481) | 0.034493 / 0.043533 (-0.009040) | 0.274034 / 0.255139 (0.018895) | 0.295886 / 0.283200 (0.012686) | 0.018524 / 0.141683 (-0.123159) | 1.148766 / 1.452155 (-0.303388) | 1.207966 / 1.492716 (-0.284750) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094078 / 0.018006 (0.076071) | 0.307850 / 0.000490 (0.307361) | 0.000224 / 0.000200 (0.000024) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023502 / 0.037411 (-0.013910) | 0.077321 / 0.014526 (0.062795) | 0.091147 / 0.176557 (-0.085410) | 0.131111 / 0.737135 (-0.606025) | 0.090906 / 0.296338 (-0.205432) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290700 / 0.215209 (0.075491) | 2.833655 / 2.077655 (0.756001) | 1.546371 / 1.504120 (0.042251) | 1.415337 / 1.541195 (-0.125858) | 1.445752 / 1.468490 (-0.022738) | 0.737880 / 4.584777 (-3.846897) | 0.961549 / 3.745712 (-2.784164) | 2.844021 / 5.269862 (-2.425841) | 2.023547 / 4.565676 (-2.542130) | 0.079791 / 0.424275 (-0.344484) | 0.005449 / 0.007607 (-0.002158) | 0.356381 / 0.226044 (0.130337) | 3.515555 / 2.268929 (1.246627) | 1.920407 / 55.444624 (-53.524217) | 1.628637 / 6.876477 (-5.247839) | 1.752995 / 2.142072 (-0.389077) | 0.807264 / 4.805227 (-3.997963) | 0.133627 / 6.500664 (-6.367037) | 0.041861 / 0.075469 (-0.033609) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.035643 / 1.841788 (-0.806144) | 12.114792 / 8.074308 (4.040484) | 10.185844 / 10.191392 (-0.005548) | 0.142354 / 0.680424 (-0.538070) | 0.015466 / 0.534201 (-0.518734) | 0.304681 / 0.579283 (-0.274603) | 0.124297 / 0.434364 (-0.310067) | 0.339907 / 0.540337 (-0.200430) | 0.436266 / 1.386936 (-0.950670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#856eb84569006ab9389ddbcce8b7141befeab9cc \"CML watermark\")\n" ]
2024-07-23T07:25:09
2024-07-23T14:28:27
2024-07-23T14:21:08
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7062", "html_url": "https://github.com/huggingface/datasets/pull/7062", "diff_url": "https://github.com/huggingface/datasets/pull/7062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7062.patch", "merged_at": "2024-07-23T14:21:08" }
Avoid calling `http_head` for non-HTTP URLs by adding an `else` statement. Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,... I discovered this while working on an unrelated issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7062/timeline
null
null
true
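The change described above boils down to guarding the HTTP probe behind a scheme check. A minimal sketch, assuming a plain `requests` call stands in for the library's `http_head` helper:

```python
from urllib.parse import urlparse

import requests


def probe(url: str) -> None:
    """Issue a HEAD request only for http(s) URLs; skip it for ftp://, s3://, etc."""
    if urlparse(url).scheme in ("http", "https"):
        requests.head(url, allow_redirects=True, timeout=10)
    else:
        # non-HTTP protocols: no HEAD request, avoids the extra round trip
        pass
```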
https://api.github.com/repos/huggingface/datasets/issues/7061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7061/comments
https://api.github.com/repos/huggingface/datasets/issues/7061/events
https://github.com/huggingface/datasets/issues/7061
2,423,786,881
I_kwDODunzps6QeA2B
7,061
Custom Dataset | Still Raise Error while handling errors in _generate_examples
{ "login": "hahmad2008", "id": 68266028, "node_id": "MDQ6VXNlcjY4MjY2MDI4", "avatar_url": "https://avatars.githubusercontent.com/u/68266028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hahmad2008", "html_url": "https://github.com/hahmad2008", "followers_url": "https://api.github.com/users/hahmad2008/followers", "following_url": "https://api.github.com/users/hahmad2008/following{/other_user}", "gists_url": "https://api.github.com/users/hahmad2008/gists{/gist_id}", "starred_url": "https://api.github.com/users/hahmad2008/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahmad2008/subscriptions", "organizations_url": "https://api.github.com/users/hahmad2008/orgs", "repos_url": "https://api.github.com/users/hahmad2008/repos", "events_url": "https://api.github.com/users/hahmad2008/events{/privacy}", "received_events_url": "https://api.github.com/users/hahmad2008/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-22T21:18:12
2024-07-22T21:18:12
null
NONE
null
null
null
### Describe the bug I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution. ``` def _generate_examples(self, filepaths): errors=[] id_ = 0 for filepath in filepaths: try: with open(filepath, 'r') as f: for line in f: json_obj = json.loads(line) yield id_, json_obj id_ += 1 except Exception as exc: logger.error(f"error occur at filepath: {filepath}") errors.append(error) ``` seems the logger.error is printed but still exception is raised the the run is exit. ``` Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841 ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl Traceback (most recent call last): File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples json_obj = json.loads(line) File "myenv/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "myenv/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3) Generating train split: 0 examples [00:06, ? examples/s]> RemoteTraceback: """ Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize raise SchemaInferenceError("Please pass `features` or at least one example when writing data") datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: │ │ │ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. 
│ │ py:1377 in <listcomp> │ │ │ │ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │ │ 1375 │ │ │ │ │ break │ │ 1376 │ │ # we get the result in case there's an error to raise │ │ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │ │ 1378 │ │ │ │ ╭──────────────────────────────── locals ─────────────────────────────────╮ │ │ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │ │ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ ╰─────────────────────────────────────────────────────────────────────────╯ │ │ │ │ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │ │ in get │ │ │ │ 768 │ │ if self._success: │ │ 769 │ │ │ return self._value │ │ 770 │ │ else: │ │ ❱ 771 │ │ │ raise self._value │ │ 772 │ │ │ 773 │ def _set(self, i, obj): │ │ 774 │ │ self._success, self._value = obj │ │ │ │ ╭────────────────────────────── locals ──────────────────────────────╮ │ │ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ │ timeout = None │ │ │ ╰────────────────────────────────────────────────────────────────────╯ │ DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug same as above ### Expected behavior should handle error and continue reading remaining files ### Environment info python 3.9
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7061/timeline
null
null
false
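One way to make a loading script tolerate corrupted lines, rather than aborting on the whole file, is to catch the decoding error around each line; the `SchemaInferenceError` in the traceback above appears when a worker yields zero examples, so skipping only the bad lines (or passing explicit `features` to the builder) avoids it. A hedged sketch mirroring the shape of `_generate_examples`, not a maintainer-confirmed fix:

```python
import json
import logging

logger = logging.getLogger(__name__)


def generate_examples(filepaths):
    """Yield (id, example) pairs, skipping unreadable files and corrupted lines."""
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, "r", encoding="utf-8") as f:
                for line_no, line in enumerate(f, start=1):
                    try:
                        json_obj = json.loads(line)
                    except json.JSONDecodeError:
                        # skip only the corrupted line, keep reading the rest of the file
                        logger.error("skipping bad line %d in %s", line_no, filepath)
                        continue
                    yield id_, json_obj
                    id_ += 1
        except OSError:
            # unreadable file: log it and move on to the next one
            logger.error("error occurred at filepath: %s", filepath)
```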
https://api.github.com/repos/huggingface/datasets/issues/7060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7060/comments
https://api.github.com/repos/huggingface/datasets/issues/7060/events
https://github.com/huggingface/datasets/pull/7060
2,423,188,419
PR_kwDODunzps52G71g
7,060
WebDataset BuilderConfig
{ "login": "hlky", "id": 106811348, "node_id": "U_kgDOBl3P1A", "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hlky", "html_url": "https://github.com/hlky", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "organizations_url": "https://api.github.com/users/hlky/orgs", "repos_url": "https://api.github.com/users/hlky/repos", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "received_events_url": "https://api.github.com/users/hlky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7060). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-07-22T15:41:07
2024-07-23T13:28:44
2024-07-23T13:28:44
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7060", "html_url": "https://github.com/huggingface/datasets/pull/7060", "diff_url": "https://github.com/huggingface/datasets/pull/7060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7060.patch", "merged_at": null }
This PR adds `WebDatasetConfig`. Closes #7055
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7060/timeline
null
null
true
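For context, a builder-specific `BuilderConfig` subclass typically looks like the sketch below; the field shown is an assumption for illustration, not necessarily what this PR adds.

```python
from dataclasses import dataclass
from typing import Optional

import datasets


@dataclass
class WebDatasetConfig(datasets.BuilderConfig):
    """Hypothetical shape of a WebDataset builder config; the field is an assumption."""

    features: Optional[datasets.Features] = None
```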
https://api.github.com/repos/huggingface/datasets/issues/7059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7059/comments
https://api.github.com/repos/huggingface/datasets/issues/7059/events
https://github.com/huggingface/datasets/issues/7059
2,422,827,892
I_kwDODunzps6QaWt0
7,059
None values are skipped when reading jsonl in subobjects
{ "login": "PonteIneptique", "id": 1929830, "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PonteIneptique", "html_url": "https://github.com/PonteIneptique", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-22T13:02:42
2024-07-22T13:02:53
null
NONE
null
null
null
### Describe the bug I have been fighting against my machine since this morning only to find out this is some kind of a bug. When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around. E.g., let's take this example. Here are two versions of the same dataset: [not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz) [buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz) ### Steps to reproduce the bug 1. Load the `buggy.tar.gz` dataset 2. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]` 3. Load the `not-buggy.tar.gz` dataset 4. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines"]` ### Expected behavior Both should have 4 baseline entries: 1. Buggy should have None followed by three lists 2. Non-buggy should have four lists, and the first one should be an empty list. Case 1 does not work, case 2 works, despite None being accepted in positions other than the first. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7059/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7058/comments
https://api.github.com/repos/huggingface/datasets/issues/7058/events
https://github.com/huggingface/datasets/issues/7058
2,422,560,355
I_kwDODunzps6QZVZj
7,058
New feature type: Document
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-22T10:49:20
2024-07-22T10:49:20
null
CONTRIBUTOR
null
null
null
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7058/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7057/comments
https://api.github.com/repos/huggingface/datasets/issues/7057/events
https://github.com/huggingface/datasets/pull/7057
2,422,498,520
PR_kwDODunzps52EjGC
7,057
Update load_hub.mdx
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7057). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005617 / 0.011353 (-0.005736) | 0.003994 / 0.011008 (-0.007014) | 0.064188 / 0.038508 (0.025680) | 0.030939 / 0.023109 (0.007829) | 0.248712 / 0.275898 (-0.027186) | 0.273417 / 0.323480 (-0.050063) | 0.003340 / 0.007986 (-0.004646) | 0.002823 / 0.004328 (-0.001506) | 0.049985 / 0.004250 (0.045734) | 0.046872 / 0.037052 (0.009820) | 0.254554 / 0.258489 (-0.003935) | 0.288142 / 0.293841 (-0.005699) | 0.030540 / 0.128546 (-0.098006) | 0.012295 / 0.075646 (-0.063352) | 0.204589 / 0.419271 (-0.214683) | 0.036383 / 0.043533 (-0.007150) | 0.254277 / 0.255139 (-0.000862) | 0.267962 / 0.283200 (-0.015237) | 0.021173 / 0.141683 (-0.120510) | 1.126933 / 1.452155 (-0.325221) | 1.190841 / 1.492716 (-0.301875) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093622 / 0.018006 (0.075616) | 0.297967 / 0.000490 (0.297477) | 0.000241 / 0.000200 (0.000041) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018623 / 0.037411 (-0.018789) | 0.062210 / 0.014526 (0.047684) | 0.074369 / 0.176557 (-0.102187) | 0.120585 / 0.737135 (-0.616550) | 0.075966 / 0.296338 (-0.220372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285440 / 0.215209 (0.070231) | 2.804275 / 2.077655 (0.726620) | 1.484539 / 1.504120 (-0.019580) | 1.366587 / 1.541195 (-0.174607) | 1.355269 / 1.468490 (-0.113221) | 0.722289 / 4.584777 (-3.862488) | 2.344567 / 3.745712 (-1.401145) | 2.831779 / 5.269862 (-2.438083) | 1.899800 / 4.565676 (-2.665876) | 0.078657 / 0.424275 (-0.345619) | 0.005188 / 0.007607 (-0.002420) | 0.340150 / 0.226044 (0.114106) | 3.390915 / 2.268929 (1.121986) | 1.836473 / 55.444624 (-53.608152) | 1.520718 / 6.876477 (-5.355759) | 1.723448 / 2.142072 (-0.418624) | 0.810281 / 4.805227 (-3.994946) | 0.136008 / 6.500664 (-6.364657) | 0.044005 / 0.075469 (-0.031465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989982 / 1.841788 (-0.851806) | 11.671075 / 8.074308 (3.596767) | 9.805471 / 10.191392 (-0.385921) | 0.141637 / 0.680424 (-0.538787) | 0.014551 / 0.534201 (-0.519650) | 0.310077 / 0.579283 (-0.269206) | 0.266838 / 0.434364 (-0.167526) | 0.348894 / 0.540337 (-0.191444) | 0.451530 / 1.386936 (-0.935406) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005639 / 0.011353 (-0.005713) | 0.003935 / 0.011008 (-0.007074) | 0.050147 / 0.038508 (0.011639) | 0.031023 / 0.023109 (0.007914) | 0.268361 / 0.275898 (-0.007537) | 0.295774 / 0.323480 (-0.027706) | 0.005029 / 0.007986 (-0.002956) | 0.002832 / 0.004328 (-0.001496) | 0.049806 / 0.004250 (0.045556) | 0.040515 / 0.037052 (0.003463) | 0.283298 / 0.258489 (0.024809) | 0.321946 / 0.293841 (0.028105) | 0.031833 / 0.128546 (-0.096714) | 0.012137 / 0.075646 (-0.063510) | 0.060510 / 0.419271 (-0.358761) | 0.033754 / 0.043533 (-0.009779) | 0.268079 / 0.255139 (0.012940) | 0.292468 / 0.283200 (0.009268) | 0.017268 / 0.141683 (-0.124414) | 1.159922 / 1.452155 (-0.292233) | 1.188961 / 1.492716 (-0.303755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096930 / 0.018006 (0.078923) | 0.306921 / 0.000490 (0.306431) | 0.000226 / 0.000200 (0.000026) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022811 / 0.037411 (-0.014600) | 0.077298 / 0.014526 (0.062772) | 0.088949 / 0.176557 (-0.087608) | 0.130763 / 0.737135 (-0.606372) | 0.090429 / 0.296338 (-0.205909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300866 / 0.215209 (0.085657) | 2.963375 / 2.077655 (0.885720) | 1.595753 / 1.504120 (0.091633) | 1.463091 / 1.541195 (-0.078104) | 1.481182 / 1.468490 (0.012692) | 0.712939 / 4.584777 (-3.871838) | 0.956694 / 3.745712 (-2.789018) | 2.802890 / 5.269862 (-2.466971) | 1.891092 / 4.565676 (-2.674585) | 0.077570 / 0.424275 (-0.346706) | 0.005536 / 0.007607 (-0.002072) | 0.351958 / 0.226044 (0.125914) | 3.459114 / 2.268929 (1.190185) | 1.989488 / 55.444624 (-53.455137) | 1.676271 / 6.876477 (-5.200205) | 1.808073 / 2.142072 (-0.334000) | 0.786920 / 4.805227 (-4.018307) | 0.132220 / 6.500664 (-6.368444) | 0.041602 / 0.075469 (-0.033867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031759 / 1.841788 (-0.810029) | 12.007776 / 8.074308 (3.933467) | 10.568254 / 10.191392 (0.376862) | 0.143176 / 0.680424 (-0.537248) | 0.015556 / 0.534201 (-0.518645) | 0.304484 / 0.579283 (-0.274799) | 0.125508 / 0.434364 (-0.308855) | 0.340017 / 0.540337 (-0.200320) | 0.434285 / 1.386936 (-0.952651) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16fa4421f44b22bbbc607f379a93f45af468d1fc \"CML watermark\")\n" ]
2024-07-22T10:17:46
2024-07-22T10:34:14
2024-07-22T10:28:10
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7057", "html_url": "https://github.com/huggingface/datasets/pull/7057", "diff_url": "https://github.com/huggingface/datasets/pull/7057.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7057.patch", "merged_at": "2024-07-22T10:28:10" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7057/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7056/comments
https://api.github.com/repos/huggingface/datasets/issues/7056/events
https://github.com/huggingface/datasets/pull/7056
2,422,192,257
PR_kwDODunzps52DgOu
7,056
Make `BufferShuffledExamplesIterable` resumable
{ "login": "yzhangcs", "id": 18402347, "node_id": "MDQ6VXNlcjE4NDAyMzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yzhangcs", "html_url": "https://github.com/yzhangcs", "followers_url": "https://api.github.com/users/yzhangcs/followers", "following_url": "https://api.github.com/users/yzhangcs/following{/other_user}", "gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}", "starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions", "organizations_url": "https://api.github.com/users/yzhangcs/orgs", "repos_url": "https://api.github.com/users/yzhangcs/repos", "events_url": "https://api.github.com/users/yzhangcs/events{/privacy}", "received_events_url": "https://api.github.com/users/yzhangcs/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Oh cool !\r\n\r\nThe time it takes to resume depends on the expected maximum distance in this case right ? Do you know its relationship with $B$ ?\r\n\r\nIn your test it already as high as 15k for $B=1024$, which is ok for text datasets but is maybe not ideal for datasets with heavy samples like audio/image/video ? Though for heavy samples datasets the buffer size is generally much smaller to avoid memory issues.\r\n\r\nMaybe we could just add a warning message on resuming to tell the user that it might take some time to recover the shuffle buffer (with a progress bar maybe ?), and have the option to stop + re-run with an env variable to disable shuffle buffer recovering ? WDYT ?", "> The time it takes to resume depends on the expected maximum distance in this case right ? Do you know its relationship with $B$\r\n\r\nHi, I created a histogram to visualize the distances in the simulation exp.\r\n![](https://github.com/user-attachments/assets/464f7a86-051c-412f-b48a-461f7e7c9f20)\r\nI think there is no guarantee as to when the oldest example will be yielded. It could stay in the buffer until the entire shard is consumed. However, this can be rare, and in most cases, the pushed examples will be yielded very quickly. In the figure above, most examples are yielded within $2B$ steps. Things will improve if the dataset is split into enough shards and each shard is not too large.\r\n\r\nI agree that we may need to add some warnings or provide some options to allow users to make their own choices.", "Maybe there's a middle ground between rebuilding the buffer from scratch and storing the entire buffer, but the logic is a bit complicated and takes time to implement. At least for now, we have a way to make shuffled `IterableDataset` resumable :)", "@lhoestq I'm not sure if it's ok to use progress bar when having multiple workers. \r\nHow about passing an arg `resumable=True` to `IterableDataset.shuffle` to allow for controling of the behaviors?", "I feel like the default behavior should ideally be fast and perfect resuming.\r\n\r\nLoading from disk is a good option for this (although it's not always possible to serialize the content of the buffer, in that case the buffer would restart empty and we can show a warning). \r\n\r\nThe state_dict() would be part of the training state_dict that is saved to disk along with the model and optimizer anyway. Cc @muellerzr from that worked on storing training state_dicts for the `accelerate` lib, in case you have an opinion.\r\n\r\nI also feel like it is simpler and more intuitive to users. It doesn't require to explain why we need to stream a lot of data just to recover a buffer.\r\n\r\n> Maybe there's a middle ground between rebuilding the buffer from scratch and storing the entire buffer, but the logic is a bit complicated and takes time to implement.\r\n\r\ndefinitely, and it would also make things even harder to understand to users", "@lhoestq \r\n> Loading from disk is a good option for this (although it's not always possible to serialize the content of the buffer, in that case the buffer would restart empty and we can show a warning).\r\nThe state_dict() would be part of the training state_dict that is saved to disk along with the model and optimizer anyway. Cc @muellerzr from that worked on storing training state_dicts for the accelerate lib, in case you have an opinion.\r\nI also feel like it is simpler and more intuitive to users. It doesn't require to explain why we need to stream a lot of data just to recover a buffer.\r\n\r\nYea, agree with you. 
But here's the thing: saving buffers as state dict can get pretty tricky. When it comes to tokenized text data, working with multi-worker shuffle can take around x hundreds GB of memories in my case. That's just not feasible for most machine envs out there, and can be more severe for audio/video data.\r\n\r\nAlso, serializing the buffer does take a major toll on performance, and in my experience, I've had to lean heavily on numpy/torch tensor operations to manage those tokenized text data efficiently, which isn't easily transferable to other scenarios—it's kind of a custom fix that works for now, but it's not a one-size-fits-all solution. So, for me it's not that ideal to directly serialize the buffer content with those limitations.\r\n\r\n", "> When it comes to tokenized text data, working with multi-worker shuffle can taken around x hundreds GB memories in my case.\r\n\r\nit's kinda close to the size of a model + optimizer no ?\r\n\r\nAnyway that makes sense and adding the feature to recover a buffer shuffle (at least as an opt-in for now, we can decide on the default later based on users feedback and experience).\r\n\r\nAre you ok with adding `buffer_resuming_mode=` to `.shuffle()` to enable buffer recovering using your method with `buffer_resuming_mode=\"recover_from_source\"` ? (feel free to suggest other names for the parameter and value)", "@lhoestq \r\n> Are you ok with adding buffer_resuming_mode= to .shuffle() to enable buffer recovering using your method with buffer_resuming_mode=\"recover_from_source\" ? (feel free to suggest other names for the parameter and value)\r\n\r\nOf course, appreciate your feedbacks." ]
2024-07-22T07:50:02
2024-07-22T15:37:01
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7056", "html_url": "https://github.com/huggingface/datasets/pull/7056", "diff_url": "https://github.com/huggingface/datasets/pull/7056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7056.patch", "merged_at": null }
This PR aims to implement a resumable `BufferShuffledExamplesIterable`. Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict. The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is $d = \sum_{k=1}^{\infty} k\frac{1}{B} (1 - \frac{1}{B} )^{k-1} =B$. Simulation experiments support these claims: ```py from random import randint BUFFER_SIZE = 1024 dists = [] buffer = [] for i in range(10000000): if i < BUFFER_SIZE: buffer.append(i) else: index = randint(0, BUFFER_SIZE - 1) dists.append(i - buffer[index]) buffer[index] = i print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n") ``` which produces the following output: ```py MIN DIST: 1 MAX DIST: 15136 AVG DIST: 1023.95 ``` The overall time for reconstructing the buffer and recovery should not be too long. The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios, ```py import pickle import time from itertools import chain from typing import Any, Dict, List import torch from datasets import load_dataset from torchdata.stateful_dataloader import StatefulDataLoader from tqdm import tqdm from transformers import AutoTokenizer, DataCollatorForLanguageModeling tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B') tokenizer.pad_token = tokenizer.eos_token data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) torch.manual_seed(42) def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]: input_ids = tokenizer(examples['text'])['input_ids'] input_ids = list(chain(*input_ids)) total_length = len(input_ids) chunk_size = 2048 total_length = (total_length // chunk_size) * chunk_size # the last chunk smaller than chunk_size will be discarded return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]} batch_size = 16 num_workers = 5 context_length = 2048 rank = 1 world_size = 32 prefetch_factor = 2 steps = 2048 path = 'fla-hub/slimpajama-test' dataset = load_dataset( path=path, split='train', streaming=True, trust_remote_code=True ) dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys()) dataset = dataset.shuffle(seed=42) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) start = time.time() for i, batch in tqdm(enumerate(loader)): if i == 0: print(f'{i}\n{batch["input_ids"]}') if i == steps - 1: print(f'{i}\n{batch["input_ids"]}') state_dict = loader.state_dict() if i == steps: print(f'{i}\n{batch["input_ids"]}') break print(f"{time.time() - start:.2f}s elapsed") print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total") for worker in state_dict['_snapshot']['_worker_snapshots'].keys(): print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB") print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state']) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, 
collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) print("Loading state dict") loader.load_state_dict(state_dict) start = time.time() for batch in loader: print(batch['input_ids']) break print(f"{time.time() - start:.2f}s elapsed") ``` and the outputs are ```py 0 tensor([[ 909, 395, 19082, ..., 13088, 16232, 395], [ 601, 28705, 28770, ..., 28733, 923, 288], [21753, 15071, 13977, ..., 9369, 28723, 415], ..., [21763, 28751, 20300, ..., 28781, 28734, 4775], [ 354, 396, 10214, ..., 298, 429, 28770], [ 333, 6149, 28768, ..., 2773, 340, 351]]) 2047 tensor([[28723, 415, 3889, ..., 272, 3065, 2609], [ 403, 3214, 3629, ..., 403, 21163, 16434], [28723, 13, 28749, ..., 28705, 28750, 28734], ..., [ 2778, 2251, 28723, ..., 354, 684, 429], [ 5659, 298, 1038, ..., 5290, 297, 22153], [ 938, 28723, 1537, ..., 9123, 28733, 12154]]) 2048 tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 65.97s elapsed 0.00MB states in total worker_0 0.00MB worker_1 0.00MB worker_2 0.00MB worker_3 0.00MB worker_4 0.00MB {'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}} Loading state dict tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 24.60s elapsed ``` Not sure if this PR complies with the `datasets` code style. Looking for your help @lhoestq, also very willing to further improve the code if any suggestions are given.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7056/timeline
null
null
true
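A rough, stand-alone illustration of the resuming strategy described in the PR body above (keep only the RNG state and the position of the oldest buffered example, not the buffer contents); it uses plain NumPy, and the variable names and the `first_example_idx` value are hypothetical rather than taken from the PR's code:

```python
import numpy as np

# Toy stand-in for what the state_dict keeps: the bit generator state
# (to replay future replacement draws) and where the oldest buffered example came from.
rng = np.random.default_rng(42)
buffer = list(range(4))            # pretend these are buffered examples
_ = rng.integers(0, 4, size=10)    # some replacement draws have already happened

state = {
    "bit_generator_state": rng.bit_generator.state,  # a small serializable dict, tiny compared to the buffer
    "first_example_idx": 17,                         # hypothetical position of the oldest buffered example
}

# On resume: restore the RNG and re-read the source from first_example_idx onwards
# to rebuild the buffer, instead of deserializing the buffer contents themselves.
resumed_rng = np.random.default_rng()
resumed_rng.bit_generator.state = state["bit_generator_state"]
assert resumed_rng.integers(0, 4, size=3).tolist() == rng.integers(0, 4, size=3).tolist()
```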
https://api.github.com/repos/huggingface/datasets/issues/7055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7055/comments
https://api.github.com/repos/huggingface/datasets/issues/7055/events
https://github.com/huggingface/datasets/issues/7055
2,421,708,891
I_kwDODunzps6QWFhb
7,055
WebDataset with different prefixes are unsupported
{ "login": "hlky", "id": 106811348, "node_id": "U_kgDOBl3P1A", "avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hlky", "html_url": "https://github.com/hlky", "followers_url": "https://api.github.com/users/hlky/followers", "following_url": "https://api.github.com/users/hlky/following{/other_user}", "gists_url": "https://api.github.com/users/hlky/gists{/gist_id}", "starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hlky/subscriptions", "organizations_url": "https://api.github.com/users/hlky/orgs", "repos_url": "https://api.github.com/users/hlky/repos", "events_url": "https://api.github.com/users/hlky/events{/privacy}", "received_events_url": "https://api.github.com/users/hlky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Since `datasets` uses is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifyign in advance the name of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`", "Thanks. This currently doesn't work for WebDataset because there's no `BuilderConfig` with `features` and in turn `_info` is missing `features=self.config.features`. I'll prepare a PR to fix this.\r\n\r\nNote it may be useful to add the [expected format of `features`](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [`Builder Parameters`](https://huggingface.co./docs/datasets/repository_structure#builder-parameters).\r\n", "Oh good catch ! thanks\r\n\r\n> Note it may be useful to add the [expected format of features](https://github.com/huggingface/datasets/blob/16fa4421f44b22bbbc607f379a93f45af468d1fc/src/datasets/features/features.py#L1757) to the documentation for [Buil](https://huggingface.co./docs/datasets/repository_structure#builder-parameters)\r\n\r\nGood idea, let me open a PR", "#7060 ", "Actually I just tried with `datasets` on the `main` branch and having `features` defined in `dataset_info` worked for me\r\n\r\n```python\r\n>>> list(load_dataset(\"/Users/quentinlhoest/tmp\", streaming=True, split=\"train\"))\r\n[{'txt': 'hello there\\n', 'other': None}]\r\n```\r\nwhere `tmp` contains data.tar with \"hello there\\n\" in a text file and the README.md:\r\n```\r\n---\r\ndataset_info:\r\n features:\r\n - name: txt\r\n dtype: string\r\n - name: other\r\n dtype: string\r\n---\r\n\r\nThis is a dataset card\r\n```\r\n\r\nWhat error did you get when you tried to specify the columns in `dataset_info` ?", "If you review the changes in #7060 you'll note that `features` are not passed to `DatasetInfo`.\r\n\r\nIn your case the features are being extracted by [this code](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L72-L98).\r\n\r\nTry with the `Steps to reproduce the bug`. It's the same error mentioned in `Describe the bug` because `features` are not passed to `DatasetInfo`.\r\n\r\n`features` are [not used](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L365-L366) when the `BuilderConfig` has no `features` attribute. 
`WebDataset` uses the default [`BuilderConfig`](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/builder.py#L101-L124).\r\n\r\nThere is a [warning](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/load.py#L640-L648) that `features` are ignored.\r\n\r\nNote that as mentioned in `Describe the bug` this could also be resolved by removing the check [here](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) because Arrow actually handles this itself, Arrow sets any missing fields to `None`, at least in my case.", "Note for anyone else who encounters this issue, every dataset type except folder-based types supported features in the [documented](https://huggingface.co./docs/datasets/repository_structure#builder-parameters) manner; [Arrow](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/arrow/arrow.py#L15-L21), [csv](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/csv/csv.py#L25-L68), [generator](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/generator/generator.py#L8-L19), [json](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/json/json.py#L42-L52), [pandas](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/pandas/pandas.py#L14-L20), [parquet](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/parquet/parquet.py#L16-L24), [spark](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/spark/spark.py#L31-L37), [sql](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/sql/sql.py#L24-L35) and [text](https://github.com/huggingface/datasets/blob/e83d6fa574710fcb44e341087239d2687183f62b/src/datasets/packaged_modules/text/text.py#L18-L27). `WebDataset` is different and requires [`dataset_info` which is vaguely documented](https://huggingface.co./docs/datasets/dataset_script#optional-generate-dataset-metadata) under dataset loading scripts.", "Thanks for explaining. I see the Dataset Viewer is still failing - I'll update `datasets` in the Viewer to fix this" ]
2024-07-22T01:14:19
2024-07-24T13:26:30
2024-07-23T13:28:46
NONE
null
null
null
### Describe the bug Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co./datasets/bigdata-pw/fashion-150k) Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given. ``` The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types. ``` The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset. ``` >>> from datasets import load_dataset >>> path = "shards/*.tar" >>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True) Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s] >>> dataset IterableDataset({ features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'], n_shards: 152 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("bigdata-pw/fashion-150k") ``` ### Expected behavior Dataset loads without error ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.19 - `huggingface_hub` version: 0.23.4 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7055/timeline
null
completed
false
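A minimal stand-alone sketch of the Arrow behaviour mentioned in the thread above (missing keys become nulls when rows are combined under one schema); the file names and bytes below are made up:

```python
import pyarrow as pa

# Rows with a varying set of keys, like WebDataset samples with a varying number of images.
rows = [
    {"__key__": "a", "1.jpg": b"\xff\xd8...", "2.jpg": b"\xff\xd8..."},
    {"__key__": "b", "1.jpg": b"\xff\xd8..."},  # this sample has no "2.jpg"
]

table = pa.Table.from_pylist(rows)
print(table.schema)           # the union of all keys across rows
print(table.column("2.jpg"))  # the second entry is null rather than an error
```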
https://api.github.com/repos/huggingface/datasets/issues/7054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7054/comments
https://api.github.com/repos/huggingface/datasets/issues/7054/events
https://github.com/huggingface/datasets/pull/7054
2,418,548,995
PR_kwDODunzps514T1f
7,054
Add batching to `IterableDataset`
{ "login": "lappemic", "id": 61876623, "node_id": "MDQ6VXNlcjYxODc2NjIz", "avatar_url": "https://avatars.githubusercontent.com/u/61876623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lappemic", "html_url": "https://github.com/lappemic", "followers_url": "https://api.github.com/users/lappemic/followers", "following_url": "https://api.github.com/users/lappemic/following{/other_user}", "gists_url": "https://api.github.com/users/lappemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/lappemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lappemic/subscriptions", "organizations_url": "https://api.github.com/users/lappemic/orgs", "repos_url": "https://api.github.com/users/lappemic/repos", "events_url": "https://api.github.com/users/lappemic/events{/privacy}", "received_events_url": "https://api.github.com/users/lappemic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Cool ! Thanks for diving into it :)\r\n\r\nYour implementation is great and indeed supports shuffling and batching, you just need to additionally account for state_dict (for dataset [checkpointing+resuming](https://huggingface.co./docs/datasets/main/en/use_with_pytorch#checkpoint-and-resume))\r\n\r\nThat being said, I believe the implementation can be made simpler by relying on `IterableDataset.map()` which already implements all this. Maybe something like\r\n\r\n```python\r\n\r\ndef batch(self, batch_size: int, drop_last_batch: bool = False) -> \"IterableDataset\":\r\n def batch(unbatched: dict[str, list]) -> dict[str, list]:\r\n return {k: [v] for k, v in unbatched}\r\n\r\n return self.map(batch, batched=True, batch_size=batch_size, drop_last_batch=drop_last_batch)\r\n```\r\n\r\nAnd this way no need to reimplement everything !\r\n\r\n(my only small concern is that it's not an Arrow-optimized function so it requires the examples to be manipulated as python objects even if the original data is in Arrow format (e.g. when streaming Parquet files) but it's not a big deal and we can see later if we need to optimize this)", "Thanks a lot for the feedback @lhoestq! I definitely could have saved some time looking into it properly first. 😅 \r\n\r\nImplemented the `.batch()` method, added a proper docsrtring for documentation, and added tests.\r\n\r\nLet me know what you think and if this needs some update.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7054). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for the feedbak @lhoestq!\r\n\r\nApplied it and referenced the `batched=True` option in the `map` function and highlighted the difference. 
Hope i got this right.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005181 / 0.011353 (-0.006172) | 0.003714 / 0.011008 (-0.007294) | 0.063060 / 0.038508 (0.024552) | 0.030885 / 0.023109 (0.007776) | 0.239060 / 0.275898 (-0.036838) | 0.262480 / 0.323480 (-0.061000) | 0.004103 / 0.007986 (-0.003883) | 0.002696 / 0.004328 (-0.001632) | 0.048706 / 0.004250 (0.044456) | 0.042577 / 0.037052 (0.005525) | 0.249928 / 0.258489 (-0.008561) | 0.283252 / 0.293841 (-0.010589) | 0.029304 / 0.128546 (-0.099242) | 0.012001 / 0.075646 (-0.063646) | 0.204467 / 0.419271 (-0.214804) | 0.035639 / 0.043533 (-0.007894) | 0.243850 / 0.255139 (-0.011289) | 0.261609 / 0.283200 (-0.021590) | 0.018302 / 0.141683 (-0.123381) | 1.096040 / 1.452155 (-0.356115) | 1.135917 / 1.492716 (-0.356800) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091976 / 0.018006 (0.073970) | 0.296396 / 0.000490 (0.295906) | 0.000203 / 0.000200 (0.000003) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018405 / 0.037411 (-0.019007) | 0.062470 / 0.014526 (0.047944) | 0.073340 / 0.176557 (-0.103216) | 0.119474 / 0.737135 (-0.617661) | 0.075750 / 0.296338 (-0.220588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279586 / 0.215209 (0.064377) | 2.768542 / 2.077655 (0.690887) | 1.449158 / 1.504120 (-0.054962) | 1.328760 / 1.541195 
(-0.212435) | 1.336338 / 1.468490 (-0.132152) | 0.732582 / 4.584777 (-3.852195) | 2.325558 / 3.745712 (-1.420154) | 2.898077 / 5.269862 (-2.371784) | 1.893107 / 4.565676 (-2.672569) | 0.078788 / 0.424275 (-0.345487) | 0.005273 / 0.007607 (-0.002335) | 0.334887 / 0.226044 (0.108842) | 3.304173 / 2.268929 (1.035244) | 1.834743 / 55.444624 (-53.609882) | 1.527463 / 6.876477 (-5.349014) | 1.538824 / 2.142072 (-0.603249) | 0.785646 / 4.805227 (-4.019581) | 0.134876 / 6.500664 (-6.365788) | 0.042894 / 0.075469 (-0.032575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976635 / 1.841788 (-0.865152) | 11.217156 / 8.074308 (3.142848) | 9.616971 / 10.191392 (-0.574421) | 0.127276 / 0.680424 (-0.553148) | 0.014344 / 0.534201 (-0.519857) | 0.301896 / 0.579283 (-0.277387) | 0.259615 / 0.434364 (-0.174749) | 0.340693 / 0.540337 (-0.199645) | 0.429145 / 1.386936 (-0.957791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005534 / 0.011353 (-0.005819) | 0.003795 / 0.011008 (-0.007213) | 0.049761 / 0.038508 (0.011253) | 0.031311 / 0.023109 (0.008202) | 0.276032 / 0.275898 (0.000134) | 0.297316 / 0.323480 (-0.026164) | 0.004396 / 0.007986 (-0.003590) | 0.002693 / 0.004328 (-0.001635) | 0.049025 / 0.004250 (0.044775) | 0.039707 / 0.037052 (0.002654) | 0.284264 / 0.258489 (0.025775) | 0.319962 / 0.293841 (0.026121) | 0.031842 / 0.128546 (-0.096705) | 0.012192 / 0.075646 (-0.063454) | 0.059895 / 0.419271 (-0.359376) | 0.033676 / 0.043533 (-0.009856) | 0.275917 / 0.255139 (0.020778) | 0.292637 / 0.283200 (0.009437) | 0.017992 / 0.141683 (-0.123691) | 1.199329 / 1.452155 (-0.252826) | 1.259083 / 1.492716 (-0.233633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092770 / 0.018006 (0.074764) | 0.313363 / 0.000490 (0.312873) | 0.000212 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | 
shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022977 / 0.037411 (-0.014434) | 0.076839 / 0.014526 (0.062314) | 0.088289 / 0.176557 (-0.088267) | 0.128625 / 0.737135 (-0.608510) | 0.089348 / 0.296338 (-0.206990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300881 / 0.215209 (0.085672) | 2.946499 / 2.077655 (0.868845) | 1.599686 / 1.504120 (0.095566) | 1.479332 / 1.541195 (-0.061862) | 1.476910 / 1.468490 (0.008420) | 0.720536 / 4.584777 (-3.864241) | 0.944822 / 3.745712 (-2.800890) | 2.771864 / 5.269862 (-2.497998) | 1.886573 / 4.565676 (-2.679103) | 0.078462 / 0.424275 (-0.345813) | 0.005392 / 0.007607 (-0.002215) | 0.354984 / 0.226044 (0.128939) | 3.516449 / 2.268929 (1.247520) | 1.977033 / 55.444624 (-53.467592) | 1.671922 / 6.876477 (-5.204555) | 1.785755 / 2.142072 (-0.356318) | 0.795330 / 4.805227 (-4.009897) | 0.132895 / 6.500664 (-6.367769) | 0.041178 / 0.075469 (-0.034291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031780 / 1.841788 (-0.810008) | 11.855600 / 8.074308 (3.781292) | 10.245599 / 10.191392 (0.054207) | 0.140649 / 0.680424 (-0.539775) | 0.015332 / 0.534201 (-0.518869) | 0.299402 / 0.579283 (-0.279881) | 0.120007 / 0.434364 (-0.314357) | 0.337770 / 0.540337 (-0.202568) | 0.433679 / 1.386936 (-0.953257) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e83d6fa574710fcb44e341087239d2687183f62b \"CML watermark\")\n" ]
2024-07-19T10:11:47
2024-07-23T13:25:13
2024-07-23T10:34:28
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7054", "html_url": "https://github.com/huggingface/datasets/pull/7054", "diff_url": "https://github.com/huggingface/datasets/pull/7054.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7054.patch", "merged_at": "2024-07-23T10:34:28" }
I've taken a stab at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class. The main changes are: 1. A new `BatchedExamplesIterable` that groups examples into batches. 2. A `.batch()` method for `IterableDataset` to easily create batched versions. 3. Support for shuffling and sharding so it works with the PyTorch DataLoader and multiple workers. I'm not sure if this is exactly what you had in mind, and I have not fully tested it yet, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed. Pinging @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7054/timeline
null
null
true
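A small sketch of the map-based approach the review converged on, written here for illustration rather than taken from the merged code:

```python
from datasets import Dataset

# Build a tiny streaming dataset and group it into batches of 4 with a batched map.
ds = Dataset.from_dict({"x": list(range(10))}).to_iterable_dataset()

def group(unbatched: dict) -> dict:
    # wrap each column's list of values in a single-element list -> one example per batch
    return {k: [v] for k, v in unbatched.items()}

batched = ds.map(group, batched=True, batch_size=4, drop_last_batch=False)
for example in batched:
    print(example["x"])  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```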
https://api.github.com/repos/huggingface/datasets/issues/7053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7053/comments
https://api.github.com/repos/huggingface/datasets/issues/7053/events
https://github.com/huggingface/datasets/issues/7053
2,416,423,791
I_kwDODunzps6QB7Nv
7,053
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
{ "login": "MatthewYZhang", "id": 48289218, "node_id": "MDQ6VXNlcjQ4Mjg5MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/48289218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewYZhang", "html_url": "https://github.com/MatthewYZhang", "followers_url": "https://api.github.com/users/MatthewYZhang/followers", "following_url": "https://api.github.com/users/MatthewYZhang/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewYZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewYZhang/subscriptions", "organizations_url": "https://api.github.com/users/MatthewYZhang/orgs", "repos_url": "https://api.github.com/users/MatthewYZhang/repos", "events_url": "https://api.github.com/users/MatthewYZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewYZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```", "Duplicate of:\r\n- #6100" ]
2024-07-18T13:42:35
2024-07-18T15:17:42
2024-07-18T15:16:18
NONE
null
null
null
### Describe the bug In data_files.py, line 332, `fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`. If we run the code on AWS, fs.protocol is a tuple like `('file', 'local')`, so `isinstance(fs.protocol, str) == False` and `protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` raises `TypeError: can only concatenate tuple (not "str") to tuple`. ### Steps to reproduce the bug 1. Run on a cloud server like AWS 2. `import datasets.data_files as datafile` 3. `datafile.resolve_pattern('path/to/dataset', '.')` 4. `TypeError: can only concatenate tuple (not "str") to tuple` ### Expected behavior Should return the path of the dataset, with fs.protocol at the beginning. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.19 - Huggingface_hub version: 0.23.5 - PyArrow version: 16.1.0 - Pandas version: 1.1.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7053/timeline
null
completed
false
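A stand-alone sketch of the failure mode described above and one defensive way around it; it does not require AWS and is not the actual patch (the fix shipped in `datasets` 2.15.0):

```python
# fsspec filesystems may report `protocol` as a tuple of aliases rather than a str.
protocol = ("file", "local")  # e.g. what a local filesystem can expose

try:
    prefix = protocol + "://" if protocol != "file" else ""
except TypeError as e:
    print(e)  # can only concatenate tuple (not "str") to tuple

# Defensive variant that accepts both str and tuple protocols:
proto = protocol[0] if isinstance(protocol, (tuple, list)) else protocol
prefix = proto + "://" if proto != "file" else ""
print(repr(prefix))  # '' for the local filesystem, '<proto>://' otherwise
```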
https://api.github.com/repos/huggingface/datasets/issues/7052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7052/comments
https://api.github.com/repos/huggingface/datasets/issues/7052/events
https://github.com/huggingface/datasets/pull/7052
2,411,682,730
PR_kwDODunzps51iuop
7,052
Adding `Music` feature for symbolic music modality (MIDI, abc)
{ "login": "Natooz", "id": 56734983, "node_id": "MDQ6VXNlcjU2NzM0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natooz", "html_url": "https://github.com/Natooz", "followers_url": "https://api.github.com/users/Natooz/followers", "following_url": "https://api.github.com/users/Natooz/following{/other_user}", "gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natooz/subscriptions", "organizations_url": "https://api.github.com/users/Natooz/orgs", "repos_url": "https://api.github.com/users/Natooz/repos", "events_url": "https://api.github.com/users/Natooz/events{/privacy}", "received_events_url": "https://api.github.com/users/Natooz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-07-16T17:26:04
2024-07-16T17:26:04
null
NONE
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7052", "html_url": "https://github.com/huggingface/datasets/pull/7052", "diff_url": "https://github.com/huggingface/datasets/pull/7052.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7052.patch", "merged_at": null }
⚠️ (WIP) ⚠️ ### What this PR does This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files. ### Motivations These two file formats are widely used in [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) for tasks such as music generation, music transcription and music synthesis. Having a dedicated feature in the datasets library would both encourage researchers to share datasets of this modality and make them more easily usable for end users, who would benefit from the perks of the library. These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C bindings (using nanobind) that can efficiently read, write and manipulate them. The library is actively developed and may in the future also support other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it. The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianoroll matrices by symusic. **Jul 16th 2024:** * the tests for the `Music` feature are currently failing due to unsupported access to the LazyBatch in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with pyArrow, so I'll take any advice to make this work; * additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps; * a `MusicFolder` should also be implemented to match the usage of the `Image` and `Audio` features, waiting for your feedback too. CCing @lhoestq and @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7052/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7051/comments
https://api.github.com/repos/huggingface/datasets/issues/7051/events
https://github.com/huggingface/datasets/issues/7051
2,409,353,929
I_kwDODunzps6Pm9LJ
7,051
How to set_epoch with interleave_datasets?
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)", "That would be helpful for this case! \r\n\r\nIf there was some way for from_generator to iterate over just a single shard of some dataset that would probably be more ideal. Maybe something like\r\n\r\n```\r\ndef from_dataset_generator(dataset, generator_fn, gen_kwargs):\r\n # calls generator_fn(dataset=dataset_shard, **gen_kwargs)\r\n```\r\n\r\nAnother transform I was trying to implement is an input bucketing transform. Essentially you need to iterate through a dataset and reorder the examples in them, which is not really possible with a `map()` call. But using `from_generator()` causes the final dataset to be a single shard and loses speed gains from multiple dataloader workers", "I see, there are some internal functions to get a single shard already but the public `.shard()` method hasn't been implemented yet for `IterableDataset` :/\r\n\r\n(see the use of `ex_iterable.shard_data_sources` in `IterableDataset._prepare_ex_iterable_for_iteration` for example)", "Would that be something planned on the roadmap for the near future, or do you suggest hacking through with internal APIs for now?", "Ok this turned out to be not too difficult. Are there any obvious issues with my implementation?\r\n\r\n```\r\nclass ShuffleEveryEpochIterable(iterable_dataset._BaseExamplesIterable):\r\n \"\"\"ExamplesIterable that reshuffles the dataset every epoch.\"\"\"\r\n\r\n def __init__(\r\n self,\r\n ex_iterable: iterable_dataset._BaseExamplesIterable,\r\n generator: np.random.Generator,\r\n ):\r\n \"\"\"Constructor.\"\"\"\r\n super().__init__()\r\n self.ex_iterable = ex_iterable\r\n self.generator = generator\r\n\r\n def _init_state_dict(self) -> dict:\r\n self._state_dict = {\r\n 'ex_iterable': self.ex_iterable._init_state_dict(),\r\n 'epoch': 0,\r\n }\r\n return self._state_dict\r\n\r\n @typing.override\r\n def __iter__(self):\r\n epoch = self._state_dict['epoch'] if self._state_dict else 0\r\n for i in itertools.count(epoch):\r\n # Create effective seed using i (subtract in order to avoir overflow in long_scalars)\r\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - i\r\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\r\n generator = np.random.default_rng(effective_seed)\r\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n if self._state_dict:\r\n self._state_dict['epoch'] = i\r\n self._state_dict['ex_iterable'] = self.ex_iterable._init_state_dict()\r\n it = iter(self.ex_iterable)\r\n yield from it\r\n\r\n @typing.override\r\n def shuffle_data_sources(self, generator):\r\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\n\r\n @typing.override\r\n def shard_data_sources(self, worker_id: int, num_workers: int):\r\n ex_iterable = self.ex_iterable.shard_data_sources(worker_id, num_workers)\r\n return ShuffleEveryEpochIterable(ex_iterable, generator=self.generator)\r\n\r\n @typing.override\r\n @property\r\n def n_shards(self) -> int:\r\n return self.ex_iterable.n_shards\r\n \r\ngenerator = np.random.default_rng(seed)\r\nshuffling = iterable_dataset.ShufflingConfig(generator=generator, _original_seed=seed)\r\nex_iterable = 
iterable_dataset.BufferShuffledExamplesIterable(\r\n dataset._ex_iterable, buffer_size=buffer_size, generator=generator\r\n)\r\nex_iterable = ShuffleEveryEpochIterable(ex_iterable, generator=generator)\r\ndataset = datasets.IterableDataset(\r\n ex_iterable=ex_iterable,\r\n info=dataset._info.copy(),\r\n split=dataset._split,\r\n formatting=dataset._formatting,\r\n shuffling=shuffling,\r\n distributed=copy.deepcopy(dataset._distributed),\r\n token_per_repo_id=dataset._token_per_repo_id,\r\n)\r\n```\r\n", "Nice ! This iterable is infinite though no ? How would `interleave_dataset` know when to stop ?\r\n\r\nMaybe the re-shuffling can be implemented directly in `RandomlyCyclingMultiSourcesExamplesIterable` (which is the iterable used by `interleave_dataset`) ?", "Infinite is fine for my usecases fortunately." ]
2024-07-15T18:24:52
2024-07-22T16:52:07
null
NONE
null
null
null
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples. I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch) Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start. How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset... Something like ``` dataset_a = load_dataset(...) dataset_b = load_dataset(...) def epoch_shuffled_dataset(ds): # How to make this maintain the number of shards in ds?? for epoch in itertools.count(): ds.set_epoch(epoch) yield from iter(ds) shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a}) interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7051/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/7051/timeline
null
null
false
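A self-contained sketch of the generator-wrapping workaround from the issue above, with made-up column names and sizes; it still inherits the single-shard limitation discussed in the comments:

```python
import itertools
from datasets import Dataset, IterableDataset, interleave_datasets

dataset_a = Dataset.from_dict({"x": list(range(5))}).to_iterable_dataset()
dataset_b = Dataset.from_dict({"x": list(range(100, 200))}).to_iterable_dataset()

def epoch_shuffled(ds, buffer_size=4):
    # reshuffle ds with a new seed every time it is exhausted
    for epoch in itertools.count():
        yield from ds.shuffle(seed=epoch, buffer_size=buffer_size)

shuffled_a = IterableDataset.from_generator(epoch_shuffled, gen_kwargs={"ds": dataset_a})
mixed = interleave_datasets([shuffled_a, dataset_b], probabilities=[0.5, 0.5], seed=0)

for example in itertools.islice(mixed, 10):
    print(example)
```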
https://api.github.com/repos/huggingface/datasets/issues/7050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7050/comments
https://api.github.com/repos/huggingface/datasets/issues/7050/events
https://github.com/huggingface/datasets/pull/7050
2,409,048,733
PR_kwDODunzps51Z1Yp
7,050
add checkpoint and resume title in docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7050). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.004381 / 0.011008 (-0.006627) | 0.063711 / 0.038508 (0.025202) | 0.031882 / 0.023109 (0.008772) | 0.250056 / 0.275898 (-0.025842) | 0.287616 / 0.323480 (-0.035863) | 0.003327 / 0.007986 (-0.004658) | 0.003717 / 0.004328 (-0.000611) | 0.049103 / 0.004250 (0.044853) | 0.048821 / 0.037052 (0.011769) | 0.259688 / 0.258489 (0.001199) | 0.311469 / 0.293841 (0.017628) | 0.030667 / 0.128546 (-0.097879) | 0.013091 / 0.075646 (-0.062555) | 0.204737 / 0.419271 (-0.214534) | 0.038312 / 0.043533 (-0.005221) | 0.250055 / 0.255139 (-0.005084) | 0.272199 / 0.283200 (-0.011001) | 0.021161 / 0.141683 (-0.120522) | 1.116095 / 1.452155 (-0.336060) | 1.153588 / 1.492716 (-0.339129) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107828 / 0.018006 (0.089822) | 0.315898 / 0.000490 (0.315408) | 0.000228 / 0.000200 (0.000028) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018873 / 0.037411 (-0.018539) | 0.063374 / 0.014526 (0.048848) | 0.076424 / 0.176557 (-0.100133) | 0.123468 / 0.737135 (-0.613667) | 0.077432 / 0.296338 (-0.218906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288931 / 0.215209 (0.073722) | 2.828745 / 2.077655 (0.751091) | 1.471061 / 1.504120 (-0.033059) | 1.332289 / 1.541195 (-0.208906) | 1.379797 / 1.468490 (-0.088693) | 0.708053 / 4.584777 (-3.876724) | 2.382431 / 3.745712 (-1.363281) | 2.952672 / 5.269862 (-2.317190) | 1.957517 / 4.565676 (-2.608160) | 0.078730 / 0.424275 (-0.345546) | 0.005093 / 0.007607 (-0.002514) | 0.338147 / 0.226044 (0.112102) | 3.340841 / 2.268929 (1.071912) | 1.857083 / 55.444624 (-53.587541) | 1.533659 / 6.876477 (-5.342818) | 1.750549 / 2.142072 (-0.391523) | 0.804125 / 4.805227 (-4.001103) | 0.134618 / 6.500664 (-6.366046) | 0.042517 / 0.075469 (-0.032952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968608 / 1.841788 (-0.873180) | 12.326994 / 8.074308 (4.252686) | 9.464889 / 10.191392 (-0.726503) | 0.143979 / 0.680424 (-0.536445) | 0.014577 / 0.534201 (-0.519624) | 0.303205 / 0.579283 (-0.276078) | 0.269866 / 0.434364 (-0.164498) | 0.344846 / 0.540337 (-0.195491) | 0.443794 / 1.386936 (-0.943142) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006452 / 0.011353 (-0.004900) | 0.004264 / 0.011008 (-0.006745) | 0.051355 / 0.038508 (0.012847) | 0.035188 / 0.023109 (0.012079) | 0.267697 / 0.275898 (-0.008201) | 0.295853 / 0.323480 (-0.027627) | 0.004611 / 0.007986 (-0.003374) | 0.005395 / 0.004328 (0.001066) | 0.049903 / 0.004250 (0.045652) | 0.044582 / 0.037052 (0.007530) | 0.284706 / 0.258489 (0.026217) | 0.321623 / 0.293841 (0.027782) | 0.033228 / 0.128546 (-0.095318) | 0.013077 / 0.075646 (-0.062569) | 0.061867 / 0.419271 (-0.357405) | 0.034625 / 0.043533 (-0.008908) | 0.269088 / 0.255139 (0.013949) | 0.284899 / 0.283200 (0.001699) | 0.019972 / 0.141683 (-0.121710) | 1.157976 / 1.452155 (-0.294178) | 1.181658 / 1.492716 (-0.311058) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.111072 / 0.018006 (0.093066) | 0.333310 / 0.000490 (0.332820) | 0.000251 / 0.000200 (0.000051) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023760 / 0.037411 (-0.013652) | 0.080746 / 0.014526 (0.066221) | 0.090231 / 0.176557 (-0.086326) | 0.132200 / 0.737135 (-0.604936) | 0.095679 / 0.296338 (-0.200660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297404 / 0.215209 (0.082195) | 2.919779 / 2.077655 (0.842124) | 1.577470 / 1.504120 (0.073350) | 1.452924 / 1.541195 (-0.088271) | 1.523683 / 1.468490 (0.055193) | 0.743801 / 4.584777 (-3.840976) | 1.006944 / 3.745712 (-2.738768) | 3.218161 / 5.269862 (-2.051701) | 2.069762 / 4.565676 (-2.495914) | 0.082900 / 0.424275 (-0.341375) | 0.005239 / 0.007607 (-0.002368) | 0.360124 / 0.226044 (0.134080) | 3.505349 / 2.268929 (1.236420) | 1.959324 / 55.444624 (-53.485300) | 1.663782 / 6.876477 (-5.212694) | 1.725745 / 2.142072 (-0.416327) | 0.825268 / 4.805227 (-3.979959) | 0.138577 / 6.500664 (-6.362087) | 0.042716 / 0.075469 (-0.032753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.021138 / 1.841788 (-0.820650) | 13.907954 / 8.074308 (5.833646) | 11.023796 / 10.191392 (0.832404) | 0.135224 / 0.680424 (-0.545200) | 0.016232 / 0.534201 (-0.517969) | 0.330389 / 0.579283 (-0.248894) | 0.131702 / 0.434364 (-0.302662) | 0.372499 / 0.540337 (-0.167838) | 0.472702 / 1.386936 (-0.914234) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87f4c2088854ff33e817e724e75179e9975c1b02 \"CML watermark\")\n" ]
2024-07-15T15:38:04
2024-07-15T16:06:15
2024-07-15T15:59:56
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7050", "html_url": "https://github.com/huggingface/datasets/pull/7050", "diff_url": "https://github.com/huggingface/datasets/pull/7050.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7050.patch", "merged_at": "2024-07-15T15:59:56" }
(minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7050/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7049/comments
https://api.github.com/repos/huggingface/datasets/issues/7049/events
https://github.com/huggingface/datasets/issues/7049
2,408,514,366
I_kwDODunzps6PjwM-
7,049
Save nparray as list
{ "login": "Sakurakdx", "id": 48399040, "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sakurakdx", "html_url": "https://github.com/Sakurakdx", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n", "> Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n\r\nUnder the hood the data is saved in Arrow format using the same precision as your numpy arrays?\r\nBy default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision", "(you can fix your second issue by fixing the typo `colums` -> `columns`)", "> (you can fix your second issue by fixing the typo `colums` -> `columns`)\r\n\r\nYou are right, I was careless. Thank you.", "> > Some people use the set_format function to convert the column back, but doesn't this lose precision?\r\n> \r\n> Under the hood the data is saved in Arrow format using the same precision as your numpy arrays? By default the Arrow data is read as python lists, but you can indeed read them back as numpy arrays with the same precision\r\n\r\nYes, after testing I found that there was no loss of precision. Thanks again for your answer." ]
2024-07-15T11:36:11
2024-07-18T11:33:34
2024-07-18T11:33:34
NONE
null
null
null
### Describe the bug When I use the `map` function to convert images into features, datasets saves nparray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision? ### Steps to reproduce the bug the map function ```python def convert_image_to_features(inst, processor, image_dir): image_file = inst["image_url"] file = image_file.split("/")[-1] image_path = os.path.join(image_dir, file) image = Image.open(image_path) image = image.convert("RGBA") inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"] return inst ``` main function ```python map_fun = partial( convert_image_to_features, processor=processor, image_dir=image_dir ) ds = ds.map(map_fun, batched=False, num_proc=20) print(type(ds[0]["pixel_values"]) ``` ### Expected behavior (type < list>) ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.23.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7049/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7048/comments
https://api.github.com/repos/huggingface/datasets/issues/7048/events
https://github.com/huggingface/datasets/issues/7048
2,408,487,547
I_kwDODunzps6Pjpp7
7,048
ImportError: numpy.core.multiarray when using `filter`
{ "login": "kamilakesbi", "id": 45195979, "node_id": "MDQ6VXNlcjQ1MTk1OTc5", "avatar_url": "https://avatars.githubusercontent.com/u/45195979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamilakesbi", "html_url": "https://github.com/kamilakesbi", "followers_url": "https://api.github.com/users/kamilakesbi/followers", "following_url": "https://api.github.com/users/kamilakesbi/following{/other_user}", "gists_url": "https://api.github.com/users/kamilakesbi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamilakesbi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamilakesbi/subscriptions", "organizations_url": "https://api.github.com/users/kamilakesbi/orgs", "repos_url": "https://api.github.com/users/kamilakesbi/repos", "events_url": "https://api.github.com/users/kamilakesbi/events{/privacy}", "received_events_url": "https://api.github.com/users/kamilakesbi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Could you please check your `numpy` version?", "I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ", "We recently added support for numpy 2.0, but it is not released yet.", "Ok I see, thanks! I think we can close this issue for now as switching back to version 1.26.0 solves the problem :) " ]
2024-07-15T11:21:04
2024-07-16T10:11:25
2024-07-16T10:11:25
NONE
null
null
null
### Describe the bug I can't apply the filter method on my dataset. ### Steps to reproduce the bug The following snippet generates a bug: ```python from datasets import load_dataset ami = load_dataset('kamilakesbi/ami', 'ihm') ami['train'].filter( lambda example: example["file_name"] == 'EN2001a' ) ``` I get the following error: `ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).` ### Expected behavior It should work properly! ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7048/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/7047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7047/comments
https://api.github.com/repos/huggingface/datasets/issues/7047/events
https://github.com/huggingface/datasets/issues/7047
2,406,495,084
I_kwDODunzps6PcDNs
7,047
Save Dataset as Sharded Parquet
{ "login": "tom-p-reichel", "id": 43631024, "node_id": "MDQ6VXNlcjQzNjMxMDI0", "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tom-p-reichel", "html_url": "https://github.com/tom-p-reichel", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```python\r\nimport pyarrow.parquet as pq\r\nimport pyarrow as pa\r\n\r\nr = pq.ParquetReader()\r\n\r\nr.open(\"./outrageous-file.parquet\",thrift_string_size_limit=2**31-1, thrift_container_size_limit=2**31-1)\r\n\r\nfrom more_itertools import chunked\r\nimport tqdm\r\n\r\nfor i,chunk in tqdm.tqdm(enumerate(chunked(range(r.num_row_groups),10000))):\r\n w = pq.ParquetWriter(f\"./chunks.parquet/chunk{i}.parquet\",schema=r.schema_arrow)\r\n for idx in chunk:\r\n w.write_table(r.read_row_group(idx))\r\n w.close()\r\n```", "You can also use `.shard()` and call `to_parquet()` on each shard in the meantime:\r\n\r\n```python\r\nnum_shards = 128\r\noutput_path_template = \"output_dir/{index:05d}.parquet\"\r\nfor index in range(num_shards):\r\n shard = ds.shard(index=index, num_shards=num_shards, contiguous=True)\r\n shard.to_parquet(output_path_template.format(index=index))\r\n```" ]
2024-07-12T23:47:51
2024-07-17T12:07:08
null
NONE
null
null
null
### Feature request `to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically. ### Motivation This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet. ### Your contribution I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158 to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7047/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7046/comments
https://api.github.com/repos/huggingface/datasets/issues/7046/events
https://github.com/huggingface/datasets/pull/7046
2,405,485,582
PR_kwDODunzps51N05n
7,046
Support librosa and numpy 2.0 for Python 3.10
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005897 / 0.011353 (-0.005456) | 0.003958 / 0.011008 (-0.007050) | 0.063684 / 0.038508 (0.025176) | 0.031743 / 0.023109 (0.008634) | 0.246725 / 0.275898 (-0.029173) | 0.275519 / 0.323480 (-0.047961) | 0.003347 / 0.007986 (-0.004639) | 0.004089 / 0.004328 (-0.000240) | 0.049591 / 0.004250 (0.045341) | 0.049386 / 0.037052 (0.012333) | 0.264929 / 0.258489 (0.006440) | 0.317157 / 0.293841 (0.023316) | 0.029929 / 0.128546 (-0.098617) | 0.012264 / 0.075646 (-0.063382) | 0.209208 / 0.419271 (-0.210064) | 0.037073 / 0.043533 (-0.006460) | 0.247999 / 0.255139 (-0.007140) | 0.273457 / 0.283200 (-0.009742) | 0.020354 / 0.141683 (-0.121328) | 1.109874 / 1.452155 (-0.342281) | 1.180085 / 1.492716 (-0.312631) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099935 / 0.018006 (0.081929) | 0.305607 / 0.000490 (0.305118) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020019 / 0.037411 (-0.017392) | 0.066608 / 0.014526 (0.052083) | 0.079354 / 0.176557 (-0.097202) | 0.123416 / 0.737135 (-0.613719) | 0.078171 / 0.296338 (-0.218167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281627 / 0.215209 (0.066418) | 2.809807 / 2.077655 (0.732152) | 1.467007 / 1.504120 (-0.037112) | 1.351367 / 1.541195 (-0.189828) | 1.396782 / 1.468490 (-0.071708) | 0.735605 / 4.584777 (-3.849172) | 2.378455 / 3.745712 (-1.367257) | 2.971739 / 5.269862 (-2.298122) | 2.004970 / 4.565676 (-2.560707) | 0.078156 / 0.424275 (-0.346119) | 0.005276 / 0.007607 (-0.002331) | 0.340370 / 0.226044 (0.114325) | 3.347552 / 2.268929 (1.078624) | 1.851098 / 55.444624 (-53.593527) | 1.518079 / 6.876477 (-5.358398) | 1.703145 / 2.142072 (-0.438927) | 0.799574 / 4.805227 (-4.005654) | 0.133591 / 6.500664 (-6.367074) | 0.043329 / 0.075469 (-0.032141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977268 / 1.841788 (-0.864520) | 12.720209 / 8.074308 (4.645901) | 9.798126 / 10.191392 (-0.393266) | 0.132106 / 0.680424 (-0.548318) | 0.014456 / 0.534201 (-0.519745) | 0.312965 / 0.579283 (-0.266318) | 0.271348 / 0.434364 (-0.163016) | 0.343951 / 0.540337 (-0.196386) | 0.449814 / 1.386936 (-0.937122) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005944 / 0.011353 (-0.005409) | 0.004054 / 0.011008 (-0.006954) | 0.050573 / 0.038508 (0.012065) | 0.034580 / 0.023109 (0.011470) | 0.261439 / 0.275898 (-0.014459) | 0.286057 / 0.323480 (-0.037423) | 0.004463 / 0.007986 (-0.003523) | 0.002891 / 0.004328 (-0.001437) | 0.049169 / 0.004250 (0.044919) | 0.041622 / 0.037052 (0.004570) | 0.275216 / 0.258489 (0.016727) | 0.305847 / 0.293841 (0.012006) | 0.032615 / 0.128546 (-0.095932) | 0.012304 / 0.075646 (-0.063343) | 0.062890 / 0.419271 (-0.356382) | 0.033846 / 0.043533 (-0.009687) | 0.262758 / 0.255139 (0.007619) | 0.279451 / 0.283200 (-0.003748) | 0.018953 / 0.141683 (-0.122730) | 1.149158 / 1.452155 (-0.302997) | 1.173981 / 1.492716 (-0.318735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100462 / 0.018006 (0.082456) | 0.308390 / 0.000490 (0.307900) | 0.000207 / 0.000200 (0.000007) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023089 / 0.037411 (-0.014322) | 0.078610 / 0.014526 (0.064084) | 0.090348 / 0.176557 (-0.086208) | 0.130784 / 0.737135 (-0.606351) | 0.092538 / 0.296338 (-0.203801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296255 / 0.215209 (0.081046) | 2.899159 / 2.077655 (0.821504) | 1.603524 / 1.504120 (0.099404) | 1.418002 / 1.541195 (-0.123192) | 1.470221 / 1.468490 (0.001731) | 0.722129 / 4.584777 (-3.862648) | 0.956146 / 3.745712 (-2.789566) | 3.011640 / 5.269862 (-2.258222) | 1.910966 / 4.565676 (-2.654711) | 0.078771 / 0.424275 (-0.345504) | 0.005154 / 0.007607 (-0.002453) | 0.354001 / 0.226044 (0.127956) | 3.484224 / 2.268929 (1.215296) | 1.913612 / 55.444624 (-53.531012) | 1.634492 / 6.876477 (-5.241985) | 1.693292 / 2.142072 (-0.448780) | 0.816837 / 4.805227 (-3.988390) | 0.136631 / 6.500664 (-6.364033) | 0.042291 / 0.075469 (-0.033178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994887 / 1.841788 (-0.846901) | 13.144865 / 8.074308 (5.070557) | 10.820098 / 10.191392 (0.628706) | 0.132557 / 0.680424 (-0.547867) | 0.015467 / 0.534201 (-0.518734) | 0.302026 / 0.579283 (-0.277257) | 0.128763 / 0.434364 (-0.305601) | 0.347908 / 0.540337 (-0.192430) | 0.444829 / 1.386936 (-0.942107) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf6f41e94d9b2f1c620cf937a2e85e5754a8b960 \"CML watermark\")\n" ]
2024-07-12T12:42:47
2024-07-12T13:04:40
2024-07-12T12:58:17
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7046", "html_url": "https://github.com/huggingface/datasets/pull/7046", "diff_url": "https://github.com/huggingface/datasets/pull/7046.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7046.patch", "merged_at": "2024-07-12T12:58:17" }
Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release: - https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1 - https://github.com/dofuuz/python-soxr/issues/28
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7046/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7045/comments
https://api.github.com/repos/huggingface/datasets/issues/7045/events
https://github.com/huggingface/datasets/pull/7045
2,405,447,858
PR_kwDODunzps51Nsie
7,045
Fix tensorflow min version depending on Python version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7045). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005426 / 0.011353 (-0.005927) | 0.003896 / 0.011008 (-0.007112) | 0.063492 / 0.038508 (0.024984) | 0.030199 / 0.023109 (0.007090) | 0.249892 / 0.275898 (-0.026006) | 0.291311 / 0.323480 (-0.032168) | 0.004389 / 0.007986 (-0.003597) | 0.002829 / 0.004328 (-0.001500) | 0.049685 / 0.004250 (0.045435) | 0.043351 / 0.037052 (0.006299) | 0.264265 / 0.258489 (0.005776) | 0.290463 / 0.293841 (-0.003378) | 0.030007 / 0.128546 (-0.098539) | 0.012146 / 0.075646 (-0.063500) | 0.203841 / 0.419271 (-0.215430) | 0.037159 / 0.043533 (-0.006373) | 0.253377 / 0.255139 (-0.001762) | 0.275990 / 0.283200 (-0.007209) | 0.018334 / 0.141683 (-0.123349) | 1.112616 / 1.452155 (-0.339539) | 1.157507 / 1.492716 (-0.335209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097781 / 0.018006 (0.079775) | 0.314381 / 0.000490 (0.313891) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018704 / 0.037411 (-0.018708) | 0.062293 / 0.014526 (0.047767) | 0.073997 / 0.176557 (-0.102559) | 0.120309 / 0.737135 (-0.616826) | 0.075592 / 0.296338 (-0.220747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283178 / 0.215209 (0.067969) | 2.798027 / 2.077655 (0.720372) | 1.431320 / 1.504120 (-0.072800) | 1.316135 / 1.541195 (-0.225060) | 1.345528 / 1.468490 (-0.122962) | 0.717300 / 4.584777 (-3.867477) | 2.401019 / 3.745712 (-1.344693) | 2.866411 / 5.269862 (-2.403451) | 1.933198 / 4.565676 (-2.632479) | 0.079505 / 0.424275 (-0.344771) | 0.005089 / 0.007607 (-0.002519) | 0.333614 / 0.226044 (0.107569) | 3.315449 / 2.268929 (1.046520) | 1.807667 / 55.444624 (-53.636957) | 1.490537 / 6.876477 (-5.385939) | 1.633305 / 2.142072 (-0.508767) | 0.807732 / 4.805227 (-3.997495) | 0.133825 / 6.500664 (-6.366839) | 0.041696 / 0.075469 (-0.033774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969063 / 1.841788 (-0.872724) | 11.825985 / 8.074308 (3.751677) | 9.808041 / 10.191392 (-0.383351) | 0.143338 / 0.680424 (-0.537085) | 0.014714 / 0.534201 (-0.519487) | 0.304360 / 0.579283 (-0.274923) | 0.266863 / 0.434364 (-0.167501) | 0.342374 / 0.540337 (-0.197963) | 0.442120 / 1.386936 (-0.944816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005574 / 0.011353 (-0.005778) | 0.003735 / 0.011008 (-0.007273) | 0.051021 / 0.038508 (0.012513) | 0.032825 / 0.023109 (0.009716) | 0.267775 / 0.275898 (-0.008123) | 0.286015 / 0.323480 (-0.037464) | 0.004332 / 0.007986 (-0.003653) | 0.002796 / 0.004328 (-0.001532) | 0.050183 / 0.004250 (0.045933) | 0.040191 / 0.037052 (0.003138) | 0.279777 / 0.258489 (0.021288) | 0.312161 / 0.293841 (0.018320) | 0.031993 / 0.128546 (-0.096553) | 0.012168 / 0.075646 (-0.063478) | 0.061622 / 0.419271 (-0.357650) | 0.033577 / 0.043533 (-0.009956) | 0.267300 / 0.255139 (0.012161) | 0.284595 / 0.283200 (0.001396) | 0.018476 / 0.141683 (-0.123207) | 1.135917 / 1.452155 (-0.316237) | 1.164516 / 1.492716 (-0.328200) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.108194 / 0.018006 (0.090188) | 0.309514 / 0.000490 (0.309025) | 0.000211 / 0.000200 (0.000011) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022998 / 0.037411 (-0.014413) | 0.077126 / 0.014526 (0.062600) | 0.088779 / 0.176557 (-0.087778) | 0.128646 / 0.737135 (-0.608489) | 0.089895 / 0.296338 (-0.206443) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295131 / 0.215209 (0.079922) | 2.887380 / 2.077655 (0.809726) | 1.586450 / 1.504120 (0.082330) | 1.449831 / 1.541195 (-0.091363) | 1.468805 / 1.468490 (0.000315) | 0.721578 / 4.584777 (-3.863199) | 0.970499 / 3.745712 (-2.775214) | 2.975604 / 5.269862 (-2.294258) | 1.935809 / 4.565676 (-2.629867) | 0.078504 / 0.424275 (-0.345771) | 0.005219 / 0.007607 (-0.002388) | 0.347168 / 0.226044 (0.121124) | 3.417040 / 2.268929 (1.148111) | 1.928707 / 55.444624 (-53.515917) | 1.629398 / 6.876477 (-5.247078) | 1.653014 / 2.142072 (-0.489058) | 0.796097 / 4.805227 (-4.009130) | 0.133956 / 6.500664 (-6.366708) | 0.041567 / 0.075469 (-0.033902) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995511 / 1.841788 (-0.846277) | 12.577211 / 8.074308 (4.502903) | 10.562561 / 10.191392 (0.371169) | 0.144288 / 0.680424 (-0.536136) | 0.016345 / 0.534201 (-0.517856) | 0.304364 / 0.579283 (-0.274920) | 0.134630 / 0.434364 (-0.299734) | 0.341494 / 0.540337 (-0.198843) | 0.436238 / 1.386936 (-0.950698) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b708bb6611a88c3f00f58ec3c63fe0da2c2b1e1 \"CML watermark\")\n" ]
2024-07-12T12:20:23
2024-07-12T12:38:53
2024-07-12T12:33:00
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7045", "html_url": "https://github.com/huggingface/datasets/pull/7045", "diff_url": "https://github.com/huggingface/datasets/pull/7045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7045.patch", "merged_at": "2024-07-12T12:33:00" }
Fix tensorflow min version depending on Python version. Related to: - #6991
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7045/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7044/comments
https://api.github.com/repos/huggingface/datasets/issues/7044/events
https://github.com/huggingface/datasets/pull/7044
2,405,002,987
PR_kwDODunzps51MLbh
7,044
Mark tests that require librosa
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7044). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005797 / 0.011353 (-0.005556) | 0.004017 / 0.011008 (-0.006991) | 0.063829 / 0.038508 (0.025321) | 0.031329 / 0.023109 (0.008220) | 0.249388 / 0.275898 (-0.026510) | 0.273129 / 0.323480 (-0.050351) | 0.004250 / 0.007986 (-0.003736) | 0.002821 / 0.004328 (-0.001507) | 0.049250 / 0.004250 (0.044999) | 0.046175 / 0.037052 (0.009123) | 0.252040 / 0.258489 (-0.006449) | 0.296537 / 0.293841 (0.002696) | 0.030579 / 0.128546 (-0.097967) | 0.012436 / 0.075646 (-0.063210) | 0.205829 / 0.419271 (-0.213443) | 0.036979 / 0.043533 (-0.006554) | 0.251354 / 0.255139 (-0.003785) | 0.272262 / 0.283200 (-0.010938) | 0.019047 / 0.141683 (-0.122636) | 1.112410 / 1.452155 (-0.339745) | 1.137445 / 1.492716 (-0.355271) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097270 / 0.018006 (0.079264) | 0.309329 / 0.000490 (0.308839) | 0.000221 / 0.000200 (0.000021) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019021 / 0.037411 (-0.018390) | 0.066801 / 0.014526 (0.052276) | 0.075280 / 0.176557 (-0.101276) | 0.122499 / 0.737135 (-0.614637) | 0.077424 / 0.296338 (-0.218914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279469 / 0.215209 (0.064259) | 2.787511 / 2.077655 (0.709856) | 1.411389 / 1.504120 (-0.092731) | 1.285796 / 1.541195 (-0.255399) | 1.354252 / 1.468490 (-0.114238) | 0.735341 / 4.584777 (-3.849436) | 2.418557 / 3.745712 (-1.327155) | 2.983406 / 5.269862 (-2.286455) | 2.005853 / 4.565676 (-2.559823) | 0.080440 / 0.424275 (-0.343835) | 0.005242 / 0.007607 (-0.002365) | 0.343557 / 0.226044 (0.117513) | 3.358984 / 2.268929 (1.090055) | 1.816709 / 55.444624 (-53.627915) | 1.500225 / 6.876477 (-5.376252) | 1.715405 / 2.142072 (-0.426667) | 0.829054 / 4.805227 (-3.976174) | 0.138352 / 6.500664 (-6.362312) | 0.043709 / 0.075469 (-0.031760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969135 / 1.841788 (-0.872652) | 12.510750 / 8.074308 (4.436442) | 10.140368 / 10.191392 (-0.051024) | 0.133117 / 0.680424 (-0.547307) | 0.015775 / 0.534201 (-0.518426) | 0.302203 / 0.579283 (-0.277080) | 0.268214 / 0.434364 (-0.166150) | 0.347041 / 0.540337 (-0.193296) | 0.456095 / 1.386936 (-0.930841) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006255 / 0.011353 (-0.005098) | 0.004453 / 0.011008 (-0.006555) | 0.052298 / 0.038508 (0.013790) | 0.034808 / 0.023109 (0.011699) | 0.274723 / 0.275898 (-0.001175) | 0.297199 / 0.323480 (-0.026281) | 0.004499 / 0.007986 (-0.003486) | 0.003086 / 0.004328 (-0.001242) | 0.051315 / 0.004250 (0.047065) | 0.042764 / 0.037052 (0.005712) | 0.285636 / 0.258489 (0.027147) | 0.321819 / 0.293841 (0.027978) | 0.033350 / 0.128546 (-0.095196) | 0.013457 / 0.075646 (-0.062189) | 0.063930 / 0.419271 (-0.355342) | 0.034537 / 0.043533 (-0.008996) | 0.272630 / 0.255139 (0.017491) | 0.289245 / 0.283200 (0.006045) | 0.018910 / 0.141683 (-0.122773) | 1.153064 / 1.452155 (-0.299091) | 1.207065 / 1.492716 (-0.285651) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093008 / 0.018006 (0.075002) | 0.301313 / 0.000490 (0.300823) | 0.000214 / 0.000200 (0.000014) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023168 / 0.037411 (-0.014244) | 0.080837 / 0.014526 (0.066312) | 0.089667 / 0.176557 (-0.086889) | 0.135849 / 0.737135 (-0.601286) | 0.092082 / 0.296338 (-0.204257) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298933 / 0.215209 (0.083723) | 2.847736 / 2.077655 (0.770082) | 1.550268 / 1.504120 (0.046148) | 1.425675 / 1.541195 (-0.115520) | 1.469251 / 1.468490 (0.000761) | 0.720446 / 4.584777 (-3.864331) | 0.976149 / 3.745712 (-2.769563) | 3.081804 / 5.269862 (-2.188057) | 1.982797 / 4.565676 (-2.582880) | 0.078598 / 0.424275 (-0.345677) | 0.005229 / 0.007607 (-0.002379) | 0.345475 / 0.226044 (0.119430) | 3.421312 / 2.268929 (1.152384) | 1.929034 / 55.444624 (-53.515590) | 1.631523 / 6.876477 (-5.244953) | 1.671996 / 2.142072 (-0.470077) | 0.776916 / 4.805227 (-4.028311) | 0.133966 / 6.500664 (-6.366699) | 0.042183 / 0.075469 (-0.033286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993023 / 1.841788 (-0.848764) | 12.981642 / 8.074308 (4.907334) | 10.610457 / 10.191392 (0.419065) | 0.146748 / 0.680424 (-0.533676) | 0.016556 / 0.534201 (-0.517645) | 0.303613 / 0.579283 (-0.275670) | 0.132671 / 0.434364 (-0.301693) | 0.344786 / 0.540337 (-0.195552) | 0.443049 / 1.386936 (-0.943887) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8419c40a085d67eb5832cecebf3ef8213112857d \"CML watermark\")\n" ]
2024-07-12T08:06:59
2024-07-12T09:06:32
2024-07-12T09:00:09
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7044", "html_url": "https://github.com/huggingface/datasets/pull/7044", "diff_url": "https://github.com/huggingface/datasets/pull/7044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7044.patch", "merged_at": "2024-07-12T09:00:09" }
Mark tests that require `librosa`. Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`: - https://github.com/dofuuz/python-soxr/issues/28
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7044/timeline
null
null
true

# Dataset Card for "github-issues"

More Information needed
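
The card above is still a stub, so here is a minimal usage sketch. The repository ID below is a placeholder, since the card does not say under which namespace the dataset is hosted; everything else uses the standard `datasets` API.

```python
from datasets import load_dataset

# "your-username/github-issues" is a placeholder repo ID; replace it with the
# namespace that actually hosts this dataset on the Hugging Face Hub.
issues = load_dataset("your-username/github-issues", split="train")

print(issues)      # column names and number of rows
print(issues[0])   # first issue record as a plain Python dict
```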

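The records themselves mirror the issue objects returned by the GitHub REST API for the huggingface/datasets repository. Purely as an illustration of how such rows could be collected (an assumption about the collection process, not a description of how this dataset was actually built), one can page through the issues endpoint with `requests`:

```python
import requests

# Assumption: unauthenticated access is enough for a quick look; pass an
# Authorization header with a personal access token to raise the rate limit
# when paging through the full issue history.
url = "https://api.github.com/repos/huggingface/datasets/issues"
batch = requests.get(url, params={"state": "all", "per_page": 100, "page": 1}).json()

for issue in batch:
    # Pull requests also show up on the issues endpoint; they carry a
    # "pull_request" key, which is how a PR/issue flag like the boolean at the
    # end of each record above can be derived.
    print(issue["number"], issue["title"], "pull_request" in issue)
```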