Dataset columns (type and observed range/classes):

- url: string (length 61)
- repository_url: string (1 distinct value)
- labels_url: string (length 75)
- comments_url: string (length 70)
- events_url: string (length 68)
- html_url: string (length 49 to 51)
- id: int64 (~1.18B to ~2.35B)
- node_id: string (length 18 to 19)
- number: int64 (~3.98k to ~6.97k)
- title: string (length 1 to 290)
- user: dict
- labels: list (length 0 to 4)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0 to 3)
- milestone: dict
- comments: sequence (length 0 to 12, nullable)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- active_lock_reason: null
- body: string (length 1 to 33.9k, nullable)
- reactions: dict
- timeline_url: string (length 70)
- performed_via_github_app: null
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict
- is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/6761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6761/comments
https://api.github.com/repos/huggingface/datasets/issues/6761/events
https://github.com/huggingface/datasets/pull/6761
2,212,805,108
PR_kwDODunzps5rCAu8
6,761
Remove deprecated code
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-28T09:57:57
2024-03-29T13:27:26
2024-03-29T13:18:13
CONTRIBUTOR
null
What does this PR do?

1. Remove `list_files_info` in favor of `list_repo_tree`. As of `huggingface_hub` 0.23, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions; since the `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part.
2. `preupload_lfs_files` also had different behavior between `<0.20` and `>=0.20`. I removed it since `huggingface_hub` is now pinned to `>=0.21.2`.
3. `hf_hub_url` is overwritten to default to the dataset repo_type. I think it is misleading to keep the same method name for it, so I renamed it to `get_dataset_url` for clarity. Let me know if you prefer to see this change reverted.
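For reference, a minimal sketch of the `huggingface_hub.HfApi.list_repo_tree` call that replaces the deprecated helper; the repo id here is purely illustrative, not one touched by the PR:

```python
from huggingface_hub import HfApi

api = HfApi()
# list_repo_tree iterates over the files and folders of a repo on the Hub,
# which is what the deprecated list_files_info helper was used for.
for entry in api.list_repo_tree("allenai/c4", repo_type="dataset", recursive=True):
    print(entry.path)
```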
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6761/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6761/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6761", "html_url": "https://github.com/huggingface/datasets/pull/6761", "diff_url": "https://github.com/huggingface/datasets/pull/6761.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6761.patch", "merged_at": "2024-03-29T13:18:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6760/comments
https://api.github.com/repos/huggingface/datasets/issues/6760/events
https://github.com/huggingface/datasets/issues/6760
2,212,288,122
I_kwDODunzps6D3NZ6
6,760
Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0
{ "login": "yucc-leon", "id": 17897916, "node_id": "MDQ6VXNlcjE3ODk3OTE2", "avatar_url": "https://avatars.githubusercontent.com/u/17897916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yucc-leon", "html_url": "https://github.com/yucc-leon", "followers_url": "https://api.github.com/users/yucc-leon/followers", "following_url": "https://api.github.com/users/yucc-leon/following{/other_user}", "gists_url": "https://api.github.com/users/yucc-leon/gists{/gist_id}", "starred_url": "https://api.github.com/users/yucc-leon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yucc-leon/subscriptions", "organizations_url": "https://api.github.com/users/yucc-leon/orgs", "repos_url": "https://api.github.com/users/yucc-leon/repos", "events_url": "https://api.github.com/users/yucc-leon/events{/privacy}", "received_events_url": "https://api.github.com/users/yucc-leon/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-28T03:44:26
2024-04-07T09:40:40
null
NONE
null
### Describe the bug This happens with datasets-2.18.0; I downgraded the version to 2.14.6 fixing this temporarily. ``` Traceback (most recent call last): File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder( File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory raise e1 from None File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read() File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte ``` ### Steps to reproduce the bug 1. Using Python3.10/3.11 2. Install datasets-2.18.0 3. test with ``` from datasets import load_dataset dataset = load_dataset("codeparrot/apps") ``` ### Expected behavior Normally it should manage to download and load the dataset without such error. ### Environment info Ubuntu, Python3.10/3.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6760/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6759/comments
https://api.github.com/repos/huggingface/datasets/issues/6759/events
https://github.com/huggingface/datasets/issues/6759
2,208,892,891
I_kwDODunzps6DqQfb
6,759
Persistent multi-process Pool
{ "login": "fostiropoulos", "id": 4337024, "node_id": "MDQ6VXNlcjQzMzcwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fostiropoulos", "html_url": "https://github.com/fostiropoulos", "followers_url": "https://api.github.com/users/fostiropoulos/followers", "following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}", "gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}", "starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions", "organizations_url": "https://api.github.com/users/fostiropoulos/orgs", "repos_url": "https://api.github.com/users/fostiropoulos/repos", "events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}", "received_events_url": "https://api.github.com/users/fostiropoulos/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
null
2024-03-26T17:35:25
2024-03-26T17:35:25
null
NONE
null
### Feature request

Running `.map` and `.filter` consecutively with `num_proc` set instantiates a new multiprocessing pool for each call. Since instantiating a Pool is very resource intensive, this becomes a bottleneck when filtering iteratively.

My ideas:

1. Add an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside is that it would be complex to determine the correct resource allocation and deallocation of the pool, i.e. the dataset can outlive the usefulness of the pool.
2. Accept a pool as an argument. The downside is the expertise required from the user; the upside is better resource management.

### Motivation

It is really slow to iteratively perform map and filter operations on a dataset.

### Your contribution

If approved I could integrate it. I would need to know which of the two options above would be the most suitable to implement.
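To make the cost concrete, here is a small sketch of the pattern described above; each call with `num_proc` spins up and tears down its own pool (toy data, illustrative only):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"] * 10000})

# Each of these calls creates its own multiprocessing pool and destroys it afterwards,
# which is the repeated start-up cost a persistent pool would avoid.
ds = ds.map(lambda ex: {"length": len(ex["text"])}, num_proc=4)
ds = ds.filter(lambda ex: ex["length"] > 1, num_proc=4)
```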
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6759/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6758/comments
https://api.github.com/repos/huggingface/datasets/issues/6758/events
https://github.com/huggingface/datasets/issues/6758
2,208,494,302
I_kwDODunzps6DovLe
6,758
Passing `sample_by` to `load_dataset` when loading text data does not work
{ "login": "ntoxeg", "id": 823693, "node_id": "MDQ6VXNlcjgyMzY5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/823693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ntoxeg", "html_url": "https://github.com/ntoxeg", "followers_url": "https://api.github.com/users/ntoxeg/followers", "following_url": "https://api.github.com/users/ntoxeg/following{/other_user}", "gists_url": "https://api.github.com/users/ntoxeg/gists{/gist_id}", "starred_url": "https://api.github.com/users/ntoxeg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ntoxeg/subscriptions", "organizations_url": "https://api.github.com/users/ntoxeg/orgs", "repos_url": "https://api.github.com/users/ntoxeg/repos", "events_url": "https://api.github.com/users/ntoxeg/events{/privacy}", "received_events_url": "https://api.github.com/users/ntoxeg/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
null
2024-03-26T14:55:33
2024-04-09T11:27:59
2024-04-09T11:27:59
NONE
null
### Describe the bug

I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs, or take them whole. Passing `sample_by="document"` to `load_dataset` results in files getting split into lines regardless. I have edited `src/datasets/packaged_modules/text/text.py` for myself to switch the default and it works fine.

As a side note, the `if-else` for `sample_by` will silently load an empty dataset if someone makes a typo in the argument, which is not ideal.

### Steps to reproduce the bug

1. Prepare data as a bunch of files in a directory.
2. Load that data via `load_dataset("text", data_files=<data_dir>/<files_glob>, ..., sample_by="document")`.
3. Inspect the resulting dataset: every item has the form `{"text": <a line from a file>}`.

### Expected behavior

`load_dataset("text", data_files=<data_dir>/<files_glob>, ..., sample_by="document")` should result in a dataset with items of the form `{"text": <one document>}`.

### Environment info

- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1046-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6758/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6757/comments
https://api.github.com/repos/huggingface/datasets/issues/6757/events
https://github.com/huggingface/datasets/pull/6757
2,206,280,340
PR_kwDODunzps5qr7Li
6,757
Test disabling transformers containers in docs CI
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-25T17:16:11
2024-03-27T16:26:35
null
CONTRIBUTOR
null
Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/transformers-doc-builder"` by default), we don't run the CI inside a container, therefore saving ~2min of download time. The plan is to test disabling the transformers container on a few "big" repo and if everything works correctly, we will stop making it the default container. More details on https://github.com/huggingface/doc-builder/pull/487. cc @mishig25
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6757/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6757/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6757", "html_url": "https://github.com/huggingface/datasets/pull/6757", "diff_url": "https://github.com/huggingface/datasets/pull/6757.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6757.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6756/comments
https://api.github.com/repos/huggingface/datasets/issues/6756/events
https://github.com/huggingface/datasets/issues/6756
2,205,557,725
I_kwDODunzps6DdiPd
6,756
Support SQLite files?
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2024-03-25T11:48:05
2024-03-26T16:09:32
2024-03-26T16:09:32
CONTRIBUTOR
null
### Feature request Support loading a dataset from a SQLite file https://huggingface.co./datasets/severo/test_iris_sqlite/tree/main ### Motivation SQLite is a popular file format. ### Your contribution See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal) In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`. See dataset here: https://huggingface.co./datasets/severo/test_iris_sqlite Note: should we also support DuckDB files?
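For reference, a single table from a SQLite file can already be read through the existing SQL loader; a sketch, assuming the `Iris.sqlite` file and `Iris` table from the linked example repo have been downloaded locally and that `sqlalchemy` is installed:

```python
from datasets import Dataset

# Reads the "Iris" table from a local SQLite file via the generic SQL reader
# (a per-table workaround, not the requested first-class SQLite support).
ds = Dataset.from_sql("Iris", "sqlite:///Iris.sqlite")
print(ds)
```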
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6756/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6756/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6755/comments
https://api.github.com/repos/huggingface/datasets/issues/6755/events
https://github.com/huggingface/datasets/issues/6755
2,204,573,289
I_kwDODunzps6DZx5p
6,755
Small typo on the documentation
{ "login": "fostiropoulos", "id": 4337024, "node_id": "MDQ6VXNlcjQzMzcwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fostiropoulos", "html_url": "https://github.com/fostiropoulos", "followers_url": "https://api.github.com/users/fostiropoulos/followers", "following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}", "gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}", "starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions", "organizations_url": "https://api.github.com/users/fostiropoulos/orgs", "repos_url": "https://api.github.com/users/fostiropoulos/repos", "events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}", "received_events_url": "https://api.github.com/users/fostiropoulos/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "JINO-ROHIT", "id": 63234112, "node_id": "MDQ6VXNlcjYzMjM0MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JINO-ROHIT", "html_url": "https://github.com/JINO-ROHIT", "followers_url": "https://api.github.com/users/JINO-ROHIT/followers", "following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}", "gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}", "starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions", "organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs", "repos_url": "https://api.github.com/users/JINO-ROHIT/repos", "events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}", "received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events", "type": "User", "site_admin": false }
[ { "login": "JINO-ROHIT", "id": 63234112, "node_id": "MDQ6VXNlcjYzMjM0MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JINO-ROHIT", "html_url": "https://github.com/JINO-ROHIT", "followers_url": "https://api.github.com/users/JINO-ROHIT/followers", "following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}", "gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}", "starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions", "organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs", "repos_url": "https://api.github.com/users/JINO-ROHIT/repos", "events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}", "received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events", "type": "User", "site_admin": false } ]
null
null
2024-03-24T21:47:52
2024-04-02T14:01:19
2024-04-02T14:01:19
NONE
null
### Describe the bug There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 It should be `caching is enabled`. ### Steps to reproduce the bug Please visit https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 ### Expected behavior `caching is enabled` ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35 - Python version: 3.11.7 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6755/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6754/comments
https://api.github.com/repos/huggingface/datasets/issues/6754/events
https://github.com/huggingface/datasets/pull/6754
2,204,214,595
PR_kwDODunzps5qk-nr
6,754
Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache`
{ "login": "izhx", "id": 26690193, "node_id": "MDQ6VXNlcjI2NjkwMTkz", "avatar_url": "https://avatars.githubusercontent.com/u/26690193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/izhx", "html_url": "https://github.com/izhx", "followers_url": "https://api.github.com/users/izhx/followers", "following_url": "https://api.github.com/users/izhx/following{/other_user}", "gists_url": "https://api.github.com/users/izhx/gists{/gist_id}", "starred_url": "https://api.github.com/users/izhx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/izhx/subscriptions", "organizations_url": "https://api.github.com/users/izhx/orgs", "repos_url": "https://api.github.com/users/izhx/repos", "events_url": "https://api.github.com/users/izhx/events{/privacy}", "received_events_url": "https://api.github.com/users/izhx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-24T06:59:15
2024-04-15T15:45:44
2024-04-15T15:38:51
CONTRIBUTOR
null
Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729

I didn't find a guideline on how to run the tests, so I just ran the following steps to make sure that this bug is fixed:

1. `python test.py`
2. then `HF_DATASETS_OFFLINE=1 python test.py`

The `test.py` is:

```
import datasets

datasets.utils.logging.set_verbosity_info()

ds = datasets.load_dataset('izhx/STS17-debug')
print(ds)

ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')
print(ds)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6754/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6754", "html_url": "https://github.com/huggingface/datasets/pull/6754", "diff_url": "https://github.com/huggingface/datasets/pull/6754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6754.patch", "merged_at": "2024-04-15T15:38:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/6753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6753/comments
https://api.github.com/repos/huggingface/datasets/issues/6753/events
https://github.com/huggingface/datasets/issues/6753
2,204,155,091
I_kwDODunzps6DYLzT
6,753
Type error when importing datasets on Kaggle
{ "login": "jtv199", "id": 18300717, "node_id": "MDQ6VXNlcjE4MzAwNzE3", "avatar_url": "https://avatars.githubusercontent.com/u/18300717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jtv199", "html_url": "https://github.com/jtv199", "followers_url": "https://api.github.com/users/jtv199/followers", "following_url": "https://api.github.com/users/jtv199/following{/other_user}", "gists_url": "https://api.github.com/users/jtv199/gists{/gist_id}", "starred_url": "https://api.github.com/users/jtv199/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jtv199/subscriptions", "organizations_url": "https://api.github.com/users/jtv199/orgs", "repos_url": "https://api.github.com/users/jtv199/repos", "events_url": "https://api.github.com/users/jtv199/events{/privacy}", "received_events_url": "https://api.github.com/users/jtv199/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-24T03:01:30
2024-04-04T13:50:35
2024-03-30T00:23:49
NONE
null
### Describe the bug When trying to run ``` import datasets print(datasets.__version__) ``` It generates the following error ``` TypeError: expected string or bytes-like object ``` It looks like It cannot find the valid versions of `fsspec` though fsspec version is fine when I checked Via command ``` import fsspec print(fsspec.__version__) ​ # output: 2024.3.1 ``` Detailed crash report ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[1], line 1 ----> 1 import datasets 2 print(datasets.__version__) File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18 1 # ruff: noqa 2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. 3 # (...) 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 16 __version__ = "2.18.0" ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:66 63 from multiprocess import Pool 64 from tqdm.contrib.concurrent import thread_map ---> 66 from . import config 67 from .arrow_reader import ArrowReader 68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence File /opt/conda/lib/python3.10/site-packages/datasets/config.py:41 39 # Imports 40 DILL_VERSION = version.parse(importlib.metadata.version("dill")) ---> 41 FSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec")) 42 PANDAS_VERSION = version.parse(importlib.metadata.version("pandas")) 43 PYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow")) File /opt/conda/lib/python3.10/site-packages/packaging/version.py:49, in parse(version) 43 """ 44 Parse the given version string and return either a :class:`Version` object 45 or a :class:`LegacyVersion` object depending on if the given version is 46 a valid PEP 440 version or a legacy version. 47 """ 48 try: ---> 49 return Version(version) 50 except InvalidVersion: 51 return LegacyVersion(version) File /opt/conda/lib/python3.10/site-packages/packaging/version.py:264, in Version.__init__(self, version) 261 def __init__(self, version: str) -> None: 262 263 # Validate the version and parse it into pieces --> 264 match = self._regex.search(version) 265 if not match: 266 raise InvalidVersion(f"Invalid version: '{version}'") TypeError: expected string or bytes-like object ``` ### Steps to reproduce the bug 1. run `!pip install -U datasets` on kaggle 2. check datasets is installed via ``` import datasets print(datasets.__version__) ``` ### Expected behavior Expected to print datasets version, like `2.18.0` ### Environment info Running on Kaggle, latest enviornment , here is the notebook https://www.kaggle.com/code/jtv199/mistrial-7b-part2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6753/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6752/comments
https://api.github.com/repos/huggingface/datasets/issues/6752/events
https://github.com/huggingface/datasets/issues/6752
2,204,043,839
I_kwDODunzps6DXwo_
6,752
Precision being changed from float16 to float32 unexpectedly
{ "login": "gcervantes8", "id": 21228908, "node_id": "MDQ6VXNlcjIxMjI4OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcervantes8", "html_url": "https://github.com/gcervantes8", "followers_url": "https://api.github.com/users/gcervantes8/followers", "following_url": "https://api.github.com/users/gcervantes8/following{/other_user}", "gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions", "organizations_url": "https://api.github.com/users/gcervantes8/orgs", "repos_url": "https://api.github.com/users/gcervantes8/repos", "events_url": "https://api.github.com/users/gcervantes8/events{/privacy}", "received_events_url": "https://api.github.com/users/gcervantes8/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-23T20:53:56
2024-04-10T15:21:33
null
NONE
null
### Describe the bug I'm loading a HuggingFace Dataset for images. I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance, the type turns out to be float32. ### Steps to reproduce the bug ```python import torchvision.transforms.v2 as transforms from datasets import load_dataset dataset = load_dataset('cifar10', split='test') dataset = dataset.with_format("torch") data_transform = transforms.Compose([transforms.Resize((32, 32)), transforms.ToDtype(torch.float16, scale=True), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ]) def _preprocess(examples): # Permutes from (BS x H x W x C) to (BS x C x H x W) images = torch.permute(examples['img'], (0, 3, 2, 1)) examples['img'] = data_transform(images) return examples dataset = dataset.map(_preprocess, batched=True, batch_size=8) ``` Now at this point the dataset.features are showing float16 which is great because that's what I want. ```python print(data_loader.features['img']) Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None) ``` But when I try to sample an image from this dataloader; I'm getting a float32 image, when I'm expecting float16: ```python print(next(iter(data_loader))['img'].dtype) torch.float32 ``` ### Expected behavior I'm expecting the images loaded after the transformation to stay in float16. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.9 - `huggingface_hub` version: 0.21.4 - PyArrow version: 14.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6752/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6751/comments
https://api.github.com/repos/huggingface/datasets/issues/6751/events
https://github.com/huggingface/datasets/pull/6751
2,203,951,501
PR_kwDODunzps5qkKLH
6,751
Use 'with' operator for some download functions
{ "login": "Moisan", "id": 31669, "node_id": "MDQ6VXNlcjMxNjY5", "avatar_url": "https://avatars.githubusercontent.com/u/31669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Moisan", "html_url": "https://github.com/Moisan", "followers_url": "https://api.github.com/users/Moisan/followers", "following_url": "https://api.github.com/users/Moisan/following{/other_user}", "gists_url": "https://api.github.com/users/Moisan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Moisan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moisan/subscriptions", "organizations_url": "https://api.github.com/users/Moisan/orgs", "repos_url": "https://api.github.com/users/Moisan/repos", "events_url": "https://api.github.com/users/Moisan/events{/privacy}", "received_events_url": "https://api.github.com/users/Moisan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-23T16:32:08
2024-03-26T00:40:57
2024-03-26T00:40:57
NONE
null
Some functions in `streaming_download_manager.py` are not closing the files they open, which leads to `Unclosed file` warnings in our code. This PR fixes a few of them.
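The change is essentially the standard context-manager pattern; a generic sketch (not the exact code from the PR):

```python
import fsspec

# Opening through a context manager guarantees the handle is closed even on error,
# which avoids the "Unclosed file" warnings mentioned above.
with fsspec.open("https://example.com/data.txt", "rt") as f:
    first_line = f.readline()
```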
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6751/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6751", "html_url": "https://github.com/huggingface/datasets/pull/6751", "diff_url": "https://github.com/huggingface/datasets/pull/6751.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6751.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6750/comments
https://api.github.com/repos/huggingface/datasets/issues/6750/events
https://github.com/huggingface/datasets/issues/6750
2,203,590,658
I_kwDODunzps6DWCAC
6,750
`load_dataset` requires a network connection for local download?
{ "login": "MiroFurtado", "id": 6306695, "node_id": "MDQ6VXNlcjYzMDY2OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6306695?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MiroFurtado", "html_url": "https://github.com/MiroFurtado", "followers_url": "https://api.github.com/users/MiroFurtado/followers", "following_url": "https://api.github.com/users/MiroFurtado/following{/other_user}", "gists_url": "https://api.github.com/users/MiroFurtado/gists{/gist_id}", "starred_url": "https://api.github.com/users/MiroFurtado/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MiroFurtado/subscriptions", "organizations_url": "https://api.github.com/users/MiroFurtado/orgs", "repos_url": "https://api.github.com/users/MiroFurtado/repos", "events_url": "https://api.github.com/users/MiroFurtado/events{/privacy}", "received_events_url": "https://api.github.com/users/MiroFurtado/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-23T01:06:32
2024-04-15T15:38:52
2024-04-15T15:38:52
NONE
null
### Describe the bug Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again? ### Steps to reproduce the bug ``` >>> import datasets >>> datasets.load_dataset("hh-rlhf") Repo card metadata block was not found. Setting CardData to empty. *hangs bc i'm firewalled* ```` stack trace from ctrl-c: ``` ^CTraceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset builder_instance.download_and_prepare( output_path = get_from_cache( [0/122] File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache response = http_head( File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head response = _request_with_retry( File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send resp = conn.urlopen( File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn conn.connect() File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect self.sock = conn = self._new_conn() File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn conn = connection.create_connection( File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection sock.connect(sa) KeyboardInterrupt ``` ### Expected behavior loads the dataset ### Environment info ``` > pip show datasets Name: datasets Version: 2.18.0 ``` Python 3.10.2
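As a stopgap, offline mode makes `load_dataset` skip HTTP calls entirely and rely on the local cache; a sketch mirroring the call from the report, assuming the dataset is already cached:

```python
import os

# Must be set before `datasets` is imported; with offline mode on,
# load_dataset resolves everything from the local cache and makes no network requests.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

ds = datasets.load_dataset("hh-rlhf")
```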
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6750/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6750/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6749/comments
https://api.github.com/repos/huggingface/datasets/issues/6749/events
https://github.com/huggingface/datasets/pull/6749
2,202,310,116
PR_kwDODunzps5qeoSk
6,749
Fix fsspec tqdm callback
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-22T11:44:11
2024-03-22T14:51:45
2024-03-22T14:45:39
MEMBER
null
Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6749/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6749", "html_url": "https://github.com/huggingface/datasets/pull/6749", "diff_url": "https://github.com/huggingface/datasets/pull/6749.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6749.patch", "merged_at": "2024-03-22T14:45:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/6748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6748/comments
https://api.github.com/repos/huggingface/datasets/issues/6748/events
https://github.com/huggingface/datasets/issues/6748
2,201,517,348
I_kwDODunzps6DOH0k
6,748
Strange slicing behavior
{ "login": "Luciennnnnnn", "id": 20135317, "node_id": "MDQ6VXNlcjIwMTM1MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luciennnnnnn", "html_url": "https://github.com/Luciennnnnnn", "followers_url": "https://api.github.com/users/Luciennnnnnn/followers", "following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}", "gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions", "organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs", "repos_url": "https://api.github.com/users/Luciennnnnnn/repos", "events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}", "received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-22T01:49:13
2024-03-22T16:43:57
null
NONE
null
### Describe the bug

I loaded a dataset and then sliced the first 300 samples using the `:` operator; however, the result is not what I expected, as the output below shows:

```bash
len(dataset)=1050324
len(dataset[:300])=2
len(dataset[0:300])=2
len(dataset.select(range(300)))=300
```

### Steps to reproduce the bug

Load a dataset, then:

```python
dataset = load_from_disk(args.train_data_dir)
print(f"{len(dataset)=}", flush=True)
print(f"{len(dataset[:300])=}", flush=True)
print(f"{len(dataset[0:300])=}", flush=True)
print(f"{len(dataset.select(range(300)))=}", flush=True)
```

### Expected behavior

```bash
len(dataset)=1050324
len(dataset[:300])=300
len(dataset[0:300])=300
len(dataset.select(range(300)))=300
```

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- `huggingface_hub` version: 0.20.2
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
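For what it's worth, the numbers above are consistent with how slicing currently works: `dataset[:300]` returns a plain dict keyed by column name, so `len()` counts columns rather than rows. A small sketch with toy data:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(1000)), "b": list(range(1000))})

batch = ds[:300]                   # plain dict: {"a": [...], "b": [...]}
print(type(batch), len(batch))     # <class 'dict'> 2   -> number of columns
print(len(batch["a"]))             # 300                -> rows in the slice
print(len(ds.select(range(300))))  # 300                -> select() returns a Dataset
```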
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6748/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6747/comments
https://api.github.com/repos/huggingface/datasets/issues/6747/events
https://github.com/huggingface/datasets/pull/6747
2,201,219,384
PR_kwDODunzps5qa5L-
6,747
chore(deps): bump fsspec
{ "login": "shcheklein", "id": 3659196, "node_id": "MDQ6VXNlcjM2NTkxOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3659196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shcheklein", "html_url": "https://github.com/shcheklein", "followers_url": "https://api.github.com/users/shcheklein/followers", "following_url": "https://api.github.com/users/shcheklein/following{/other_user}", "gists_url": "https://api.github.com/users/shcheklein/gists{/gist_id}", "starred_url": "https://api.github.com/users/shcheklein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shcheklein/subscriptions", "organizations_url": "https://api.github.com/users/shcheklein/orgs", "repos_url": "https://api.github.com/users/shcheklein/repos", "events_url": "https://api.github.com/users/shcheklein/events{/privacy}", "received_events_url": "https://api.github.com/users/shcheklein/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-21T21:25:49
2024-03-22T16:40:15
2024-03-22T16:28:40
CONTRIBUTOR
null
There were a few fixes released recently, and some DVC ecosystem packages require a newer version of `fsspec`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6747/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6747", "html_url": "https://github.com/huggingface/datasets/pull/6747", "diff_url": "https://github.com/huggingface/datasets/pull/6747.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6747.patch", "merged_at": "2024-03-22T16:28:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/6746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6746/comments
https://api.github.com/repos/huggingface/datasets/issues/6746/events
https://github.com/huggingface/datasets/issues/6746
2,198,993,949
I_kwDODunzps6DEfwd
6,746
ExpectedMoreSplits error when loading C4 dataset
{ "login": "billwang485", "id": 65165345, "node_id": "MDQ6VXNlcjY1MTY1MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/65165345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/billwang485", "html_url": "https://github.com/billwang485", "followers_url": "https://api.github.com/users/billwang485/followers", "following_url": "https://api.github.com/users/billwang485/following{/other_user}", "gists_url": "https://api.github.com/users/billwang485/gists{/gist_id}", "starred_url": "https://api.github.com/users/billwang485/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/billwang485/subscriptions", "organizations_url": "https://api.github.com/users/billwang485/orgs", "repos_url": "https://api.github.com/users/billwang485/repos", "events_url": "https://api.github.com/users/billwang485/events{/privacy}", "received_events_url": "https://api.github.com/users/billwang485/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-21T02:53:04
2024-04-22T16:30:14
null
NONE
null
### Describe the bug

I encountered a bug when running the example command line:

```bash
python main.py \
    --model decapoda-research/llama-7b-hf \
    --prune_method wanda \
    --sparsity_ratio 0.5 \
    --sparsity_type unstructured \
    --save out/llama_7b/unstructured/wanda/
```

The bug occurred at these lines of code (when loading the C4 dataset):

```python
traindata = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```

The error message states:

```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```

### Steps to reproduce the bug

1. Run the example command line above.

### Expected behavior

The dataset should load without raising:

```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```

### Environment info

I'm using CUDA 12.4, so I used `pip install pytorch` instead of the conda command provided in install.md. I've also tried another environment using the same commands from install.md, but the same bug occurred.
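A hedged workaround (not part of the original report) is to relax split verification so the missing 'validation' split recorded in the cached metadata does not abort the load; `verification_mode` is available in recent `datasets` releases:

```python
from datasets import load_dataset

# "no_checks" skips the split/checksum verification step that raises ExpectedMoreSplits.
traindata = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",
)
```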
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6746/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6745/comments
https://api.github.com/repos/huggingface/datasets/issues/6745/events
https://github.com/huggingface/datasets/issues/6745
2,198,541,732
I_kwDODunzps6DCxWk
6,745
Scraping the whole of github including private repos is bad; kindly stop
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2024-03-20T20:54:06
2024-03-21T12:28:04
2024-03-21T10:24:56
NONE
null
### Feature request https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense. ### Motivation [EDITED: insults not tolerated] ### Your contribution [EDITED: insults not tolerated]
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6745/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6744/comments
https://api.github.com/repos/huggingface/datasets/issues/6744/events
https://github.com/huggingface/datasets/issues/6744
2,197,910,168
I_kwDODunzps6DAXKY
6,744
Option to disable file locking
{ "login": "VRehnberg", "id": 35767167, "node_id": "MDQ6VXNlcjM1NzY3MTY3", "avatar_url": "https://avatars.githubusercontent.com/u/35767167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VRehnberg", "html_url": "https://github.com/VRehnberg", "followers_url": "https://api.github.com/users/VRehnberg/followers", "following_url": "https://api.github.com/users/VRehnberg/following{/other_user}", "gists_url": "https://api.github.com/users/VRehnberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/VRehnberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VRehnberg/subscriptions", "organizations_url": "https://api.github.com/users/VRehnberg/orgs", "repos_url": "https://api.github.com/users/VRehnberg/repos", "events_url": "https://api.github.com/users/VRehnberg/events{/privacy}", "received_events_url": "https://api.github.com/users/VRehnberg/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
null
2024-03-20T15:59:45
2024-03-20T15:59:45
null
NONE
null
### Feature request

Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.

### Motivation

File locking doesn't work on all file systems (in my case, NFS-mounted Weka). If the `cache_dir` only held small files, then it would be possible to point it to local disk and the problem would be solved. However, as `cache_dir` is both where the small info files are written and where the processed datasets are put, this isn't a feasible solution.

Considering https://github.com/huggingface/datasets/issues/6395, I still do think this is something that belongs in Hugging Face. The possibility to control packages separately is valuable: it might be that a user has their dataset on a file system that doesn't support file locking while they are using file locking on local disk to control some other type of access.

### Your contribution

My suggested solution:

```diff
diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py
index 19620e6e..58f41a02 100644
--- a/src/datasets/utils/_filelock.py
+++ b/src/datasets/utils/_filelock.py
@@ -18,11 +18,15 @@
 import os
 from filelock import FileLock as FileLock_
-from filelock import UnixFileLock
+from filelock import SoftFileLock, UnixFileLock
 from filelock import __version__ as _filelock_version
 from packaging import version
+if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'):
+    FileLock_ = SoftFileLock
+
+
 class FileLock(FileLock_):
     """
     A `filelock.FileLock` initializer that handles long paths.
```
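For illustration, here is a standalone sketch of the pattern the diff above proposes (an editor's addition, not part of the original issue). `HF_USE_SOFTFILELOCK` is only the environment variable suggested here, not an existing `datasets` option, and `SoftFileLock` from the `filelock` package merely creates a marker file, so it also works on filesystems without advisory-lock support:

```python
import os

from filelock import FileLock, SoftFileLock

# HF_USE_SOFTFILELOCK is the variable proposed above (hypothetical, not yet supported by datasets).
use_soft_lock = os.getenv("HF_USE_SOFTFILELOCK", "false").lower() in ("true", "1")
LockClass = SoftFileLock if use_soft_lock else FileLock

# SoftFileLock only creates/removes a lock file instead of calling flock(),
# which is what fails on some network filesystems.
with LockClass("/tmp/example.lock", timeout=10):
    pass  # critical section protected by the chosen lock
```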
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6744/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6743/comments
https://api.github.com/repos/huggingface/datasets/issues/6743/events
https://github.com/huggingface/datasets/pull/6743
2,195,481,697
PR_kwDODunzps5qHeMZ
6,743
Allow null values in dict columns
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-19T16:54:22
2024-04-08T13:08:42
2024-03-19T20:05:19
COLLABORATOR
null
Fix #6738
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6743/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6743", "html_url": "https://github.com/huggingface/datasets/pull/6743", "diff_url": "https://github.com/huggingface/datasets/pull/6743.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6743.patch", "merged_at": "2024-03-19T20:05:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/6742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6742/comments
https://api.github.com/repos/huggingface/datasets/issues/6742/events
https://github.com/huggingface/datasets/pull/6742
2,195,134,854
PR_kwDODunzps5qGSfG
6,742
Fix missing download_config in get_data_patterns
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-19T14:29:25
2024-03-19T18:24:39
2024-03-19T18:15:13
MEMBER
null
Reported in https://github.com/huggingface/datasets-server/issues/2607
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6742/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6742", "html_url": "https://github.com/huggingface/datasets/pull/6742", "diff_url": "https://github.com/huggingface/datasets/pull/6742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6742.patch", "merged_at": "2024-03-19T18:15:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6741/comments
https://api.github.com/repos/huggingface/datasets/issues/6741/events
https://github.com/huggingface/datasets/pull/6741
2,194,626,108
PR_kwDODunzps5qEiu3
6,741
Fix offline mode with single config
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-19T10:48:32
2024-03-25T16:35:21
2024-03-25T16:23:59
MEMBER
null
Reported in https://github.com/huggingface/datasets/issues/4760

The cache was not able to reload a dataset with a single config from the cache if the config name is not specified. For example:

```python
from datasets import load_dataset, config

config.HF_DATASETS_OFFLINE = True
load_dataset("openai_humaneval")
```

This was due to a regression in https://github.com/huggingface/datasets/pull/6632
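For reference (an editor's note, not part of the original PR description), the same offline behavior can also be enabled through the documented `HF_DATASETS_OFFLINE` environment variable instead of mutating `datasets.config`; a rough equivalent of the repro above:

```python
import os

# Must be set before importing datasets so the flag is picked up at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

load_dataset("openai_humaneval")
```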
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6741/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6741", "html_url": "https://github.com/huggingface/datasets/pull/6741", "diff_url": "https://github.com/huggingface/datasets/pull/6741.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6741.patch", "merged_at": "2024-03-25T16:23:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/6740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6740/comments
https://api.github.com/repos/huggingface/datasets/issues/6740/events
https://github.com/huggingface/datasets/issues/6740
2,193,172,074
I_kwDODunzps6CuSZq
6,740
Support for loading geotiff files as a part of the ImageFolder
{ "login": "sunny1401", "id": 31362090, "node_id": "MDQ6VXNlcjMxMzYyMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/31362090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sunny1401", "html_url": "https://github.com/sunny1401", "followers_url": "https://api.github.com/users/sunny1401/followers", "following_url": "https://api.github.com/users/sunny1401/following{/other_user}", "gists_url": "https://api.github.com/users/sunny1401/gists{/gist_id}", "starred_url": "https://api.github.com/users/sunny1401/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sunny1401/subscriptions", "organizations_url": "https://api.github.com/users/sunny1401/orgs", "repos_url": "https://api.github.com/users/sunny1401/repos", "events_url": "https://api.github.com/users/sunny1401/events{/privacy}", "received_events_url": "https://api.github.com/users/sunny1401/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2024-03-18T20:00:39
2024-03-27T18:19:48
2024-03-27T18:19:20
NONE
null
### Feature request

Request for adding rasterio support to load GeoTIFF files as a part of ImageFolder, instead of using PIL.

### Motivation

As of now, there are many datasets on the Hugging Face Hub which are predominantly focused on remote sensing or come from remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not really optimal, because these datasets mostly have images with many channels and additional metadata, and using PIL makes one lose them unless we provide a custom script. Hence, maybe an API could be added to handle this in a common way?

### Your contribution

If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised.
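To make the request concrete, here is a rough sketch (an editor's addition, with hypothetical file paths) of what reading a multi-band GeoTIFF with rasterio instead of PIL could look like, keeping all channels plus some geospatial metadata:

```python
import rasterio
from datasets import Dataset

def read_geotiff(path):
    # Read every band as a (channels, height, width) array plus basic geo metadata,
    # which a PIL-based loader would drop.
    with rasterio.open(path) as src:
        return {
            "pixels": src.read(),
            "crs": str(src.crs),
            "transform": list(src.transform),
        }

# Hypothetical local tiles; any GeoTIFF paths would do.
files = ["tile_0.tif", "tile_1.tif"]
ds = Dataset.from_list([read_geotiff(p) for p in files])
```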
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6740/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6739/comments
https://api.github.com/repos/huggingface/datasets/issues/6739/events
https://github.com/huggingface/datasets/pull/6739
2,192,730,134
PR_kwDODunzps5p-Bwe
6,739
Transpose images with EXIF Orientation tag
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-18T16:43:06
2024-03-19T15:35:57
2024-03-19T15:29:42
COLLABORATOR
null
Closes https://github.com/huggingface/datasets/issues/6252
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6739/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6739", "html_url": "https://github.com/huggingface/datasets/pull/6739", "diff_url": "https://github.com/huggingface/datasets/pull/6739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6739.patch", "merged_at": "2024-03-19T15:29:41" }
true
https://api.github.com/repos/huggingface/datasets/issues/6738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6738/comments
https://api.github.com/repos/huggingface/datasets/issues/6738/events
https://github.com/huggingface/datasets/issues/6738
2,192,386,536
I_kwDODunzps6CrSno
6,738
Dict feature is non-nullable while nested dict feature is
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2024-03-18T14:31:47
2024-03-20T10:24:15
2024-03-19T20:05:20
CONTRIBUTOR
null
When I try to create a `Dataset` object with None values inside a dict column, like this:

```python
from datasets import Dataset, Features, Value

Dataset.from_dict(
    {
        "dict": [{"a": 0, "b": 0}, None],
    },
    features=Features(
        {"dict": {"a": Value("int16"), "b": Value("int16")}}
    )
)
```

I get `ValueError: Got None but expected a dictionary instead`.

At the same time, having None in a _nested_ dict feature works; for example, this doesn't throw any errors:

```python
from datasets import Dataset, Features, Value, Sequence

dataset = Dataset.from_dict(
    {
        "list_dict": [[{"a": 0, "b": 0}], None],
        "sequence_dict": [[{"a": 0, "b": 0}], None],
    },
    features=Features({
        "list_dict": [{"a": Value("int16"), "b": Value("int16")}],
        "sequence_dict": Sequence({"a": Value("int16"), "b": Value("int16")}),
    })
)
```

Other types of features also seem to be nullable (but I haven't checked all of them).

The version of `datasets` is the latest at the moment (2.18.0).

Is this an expected behavior or a bug?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6738/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6737/comments
https://api.github.com/repos/huggingface/datasets/issues/6737/events
https://github.com/huggingface/datasets/issues/6737
2,190,198,425
I_kwDODunzps6Ci8aZ
6,737
Invalid pattern: '**' can only be an entire path component
{ "login": "JPonsa", "id": 28976175, "node_id": "MDQ6VXNlcjI4OTc2MTc1", "avatar_url": "https://avatars.githubusercontent.com/u/28976175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JPonsa", "html_url": "https://github.com/JPonsa", "followers_url": "https://api.github.com/users/JPonsa/followers", "following_url": "https://api.github.com/users/JPonsa/following{/other_user}", "gists_url": "https://api.github.com/users/JPonsa/gists{/gist_id}", "starred_url": "https://api.github.com/users/JPonsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JPonsa/subscriptions", "organizations_url": "https://api.github.com/users/JPonsa/orgs", "repos_url": "https://api.github.com/users/JPonsa/repos", "events_url": "https://api.github.com/users/JPonsa/events{/privacy}", "received_events_url": "https://api.github.com/users/JPonsa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-16T19:28:46
2024-05-13T14:03:18
2024-05-13T11:32:57
NONE
null
### Describe the bug

`ValueError: Invalid pattern: '**' can only be an entire path component` when loading any dataset.

### Steps to reproduce the bug

import datasets
ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style")

### Expected behavior

Loading the dataset successfully.

### Environment info

- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6737/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6737/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6736/comments
https://api.github.com/repos/huggingface/datasets/issues/6736/events
https://github.com/huggingface/datasets/issues/6736
2,190,181,422
I_kwDODunzps6Ci4Qu
6,736
Mosaic Streaming (MDS) Support
{ "login": "siddk", "id": 2498509, "node_id": "MDQ6VXNlcjI0OTg1MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddk", "html_url": "https://github.com/siddk", "followers_url": "https://api.github.com/users/siddk/followers", "following_url": "https://api.github.com/users/siddk/following{/other_user}", "gists_url": "https://api.github.com/users/siddk/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddk/subscriptions", "organizations_url": "https://api.github.com/users/siddk/orgs", "repos_url": "https://api.github.com/users/siddk/repos", "events_url": "https://api.github.com/users/siddk/events{/privacy}", "received_events_url": "https://api.github.com/users/siddk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
null
2024-03-16T18:42:04
2024-03-18T15:13:34
null
NONE
null
### Feature request

I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically their [MDS Format](https://docs.mosaicml.com/projects/streaming/en/stable/fundamentals/dataset_format.html#mds). Because the shard files have similar semantics to WebDataset, I'm hoping that adding such support won't be too much trouble?

### Motivation

One of the downsides with WebDataset is a lack of out-of-the-box determinism (especially for large-scale training and reproducibility), easy job resumption, and the ability to quickly debug / visualize individual examples. Mosaic Streaming provides a [great interface for this out of the box](https://docs.mosaicml.com/projects/streaming/en/stable/#key-features), so I'd love to see it supported in HF Datasets.

### Your contribution

Happy to help test things / provide example data. Can potentially submit a PR if maintainers could point me to the necessary WebDataset logic / steps for adding a new streaming format!
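For context (an editor's addition, not from the original request), this is roughly how MDS datasets are consumed today with the `mosaicml-streaming` package, i.e. the interface the request would like to see reachable from `datasets`; the paths are hypothetical:

```python
from streaming import StreamingDataset

# Hypothetical locations: `remote` is where the MDS shards live, `local` is a cache directory.
ds = StreamingDataset(remote="s3://my-bucket/my-mds-dataset", local="/tmp/mds-cache", shuffle=False)

sample = ds[0]   # deterministic random access into the shard files
print(len(ds))   # number of samples recorded in the shard index
```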
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6736/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6736/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6735/comments
https://api.github.com/repos/huggingface/datasets/issues/6735/events
https://github.com/huggingface/datasets/pull/6735
2,189,132,932
PR_kwDODunzps5px84g
6,735
Add `mode` parameter to `Image` feature
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-15T17:21:12
2024-03-18T15:47:48
2024-03-18T15:41:33
COLLABORATOR
null
Fix https://github.com/huggingface/datasets/issues/6675
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6735/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6735", "html_url": "https://github.com/huggingface/datasets/pull/6735", "diff_url": "https://github.com/huggingface/datasets/pull/6735.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6735.patch", "merged_at": "2024-03-18T15:41:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/6734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6734/comments
https://api.github.com/repos/huggingface/datasets/issues/6734/events
https://github.com/huggingface/datasets/issues/6734
2,187,646,694
I_kwDODunzps6CZNbm
6,734
Tokenization slows towards end of dataset
{ "login": "ethansmith2000", "id": 98723285, "node_id": "U_kgDOBeJl1Q", "avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethansmith2000", "html_url": "https://github.com/ethansmith2000", "followers_url": "https://api.github.com/users/ethansmith2000/followers", "following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}", "gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions", "organizations_url": "https://api.github.com/users/ethansmith2000/orgs", "repos_url": "https://api.github.com/users/ethansmith2000/repos", "events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}", "received_events_url": "https://api.github.com/users/ethansmith2000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-15T03:27:36
2024-04-11T10:48:07
null
NONE
null
### Describe the bug

Mapped tokenization slows down substantially towards the end of the dataset. The train set started off very slow, caught up to 20k, then tapered off till the end. What's particularly strange is that the tokenization crashed a few times before, due to errors with invalid tokens somewhere or corrupted downloads, and the speed-ups/slow-downs consistently happened at the same points.

```bash
Running tokenizer on dataset (num_proc=48):   0%|          | 847000/881416735 [12:18<252:45:45, 967.72 examples/s]
Running tokenizer on dataset (num_proc=48):   0%|          | 848000/881416735 [12:19<224:16:10, 1090.66 examples/s]
Running tokenizer on dataset (num_proc=48):  10%|▉         | 84964000/881416735 [3:48:00<11:21:34, 19476.01 examples/s]
Running tokenizer on dataset (num_proc=48):  10%|▉         | 84967000/881416735 [3:48:00<12:04:01, 18333.79 examples/s]
Running tokenizer on dataset (num_proc=48):  61%|██████    | 538631977/881416735 [13:46:40<27:50:04, 3420.84 examples/s]
Running tokenizer on dataset (num_proc=48):  61%|██████    | 538632977/881416735 [13:46:40<23:48:20, 3999.77 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881365886/881416735 [38:30:19<04:34, 185.10 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881366886/881416735 [38:30:25<04:36, 180.57 examples/s]
```

and the validation set as well:

```bash
Running tokenizer on dataset (num_proc=48):  90%|████████▉ | 41544000/46390354 [28:44<02:37, 30798.76 examples/s]
Running tokenizer on dataset (num_proc=48):  90%|████████▉ | 41550000/46390354 [28:44<02:08, 37698.08 examples/s]
Running tokenizer on dataset (num_proc=48):  96%|█████████▋| 44747422/46390354 [2:15:48<12:22:44, 36.87 examples/s]
Running tokenizer on dataset (num_proc=48):  96%|█████████▋| 44747422/46390354 [2:16:00<12:22:44, 36.87 examples/s]
```

### Steps to reproduce the bug

Using the following kwargs:

```python
with accelerator.main_process_first():
    lm_datasets = tokenized_datasets.map(
        group_texts,
        batched=True,
        num_proc=48,
        load_from_cache_file=True,
        desc=f"Grouping texts in chunks of {block_size}",
    )
```

running through a slurm script:

```bash
#SBATCH --partition=gpu-nvidia-a100
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=8
#SBATCH --cpus-per-task=96
```

using this dataset: https://huggingface.co./datasets/togethercomputer/RedPajama-Data-1T

### Expected behavior

Constant speed throughout.

### Environment info

- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.10
- Python version: 3.8.18
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6734/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6733/comments
https://api.github.com/repos/huggingface/datasets/issues/6733/events
https://github.com/huggingface/datasets/issues/6733
2,186,811,724
I_kwDODunzps6CWBlM
6,733
EmptyDatasetError when loading dataset downloaded with HuggingFace cli
{ "login": "StwayneXG", "id": 77196999, "node_id": "MDQ6VXNlcjc3MTk2OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/77196999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StwayneXG", "html_url": "https://github.com/StwayneXG", "followers_url": "https://api.github.com/users/StwayneXG/followers", "following_url": "https://api.github.com/users/StwayneXG/following{/other_user}", "gists_url": "https://api.github.com/users/StwayneXG/gists{/gist_id}", "starred_url": "https://api.github.com/users/StwayneXG/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StwayneXG/subscriptions", "organizations_url": "https://api.github.com/users/StwayneXG/orgs", "repos_url": "https://api.github.com/users/StwayneXG/repos", "events_url": "https://api.github.com/users/StwayneXG/events{/privacy}", "received_events_url": "https://api.github.com/users/StwayneXG/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-14T16:41:27
2024-03-15T18:09:02
null
NONE
null
### Describe the bug

I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with `load_dataset`, but I get an error:

```
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
```

The dataset I'm using is "lmsys/chatbot_arena_conversations". The folder structure is:

- README.md
- data
  - train-00000-of-00001-cced8514c7ed782a.parquet

### Steps to reproduce the bug

1. Download the dataset using the Hugging Face CLI:

```
huggingface-cli download lmsys/chatbot_arena_conversations --local-dir ./lmsys/chatbot_arena_conversations
```

2. In Python:

```python
from datasets import load_dataset

load_dataset("lmsys/chatbot_arena_conversations")
```

### Expected behavior

Should return a DatasetDict in the form of:

```
DatasetDict({
    train: Dataset({
        features: [...],
        num_rows: 33,000
    })
})
```

### Environment info

Python 3.11.5
Datasets 2.18.0
Transformers 4.38.2
Pytorch 2.2.0
Pyarrow 15.0.1
Rocky Linux release 8.9 (Green Obsidian)
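A possible workaround (an editor's note, not from the original report) is to point `load_dataset` at the downloaded files directly instead of the Hub repo id, assuming the local layout shown above:

```python
from datasets import load_dataset

# Load the parquet file fetched with `huggingface-cli download` above.
ds = load_dataset(
    "parquet",
    data_files="./lmsys/chatbot_arena_conversations/data/train-00000-of-00001-cced8514c7ed782a.parquet",
    split="train",
)
```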
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6733/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6731/comments
https://api.github.com/repos/huggingface/datasets/issues/6731/events
https://github.com/huggingface/datasets/issues/6731
2,182,844,673
I_kwDODunzps6CG5EB
6,731
Unexpected behavior when using load_dataset with streaming=True in a for loop
{ "login": "uApiv", "id": 42908296, "node_id": "MDQ6VXNlcjQyOTA4Mjk2", "avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uApiv", "html_url": "https://github.com/uApiv", "followers_url": "https://api.github.com/users/uApiv/followers", "following_url": "https://api.github.com/users/uApiv/following{/other_user}", "gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}", "starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uApiv/subscriptions", "organizations_url": "https://api.github.com/users/uApiv/orgs", "repos_url": "https://api.github.com/users/uApiv/repos", "events_url": "https://api.github.com/users/uApiv/events{/privacy}", "received_events_url": "https://api.github.com/users/uApiv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-12T23:26:43
2024-04-16T00:00:00
2024-04-16T00:00:00
NONE
null
### Describe the bug

### My Code

```python
from datasets import load_dataset

res = []
for i in [0, 1]:
    di = load_dataset(
        "json",
        data_files='path_to.json',
        split='train',
        streaming=True,
    ).map(lambda x: {"source": i})
    res.append(di)

for e in res[0]:
    print(e)
```

### Unexpected Behavior

Data in `res[0]` has `source=1`. However, the expected value is 0.

### FYI

I further switched `streaming` to `False`, and the output value is as expected (0). So there may be a bug when setting `streaming=True` in a for loop.

### Environment

Python 3.8.0
datasets==2.18.0
transformers==4.28.1

### Steps to reproduce the bug

1. Create a JSON file with any content.
2. Run the provided code.
3. Switch `streaming` to `False` and run again to see the expected behavior.

### Expected behavior

The expected behavior is that the data are mapped with the corresponding value of the for loop.

### Environment info

Python 3.8.0
datasets==2.18.0
transformers==4.28.1
Ubuntu 20.04
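Editor's note, offered as one plausible explanation rather than a confirmed diagnosis: with `streaming=True`, `map` is lazy, so the lambda only runs while iterating, by which point the loop variable `i` already holds its final value; with `streaming=False` the map runs eagerly inside the loop. Binding `i` at definition time sidesteps Python's late-binding closure behavior; a minimal sketch of the same code with that change:

```python
from datasets import load_dataset

res = []
for i in [0, 1]:
    di = load_dataset(
        "json",
        data_files="path_to.json",
        split="train",
        streaming=True,
    ).map(lambda x, i=i: {"source": i})  # the default argument freezes the current value of i
    res.append(di)

for e in res[0]:
    print(e)  # examples now carry source=0 as expected
```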
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6731/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6730/comments
https://api.github.com/repos/huggingface/datasets/issues/6730/events
https://github.com/huggingface/datasets/pull/6730
2,181,881,499
PR_kwDODunzps5pZDsB
6,730
Deprecate Pandas builder
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-12T15:12:13
2024-03-12T17:42:33
2024-03-12T17:36:24
COLLABORATOR
null
The Pandas packaged builder is undocumented and relies on `pickle` to read the data, making it **unsafe**. Moreover, I haven't seen a single instance of this builder being used (not even using the GH/Hub search), so we should deprecate it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6730/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6730", "html_url": "https://github.com/huggingface/datasets/pull/6730", "diff_url": "https://github.com/huggingface/datasets/pull/6730.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6730.patch", "merged_at": "2024-03-12T17:36:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/6729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6729/comments
https://api.github.com/repos/huggingface/datasets/issues/6729/events
https://github.com/huggingface/datasets/issues/6729
2,180,237,159
I_kwDODunzps6B88dn
6,729
Support zipfiles that span multiple disks?
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
null
2024-03-11T21:07:41
2024-03-11T21:07:46
null
CONTRIBUTOR
null
See https://huggingface.co./datasets/PhilEO-community/PhilEO-downstream The dataset viewer gives the following error: ``` Error code: ConfigNamesError Exception: BadZipFile Message: zipfiles that span multiple disks are not supported Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response get_dataset_config_names( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory return HubDatasetModuleFactoryWithoutScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module module_name, default_builder_kwargs = infer_module_for_data_files( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files split_modules = { File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp> split: infer_module_for_data_files_list(data_files_list, download_config=download_config) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives for f in xglob(extracted, recursive=True, download_config=download_config)[ File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem return cls(**storage_options) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__ self.zip = zipfile.ZipFile( File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__ self._RealGetContents() File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents endrec = _EndRecData(fp) File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData return _EndRecData64(fpin, -sizeEndCentDir, endrec) File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64 raise BadZipFile("zipfiles that span multiple disks are not supported") zipfile.BadZipFile: zipfiles that span multiple disks are not supported ``` The files (https://huggingface.co./datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are: <img width="629" alt="Capture d’écran 2024-03-11 aΜ€ 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6729/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6728/comments
https://api.github.com/repos/huggingface/datasets/issues/6728/events
https://github.com/huggingface/datasets/issues/6728
2,178,607,012
I_kwDODunzps6B2uek
6,728
Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT`
{ "login": "padeoe", "id": 10057041, "node_id": "MDQ6VXNlcjEwMDU3MDQx", "avatar_url": "https://avatars.githubusercontent.com/u/10057041?v=4", "gravatar_id": "", "url": "https://api.github.com/users/padeoe", "html_url": "https://github.com/padeoe", "followers_url": "https://api.github.com/users/padeoe/followers", "following_url": "https://api.github.com/users/padeoe/following{/other_user}", "gists_url": "https://api.github.com/users/padeoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/padeoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padeoe/subscriptions", "organizations_url": "https://api.github.com/users/padeoe/orgs", "repos_url": "https://api.github.com/users/padeoe/repos", "events_url": "https://api.github.com/users/padeoe/events{/privacy}", "received_events_url": "https://api.github.com/users/padeoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-11T09:06:38
2024-03-15T14:52:07
2024-03-15T14:52:07
NONE
null
### Describe the bug This bug is triggered under the following conditions: - datasets repo ids without organization names trigger errors, such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than in the form of `A/B`. - If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`. - This issue occurs with `datasets>2.15.0` or `huggingface-hub>0.19.4`. For example, using the latest versions: `datasets==2.18.0` and `huggingface-hub==0.21.4`, ### Steps to reproduce the bug the issue can be reproduced with the following code: 1. install specific datasets and huggingface_hub. ```bash pip install datasets==2.18.0 pip install huggingface_hub==0.21.4 ``` 2. execute python code. ```Python import os os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com' from datasets import load_dataset bookcorpus = load_dataset('bookcorpus', split='train') ``` console output: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder( File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory raise e1 from None File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1830, in dataset_module_factory with fs.open(f"datasets/{path}/{filename}", "r", encoding="utf-8") as f: File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1295, in open self.open( File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1307, in open f = self._open( File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 228, in _open return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 615, in __init__ self.resolved_path = fs.resolve_path(path, revision=revision) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 180, in resolve_path repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 117, in _repo_and_revision_exist self._api.repo_info(repo_id, revision=revision, repo_type=repo_type) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2413, in repo_info return method( File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2286, in dataset_info hf_raise_for_status(r) File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 362, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 401 Client Error: Unauthorized for url: https://hf-mirror.com/api/datasets/bookcorpus/bookcorpus.py (Request ID: Root=1-65ee8659-5ab10eec5960c63e71f2bb58;b00bdbea-fd6e-4a74-8fe0-bc4682ae090e) ``` ### Expected behavior 
The dataset was downloaded correctly without any errors. ### Environment info datasets==2.18.0 huggingface-hub==0.21.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6728/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6727/comments
https://api.github.com/repos/huggingface/datasets/issues/6727/events
https://github.com/huggingface/datasets/pull/6727
2,177,826,110
PR_kwDODunzps5pLJyE
6,727
Using a registry instead of calling globals for fetching feature types
{ "login": "psmyth94", "id": 11325244, "node_id": "MDQ6VXNlcjExMzI1MjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psmyth94", "html_url": "https://github.com/psmyth94", "followers_url": "https://api.github.com/users/psmyth94/followers", "following_url": "https://api.github.com/users/psmyth94/following{/other_user}", "gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}", "starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions", "organizations_url": "https://api.github.com/users/psmyth94/orgs", "repos_url": "https://api.github.com/users/psmyth94/repos", "events_url": "https://api.github.com/users/psmyth94/events{/privacy}", "received_events_url": "https://api.github.com/users/psmyth94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-10T17:47:51
2024-03-13T12:08:49
2024-03-13T10:46:02
CONTRIBUTOR
null
Hello, When working with bio-data, each feature often has metadata associated with it (e.g. species, lineage, snp position, etc). To store this, I like to use the feature classes with the added `metadata` attribute. However, when saving or loading with custom features, you get an error since that class doesn't exist in the global namespace in `datasets.features.features`. Take for example, ```python from dataclasses import dataclass, field from datasets import Dataset from datasets.features.features import Value, Features @dataclass class FeatureA(Value): metadata: dict = field(default=dict) _type: str = field(default="FeatureA", init=False, repr=False) @dataclass class FeatureB(Value): metadata: dict = field(default_factory=dict) _type: str = field(default="FeatureB", init=False, repr=False) test_data = { "a": [1, 2, 3], "b": [4, 5, 6], } test_data = Dataset.from_dict( test_data, features=Features({ "a": FeatureA("int32", metadata={"species": "lactobacillus acetotolerans"}), "b": FeatureB("int32", metadata={"species": "lactobacillus iners"}), }) ) # returns an error since FeatureA and FeatureB are not in the global namespace test_data.save_to_disk('./test_data') ``` Saving the dataset (0/1 shards): 0%| | 0/3 [00:00<?, ? examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[2], line 28 19 test_data = Dataset.from_dict( 20 test_data, 21 features=Features({ (...) 24 }) 25 ) 27 # returns an error since FeatureA and FeatureB are not in the global namespace ---> 28 test_data.save_to_disk('./test_data') ... File ~\Documents\datasets\src\datasets\features\features.py:1361, in generate_from_dict(obj) 1359 return {key: generate_from_dict(value) for key, value in obj.items()} 1360 obj = dict(obj) -> 1361 class_type = globals()[obj.pop("_type")] 1363 if class_type == Sequence: 1364 return Sequence(feature=generate_from_dict(obj["feature"]), length=obj.get("length", -1)) KeyError: 'FeatureA' We can avoid this by having a registry (like formatters) and doing ```python from datasets.features.features import register_feature register_feature(FeatureA, "FeatureA") register_feature(FeatureB, "FeatureB") test_data.save_to_disk('./test_data') ``` Saving the dataset (1/1 shards): 100%|------| 3/3 [00:00<00:00, 211.13 examples/s] and loading from disk returns with all metadata information ```python from datasets import load_from_disk test_data = load_from_disk('./test_data') test_data.features ``` {'a': FeatureA(dtype='int32', id=None, metadata={'species': 'lactobacillus acetotolerans'}), 'b': FeatureB(dtype='int32', id=None, metadata={'species': 'lactobacillus iners'})}
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6727/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6727", "html_url": "https://github.com/huggingface/datasets/pull/6727", "diff_url": "https://github.com/huggingface/datasets/pull/6727.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6727.patch", "merged_at": "2024-03-13T10:46:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/6726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6726/comments
https://api.github.com/repos/huggingface/datasets/issues/6726/events
https://github.com/huggingface/datasets/issues/6726
2,177,097,232
I_kwDODunzps6Bw94Q
6,726
Profiling for HF Filesystem shows there are easy performance gains to be made
{ "login": "awgr", "id": 159512661, "node_id": "U_kgDOCYH4VQ", "avatar_url": "https://avatars.githubusercontent.com/u/159512661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awgr", "html_url": "https://github.com/awgr", "followers_url": "https://api.github.com/users/awgr/followers", "following_url": "https://api.github.com/users/awgr/following{/other_user}", "gists_url": "https://api.github.com/users/awgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/awgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awgr/subscriptions", "organizations_url": "https://api.github.com/users/awgr/orgs", "repos_url": "https://api.github.com/users/awgr/repos", "events_url": "https://api.github.com/users/awgr/events{/privacy}", "received_events_url": "https://api.github.com/users/awgr/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-09T07:08:45
2024-03-09T07:11:08
null
NONE
null
### Describe the bug # Let's make it faster First, an evidence... ![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965) Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long. See? It's pretty slow. What is resolve pattern doing? ``` resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543 resolve_pattern took 20.815081119537354 seconds ``` Makes sense. How to improve it? ## Bigger project, biggest payoff Databricks (and consequently, spark) store a compressed manifest file of the files contained in the remote filesystem. Then, you download one tiny file, decompress it, and all the operations are local instead of this shenanigans. It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data. This would make resolution time so fast that nobody would ever think about it again. It also means you either need to have the uploader compute it _every time_, or have a hook that computes it. ## Smaller project, immediate payoff: Be diligent in avoiding deepcopy Revise the _ls_tree method to avoid deepcopy: ``` def _ls_tree( self, path: str, recursive: bool = False, refresh: bool = False, revision: Optional[str] = None, expand_info: bool = True, ): ..... omitted ..... for path_info in tree: if isinstance(path_info, RepoFile): cache_path_info = { "name": root_path + "/" + path_info.path, "size": path_info.size, "type": "file", "blob_id": path_info.blob_id, "lfs": path_info.lfs, "last_commit": path_info.last_commit, "security": path_info.security, } else: cache_path_info = { "name": root_path + "/" + path_info.path, "size": 0, "type": "directory", "tree_id": path_info.tree_id, "last_commit": path_info.last_commit, } parent_path = self._parent(cache_path_info["name"]) self.dircache.setdefault(parent_path, []).append(cache_path_info) out.append(cache_path_info) return copy.deepcopy(out) # copy to not let users modify the dircache ``` Observe this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster. ``` def _ls_tree( self, path: str, recursive: bool = False, refresh: bool = False, revision: Optional[str] = None, expand_info: bool = True, ): ..... omitted ..... def make_cache_path_info(path_info): if isinstance(path_info, RepoFile): return { "name": root_path + "/" + path_info.path, "size": path_info.size, "type": "file", "blob_id": path_info.blob_id, "lfs": path_info.lfs, "last_commit": path_info.last_commit, "security": path_info.security, } else: return { "name": root_path + "/" + path_info.path, "size": 0, "type": "directory", "tree_id": path_info.tree_id, "last_commit": path_info.last_commit, } for path_info in tree: cache_path_info = make_cache_path_info(path_info) out_cache_path_info = make_cache_path_info(path_info) # copy to not let users modify the dircache parent_path = self._parent(cache_path_info["name"]) self.dircache.setdefault(parent_path, []).append(cache_path_info) out.append(out_cache_path_info) return out ``` Note there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s. 
## Medium project, medium payoff After the above change, we have this profile: ![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c) Figure 2: x-axis is 355 seconds. Note that globbing and _ls_tree deep copy is gone. No surprise there. It's much faster now, but we still spend ~187seconds in get_fs_token_paths. Well get_fs_token_paths is part of fsspec. We don't need to fix that because we can trust their developers to write high performance code. Probably the caller has misconfigured something. Let's take a look at the storage_options being provided to the filesystem that is constructed during this call. Ah yes, streaming_download_manager::_prepare_single_hop_path_and_storage_options. We know streaming download manager is not compatible with async right now, but we really need this specific part of the code to be async. We're spending so much time checking isDir on the remote filesystem, it's a huge waste. We can make the call easily 20-30x faster by using async, removing this performance bottleneck almost entirely (and reducing the total time of this part of the code to <30s. There is no reason to block async isDir calls for streaming. I'm not going to mess w/ this one myself; I didn't write the streaming impl, and I don't know how it works, but I know the isDir check can be async. ### Steps to reproduce the bug ``` with cProfile.Profile() as pr: pr.enable() # Begin Data if not os.path.exists(data_cache_dir): os.makedirs(data_cache_dir, exist_ok=True) training_dataset = load_dataset(training_dataset_name, split=training_split, cache_dir=data_cache_dir, streaming=True).take(training_slice) eval_dataset = load_dataset(eval_dataset_name, split=eval_split, cache_dir=data_cache_dir, streaming=True).take(eval_slice) # End Data pr.disable() pr.create_stats() if not os.path.exists(profiling_path): os.makedirs(profiling_path, exist_ok=True) pr.dump_stats(os.path.join(profiling_path, "cprofile.prof")) ``` run this code for "cerebras/SlimPajama-627B" and whatever other params ### Expected behavior Something better. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
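For illustration of the concurrent `isDir` idea above, a minimal sketch that pushes the blocking checks through a thread pool instead of running them sequentially; the filesystem object, paths, and the helper name `isdir_many` are placeholders, not part of the original report:

```python
from concurrent.futures import ThreadPoolExecutor

def isdir_many(fs, paths, max_workers=32):
    """Check many remote paths concurrently instead of one blocking isdir call at a time."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results line up with `paths`
        return dict(zip(paths, pool.map(fs.isdir, paths)))

# hypothetical usage with an fsspec filesystem:
# import fsspec
# fs = fsspec.filesystem("hf")
# flags = isdir_many(fs, ["datasets/org/name/train", "datasets/org/name/test"])
```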
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6726/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6726/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6725/comments
https://api.github.com/repos/huggingface/datasets/issues/6725/events
https://github.com/huggingface/datasets/issues/6725
2,175,527,530
I_kwDODunzps6Bq-pq
6,725
Request for a comparison of Hugging Face datasets with other data formats, especially WebDataset
{ "login": "Luciennnnnnn", "id": 20135317, "node_id": "MDQ6VXNlcjIwMTM1MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luciennnnnnn", "html_url": "https://github.com/Luciennnnnnn", "followers_url": "https://api.github.com/users/Luciennnnnnn/followers", "following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}", "gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions", "organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs", "repos_url": "https://api.github.com/users/Luciennnnnnn/repos", "events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}", "received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
null
2024-03-08T08:23:01
2024-03-08T08:23:01
null
NONE
null
### Feature request Request for a comparison of Hugging Face datasets with other data formats, especially WebDataset. ### Motivation I see that Hugging Face datasets uses Apache Arrow as its backend, which seems great, but I'm curious how it compares with other dataset formats such as WebDataset: what are the pros and cons of each? ### Your contribution More information
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6725/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6724/comments
https://api.github.com/repos/huggingface/datasets/issues/6724/events
https://github.com/huggingface/datasets/issues/6724
2,174,398,227
I_kwDODunzps6Bmq8T
6,724
Dataset with loading script does not work in renamed repos
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-07T17:38:38
2024-03-07T20:06:25
null
CONTRIBUTOR
null
### Describe the bug My data repository was first called `BramVanroy/hplt-mono-v1-2`, but I then renamed it to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts, in this line: https://github.com/huggingface/datasets/blob/6fb6c834f008996c994b0a86c3808d0a33d44525/src/datasets/load.py#L1845 When I print `filename` it returns `hplt-mono-v1-2.py`, but the files in the repo are of course `['.gitattributes', 'README.md', 'hplt_mono_v1_2.py']`. So `filename` uses the original repo name instead of the renamed one. I am not sure whether this is a caching issue or how I can resolve it. ### Steps to reproduce the bug ``` from datasets import load_dataset ds = load_dataset( "BramVanroy/hplt-mono-v1-2", "ky", trust_remote_code=True ) ``` ### Expected behavior That the most recent repo name is used when `filename` is generated. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
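A quick way to check which script filename actually exists in the repo (as opposed to the name `datasets` derives from the repo id) is to list the repo contents directly; a hedged sketch using `huggingface_hub`:

```python
from huggingface_hub import HfApi

files = HfApi().list_repo_files("BramVanroy/hplt_mono_v1_2", repo_type="dataset")
print(files)  # expected to contain 'hplt_mono_v1_2.py' rather than 'hplt-mono-v1-2.py'
```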
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6724/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6723/comments
https://api.github.com/repos/huggingface/datasets/issues/6723/events
https://github.com/huggingface/datasets/pull/6723
2,174,344,456
PR_kwDODunzps5o_fPU
6,723
get_dataset_default_config_name docstring
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-07T17:09:29
2024-03-07T17:27:29
2024-03-07T17:21:20
MEMBER
null
fix https://github.com/huggingface/datasets/pull/6722
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6723/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6723", "html_url": "https://github.com/huggingface/datasets/pull/6723", "diff_url": "https://github.com/huggingface/datasets/pull/6723.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6723.patch", "merged_at": "2024-03-07T17:21:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/6722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6722/comments
https://api.github.com/repos/huggingface/datasets/issues/6722/events
https://github.com/huggingface/datasets/pull/6722
2,174,332,127
PR_kwDODunzps5o_ch0
6,722
Add details in docstring
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-07T17:02:07
2024-03-07T17:21:10
2024-03-07T17:21:08
CONTRIBUTOR
null
see https://github.com/huggingface/datasets-server/pull/2554#discussion_r1516516867
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6722/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6722", "html_url": "https://github.com/huggingface/datasets/pull/6722", "diff_url": "https://github.com/huggingface/datasets/pull/6722.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6722.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6721/comments
https://api.github.com/repos/huggingface/datasets/issues/6721/events
https://github.com/huggingface/datasets/issues/6721
2,173,931,714
I_kwDODunzps6Bk5DC
6,721
Hi, do you know how to load a dataset from a local file now?
{ "login": "Gera001", "id": 50232044, "node_id": "MDQ6VXNlcjUwMjMyMDQ0", "avatar_url": "https://avatars.githubusercontent.com/u/50232044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gera001", "html_url": "https://github.com/Gera001", "followers_url": "https://api.github.com/users/Gera001/followers", "following_url": "https://api.github.com/users/Gera001/following{/other_user}", "gists_url": "https://api.github.com/users/Gera001/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gera001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gera001/subscriptions", "organizations_url": "https://api.github.com/users/Gera001/orgs", "repos_url": "https://api.github.com/users/Gera001/repos", "events_url": "https://api.github.com/users/Gera001/events{/privacy}", "received_events_url": "https://api.github.com/users/Gera001/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-07T13:58:40
2024-03-31T08:09:25
null
NONE
null
Hi, if I want to load a dataset from a local file, how do I specify the configuration name? _Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
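For context, two common patterns for loading local data with a named configuration; the paths and config names below are placeholders:

```python
from datasets import load_dataset

# 1) a local loading script that defines several configurations:
ds = load_dataset("path/to/my_dataset.py", "my_config_name", split="train")

# 2) plain local files with a packaged builder (csv/json/parquet/...),
#    where no configuration name is needed:
ds = load_dataset("csv", data_files={"train": "path/to/train.csv"}, split="train")
```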
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6721/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6720/comments
https://api.github.com/repos/huggingface/datasets/issues/6720/events
https://github.com/huggingface/datasets/issues/6720
2,173,603,459
I_kwDODunzps6Bjo6D
6,720
TypeError: 'str' object is not callable
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-07T11:07:09
2024-03-08T07:34:53
2024-03-07T15:13:58
CONTRIBUTOR
null
### Describe the bug I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co./datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get the error below. It happens during the clean-up phase where the directory cannot be removed because it is not empty. My only guess would be that this may have to do with zstandard ``` Traceback (most recent call last): File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single writer.write(example, key) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write self.write_examples_on_file() File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file if self.schema File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema else (pa.schema(self._features.type) if self._features is not None else None) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type return get_nested_type(self) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type {key: get_nested_type(schema[key]) for key in schema} File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp> {key: get_nested_type(schema[key]) for key in schema} File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type value_type = get_nested_type(schema.feature) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type return schema() TypeError: 'str' object is not callable During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize self.write_examples_on_file() File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file if self.schema File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema else (pa.schema(self._features.type) if self._features is not None else None) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type return get_nested_type(self) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type {key: get_nested_type(schema[key]) for key in schema} File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, 
in <dictcomp> {key: get_nested_type(schema[key]) for key in schema} File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type value_type = get_nested_type(schema.feature) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type return schema() TypeError: 'str' object is not callable The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir yield tmp_dir File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare self._download_and_prepare( File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare super()._download_and_prepare( File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module> ds = load_dataset( File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset builder_instance.download_and_prepare( File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare with incomplete_dir(self._output_dir) as tmp_output_dir: File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir shutil.rmtree(tmp_dir) File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete' ``` Interestingly, though, this directory _does_ appear to be empty: ```shell > cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete > ls -lah total 0 drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 . drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 .. > cd .. 
> ls 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset( "BramVanroy/hplt_mono_v1_2", "ky", trust_remote_code=True ) ``` ### Expected behavior No error. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
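The `return schema()` / `TypeError: 'str' object is not callable` frames typically appear when a feature in the loading script's `Features` is declared as a bare dtype string instead of a feature object; a hedged illustration, not necessarily the cause in this particular script:

```python
from datasets import Features, Sequence, Value

# materializing this schema calls the bare string as if it were a feature class,
# producing TypeError: 'str' object is not callable
broken = Features({"tokens": Sequence("string")})
# broken.type  # -> TypeError

# wrapping the dtype in Value(...) fixes it
fixed = Features({"tokens": Sequence(Value("string"))})
print(fixed.type)
```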
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6720/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6719/comments
https://api.github.com/repos/huggingface/datasets/issues/6719/events
https://github.com/huggingface/datasets/issues/6719
2,169,585,727
I_kwDODunzps6BUUA_
6,719
Is there any way to solve hanging of IterableDataset when using split by node + filtering during inference?
{ "login": "ssharpe42", "id": 8136905, "node_id": "MDQ6VXNlcjgxMzY5MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ssharpe42", "html_url": "https://github.com/ssharpe42", "followers_url": "https://api.github.com/users/ssharpe42/followers", "following_url": "https://api.github.com/users/ssharpe42/following{/other_user}", "gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}", "starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions", "organizations_url": "https://api.github.com/users/ssharpe42/orgs", "repos_url": "https://api.github.com/users/ssharpe42/repos", "events_url": "https://api.github.com/users/ssharpe42/events{/privacy}", "received_events_url": "https://api.github.com/users/ssharpe42/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-05T15:55:13
2024-03-05T15:55:13
null
NONE
null
### Describe the bug I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset_by_node`, it results in shards that are not equal sizes due to unequal samples filtered from each one. The distributed process hangs when trying to accomplish this. Is there any way to resolve this or is it impossible to implement? ### Steps to reproduce the bug Here is a toy example of what I am trying to do that reproduces the behavior ``` # torchrun --nproc-per-node 2 file.py import os import pandas as pd import torch from accelerate import Accelerator from datasets import Features, Value, load_dataset from datasets.distributed import split_dataset_by_node from torch.utils.data import DataLoader accelerator = Accelerator(device_placement=True, dispatch_batches=False) if accelerator.is_main_process: if not os.path.exists("scratch_data"): os.mkdir("scratch_data") n_shards = 4 for i in range(n_shards): df = pd.DataFrame({"id": list(range(10 * i, 10 * (i + 1)))}) df.to_parquet(f"scratch_data/shard_{i}.parquet") world_size = accelerator.num_processes local_rank = accelerator.process_index def collate_fn(examples): input_ids = [] for example in examples: input_ids.append(example["id"]) return torch.LongTensor(input_ids) dataset = load_dataset( "parquet", data_dir="scratch_data", split="train", streaming=True ) dataset = ( split_dataset_by_node(dataset, rank=local_rank, world_size=world_size) .filter(lambda x: x["id"] < 35) .shuffle(seed=42, buffer_size=100) ) batch_size = 2 train_dataloader = DataLoader( dataset, batch_size=batch_size, collate_fn=collate_fn, num_workers=2 ) for x in train_dataloader: x = x.to(accelerator.device) print({"rank": local_rank, "id": x}) y = accelerator.gather_for_metrics(x) if accelerator.is_main_process: print("gathered", y) ``` ### Expected behavior Is there any way to continue training/inference on the GPUs that have remaining data left without waiting for the others? Is it impossible to filter when ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2023.6.0
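Not a fix inside `datasets`, but one common workaround for uneven shards is to let every rank agree on when to stop before each collective call, at the cost of dropping the tail batches of the longer shards; a sketch assuming `torch.distributed` is already initialized (e.g. by `accelerate`), with names that are illustrative only:

```python
import torch
import torch.distributed as dist

def stop_together(dataloader, device):
    """Yield batches until the first rank runs out of data, keeping collectives aligned."""
    it = iter(dataloader)
    while True:
        try:
            batch = next(it)
            exhausted = torch.zeros(1, device=device)
        except StopIteration:
            batch, exhausted = None, torch.ones(1, device=device)
        # if any rank is out of data, every rank stops before the next gather
        dist.all_reduce(exhausted, op=dist.ReduceOp.MAX)
        if exhausted.item() > 0:
            break
        yield batch

# hypothetical usage with the toy example above:
# for x in stop_together(train_dataloader, accelerator.device):
#     y = accelerator.gather_for_metrics(x)
```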
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6719/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6718/comments
https://api.github.com/repos/huggingface/datasets/issues/6718/events
https://github.com/huggingface/datasets/pull/6718
2,169,468,488
PR_kwDODunzps5ouwwE
6,718
Fix concurrent script loading with force_redownload
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-05T15:04:20
2024-03-07T14:05:53
2024-03-07T13:58:04
MEMBER
null
I added `lock_importable_file` in `get_dataset_builder_class` and `extend_dataset_builder_for_streaming` to fix the issue, and I also added a test cc @clefourrier
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6718/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6718/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6718", "html_url": "https://github.com/huggingface/datasets/pull/6718", "diff_url": "https://github.com/huggingface/datasets/pull/6718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6718.patch", "merged_at": "2024-03-07T13:58:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/6717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6717/comments
https://api.github.com/repos/huggingface/datasets/issues/6717/events
https://github.com/huggingface/datasets/issues/6717
2,168,726,432
I_kwDODunzps6BRCOg
6,717
`remove_columns` method used with a streaming-enabled dataset produces a LibsndfileError on multichannel audio
{ "login": "jhauret", "id": 53187038, "node_id": "MDQ6VXNlcjUzMTg3MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/53187038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jhauret", "html_url": "https://github.com/jhauret", "followers_url": "https://api.github.com/users/jhauret/followers", "following_url": "https://api.github.com/users/jhauret/following{/other_user}", "gists_url": "https://api.github.com/users/jhauret/gists{/gist_id}", "starred_url": "https://api.github.com/users/jhauret/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jhauret/subscriptions", "organizations_url": "https://api.github.com/users/jhauret/orgs", "repos_url": "https://api.github.com/users/jhauret/repos", "events_url": "https://api.github.com/users/jhauret/events{/privacy}", "received_events_url": "https://api.github.com/users/jhauret/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-05T09:33:26
2024-03-05T10:32:19
null
NONE
null
### Describe the bug When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated. ### Steps to reproduce the bug Minimal error code: ```python from datasets import load_dataset dataset_name = "zinc75/Vibravox_dummy" config_name = "BWE_Larynx_microphone" # if we use "ASR_Larynx_microphone" subset which is a monochannel audio, no error is thrown. dataset = load_dataset( path=dataset_name, name=config_name, split="train", streaming=True ) dataset = dataset.remove_columns(["sensor_id"]) # dataset = dataset.map(lambda x:x, remove_columns=["sensor_id"]) # The commented version does not produce an error, but loses the dataset features. sample = next(iter(dataset)) ``` Error: ``` Traceback (most recent call last): File "/home/julien/Bureau/github/vibravox/tmp.py", line 15, in <module> sample = next(iter(dataset)) ^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1392, in __iter__ example = _apply_feature_types_on_example( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1080, in _apply_feature_types_on_example encoded_example = features.encode_example(example) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1889, in encode_example return encode_nested_example(self, example) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in encode_nested_example {k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema} File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in <dictcomp> {k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema} ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1300, in encode_nested_example return schema.encode_example(obj) if obj is not None else None ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/audio.py", line 98, in encode_example sf.write(buffer, value["array"], value["sampling_rate"], format="wav") File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 343, in write with SoundFile(file, 'w', samplerate, channels, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 658, in __init__ self._file = self._open(file, mode_int, closefd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 1216, in _open raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7fd795d24680>: Format not recognised. Process finished with exit code 1 ``` ### Expected behavior I would expect this code to run without error. 
### Environment info - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35 - Python version: 3.11.0 - `huggingface_hub` version: 0.21.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2023.10.0
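The "Format not recognised" error is consistent with the suspected axis swap: `soundfile.write` expects data shaped `(frames, channels)`, so a `(channels, frames)` array is interpreted as a file with thousands of channels. A standalone illustration with synthetic data, independent of the dataset above:

```python
import io
import numpy as np
import soundfile as sf

sr = 16_000
stereo = np.random.randn(2, sr).astype(np.float32)  # (channels, frames) layout

buf = io.BytesIO()
sf.write(buf, stereo.T, sr, format="wav")   # works: (frames, channels)
# sf.write(buf, stereo, sr, format="wav")   # 2 frames x 16000 "channels" -> error
```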
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6717/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6716/comments
https://api.github.com/repos/huggingface/datasets/issues/6716/events
https://github.com/huggingface/datasets/issues/6716
2,168,706,558
I_kwDODunzps6BQ9X-
6,716
Non-deterministic `Dataset.builder_name` value
{ "login": "harupy", "id": 17039389, "node_id": "MDQ6VXNlcjE3MDM5Mzg5", "avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harupy", "html_url": "https://github.com/harupy", "followers_url": "https://api.github.com/users/harupy/followers", "following_url": "https://api.github.com/users/harupy/following{/other_user}", "gists_url": "https://api.github.com/users/harupy/gists{/gist_id}", "starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harupy/subscriptions", "organizations_url": "https://api.github.com/users/harupy/orgs", "repos_url": "https://api.github.com/users/harupy/repos", "events_url": "https://api.github.com/users/harupy/events{/privacy}", "received_events_url": "https://api.github.com/users/harupy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-05T09:23:21
2024-03-19T07:58:14
2024-03-19T07:58:14
NONE
null
### Describe the bug I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`: ```python import datasets for _ in range(100): ds = datasets.load_dataset("rotten_tomatoes", split="train") print(ds.builder_name) # prints out "rotten_tomatoes" sometimes instead of "parquet" ``` Output: ``` ... parquet parquet parquet rotten_tomatoes parquet parquet parquet ... ``` Here's a reproduction using GitHub Actions: https://github.com/mlflow/mlflow/actions/runs/8153247984/job/22284263613?pr=11329#step:12:241 One of our tests is flaky because `builder_name` is not deterministic. ### Steps to reproduce the bug 1. Run the code above. ### Expected behavior Always prints out `parquet`? ### Environment info ``` Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.18.0 - Platform: Linux-6.5.0-1015-azure-x86_64-with-glibc2.34 - Python version: 3.8.18 - `huggingface_hub` version: 0.21.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2024.2.0 ```
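One guess is that the dataset is sometimes served from the Hub's Parquet export and sometimes built from the legacy script, depending on cache and network state; inspecting the cached files shows which path was taken in a given run, and the assertion below keeps a test robust in the meantime:

```python
import datasets

ds = datasets.load_dataset("rotten_tomatoes", split="train")
for f in ds.cache_files:
    print(f["filename"])  # the Arrow files actually backing this Dataset

# tolerant check until the nondeterminism is resolved upstream
assert ds.builder_name in {"parquet", "rotten_tomatoes"}
```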
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6716/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6715/comments
https://api.github.com/repos/huggingface/datasets/issues/6715/events
https://github.com/huggingface/datasets/pull/6715
2,167,747,095
PR_kwDODunzps5oo36i
6,715
Fix sliced ConcatenationTable pickling with mixed schemas vertically
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-04T21:02:07
2024-03-05T11:23:05
2024-03-05T11:17:04
MEMBER
null
A sliced + pickled ConcatenationTable could end up with a different schema than the original one if the slice contains only blocks with a subset of the columns. This can lead to issues when saving datasets built from a concatenation of datasets with mixed schemas. Reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6715/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6715", "html_url": "https://github.com/huggingface/datasets/pull/6715", "diff_url": "https://github.com/huggingface/datasets/pull/6715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6715.patch", "merged_at": "2024-03-05T11:17:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/6714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6714/comments
https://api.github.com/repos/huggingface/datasets/issues/6714/events
https://github.com/huggingface/datasets/pull/6714
2,167,569,080
PR_kwDODunzps5ooQd2
6,714
Expand no-code dataset info with datasets-server info
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-04T19:18:10
2024-03-04T20:28:30
2024-03-04T20:22:15
COLLABORATOR
null
E.g., to have info about a dataset's number of examples for more informative TQDM bars.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6714/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6714", "html_url": "https://github.com/huggingface/datasets/pull/6714", "diff_url": "https://github.com/huggingface/datasets/pull/6714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6714.patch", "merged_at": "2024-03-04T20:22:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/6713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6713/comments
https://api.github.com/repos/huggingface/datasets/issues/6713/events
https://github.com/huggingface/datasets/pull/6713
2,166,797,560
PR_kwDODunzps5olmqh
6,713
Bump huggingface-hub lower version to 0.21.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-04T13:00:52
2024-03-04T18:14:03
2024-03-04T18:06:05
MEMBER
null
This should fix the version compatibility issue when using `huggingface_hub` < 0.21.2 and latest fsspec (>=2023.12.0). See my comment: https://github.com/huggingface/datasets/pull/6687#issuecomment-1976493336 >> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main` > >Please note that people using `huggingface_hub` < 0.21.2 and latest `fsspec` will have issues when using `datasets`: >- https://github.com/huggingface/lighteval/actions/runs/8139147047/job/22241658122?pr=86 >- https://github.com/huggingface/lighteval/pull/84 CC: @clefourrier
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6713/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6713/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6713", "html_url": "https://github.com/huggingface/datasets/pull/6713", "diff_url": "https://github.com/huggingface/datasets/pull/6713.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6713.patch", "merged_at": "2024-03-04T18:06:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/6712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6712/comments
https://api.github.com/repos/huggingface/datasets/issues/6712/events
https://github.com/huggingface/datasets/pull/6712
2,166,588,373
PR_kwDODunzps5ok4VF
6,712
fix CastError pickling
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-04T11:14:18
2024-03-04T20:23:47
2024-03-04T20:17:17
MEMBER
null
reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6712/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6712", "html_url": "https://github.com/huggingface/datasets/pull/6712", "diff_url": "https://github.com/huggingface/datasets/pull/6712.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6712.patch", "merged_at": "2024-03-04T20:17:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6711/comments
https://api.github.com/repos/huggingface/datasets/issues/6711/events
https://github.com/huggingface/datasets/pull/6711
2,165,507,817
PR_kwDODunzps5ohM1a
6,711
3x Faster Text Preprocessing
{ "login": "ashvardanian", "id": 1983160, "node_id": "MDQ6VXNlcjE5ODMxNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1983160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashvardanian", "html_url": "https://github.com/ashvardanian", "followers_url": "https://api.github.com/users/ashvardanian/followers", "following_url": "https://api.github.com/users/ashvardanian/following{/other_user}", "gists_url": "https://api.github.com/users/ashvardanian/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashvardanian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashvardanian/subscriptions", "organizations_url": "https://api.github.com/users/ashvardanian/orgs", "repos_url": "https://api.github.com/users/ashvardanian/repos", "events_url": "https://api.github.com/users/ashvardanian/events{/privacy}", "received_events_url": "https://api.github.com/users/ashvardanian/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-03T19:03:04
2024-03-04T15:15:51
null
NONE
null
I was preparing some datasets for AI training and noticed that `datasets` by HuggingFace uses the conventional `open` mechanism to read the file and split it into chunks. I thought it can be significantly accelerated, and [started with a benchmark](https://gist.github.com/ashvardanian/55c2052e9f78b05b8d614aa90cb12347): ```sh $ pip install --upgrade --force-reinstall datasets $ python benchmark_huggingface_datasets.py xlsum.csv Generating train split: 1004598 examples [00:47, 21116.16 examples/s] Time taken to load the dataset: 48.66838526725769 seconds Time taken to chunk the dataset into parts of size 10000: 0.11466407775878906 seconds Total time taken: 48.78304934501648 seconds ``` For benchmarks I've used a [large CSV file with mixed UTF-8 content](https://github.com/ashvardanian/StringZilla/blob/main/CONTRIBUTING.md#benchmarking-datasets), most common in modern large-scale pre-training pipelines. I've later patched the `datasets` library to use `stringzilla`, which resulted in significantly lower memory consumption and in 2.9x throughput improvement on the AWS `r7iz` instances. That's using slow SSDs mounted over the network. Performance on local SSDs on something like a DGX-H100 should be even higher: ```sh $ pip install -e . $ python benchmark_huggingface_datasets.py xlsum.csv Generating train split: 1004598 examples [00:15, 64529.90 examples/s] Time taken to load the dataset: 16.45028805732727 seconds Time taken to chunk the dataset into parts of size 10000: 0.1291060447692871 seconds Total time taken: 16.579394102096558 seconds ``` I've already [pushed the patches to my fork](https://github.com/ashvardanian/datasets/tree/faster-text-parsers), and would love to contribute them to the upstream repository. --- All the tests pass, but they leave a couple of important questions open. The default Python `open(..., newline=None)` uses universal newlines, where `\n`, `\r`, and `\r\n` are all converted to `\n` on the fly. I am not sure if its a good idea for a general purpose dataset preparation pipeline? I can simulate the same behavior (which I don't yet do) for `"line"` splitter. Adjusting it for `"paragraph"`-splitter would be harder. Should we stick exactly to the old Pythonic behavior or stay closer to how C and other programming languages do that?
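The universal-newlines question at the end can be pinned down with the standard library alone: `open(..., newline=None)` (the default for text mode) translates `\r` and `\r\n` to `\n` on read, while a byte-level splitter sees the raw bytes. A small self-contained illustration:

```python
import os
import tempfile

raw = b"a\nb\r\nc\rd"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(raw)
    path = f.name

with open(path, "r", newline=None) as f:  # universal newlines (text-mode default)
    print(repr(f.read()))                 # 'a\nb\nc\nd' -- \r and \r\n become \n

with open(path, "rb") as f:               # what a byte-level fast path sees
    print(f.read().split(b"\n"))          # [b'a', b'b\r', b'c\rd'] -- \r left alone

os.unlink(path)
```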
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6711/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6711/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6711", "html_url": "https://github.com/huggingface/datasets/pull/6711", "diff_url": "https://github.com/huggingface/datasets/pull/6711.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6711.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6710/comments
https://api.github.com/repos/huggingface/datasets/issues/6710/events
https://github.com/huggingface/datasets/pull/6710
2,164,781,564
PR_kwDODunzps5oe4ov
6,710
Persist IterableDataset epoch in workers
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-03-02T12:08:50
2024-03-06T14:41:54
null
MEMBER
null
Use shared memory for the IterableDataset epoch. This way, calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well. This is especially useful because the epoch is used to compute the `effective_seed` used for shuffling. I used torch's shared memory in case users want to send dataset copies without shared memory using pickle. I also find it easier to use than `multiprocessing.shared_memory`, which requires unlinking only in the main process, or `mp.Value`, which is not picklable. close https://github.com/huggingface/datasets/issues/6673 cc @rwightman
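A minimal sketch of the shared-memory idea, independent of the actual implementation in this PR; the class and names are illustrative only. A torch tensor moved to shared memory stays visible to DataLoader workers, so bumping it in the main process updates the effective seed they compute:

```python
import torch
from torch.utils.data import DataLoader, IterableDataset

class EpochAwareIterable(IterableDataset):
    def __init__(self):
        # a single shared int64, visible to DataLoader workers
        self._epoch = torch.zeros((), dtype=torch.int64).share_memory_()

    def set_epoch(self, epoch: int):
        self._epoch.fill_(epoch)  # in-place write keeps the shared storage

    def __iter__(self):
        effective_seed = 42 + int(self._epoch.item())
        g = torch.Generator().manual_seed(effective_seed)
        yield from torch.randperm(8, generator=g).tolist()

ds = EpochAwareIterable()
loader = DataLoader(ds, num_workers=2, persistent_workers=True)
for epoch in range(2):
    ds.set_epoch(epoch)  # persistent workers see the new value on their next __iter__
    print(epoch, list(loader))  # each worker yields its own copy; sharding omitted for brevity
```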
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6710/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6710", "html_url": "https://github.com/huggingface/datasets/pull/6710", "diff_url": "https://github.com/huggingface/datasets/pull/6710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6710.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6709/comments
https://api.github.com/repos/huggingface/datasets/issues/6709/events
https://github.com/huggingface/datasets/pull/6709
2,164,169,913
PR_kwDODunzps5oc2Fg
6,709
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T21:01:14
2024-03-01T21:07:35
2024-03-01T21:01:23
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6709/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6709", "html_url": "https://github.com/huggingface/datasets/pull/6709", "diff_url": "https://github.com/huggingface/datasets/pull/6709.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6709.patch", "merged_at": "2024-03-01T21:01:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/6708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6708/comments
https://api.github.com/repos/huggingface/datasets/issues/6708/events
https://github.com/huggingface/datasets/pull/6708
2,164,158,579
PR_kwDODunzps5oczmi
6,708
Release: 2.18.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T20:52:17
2024-03-01T21:03:01
2024-03-01T20:56:50
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6708/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6708", "html_url": "https://github.com/huggingface/datasets/pull/6708", "diff_url": "https://github.com/huggingface/datasets/pull/6708.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6708.patch", "merged_at": "2024-03-01T20:56:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/6707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6707/comments
https://api.github.com/repos/huggingface/datasets/issues/6707/events
https://github.com/huggingface/datasets/pull/6707
2,163,799,868
PR_kwDODunzps5obkhA
6,707
Silence ruff deprecation messages
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T16:52:29
2024-03-01T17:32:14
2024-03-01T17:25:46
COLLABORATOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6707/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6707", "html_url": "https://github.com/huggingface/datasets/pull/6707", "diff_url": "https://github.com/huggingface/datasets/pull/6707.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6707.patch", "merged_at": "2024-03-01T17:25:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/6706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6706/comments
https://api.github.com/repos/huggingface/datasets/issues/6706/events
https://github.com/huggingface/datasets/pull/6706
2,163,783,123
PR_kwDODunzps5obgt-
6,706
Update ruff
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T16:44:58
2024-03-01T17:02:13
2024-03-01T16:52:17
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6706/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6706", "html_url": "https://github.com/huggingface/datasets/pull/6706", "diff_url": "https://github.com/huggingface/datasets/pull/6706.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6706.patch", "merged_at": "2024-03-01T16:52:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6705/comments
https://api.github.com/repos/huggingface/datasets/issues/6705/events
https://github.com/huggingface/datasets/pull/6705
2,163,768,640
PR_kwDODunzps5obdbY
6,705
Fix data_files when passing data_dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T16:38:53
2024-03-01T18:59:06
2024-03-01T18:52:49
MEMBER
null
This code should not return empty data files ```python from datasets import load_dataset_builder revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd" b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision) print(b.config.data_files) ``` Previously it would return no data files because it would apply the YAML `data_files: data/**/train-*` pattern to this directory cc @anton-l
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6705/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6705", "html_url": "https://github.com/huggingface/datasets/pull/6705", "diff_url": "https://github.com/huggingface/datasets/pull/6705.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6705.patch", "merged_at": "2024-03-01T18:52:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/6704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6704/comments
https://api.github.com/repos/huggingface/datasets/issues/6704/events
https://github.com/huggingface/datasets/pull/6704
2,163,752,391
PR_kwDODunzps5obZyj
6,704
Improve default patterns resolution
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T16:31:25
2024-04-23T09:43:09
2024-03-15T15:22:03
COLLABORATOR
null
Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result. Additionally, replace `get_fs_token_paths` with `url_to_fs` to avoid [unnecessary glob calls](https://github.com/fsspec/filesystem_spec/blob/14dce8ca78f7aa509a20edb263bff83a7760c24d/fsspec/core.py#L655-L656). fix https://github.com/huggingface/datasets/issues/6259 fix https://github.com/huggingface/datasets/issues/6272
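For context, a small sketch of the `url_to_fs` call mentioned above; it resolves a filesystem and path without the extra glob that `get_fs_token_paths` performs. The local URL is a placeholder.

```python
from fsspec.core import url_to_fs

# Resolve a filesystem instance and a normalized path for a URL-like string
fs, path = url_to_fs("file:///tmp")
print(type(fs).__name__, path)
```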
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6704/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6704/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6704", "html_url": "https://github.com/huggingface/datasets/pull/6704", "diff_url": "https://github.com/huggingface/datasets/pull/6704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6704.patch", "merged_at": "2024-03-15T15:22:03" }
true
https://api.github.com/repos/huggingface/datasets/issues/6703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6703/comments
https://api.github.com/repos/huggingface/datasets/issues/6703/events
https://github.com/huggingface/datasets/issues/6703
2,163,250,590
I_kwDODunzps6A8JWe
6,703
Unable to load dataset that was saved with `save_to_disk`
{ "login": "casper-hansen", "id": 27340033, "node_id": "MDQ6VXNlcjI3MzQwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casper-hansen", "html_url": "https://github.com/casper-hansen", "followers_url": "https://api.github.com/users/casper-hansen/followers", "following_url": "https://api.github.com/users/casper-hansen/following{/other_user}", "gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}", "starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions", "organizations_url": "https://api.github.com/users/casper-hansen/orgs", "repos_url": "https://api.github.com/users/casper-hansen/repos", "events_url": "https://api.github.com/users/casper-hansen/events{/privacy}", "received_events_url": "https://api.github.com/users/casper-hansen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-03-01T11:59:56
2024-03-04T13:46:20
2024-03-04T13:46:20
NONE
null
### Describe the bug I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead. ### Steps to reproduce the bug 1. Save a dataset with `save_to_disk` 2. Try to load it with `load_dataset` ### Expected behavior I am able to load the dataset again with `load_dataset`, which most packages use over `load_from_disk`. I want a workaround that allows me to create the same indexing that `push_to_hub` creates for you before using `save_to_disk` - how can that be achieved? ### Environment info datasets 2.17.1, python 3.10
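A minimal sketch of the intended round-trip, assuming a local path `./my_dataset`: data written with `save_to_disk` is read back with `load_from_disk`, not `load_dataset`.

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"text": ["a", "b"]})
ds.save_to_disk("./my_dataset")            # writes Arrow files plus dataset metadata
reloaded = load_from_disk("./my_dataset")  # the matching loader for that on-disk layout
print(reloaded)
```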
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6703/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6702/comments
https://api.github.com/repos/huggingface/datasets/issues/6702/events
https://github.com/huggingface/datasets/issues/6702
2,161,938,484
I_kwDODunzps6A3JA0
6,702
Push samples to dataset on hub without having the dataset locally
{ "login": "jbdel", "id": 17854096, "node_id": "MDQ6VXNlcjE3ODU0MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbdel", "html_url": "https://github.com/jbdel", "followers_url": "https://api.github.com/users/jbdel/followers", "following_url": "https://api.github.com/users/jbdel/following{/other_user}", "gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbdel/subscriptions", "organizations_url": "https://api.github.com/users/jbdel/orgs", "repos_url": "https://api.github.com/users/jbdel/repos", "events_url": "https://api.github.com/users/jbdel/events{/privacy}", "received_events_url": "https://api.github.com/users/jbdel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2024-02-29T19:17:12
2024-03-08T21:08:38
2024-03-08T21:08:38
NONE
null
### Feature request Say I have the following code: ``` from datasets import Dataset import pandas as pd new_data = { "column_1": ["value1", "value2"], "column_2": ["value3", "value4"], } df_new = pd.DataFrame(new_data) dataset_new = Dataset.from_pandas(df_new) # add these samples to a remote dataset ``` It would be great to have a way to push `dataset_new` to a remote dataset that respects the same schema. This way one would not have to do the following: ``` from datasets import load_dataset, concatenate_datasets dataset = load_dataset('username/dataset_name', use_auth_token='your_hf_token_here') updated_dataset = concatenate_datasets([dataset['train'], dataset_new]) updated_dataset.push_to_hub('username/dataset_name', use_auth_token='your_hf_token_here') ``` ### Motivation No need to download the dataset. ### Your contribution Maybe this feature already exists, didn't see it though. I do not have the expertise to do this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6702/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6701/comments
https://api.github.com/repos/huggingface/datasets/issues/6701/events
https://github.com/huggingface/datasets/pull/6701
2,161,448,017
PR_kwDODunzps5oTfO_
6,701
Base parquet batch_size on parquet row group size
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-29T14:53:01
2024-02-29T15:15:18
2024-02-29T15:08:55
MEMBER
null
This allows streaming datasets like [Major-TOM/Core-S2L2A](https://huggingface.co./datasets/Major-TOM/Core-S2L2A) which have row groups with few rows (one row is ~10MB). Previously the cold start would take a lot of time and OOM because it would download many row groups before yielding the first example. I tried it on OpenOrca and imagenet-hard and it doesn't affect overall throughput. Even if the overall throughput doesn't change for datasets like imagenet-hard with big rows, note that it does create shorter and more frequent pauses to download the next row group. Though I find it fine because previously the pauses were less frequent but very long (downloading multiple row groups at a time).
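A short sketch of inspecting the row-group size that the streaming batch size is now based on, using pyarrow; the file name is a placeholder.

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("shard-00000.parquet")  # placeholder path
print(pf.metadata.num_row_groups)           # number of row groups in the file
print(pf.metadata.row_group(0).num_rows)    # rows per group (small when single rows are ~10MB)
```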
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6701/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6701/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6701", "html_url": "https://github.com/huggingface/datasets/pull/6701", "diff_url": "https://github.com/huggingface/datasets/pull/6701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6701.patch", "merged_at": "2024-02-29T15:08:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/6700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6700/comments
https://api.github.com/repos/huggingface/datasets/issues/6700/events
https://github.com/huggingface/datasets/issues/6700
2,158,871,038
I_kwDODunzps6ArcH-
6,700
remove_columns is not in-place but the doc shows it is in-place
{ "login": "shelfofclub", "id": 32047804, "node_id": "MDQ6VXNlcjMyMDQ3ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/32047804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shelfofclub", "html_url": "https://github.com/shelfofclub", "followers_url": "https://api.github.com/users/shelfofclub/followers", "following_url": "https://api.github.com/users/shelfofclub/following{/other_user}", "gists_url": "https://api.github.com/users/shelfofclub/gists{/gist_id}", "starred_url": "https://api.github.com/users/shelfofclub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shelfofclub/subscriptions", "organizations_url": "https://api.github.com/users/shelfofclub/orgs", "repos_url": "https://api.github.com/users/shelfofclub/repos", "events_url": "https://api.github.com/users/shelfofclub/events{/privacy}", "received_events_url": "https://api.github.com/users/shelfofclub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-28T12:36:22
2024-04-02T17:15:28
2024-04-02T17:15:28
NONE
null
### Describe the bug The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co./docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns) In the text classification example of transformers v4.38.1, the columns are not removed. https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421 ### Steps to reproduce the bug https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421 ### Expected behavior Actually remove the columns. ### Environment info 1. datasets v2.17.0 2. transformers v4.38.1
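A small illustration of the non-in-place behaviour the issue is about: the result of `remove_columns` must be reassigned.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a"], "label": [0]})
ds.remove_columns("label")       # returns a new dataset; ds itself is unchanged
ds = ds.remove_columns("label")  # reassign to actually drop the column
print(ds.column_names)           # ['text']
```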
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6700/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6699/comments
https://api.github.com/repos/huggingface/datasets/issues/6699/events
https://github.com/huggingface/datasets/issues/6699
2,158,152,341
I_kwDODunzps6AosqV
6,699
`Dataset` unexpected changed dict data and may cause error
{ "login": "scruel", "id": 16933298, "node_id": "MDQ6VXNlcjE2OTMzMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scruel", "html_url": "https://github.com/scruel", "followers_url": "https://api.github.com/users/scruel/followers", "following_url": "https://api.github.com/users/scruel/following{/other_user}", "gists_url": "https://api.github.com/users/scruel/gists{/gist_id}", "starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scruel/subscriptions", "organizations_url": "https://api.github.com/users/scruel/orgs", "repos_url": "https://api.github.com/users/scruel/repos", "events_url": "https://api.github.com/users/scruel/events{/privacy}", "received_events_url": "https://api.github.com/users/scruel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-28T05:30:10
2024-02-28T19:14:36
null
NONE
null
### Describe the bug The parsed JSON dict will unexpectedly contain keys with `None` values. ### Steps to reproduce the bug ```jsonl test.jsonl {"id": 0, "indexs": {"-1": [0, 10]}} {"id": 1, "indexs": {"-1": [0, 10]}} ``` ```python from datasets import Dataset dataset = Dataset.from_json('test.jsonl') print(dataset[0]) ``` Result: ``` {'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}} ``` Those keys with `None` values unexpectedly appear in the dict. ### Expected behavior Result should be ``` {'id': 0, 'indexs': {'-1': [0, 10]}} ``` ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35 - Python version: 3.11.6 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6699/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6698/comments
https://api.github.com/repos/huggingface/datasets/issues/6698/events
https://github.com/huggingface/datasets/pull/6698
2,157,752,392
PR_kwDODunzps5oG6Xt
6,698
Faster `xlistdir`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-27T22:55:08
2024-02-27T23:44:49
2024-02-27T23:38:14
COLLABORATOR
null
Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths.
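A minimal sketch of the fsspec call difference, using the local filesystem as an example backend.

```python
import fsspec

fs = fsspec.filesystem("file")
with_info = fs.ls("/tmp", detail=True)    # dicts with name/size/type metadata
paths_only = fs.ls("/tmp", detail=False)  # plain paths, no extra metadata lookups
print(paths_only[:3])
```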
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6698/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6698", "html_url": "https://github.com/huggingface/datasets/pull/6698", "diff_url": "https://github.com/huggingface/datasets/pull/6698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6698.patch", "merged_at": "2024-02-27T23:38:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/6697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6697/comments
https://api.github.com/repos/huggingface/datasets/issues/6697/events
https://github.com/huggingface/datasets/issues/6697
2,157,322,224
I_kwDODunzps6Alh_w
6,697
Unable to Load Dataset in Kaggle
{ "login": "vrunm", "id": 97465624, "node_id": "U_kgDOBc81GA", "avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrunm", "html_url": "https://github.com/vrunm", "followers_url": "https://api.github.com/users/vrunm/followers", "following_url": "https://api.github.com/users/vrunm/following{/other_user}", "gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrunm/subscriptions", "organizations_url": "https://api.github.com/users/vrunm/orgs", "repos_url": "https://api.github.com/users/vrunm/repos", "events_url": "https://api.github.com/users/vrunm/events{/privacy}", "received_events_url": "https://api.github.com/users/vrunm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-27T18:19:34
2024-02-29T17:32:42
2024-02-29T17:32:41
NONE
null
### Describe the bug Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1 Unable to load the dataset in a kaggle notebook. Get this Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[8], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("llm-blender/mix-instruct") File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1661 ignore_verifications = ignore_verifications or save_infos 1663 # Create a dataset builder -> 1664 builder_instance = load_dataset_builder( 1665 path=path, 1666 name=name, 1667 data_dir=data_dir, 1668 data_files=data_files, 1669 cache_dir=cache_dir, 1670 features=features, 1671 download_config=download_config, 1672 download_mode=download_mode, 1673 revision=revision, 1674 use_auth_token=use_auth_token, 1675 **config_kwargs, 1676 ) 1678 # Return iterable dataset in case of streaming 1679 if streaming: File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1488 download_config = download_config.copy() if download_config else DownloadConfig() 1489 download_config.use_auth_token = use_auth_token -> 1490 dataset_module = dataset_module_factory( 1491 path, 1492 revision=revision, 1493 download_config=download_config, 1494 download_mode=download_mode, 1495 data_dir=data_dir, 1496 data_files=data_files, 1497 ) 1499 # Get dataset builder class from the processing script 1500 builder_cls = import_main_class(dataset_module.module_path) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1237 if isinstance(e1, FileNotFoundError): 1238 raise FileNotFoundError( 1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1241 ) from None -> 1242 raise e1 from None 1243 else: 1244 raise FileNotFoundError( 1245 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory." 1246 ) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1230, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1215 return HubDatasetModuleFactoryWithScript( 1216 path, 1217 revision=revision, (...) 1220 dynamic_modules_path=dynamic_modules_path, 1221 ).get_module() 1222 else: 1223 return HubDatasetModuleFactoryWithoutScript( 1224 path, 1225 revision=revision, 1226 data_dir=data_dir, 1227 data_files=data_files, 1228 download_config=download_config, 1229 download_mode=download_mode, -> 1230 ).get_module() 1231 except Exception as e1: # noqa: all the attempts failed, before raising the error we should check if the module is already cached. 
1232 try: File /opt/conda/lib/python3.10/site-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self) 836 token = self.download_config.use_auth_token 837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info( 838 self.name, 839 revision=self.revision, 840 token=token, 841 timeout=100.0, 842 ) 843 patterns = ( 844 sanitize_patterns(self.data_files) 845 if self.data_files is not None --> 846 else get_patterns_in_dataset_repository(hfh_dataset_info) 847 ) 848 data_files = DataFilesDict.from_hf_repo( 849 patterns, 850 dataset_info=hfh_dataset_info, 851 allowed_extensions=ALL_ALLOWED_EXTENSIONS, 852 ) 853 infered_module_names = { 854 key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token) 855 for key, data_files_list in data_files.items() 856 } File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info) 469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info) 470 try: --> 471 return _get_data_files_patterns(resolver) 472 except FileNotFoundError: 473 raise FileNotFoundError( 474 f"The dataset repository at '{dataset_info.id}' doesn't contain any data file." 475 ) from None File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver) 97 try: 98 for pattern in patterns: ---> 99 data_files = pattern_resolver(pattern) 100 if len(data_files) > 0: 101 non_empty_splits.append(split) File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions) 301 data_files_ignore = FILES_TO_IGNORE 302 fs = HfFileSystem(repo_info=dataset_info) --> 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)] 304 matched_paths = [ 305 filepath 306 for filepath in glob_iter 307 if filepath.name not in data_files_ignore and not filepath.name.startswith(".") 308 ] 309 if allowed_extensions is not None: File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs) 602 depth = None 604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) --> 606 pattern = glob_translate(path + ("/" if ends_with_sep else "")) 607 pattern = re.compile(pattern) 609 out = { 610 p: info 611 for p, info in sorted(allpaths.items()) (...) 618 ) 619 } File /opt/conda/lib/python3.10/site-packages/fsspec/utils.py:734, in glob_translate(pat) 732 continue 733 elif "**" in part: --> 734 raise ValueError( 735 "Invalid pattern: '**' can only be an entire path component" 736 ) 737 if part: 738 results.extend(_translate(part, f"{not_sep}*", not_sep)) ValueError: Invalid pattern: '**' can only be an entire path component ``` ``` After loading this dataset ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("llm-blender/mix-instruct") ``` ### Expected behavior The dataset should load with desired split. ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-5.15.133+-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6697/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6696/comments
https://api.github.com/repos/huggingface/datasets/issues/6696/events
https://github.com/huggingface/datasets/pull/6696
2,154,161,357
PR_kwDODunzps5n6ipH
6,696
Make JSON builder support an array of strings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-26T13:18:31
2024-02-28T06:45:23
2024-02-28T06:39:12
MEMBER
null
Support JSON file with an array of strings. Fix #6695.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6696/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6696", "html_url": "https://github.com/huggingface/datasets/pull/6696", "diff_url": "https://github.com/huggingface/datasets/pull/6696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6696.patch", "merged_at": "2024-02-28T06:39:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/6695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6695/comments
https://api.github.com/repos/huggingface/datasets/issues/6695/events
https://github.com/huggingface/datasets/issues/6695
2,154,075,509
I_kwDODunzps6AZJV1
6,695
Support JSON file with an array of strings
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2024-02-26T12:35:11
2024-03-08T14:16:25
2024-02-28T06:39:13
MEMBER
null
Support loading a dataset from a JSON file with an array of strings. See: https://huggingface.co./datasets/CausalLM/Refined-Anime-Text/discussions/1
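A small sketch of the file shape in question and how it is expected to load once supported; the file name is a placeholder and the resulting column name is an assumption.

```python
# refined_text.json (placeholder) contains a bare JSON array of strings:
# ["first passage", "second passage"]
from datasets import load_dataset

ds = load_dataset("json", data_files="refined_text.json", split="train")
print(ds[0])  # expected to look like {"text": "first passage"}
```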
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6695/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6694/comments
https://api.github.com/repos/huggingface/datasets/issues/6694/events
https://github.com/huggingface/datasets/pull/6694
2,153,086,984
PR_kwDODunzps5n23Jz
6,694
__add__ for Dataset, IterableDataset
{ "login": "oh-gnues-iohc", "id": 79557937, "node_id": "MDQ6VXNlcjc5NTU3OTM3", "avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oh-gnues-iohc", "html_url": "https://github.com/oh-gnues-iohc", "followers_url": "https://api.github.com/users/oh-gnues-iohc/followers", "following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}", "gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}", "starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions", "organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs", "repos_url": "https://api.github.com/users/oh-gnues-iohc/repos", "events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}", "received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-26T01:46:55
2024-02-29T16:52:58
null
NONE
null
It's too cumbersome to write this import every time we perform a dataset merging operation: ```python from datasets import concatenate_datasets ``` We have added a simple `__add__` magic method to each class using `concatenate_datasets`. ```python from datasets import load_dataset bookcorpus = load_dataset("bookcorpus", split="train") wiki = load_dataset("wikimedia/wikipedia", "20231101.ab", split="train") wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"]) # only keep the 'text' column bookcorpus + wiki #Dataset({ # features: ['text'], # num_rows: 74004228 #}) #Dataset({ # features: ['text'], # num_rows: 6152 #}) #Dataset({ # features: ['text'], # num_rows: 74010380 #}) ```
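A rough sketch of how such an `__add__` could delegate to `concatenate_datasets`; this illustrates the idea only and is not the PR's actual diff.

```python
from datasets import Dataset, concatenate_datasets


def _dataset_add(self: Dataset, other: Dataset) -> Dataset:
    # Delegate to the existing public API rather than re-implementing merging
    return concatenate_datasets([self, other])


Dataset.__add__ = _dataset_add  # monkey-patch shown for illustration only
```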
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6694/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6694", "html_url": "https://github.com/huggingface/datasets/pull/6694", "diff_url": "https://github.com/huggingface/datasets/pull/6694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6694.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6693/comments
https://api.github.com/repos/huggingface/datasets/issues/6693/events
https://github.com/huggingface/datasets/pull/6693
2,152,887,712
PR_kwDODunzps5n2ObO
6,693
Update the print message for chunked_dataset in process.mdx
{ "login": "gzbfgjf2", "id": 142939562, "node_id": "U_kgDOCIUVqg", "avatar_url": "https://avatars.githubusercontent.com/u/142939562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gzbfgjf2", "html_url": "https://github.com/gzbfgjf2", "followers_url": "https://api.github.com/users/gzbfgjf2/followers", "following_url": "https://api.github.com/users/gzbfgjf2/following{/other_user}", "gists_url": "https://api.github.com/users/gzbfgjf2/gists{/gist_id}", "starred_url": "https://api.github.com/users/gzbfgjf2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gzbfgjf2/subscriptions", "organizations_url": "https://api.github.com/users/gzbfgjf2/orgs", "repos_url": "https://api.github.com/users/gzbfgjf2/repos", "events_url": "https://api.github.com/users/gzbfgjf2/events{/privacy}", "received_events_url": "https://api.github.com/users/gzbfgjf2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-25T18:37:07
2024-02-25T19:57:12
2024-02-25T19:51:02
CONTRIBUTOR
null
Update documentation to align with `Dataset.__repr__` change after #423
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6693/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6693", "html_url": "https://github.com/huggingface/datasets/pull/6693", "diff_url": "https://github.com/huggingface/datasets/pull/6693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6693.patch", "merged_at": "2024-02-25T19:51:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/6692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6692/comments
https://api.github.com/repos/huggingface/datasets/issues/6692/events
https://github.com/huggingface/datasets/pull/6692
2,152,270,987
PR_kwDODunzps5n0XN1
6,692
Enhancement: Enable loading TSV files in load_dataset()
{ "login": "harsh1504660", "id": 77767961, "node_id": "MDQ6VXNlcjc3NzY3OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harsh1504660", "html_url": "https://github.com/harsh1504660", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "repos_url": "https://api.github.com/users/harsh1504660/repos", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-24T11:38:59
2024-02-26T15:33:50
2024-02-26T07:14:03
NONE
null
Fix #6691
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6692/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6692", "html_url": "https://github.com/huggingface/datasets/pull/6692", "diff_url": "https://github.com/huggingface/datasets/pull/6692.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6692.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6691/comments
https://api.github.com/repos/huggingface/datasets/issues/6691/events
https://github.com/huggingface/datasets/issues/6691
2,152,134,041
I_kwDODunzps6ARvWZ
6,691
load_dataset() does not support tsv
{ "login": "dipsivenkatesh", "id": 26873178, "node_id": "MDQ6VXNlcjI2ODczMTc4", "avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dipsivenkatesh", "html_url": "https://github.com/dipsivenkatesh", "followers_url": "https://api.github.com/users/dipsivenkatesh/followers", "following_url": "https://api.github.com/users/dipsivenkatesh/following{/other_user}", "gists_url": "https://api.github.com/users/dipsivenkatesh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dipsivenkatesh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dipsivenkatesh/subscriptions", "organizations_url": "https://api.github.com/users/dipsivenkatesh/orgs", "repos_url": "https://api.github.com/users/dipsivenkatesh/repos", "events_url": "https://api.github.com/users/dipsivenkatesh/events{/privacy}", "received_events_url": "https://api.github.com/users/dipsivenkatesh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "harsh1504660", "id": 77767961, "node_id": "MDQ6VXNlcjc3NzY3OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harsh1504660", "html_url": "https://github.com/harsh1504660", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "repos_url": "https://api.github.com/users/harsh1504660/repos", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "type": "User", "site_admin": false }
[ { "login": "harsh1504660", "id": 77767961, "node_id": "MDQ6VXNlcjc3NzY3OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harsh1504660", "html_url": "https://github.com/harsh1504660", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "repos_url": "https://api.github.com/users/harsh1504660/repos", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "type": "User", "site_admin": false } ]
null
null
2024-02-24T05:56:04
2024-02-26T07:15:07
2024-02-26T07:09:35
NONE
null
### Feature request The `load_dataset()` function for local files supports file types like csv, json, etc., but not tsv (tab-separated values). ### Motivation Can't easily load files of type tsv; they have to be converted to another type like csv and then loaded. ### Your contribution Can try by raising a PR with a little help; went through the code but didn't fully understand it yet.
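For context, a minimal sketch of a workaround that already works today: the csv builder accepts a tab delimiter, so a TSV file can be loaded without conversion. The file name is a placeholder.

```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.tsv", delimiter="\t", split="train")
print(ds.column_names)
```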
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6691/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6690/comments
https://api.github.com/repos/huggingface/datasets/issues/6690/events
https://github.com/huggingface/datasets/issues/6690
2,150,800,065
I_kwDODunzps6AMprB
6,690
Add function to convert a script-dataset to Parquet
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2024-02-23T10:28:20
2024-04-12T15:27:05
2024-04-12T15:27:05
MEMBER
null
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6690/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6689/comments
https://api.github.com/repos/huggingface/datasets/issues/6689/events
https://github.com/huggingface/datasets/issues/6689
2,149,581,147
I_kwDODunzps6AIAFb
6,689
.load_dataset() method defaults to zstandard
{ "login": "ElleLeonne", "id": 87243032, "node_id": "MDQ6VXNlcjg3MjQzMDMy", "avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ElleLeonne", "html_url": "https://github.com/ElleLeonne", "followers_url": "https://api.github.com/users/ElleLeonne/followers", "following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}", "gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}", "starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions", "organizations_url": "https://api.github.com/users/ElleLeonne/orgs", "repos_url": "https://api.github.com/users/ElleLeonne/repos", "events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}", "received_events_url": "https://api.github.com/users/ElleLeonne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-22T17:39:27
2024-03-07T14:54:16
2024-03-07T14:54:15
NONE
null
### Describe the bug Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets. This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it happens on datasets that are uploaded in json format too, meaning the dataset loader will attempt to convert the data to a zstandard compatible format, and THEN try to unpackage it. My 4tb drive runs out of room when using zstandard on slimpajama. It loads fine on 1.5tb when using json, however I lack the understanding of the "magic numbers" system used to select the unpackaging algorithm, so I can't push a change myself. Commenting out this line, in "/datasets/utils/extract.py" fixes the issue, and causes SlimPajama to properly extract using rational amounts of storage, however it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue. ``` class Extractor: # Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip) extractors: Dict[str, Type[BaseExtractor]] = { "tar": TarExtractor, "gzip": GzipExtractor, "zip": ZipExtractor, "xz": XzExtractor, #"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages "rar": RarExtractor, "bz2": Bzip2Extractor, "7z": SevenZipExtractor, # <Added version="2.4.0"/> "lz4": Lz4Extractor, # <Added version="2.4.0"/> } ``` ### Steps to reproduce the bug ''' from datasaets import load_dataset load_dataset(path="/cerebras/SlimPajama-627B") ''' This alone should trigger the error on any system that does not have zstandard pip installed. ### Expected behavior This repository (which is encoded in json format, not zstandard) should check whether zstandard is installed before defaulting to it. Additionally, using zstandard should not use more than 3x the required space that other extraction mechanisms use. ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35 - Python version: 3.12.0 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6689/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6688/comments
https://api.github.com/repos/huggingface/datasets/issues/6688/events
https://github.com/huggingface/datasets/issues/6688
2,148,609,859
I_kwDODunzps6AES9D
6,688
Tensor type (e.g. from `return_tensors`) ignored in map
{ "login": "srossi93", "id": 11166137, "node_id": "MDQ6VXNlcjExMTY2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srossi93", "html_url": "https://github.com/srossi93", "followers_url": "https://api.github.com/users/srossi93/followers", "following_url": "https://api.github.com/users/srossi93/following{/other_user}", "gists_url": "https://api.github.com/users/srossi93/gists{/gist_id}", "starred_url": "https://api.github.com/users/srossi93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srossi93/subscriptions", "organizations_url": "https://api.github.com/users/srossi93/orgs", "repos_url": "https://api.github.com/users/srossi93/repos", "events_url": "https://api.github.com/users/srossi93/events{/privacy}", "received_events_url": "https://api.github.com/users/srossi93/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-22T09:27:57
2024-02-22T15:56:21
null
NONE
null
### Describe the bug I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping over to tokenize text with a transformers' tokenizer always returns lists and it ignore the `return_tensors` argument. If this is an expected behaviour (e.g., for caching/Arrow compatibility/etc.) it should be clearly documented. For example, current documentation (see [here](https://huggingface.co./docs/datasets/v2.17.1/en/nlp_process#map)) clearly state to "set `return_tensors="np"` when you tokenize your text" to have Numpy arrays. ### Steps to reproduce the bug ```py # %%% import datasets import numpy as np import tensorflow as tf import torch from transformers import AutoTokenizer # %% ds = datasets.load_dataset("cnn_dailymail", "1.0.0", split="train[:1%]") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") #%% for return_tensors in [None, "np", "pt", "tf", "jax"]: print(f"********** no map, return_tensors={return_tensors} **********") _ds = tokenizer(ds["article"], return_tensors=return_tensors, truncation=True, padding=True) print('Type <input_ids>:', type(_ds["input_ids"])) # %% for return_tensors in [None, "np", "pt", "tf", "jax"]: print(f"********** map, return_tensors={return_tensors} **********") _ds = ds.map( lambda examples: tokenizer(examples["article"], return_tensors=return_tensors, truncation=True, padding=True), batched=True, remove_columns=["article"], ) print('Type <input_ids>:', type(_ds[0]["input_ids"])) ``` ### Expected behavior The output from the script above. I would expect the second half to be the same. ``` ********** no map, return_tensors=None ********** Type <input_ids>: <class 'list'> ********** no map, return_tensors=np ********** Type <input_ids>: <class 'numpy.ndarray'> ********** no map, return_tensors=pt ********** Type <input_ids>: <class 'torch.Tensor'> ********** no map, return_tensors=tf ********** Type <input_ids>: <class 'tensorflow.python.framework.ops.EagerTensor'> ********** no map, return_tensors=jax ********** Type <input_ids>: <class 'jaxlib.xla_extension.ArrayImpl'> ********** map, return_tensors=None ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=np ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=pt ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=tf ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=jax ********** Type <input_ids>: <class 'list'> ``` ### Environment info - `datasets` version: 2.17.1 - Platform: Redacted (linux) - Python version: 3.10.12 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6688/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6687/comments
https://api.github.com/repos/huggingface/datasets/issues/6687/events
https://github.com/huggingface/datasets/pull/6687
2,148,554,178
PR_kwDODunzps5nnqBB
6,687
fsspec: support fsspec>=2023.12.0 glob changes
{ "login": "pmrowla", "id": 651988, "node_id": "MDQ6VXNlcjY1MTk4OA==", "avatar_url": "https://avatars.githubusercontent.com/u/651988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pmrowla", "html_url": "https://github.com/pmrowla", "followers_url": "https://api.github.com/users/pmrowla/followers", "following_url": "https://api.github.com/users/pmrowla/following{/other_user}", "gists_url": "https://api.github.com/users/pmrowla/gists{/gist_id}", "starred_url": "https://api.github.com/users/pmrowla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pmrowla/subscriptions", "organizations_url": "https://api.github.com/users/pmrowla/orgs", "repos_url": "https://api.github.com/users/pmrowla/repos", "events_url": "https://api.github.com/users/pmrowla/events{/privacy}", "received_events_url": "https://api.github.com/users/pmrowla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-22T08:59:32
2024-03-04T12:59:42
2024-02-29T15:12:17
CONTRIBUTOR
null
- adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound Should close #6644 Should close #6645 The `test_data_files` glob/pattern tests pass for me in: - `fsspec==2023.10.0` (the pinned max version in datasets `main`) - `fsspec==2023.12.0` (#6644) - `fsspec==2024.2.0` (#6645)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6687/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6687/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6687", "html_url": "https://github.com/huggingface/datasets/pull/6687", "diff_url": "https://github.com/huggingface/datasets/pull/6687.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6687.patch", "merged_at": "2024-02-29T15:12:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6686/comments
https://api.github.com/repos/huggingface/datasets/issues/6686/events
https://github.com/huggingface/datasets/issues/6686
2,147,795,103
I_kwDODunzps6ABMCf
6,686
Question: Is there any way for uploading a large image dataset?
{ "login": "zhjohnchan", "id": 37367987, "node_id": "MDQ6VXNlcjM3MzY3OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhjohnchan", "html_url": "https://github.com/zhjohnchan", "followers_url": "https://api.github.com/users/zhjohnchan/followers", "following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}", "gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions", "organizations_url": "https://api.github.com/users/zhjohnchan/orgs", "repos_url": "https://api.github.com/users/zhjohnchan/repos", "events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}", "received_events_url": "https://api.github.com/users/zhjohnchan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-21T22:07:21
2024-05-02T03:44:59
null
NONE
null
I am uploading an image dataset like this: ``` dataset = load_dataset( "json", data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"}, ) dataset = dataset.cast_column("images", Sequence(Image())) dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB") ``` where it takes a long time in the `Map` process. Do you think I can use multi-processing to map all the image data to the memory first? For the `Map()` function, I can set `num_proc`. But for `push_to_hub` and `cast_column`, I can not find it. Thanks in advance! Best,
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6686/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6685/comments
https://api.github.com/repos/huggingface/datasets/issues/6685/events
https://github.com/huggingface/datasets/pull/6685
2,145,570,006
PR_kwDODunzps5ndZQa
6,685
Updated Quickstart Notebook link
{ "login": "Codeblockz", "id": 55932554, "node_id": "MDQ6VXNlcjU1OTMyNTU0", "avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Codeblockz", "html_url": "https://github.com/Codeblockz", "followers_url": "https://api.github.com/users/Codeblockz/followers", "following_url": "https://api.github.com/users/Codeblockz/following{/other_user}", "gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions", "organizations_url": "https://api.github.com/users/Codeblockz/orgs", "repos_url": "https://api.github.com/users/Codeblockz/repos", "events_url": "https://api.github.com/users/Codeblockz/events{/privacy}", "received_events_url": "https://api.github.com/users/Codeblockz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-21T01:04:18
2024-03-12T21:31:04
2024-02-25T18:48:08
CONTRIBUTOR
null
Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6685/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6685", "html_url": "https://github.com/huggingface/datasets/pull/6685", "diff_url": "https://github.com/huggingface/datasets/pull/6685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6685.patch", "merged_at": "2024-02-25T18:48:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/6684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6684/comments
https://api.github.com/repos/huggingface/datasets/issues/6684/events
https://github.com/huggingface/datasets/pull/6684
2,144,092,388
PR_kwDODunzps5nYUIf
6,684
Improve error message for gated datasets on load
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-20T10:51:27
2024-02-20T15:40:52
2024-02-20T15:33:56
MEMBER
null
Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6684/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6684", "html_url": "https://github.com/huggingface/datasets/pull/6684", "diff_url": "https://github.com/huggingface/datasets/pull/6684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6684.patch", "merged_at": "2024-02-20T15:33:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/6683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6683/comments
https://api.github.com/repos/huggingface/datasets/issues/6683/events
https://github.com/huggingface/datasets/pull/6683
2,142,751,955
PR_kwDODunzps5nTxGu
6,683
Fix imagefolder dataset url
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-19T16:26:51
2024-02-19T17:24:25
2024-02-19T17:18:10
COLLABORATOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6683/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6683", "html_url": "https://github.com/huggingface/datasets/pull/6683", "diff_url": "https://github.com/huggingface/datasets/pull/6683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6683.patch", "merged_at": "2024-02-19T17:18:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/6682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6682/comments
https://api.github.com/repos/huggingface/datasets/issues/6682/events
https://github.com/huggingface/datasets/pull/6682
2,142,000,800
PR_kwDODunzps5nRME6
6,682
Update GitHub Actions to Node 20
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-19T10:10:50
2024-02-28T07:02:40
2024-02-28T06:56:34
MEMBER
null
Update GitHub Actions to Node 20. Fix #6679.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6682/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6682", "html_url": "https://github.com/huggingface/datasets/pull/6682", "diff_url": "https://github.com/huggingface/datasets/pull/6682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6682.patch", "merged_at": "2024-02-28T06:56:34" }
true
https://api.github.com/repos/huggingface/datasets/issues/6681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6681/comments
https://api.github.com/repos/huggingface/datasets/issues/6681/events
https://github.com/huggingface/datasets/pull/6681
2,141,985,239
PR_kwDODunzps5nRItQ
6,681
Update release instructions
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
null
[]
null
null
2024-02-19T10:03:08
2024-02-28T07:23:49
2024-02-28T07:17:22
MEMBER
null
Update release instructions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6681/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6681", "html_url": "https://github.com/huggingface/datasets/pull/6681", "diff_url": "https://github.com/huggingface/datasets/pull/6681.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6681.patch", "merged_at": "2024-02-28T07:17:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6680/comments
https://api.github.com/repos/huggingface/datasets/issues/6680/events
https://github.com/huggingface/datasets/pull/6680
2,141,979,527
PR_kwDODunzps5nRHcz
6,680
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-19T10:00:31
2024-02-19T10:06:43
2024-02-19T10:00:40
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6680/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6680", "html_url": "https://github.com/huggingface/datasets/pull/6680", "diff_url": "https://github.com/huggingface/datasets/pull/6680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6680.patch", "merged_at": "2024-02-19T10:00:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/6679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6679/comments
https://api.github.com/repos/huggingface/datasets/issues/6679/events
https://github.com/huggingface/datasets/issues/6679
2,141,953,981
I_kwDODunzps5_q5-9
6,679
Node.js 16 GitHub Actions are deprecated
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2024-02-19T09:47:37
2024-02-28T06:56:35
2024-02-28T06:56:35
MEMBER
null
`Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ We should update them to Node 20. See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678 > Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6679/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6678/comments
https://api.github.com/repos/huggingface/datasets/issues/6678/events
https://github.com/huggingface/datasets/pull/6678
2,141,902,154
PR_kwDODunzps5nQ2ZO
6,678
Release: 2.17.1
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-19T09:24:29
2024-02-19T10:03:00
2024-02-19T09:56:52
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6678/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6678", "html_url": "https://github.com/huggingface/datasets/pull/6678", "diff_url": "https://github.com/huggingface/datasets/pull/6678.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6678.patch", "merged_at": "2024-02-19T09:56:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/6677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6677/comments
https://api.github.com/repos/huggingface/datasets/issues/6677/events
https://github.com/huggingface/datasets/pull/6677
2,141,244,167
PR_kwDODunzps5nOmo_
6,677
Pass through information about location of cache directory.
{ "login": "stridge-cruxml", "id": 94808782, "node_id": "U_kgDOBaaqzg", "avatar_url": "https://avatars.githubusercontent.com/u/94808782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stridge-cruxml", "html_url": "https://github.com/stridge-cruxml", "followers_url": "https://api.github.com/users/stridge-cruxml/followers", "following_url": "https://api.github.com/users/stridge-cruxml/following{/other_user}", "gists_url": "https://api.github.com/users/stridge-cruxml/gists{/gist_id}", "starred_url": "https://api.github.com/users/stridge-cruxml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stridge-cruxml/subscriptions", "organizations_url": "https://api.github.com/users/stridge-cruxml/orgs", "repos_url": "https://api.github.com/users/stridge-cruxml/repos", "events_url": "https://api.github.com/users/stridge-cruxml/events{/privacy}", "received_events_url": "https://api.github.com/users/stridge-cruxml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-18T23:48:57
2024-02-28T18:57:39
2024-02-28T18:51:15
CONTRIBUTOR
null
If the cache directory is set, that information is not passed through. Pass the download config in as an arg too.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6677/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6677", "html_url": "https://github.com/huggingface/datasets/pull/6677", "diff_url": "https://github.com/huggingface/datasets/pull/6677.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6677.patch", "merged_at": "2024-02-28T18:51:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/6676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6676/comments
https://api.github.com/repos/huggingface/datasets/issues/6676/events
https://github.com/huggingface/datasets/issues/6676
2,140,648,619
I_kwDODunzps5_l7Sr
6,676
Can't Read List of JSON Files Properly
{ "login": "lordsoffallen", "id": 20232088, "node_id": "MDQ6VXNlcjIwMjMyMDg4", "avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lordsoffallen", "html_url": "https://github.com/lordsoffallen", "followers_url": "https://api.github.com/users/lordsoffallen/followers", "following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}", "gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}", "starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions", "organizations_url": "https://api.github.com/users/lordsoffallen/orgs", "repos_url": "https://api.github.com/users/lordsoffallen/repos", "events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}", "received_events_url": "https://api.github.com/users/lordsoffallen/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-17T22:58:15
2024-03-02T20:47:22
null
NONE
null
### Describe the bug Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging: The code fails with ``` ArrowInvalid: JSON parse error: Invalid value. in row 0 UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug This doesn't work ``` from datasets import Dataset # dir contains 100 json files. Dataset.from_json("/PUT SOME PATH HERE/*") ``` This works: ``` from datasets import concatenate_datasets ls_ds = [] for file in list_of_json_files: ls_ds.append(Dataset.from_json(file)) ds = concatenate_datasets(ls_ds) ``` ### Expected behavior I expect this to read json files properly as error is not clear ### Environment info - `datasets` version: 2.17.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6676/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6675/comments
https://api.github.com/repos/huggingface/datasets/issues/6675/events
https://github.com/huggingface/datasets/issues/6675
2,139,640,381
I_kwDODunzps5_iFI9
6,675
Allow image mode (color conversion) to be specified as part of datasets Image() decode
{ "login": "rwightman", "id": 5702664, "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rwightman", "html_url": "https://github.com/rwightman", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "organizations_url": "https://api.github.com/users/rwightman/orgs", "repos_url": "https://api.github.com/users/rwightman/repos", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "received_events_url": "https://api.github.com/users/rwightman/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2024-02-16T23:43:20
2024-03-18T15:41:34
2024-03-18T15:41:34
NONE
null
### Feature request Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html, and similarly in tensorflow.data pipelines decode_jpeg or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a channels arg that allows controlling the image mode in the decode step. datasets currently requires this pattern (from [examples](https://huggingface.co./docs/datasets/main/en/image_process)): ``` from torchvision.transforms import Compose, ColorJitter, ToTensor jitter = Compose( [ ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7), ToTensor(), ] ) def transforms(examples): examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]] return examples ``` ### Motivation It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms; this would reduce differences in code when handling pipelines that can handle torchvision, webdataset, or hf datasets with fewer code differences and without needing to handle image mode argument passing in two different stages of the pipelines... ### Your contribution Can do a PR with guidance on how mode should be passed / set on the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6675/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6674/comments
https://api.github.com/repos/huggingface/datasets/issues/6674/events
https://github.com/huggingface/datasets/issues/6674
2,139,595,576
I_kwDODunzps5_h6M4
6,674
Deprecated Overview.ipynb Link to new Quickstart Notebook invalid
{ "login": "Codeblockz", "id": 55932554, "node_id": "MDQ6VXNlcjU1OTMyNTU0", "avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Codeblockz", "html_url": "https://github.com/Codeblockz", "followers_url": "https://api.github.com/users/Codeblockz/followers", "following_url": "https://api.github.com/users/Codeblockz/following{/other_user}", "gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions", "organizations_url": "https://api.github.com/users/Codeblockz/orgs", "repos_url": "https://api.github.com/users/Codeblockz/repos", "events_url": "https://api.github.com/users/Codeblockz/events{/privacy}", "received_events_url": "https://api.github.com/users/Codeblockz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-16T22:51:35
2024-02-25T18:48:09
2024-02-25T18:48:09
CONTRIBUTOR
null
### Describe the bug In the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken. ### Steps to reproduce the bug Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb) link in the notebook. ### Expected behavior I believe it is supposed to link [here](https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb). That is mentioned in the readme. ### Environment info Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6674/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6673/comments
https://api.github.com/repos/huggingface/datasets/issues/6673/events
https://github.com/huggingface/datasets/issues/6673
2,139,522,827
I_kwDODunzps5_hocL
6,673
IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True`
{ "login": "rwightman", "id": 5702664, "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rwightman", "html_url": "https://github.com/rwightman", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "organizations_url": "https://api.github.com/users/rwightman/orgs", "repos_url": "https://api.github.com/users/rwightman/repos", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "received_events_url": "https://api.github.com/users/rwightman/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
null
[]
null
null
2024-02-16T21:38:12
2024-02-22T13:17:14
null
NONE
null
### Describe the bug When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes. PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does not. In my own use of IterableDatasets I usually track the epoch count which crosses process boundaries in a multiprocessing.Value ### Steps to reproduce the bug Use a streaming dataset (Iterable) w/ the recommended pattern below and `persistent_workers=True` in the torch DataLoader. ``` for epoch in range(epochs): shuffled_dataset.set_epoch(epoch) for example in shuffled_dataset: ... ``` ### Expected behavior When the canonical bit of code above is used with `num_workers > 0` and `persistent_workers=True`, the epoch set via `set_epoch()` is propagated to the IterableDataset instances in the worker processes ### Environment info N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6673/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6672/comments
https://api.github.com/repos/huggingface/datasets/issues/6672/events
https://github.com/huggingface/datasets/pull/6672
2,138,732,288
PR_kwDODunzps5nGAlw
6,672
Remove deprecated verbose parameter from CSV builder
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-16T14:26:21
2024-02-19T09:26:34
2024-02-19T09:20:22
MEMBER
null
Remove deprecated `verbose` parameter from CSV builder. Note that the `verbose` parameter is deprecated since pandas 2.2.0. See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450 Fix #6671.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6672/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6672", "html_url": "https://github.com/huggingface/datasets/pull/6672", "diff_url": "https://github.com/huggingface/datasets/pull/6672.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6672.patch", "merged_at": "2024-02-19T09:20:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6671/comments
https://api.github.com/repos/huggingface/datasets/issues/6671/events
https://github.com/huggingface/datasets/issues/6671
2,138,727,870
I_kwDODunzps5_emW-
6,671
CSV builder raises deprecation warning on verbose parameter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2024-02-16T14:23:46
2024-02-19T09:20:23
2024-02-19T09:20:23
MEMBER
null
CSV builder raises a deprecation warning on `verbose` parameter: ``` FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version. ``` See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6671/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6670/comments
https://api.github.com/repos/huggingface/datasets/issues/6670/events
https://github.com/huggingface/datasets/issues/6670
2,138,372,958
I_kwDODunzps5_dPte
6,670
ValueError
{ "login": "prashanth19bolukonda", "id": 112316000, "node_id": "U_kgDOBrHOYA", "avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prashanth19bolukonda", "html_url": "https://github.com/prashanth19bolukonda", "followers_url": "https://api.github.com/users/prashanth19bolukonda/followers", "following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}", "gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}", "starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions", "organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs", "repos_url": "https://api.github.com/users/prashanth19bolukonda/repos", "events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}", "received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-16T11:05:17
2024-02-17T04:26:34
2024-02-16T14:43:53
NONE
null
### Describe the bug ValueError Traceback (most recent call last) [<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>() 9 import numpy as np 10 import matplotlib.pyplot as plt ---> 11 from datasets import DatasetDict, Dataset 12 from transformers import AutoTokenizer, AutoModelForSequenceClassification 13 from transformers import Trainer, TrainingArguments 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 16 __version__ = "2.17.0" 17 ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 65 66 from . import config ---> 67 from .arrow_reader import ArrowReader 68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 69 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 27 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 31 [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module> 18 # flake8: noqa 19 ---> 20 from .core import * [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module> 34 import pyarrow as pa 35 import pyarrow.lib as lib ---> 36 import pyarrow._parquet as _parquet 37 38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa /usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet() ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Expected behavior Resolve the binary incompatibility ### Environment info Google Colab Note book
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6670/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6669/comments
https://api.github.com/repos/huggingface/datasets/issues/6669/events
https://github.com/huggingface/datasets/issues/6669
2,138,322,662
I_kwDODunzps5_dDbm
6,669
attribute error when writing trainer.train()
{ "login": "prashanth19bolukonda", "id": 112316000, "node_id": "U_kgDOBrHOYA", "avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prashanth19bolukonda", "html_url": "https://github.com/prashanth19bolukonda", "followers_url": "https://api.github.com/users/prashanth19bolukonda/followers", "following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}", "gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}", "starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions", "organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs", "repos_url": "https://api.github.com/users/prashanth19bolukonda/repos", "events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}", "received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-16T10:40:49
2024-03-01T10:58:00
2024-02-29T17:25:17
NONE
null
### Describe the bug AttributeError Traceback (most recent call last) Cell In[39], line 2 1 # Start the training process ----> 2 trainer.train() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1537 hf_hub_utils.enable_progress_bars() 1538 else: -> 1539 return inner_training_loop( 1540 args=args, 1541 resume_from_checkpoint=resume_from_checkpoint, 1542 trial=trial, 1543 ignore_keys_for_eval=ignore_keys_for_eval, 1544 ) File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1833 rng_to_sync = True 1835 step = -1 -> 1836 for step, inputs in enumerate(epoch_iterator): 1837 total_batched_samples += 1 1839 if self.args.include_num_input_tokens_seen: File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self) 449 # We iterate one batch ahead to check when we are at the end 450 try: --> 451 current_batch = next(dataloader_iter) 452 except StopIteration: 453 yield File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self) 627 if self._sampler_iter is None: 628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E) 629 self._reset() # type: ignore[call-arg] --> 630 data = self._next_data() 631 self._num_yielded += 1 632 if self._dataset_kind == _DatasetKind.Iterable and \ 633 self._IterableDataset_len_called is not None and \ 634 self._num_yielded > self._IterableDataset_len_called: File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self) 672 def _next_data(self): 673 index = self._next_index() # may raise StopIteration --> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 675 if self._pin_memory: 676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index) 49 data = self.dataset.__getitems__(possibly_batched_index) 50 else: ---> 51 data = [self.dataset[idx] for idx in possibly_batched_index] 52 else: 53 data = self.dataset[possibly_batched_index] File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0) 49 data = self.dataset.__getitems__(possibly_batched_index) 50 else: ---> 51 data = [self.dataset[idx] for idx in possibly_batched_index] 52 else: 53 data = self.dataset[possibly_batched_index] File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key) 1762 def __getitem__(self, key): # noqa: F811 1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 1764 return self._getitem( 1765 key, 1766 ) File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs) 1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 1749 formatted_output = format_table( 1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1751 ) 1752 
return formatted_output File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns) 538 else: 539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns) --> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type) 541 if output_all_columns: 542 if isinstance(formatted_output, MutableMapping): File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table) 56 def format_row(self, pa_table: pa.Table) -> dict: ---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table) 58 return self.recursive_tensorize(row) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table) 153 def extract_row(self, pa_table: pa.Table) -> dict: --> 154 return _unnest(self.extract_batch(pa_table)) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table) 159 def extract_batch(self, pa_table: pa.Table) -> dict: --> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0) 159 def extract_batch(self, pa_table: pa.Table) -> dict: --> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array) 194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist() 195 if len(array) > 0: --> 196 if any( 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) 198 or (isinstance(x, float) and np.isnan(x)) 199 for x in array 200 ): 201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) 202 return np.array(array, copy=False, **self.np_array_kwargs) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0) 194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist() 195 if len(array) > 0: 196 if any( --> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) 198 or (isinstance(x, float) and np.isnan(x)) 199 for x in array 200 ): 201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) 202 return np.array(array, copy=False, **self.np_array_kwargs) File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr) 319 warnings.warn( 320 f"In the future `np.{attr}` will be defined as the " 321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2) 323 if attr in __former_attrs__: --> 324 raise AttributeError(__former_attrs__[attr]) 326 if attr == 'testing': 327 import numpy.testing as testing AttributeError: module 'numpy' has no attribute 'object'. `np.object` was a deprecated alias for the builtin `object`. 
To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations Please help me to resolve the above error ### Steps to reproduce the bug Please resolve the issue of deprecated function np.object to object in the numpy ### Expected behavior np.object should be written as object only ### Environment info kaggle notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6669/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6668/comments
https://api.github.com/repos/huggingface/datasets/issues/6668/events
https://github.com/huggingface/datasets/issues/6668
2,137,859,935
I_kwDODunzps5_bSdf
6,668
Chapter 6 - Issue Loading `cnn_dailymail` dataset
{ "login": "hariravichandran", "id": 34660389, "node_id": "MDQ6VXNlcjM0NjYwMzg5", "avatar_url": "https://avatars.githubusercontent.com/u/34660389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hariravichandran", "html_url": "https://github.com/hariravichandran", "followers_url": "https://api.github.com/users/hariravichandran/followers", "following_url": "https://api.github.com/users/hariravichandran/following{/other_user}", "gists_url": "https://api.github.com/users/hariravichandran/gists{/gist_id}", "starred_url": "https://api.github.com/users/hariravichandran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hariravichandran/subscriptions", "organizations_url": "https://api.github.com/users/hariravichandran/orgs", "repos_url": "https://api.github.com/users/hariravichandran/repos", "events_url": "https://api.github.com/users/hariravichandran/events{/privacy}", "received_events_url": "https://api.github.com/users/hariravichandran/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-16T04:40:56
2024-02-16T04:40:56
null
NONE
null
### Describe the bug So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code: `dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")` Error Message: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[4], line 4 1 #hide_output 2 from datasets import load_dataset ----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0") 7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True) 8 print(f"Features: {dataset['train'].column_names}") File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2583 # Build dataset for splits 2584 keep_in_memory = ( 2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2586 ) -> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2588 # Rename and cast features to match task schema 2589 if task is not None: 2590 # To avoid issuing the same warning twice File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1243 # Create a dataset for each of the given splits -> 1244 datasets = map_nested( 1245 partial( 1246 self._build_single_dataset, 1247 run_post_process=run_post_process, 1248 verification_mode=verification_mode, 1249 in_memory=in_memory, 1250 ), 1251 split, 1252 map_tuple=True, 1253 disable_tqdm=True, 1254 ) 1255 if isinstance(datasets, dict): 1256 datasets = DatasetDict(datasets) File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc) 466 mapped = [ 467 map_nested( 468 function=function, (...) 474 for obj in iterable 475 ] 476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length: --> 477 mapped = [ 478 _single_map_nested((function, obj, types, None, True, None)) 479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 480 ] 481 else: 482 with warnings.catch_warnings(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0) 466 mapped = [ 467 map_nested( 468 function=function, (...) 
474 for obj in iterable 475 ] 476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length: 477 mapped = [ --> 478 _single_map_nested((function, obj, types, None, True, None)) 479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 480 ] 481 else: 482 with warnings.catch_warnings(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args) 368 # Singleton first to spare some computation 369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 370 return function(data_struct) 372 # Reduce logging to keep things readable in multiprocessing with tqdm 373 if rank is not None and logging.get_verbosity() < logging.WARNING: File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1271 split = Split(split) 1273 # Build base dataset -> 1274 ds = self._as_dataset( 1275 split=split, 1276 in_memory=in_memory, 1277 ) 1278 if run_post_process: 1279 for resource_file_name in self._post_processing_resources(split).values(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory) 1346 if self._check_legacy_cache(): 1347 dataset_name = self.name -> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1349 name=dataset_name, 1350 instructions=split, 1351 split_infos=self.info.splits.values(), 1352 in_memory=in_memory, 1353 ) 1354 fingerprint = self._get_dataset_fingerprint(split) 1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory) 252 if not files: 253 msg = f'Instruction "{instructions}" corresponds to no data!' --> 254 raise ValueError(msg) 255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) **ValueError: Instruction "validation" corresponds to no data!** ```` Looks like the data is not being loaded. Any advice would be appreciated. Thanks! ### Steps to reproduce the bug Run all cells of Chapter 6 notebook. ### Expected behavior Data should load correctly without any errors. ### Environment info - `datasets` version: 2.17.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.18 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
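A possible workaround sketch, added by the editor and not part of the original report; it assumes the failure comes from a stale or partial cache and passes "3.0.0" as the config name instead of the `version=` keyword:

```python
from datasets import load_dataset, DownloadMode

# Force a fresh download so a partially written cache is not reused, and pass
# the config name positionally ("3.0.0").
dataset = load_dataset(
    "ccdv/cnn_dailymail",
    "3.0.0",
    download_mode=DownloadMode.FORCE_REDOWNLOAD,
    trust_remote_code=True,
)
```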
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6668/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6667/comments
https://api.github.com/repos/huggingface/datasets/issues/6667/events
https://github.com/huggingface/datasets/issues/6667
2,137,769,552
I_kwDODunzps5_a8ZQ
6,667
Default config for squad is incorrect
{ "login": "kiddyboots216", "id": 22651617, "node_id": "MDQ6VXNlcjIyNjUxNjE3", "avatar_url": "https://avatars.githubusercontent.com/u/22651617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiddyboots216", "html_url": "https://github.com/kiddyboots216", "followers_url": "https://api.github.com/users/kiddyboots216/followers", "following_url": "https://api.github.com/users/kiddyboots216/following{/other_user}", "gists_url": "https://api.github.com/users/kiddyboots216/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiddyboots216/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiddyboots216/subscriptions", "organizations_url": "https://api.github.com/users/kiddyboots216/orgs", "repos_url": "https://api.github.com/users/kiddyboots216/repos", "events_url": "https://api.github.com/users/kiddyboots216/events{/privacy}", "received_events_url": "https://api.github.com/users/kiddyboots216/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
null
2024-02-16T02:36:55
2024-02-23T09:10:00
null
NONE
null
### Describe the bug If you download SQuAD, it downloads the plain_text version, but the config still specifies "default", so in offline mode the cache lookup uses the config_id "default" and fails with: ValueError: Couldn't find cache for squad for config 'default' Available configs in the cache: ['plain_text'] ### Steps to reproduce the bug 1. export HF_DATASETS_OFFLINE=0 2. load_dataset("squad") 3. export HF_DATASETS_OFFLINE=1 4. load_dataset("squad") ### Expected behavior The default config_name should presumably be changed to match the cached config. ### Environment info linux, latest version of datasets
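A possible workaround sketch (an editor addition, not from the original report), assuming the cached config is the one named `plain_text`:

```python
from datasets import load_dataset

# Request the config that is actually present in the cache so the offline
# lookup by config name succeeds.
ds = load_dataset("squad", "plain_text")
```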
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6667/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6665/comments
https://api.github.com/repos/huggingface/datasets/issues/6665/events
https://github.com/huggingface/datasets/pull/6665
2,136,136,425
PR_kwDODunzps5m9JgW
6,665
Allow SplitDict setitem to replace existing SplitInfo
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-15T10:17:08
2024-03-01T16:02:46
2024-03-01T15:56:38
MEMBER
null
Fix this code provided by @clefourrier ```python import datasets import os token = os.getenv("TOKEN") results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD) results["test"] = datasets.Dataset.from_list([row for row in results["test"] if row["model"] != "StateFlow"]) results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test") ``` ``` ValueError Traceback (most recent call last) Cell In[43], line 1 ----> 1 results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test") File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/arrow_dataset.py:5498, in Dataset.push_to_hub(self, repo_id, config_name, split, private, token, branch, max_shard_size, num_shards, embed_external_files) 5496 repo_info.dataset_size = (repo_info.dataset_size or 0) + dataset_nbytes 5497 repo_info.size_in_bytes = repo_info.download_size + repo_info.dataset_size -> 5498 repo_info.splits[split] = SplitInfo( 5499 split, num_bytes=dataset_nbytes, num_examples=len(self), dataset_name=dataset_name 5500 ) 5501 info_to_dump = repo_info 5502 # create the metadata configs if it was uploaded with push_to_hub before metadata configs existed File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/splits.py:541, in SplitDict.__setitem__(self, key, value) 539 raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')") 540 if key in self: --> 541 raise ValueError(f"Split {key} already present") 542 super().__setitem__(key, value) ValueError: Split test already present ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6665/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6665/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6665", "html_url": "https://github.com/huggingface/datasets/pull/6665", "diff_url": "https://github.com/huggingface/datasets/pull/6665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6665.patch", "merged_at": "2024-03-01T15:56:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/6664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6664/comments
https://api.github.com/repos/huggingface/datasets/issues/6664/events
https://github.com/huggingface/datasets/pull/6664
2,135,483,978
PR_kwDODunzps5m67g0
6,664
Revert the changes in `arrow_writer.py` from #6636
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-15T01:47:33
2024-02-16T14:02:39
2024-02-16T02:31:11
CONTRIBUTOR
null
#6636 broke `write_examples_on_file` and `write_batch` in the `ArrowWriter` class. I'm reverting these changes. See #6663. Note that the current implementation does not keep the column order aligned with the schema, so each column ends up being written with the wrong schema.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6664/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6664", "html_url": "https://github.com/huggingface/datasets/pull/6664", "diff_url": "https://github.com/huggingface/datasets/pull/6664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6664.patch", "merged_at": "2024-02-16T02:31:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/6663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6663/comments
https://api.github.com/repos/huggingface/datasets/issues/6663/events
https://github.com/huggingface/datasets/issues/6663
2,135,480,811
I_kwDODunzps5_SNnr
6,663
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter`
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-15T01:43:27
2024-02-16T09:25:00
2024-02-16T09:25:00
CONTRIBUTOR
null
### Describe the bug `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The column order is no longer kept in sync with the schema, so these functions only work when the input order happens to match it. ### Steps to reproduce the bug Call `write_batch` with anything that has many columns, and it is likely to break. ### Expected behavior I expect these functions to work instead of casting columns to incorrect types. ### Environment info - `datasets` version: 2.17.0 - Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.19.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
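A hypothetical reproduction sketch (not from the original report); the feature names and file path are made up for illustration:

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

# The batch keys are supplied in a different order than the declared features,
# which is the situation where the regression casts columns to the wrong types.
features = Features({"text": Value("string"), "label": Value("int64")})
writer = ArrowWriter(features=features, path="tmp.arrow")
writer.write_batch({"label": [0, 1], "text": ["a", "b"]})
writer.finalize()
```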
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6663/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6662/comments
https://api.github.com/repos/huggingface/datasets/issues/6662/events
https://github.com/huggingface/datasets/pull/6662
2,132,425,812
PR_kwDODunzps5mwgKP
6,662
fix: show correct package name to install biopython
{ "login": "BioGeek", "id": 59344, "node_id": "MDQ6VXNlcjU5MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BioGeek", "html_url": "https://github.com/BioGeek", "followers_url": "https://api.github.com/users/BioGeek/followers", "following_url": "https://api.github.com/users/BioGeek/following{/other_user}", "gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}", "starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions", "organizations_url": "https://api.github.com/users/BioGeek/orgs", "repos_url": "https://api.github.com/users/BioGeek/repos", "events_url": "https://api.github.com/users/BioGeek/events{/privacy}", "received_events_url": "https://api.github.com/users/BioGeek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-13T14:15:04
2024-03-01T17:49:48
2024-03-01T17:43:39
CONTRIBUTOR
null
When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error: ``` >>> from datasets import load_dataset >>> dataset = load_dataset("InstaDeepAI/multi_species_genomes") /home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py:1454: FutureWarning: The repository for InstaDeepAI/multi_species_genomes contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/InstaDeepAI/multi_species_genomes You can avoid this message in future by passing the argument `trust_remote_code=True`. Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`. warnings.warn( Downloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.51k/7.51k [00:00<00:00, 7.67MB/s] Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 17.2k/17.2k [00:00<00:00, 11.0MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2548, in load_dataset builder_instance = load_dataset_builder( File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2220, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1871, in dataset_module_factory raise e1 from None File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1844, in dataset_module_factory ).get_module() File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1466, in get_module local_imports = _download_additional_modules( File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 346, in _download_additional_modules raise ImportError( ImportError: To be able to use InstaDeepAI/multi_species_genomes, you need to install the following dependency: Bio. Please install it using 'pip install Bio' for instance. >>> ``` `Bio` comes from the `biopython` package that can be installed with `pip install biopython`, not with `pip install Bio` as suggested. This PR adds special logic to show the correct package name in the error message of ` _download_additional_modules`, similarly as is done for `sklearn` / `scikit-learn` already. 
There are more packages whose importable module name differs from their PyPI package name, so this could be made more generic, for example: ``` # Mapping of importable module names to their PyPI package names package_map = { "sklearn": "scikit-learn", "Bio": "biopython", "PIL": "Pillow", "bs4": "beautifulsoup4" } for module_name, pypi_name in package_map.items(): if module_name in needs_to_be_installed: needs_to_be_installed[module_name] = pypi_name ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6662/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6662", "html_url": "https://github.com/huggingface/datasets/pull/6662", "diff_url": "https://github.com/huggingface/datasets/pull/6662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6662.patch", "merged_at": "2024-03-01T17:43:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/6661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6661/comments
https://api.github.com/repos/huggingface/datasets/issues/6661/events
https://github.com/huggingface/datasets/issues/6661
2,132,296,267
I_kwDODunzps5_GEJL
6,661
Import error on Google Colab
{ "login": "kithogue", "id": 16103566, "node_id": "MDQ6VXNlcjE2MTAzNTY2", "avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kithogue", "html_url": "https://github.com/kithogue", "followers_url": "https://api.github.com/users/kithogue/followers", "following_url": "https://api.github.com/users/kithogue/following{/other_user}", "gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}", "starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kithogue/subscriptions", "organizations_url": "https://api.github.com/users/kithogue/orgs", "repos_url": "https://api.github.com/users/kithogue/repos", "events_url": "https://api.github.com/users/kithogue/events{/privacy}", "received_events_url": "https://api.github.com/users/kithogue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-13T13:12:40
2024-02-25T16:37:54
2024-02-14T08:04:47
NONE
null
### Describe the bug The library cannot be imported on Google Colab; the import throws the following error: ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug 1. `! pip install -U datasets` 2. `import datasets` ### Expected behavior It should be possible to import and use the library ### Environment info - `datasets` version: 2.17.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6661/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6660/comments
https://api.github.com/repos/huggingface/datasets/issues/6660/events
https://github.com/huggingface/datasets/pull/6660
2,131,977,011
PR_kwDODunzps5mu9wU
6,660
Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes
{ "login": "mohalisad", "id": 23399590, "node_id": "MDQ6VXNlcjIzMzk5NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/23399590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mohalisad", "html_url": "https://github.com/mohalisad", "followers_url": "https://api.github.com/users/mohalisad/followers", "following_url": "https://api.github.com/users/mohalisad/following{/other_user}", "gists_url": "https://api.github.com/users/mohalisad/gists{/gist_id}", "starred_url": "https://api.github.com/users/mohalisad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mohalisad/subscriptions", "organizations_url": "https://api.github.com/users/mohalisad/orgs", "repos_url": "https://api.github.com/users/mohalisad/repos", "events_url": "https://api.github.com/users/mohalisad/events{/privacy}", "received_events_url": "https://api.github.com/users/mohalisad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2024-02-13T10:24:33
2024-03-01T19:01:57
2024-03-01T18:52:37
CONTRIBUTOR
null
This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example: ```python from datasets import Dataset, Sequence, Value, Features def gen(): for i in range(100): yield {'seq': list(range(i, i + 20))} ds = Dataset.from_generator(gen, features=Features({'seq': Sequence(feature=Value(dtype='uint16'), length=-1)})) ds.set_format('torch') print(ds[0]) ``` This code snippet triggers the following error due to the inability to convert numpy.uint16 arrays to a PyTorch-supported format: ``` TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool. ``` This PR introduces an automatic mechanism to convert np.uint16 and np.uint32 datatypes to np.int64 for seamless compatibility with PyTorch formats, simplifying workflows and improving developer experience by eliminating the need for manual conversion handling.
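For comparison, a workaround that was needed before this change (an editor sketch, not part of the PR), continuing from the snippet above:

```python
from datasets import Sequence, Value

# Cast the column to a dtype PyTorch already supports before setting the
# torch format.
ds = ds.cast_column("seq", Sequence(feature=Value(dtype="int64"), length=-1))
ds.set_format("torch")
print(ds[0])
```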
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6660/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6660/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6660", "html_url": "https://github.com/huggingface/datasets/pull/6660", "diff_url": "https://github.com/huggingface/datasets/pull/6660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6660.patch", "merged_at": "2024-03-01T18:52:37" }
true