url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5152/comments | https://api.github.com/repos/huggingface/datasets/issues/5152/events | https://github.com/huggingface/datasets/issues/5152 | 1,420,808,919 | I_kwDODunzps5Ur9LX | 5,152 | refactor FolderBasedBuilder and Image/AudioFolder tests | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | open | false | null | [] | null | [] | 2022-10-24T13:11:52 | 2022-10-24T13:11:52 | null | CONTRIBUTOR | null | null | null | Tests for FolderBasedBuilder, ImageFolder and AudioFolder mostly duplicate each other. They need to be refactored, and Audio/ImageFolder should have only tests specific to each loader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5152/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5151/comments | https://api.github.com/repos/huggingface/datasets/issues/5151/events | https://github.com/huggingface/datasets/issues/5151 | 1,420,791,163 | I_kwDODunzps5Ur417 | 5,151 | Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?) | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] | 2022-10-24T12:59:18 | 2022-11-04T14:55:20 | null | CONTRIBUTOR | null | null | null | Now one can push only different splits within one default config of a dataset.
Would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow creating different configs for packaged-module datasets.
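For context, a minimal sketch of what already works today (pushing splits under the single default config) next to the proposed call; the repo id is hypothetical:

```python
from datasets import Dataset, DatasetDict

ds = DatasetDict({
    "train": Dataset.from_dict({"text": ["a", "b"]}),
    "validation": Dataset.from_dict({"text": ["c"]}),
})
ds.push_to_hub("user/my-dataset")  # works today: splits under one default config
# ds.push_to_hub("user/my-dataset", config="en")  # the proposed API above
```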
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5151/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5150/comments | https://api.github.com/repos/huggingface/datasets/issues/5150/events | https://github.com/huggingface/datasets/issues/5150 | 1,420,684,999 | I_kwDODunzps5Ure7H | 5,150 | Problems after upgrading to 2.6.1 | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! I can't reproduce the error following these steps. Can you please provide a reproducible example?",
"I faced the same issue:\r\n\r\n### Repro\r\n```\r\n!pip install datasets==2.6.1\r\nimport datasets as Dataset\r\ndataset = Dataset.from_pandas(dataframe)\r\ndataset.save_to_disk(local)\r\n\r\n!pip install datasets==2.5.2\r\nimport datasets as Dataset\r\ndataset = Dataset.load_from_disk(local)\r\n```\r\n\r\n",
"@Lokiiiiii And what are the contents of the \"dataframe\" in your example?",
"I bumped into the issue too. @Lokiiiiii thanks for steps. I \"solved\" if for now by `pip install datasets>=2.6.1` everywhere.",
"Hi all, \r\nI experienced the same issue. \r\nPlease note that the pull request is related to the IMDB example provided in the doc, and is a fix for that, in that context, to make sure that people can follow the doc example and have a working system. \r\nIt does not provide a fix for Datasets itself. ",
"im getting the same error.\r\n- using the base AWS HF container that uses a datasets <2.\r\n- updating the AWS HF container to use dataset 2.4\r\n",
"Same here, running on our SageMaker pipelines. It's only happening for some but not all of our saved Datasets.",
"I am also receiving this error on Sagemaker but not locally, I have noticed that this occurs when the `.dataset/` folder does not contain a single file like:\r\n\r\n`dataset.arrow`\r\n\r\nbut instead contains multiple files like:\r\n\r\n`data-00000-of-00002.arrow`\r\n`data-00001-of-00002.arrow`\r\n\r\nI think that it may have something to do with this recent PR that updated the behaviour of `dataset.save_to_disk` by introducing sharding: https://github.com/huggingface/datasets/pull/5268\r\n\r\nFor now I can get around this by forcing datasets==2.8.0 on machine that creates dataset and in the huggingface instance for training (by running this at the start of training script `os.system(\"pip install datasets==2.8.0\")`)\r\n\r\nTo ensure the dataset is a single shard when saving the dataset locally:\r\n\r\n```python3\r\ndataset.flatten_indices().save_to_disk('path/to/dataset', num_shards=1)\r\n```\r\n\r\n and then manually changing the name afterwards from `path/to/dataset/data-00000-of-00001.arrow` to `path/to/dataset/dataset.arrow` and updating the `path/to/dataset/state.json` to reflect this name change. i.e. by changing `state.json` to this:\r\n\r\n```javascript\r\n{\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"dataset.arrow\"\r\n }\r\n ],\r\n \"_fingerprint\": \"420086f0636f8727\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_output_all_columns\": false,\r\n \"_split\": null\r\n}\r\n```"
] | 2022-10-24T11:32:36 | 2023-01-03T15:26:00 | null | NONE | null | null | null | ### Describe the bug
Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError: 'length'` that did not occur in v2.5.2.
Context:
- Each individual dataset in the dict is created with `Dataset.from_pandas`
- The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- The pandas dataframe, besides text columns, has a column with a dictionary inside and potentially different keys in each row. `Dataset.from_pandas` correctly adds `key: None` to the dictionaries in each row so that the schema can be inferred.
### Steps to reproduce the bug
Steps to reproduce:
- Upgrade to datasets==2.6.1
- Create a dataset from pandas dataframe with `Dataset.from_pandas`
- Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- Save to disk with the `save_to_disk` function, as sketched below
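A minimal sketch of these steps, assuming a hypothetical local path `my_dsdict`:

```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_from_disk

df = pd.DataFrame({
    "text": ["a", "b"],
    # dict column with potentially different keys per row
    "meta": [{"x": 1}, {"y": 2}],
})
train_ds = Dataset.from_pandas(df)
dsd = DatasetDict({"train": train_ds, "validation": train_ds})
dsd.save_to_disk("my_dsdict")
dsd = load_from_disk("my_dsdict")  # reportedly raises KeyError: 'length' on 2.6.1
```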
### Expected behavior
Same as in v2.5.2, that is load from disk without errors
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5150/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5149/comments | https://api.github.com/repos/huggingface/datasets/issues/5149/events | https://github.com/huggingface/datasets/pull/5149 | 1,420,415,639 | PR_kwDODunzps5BZJab | 5,149 | Make iter_files deterministic | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-24T08:16:27 | 2022-10-27T09:53:23 | 2022-10-27T09:51:09 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5149",
"html_url": "https://github.com/huggingface/datasets/pull/5149",
"diff_url": "https://github.com/huggingface/datasets/pull/5149.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5149.patch",
"merged_at": "2022-10-27T09:51:09"
} | Fix #5145. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5149/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5149/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5148/comments | https://api.github.com/repos/huggingface/datasets/issues/5148/events | https://github.com/huggingface/datasets/issues/5148 | 1,420,219,222 | I_kwDODunzps5UptNW | 5,148 | Cannot find the rvl_cdip dataset | {
"login": "santule",
"id": 20509836,
"node_id": "MDQ6VXNlcjIwNTA5ODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20509836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santule",
"html_url": "https://github.com/santule",
"followers_url": "https://api.github.com/users/santule/followers",
"following_url": "https://api.github.com/users/santule/following{/other_user}",
"gists_url": "https://api.github.com/users/santule/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santule/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santule/subscriptions",
"organizations_url": "https://api.github.com/users/santule/orgs",
"repos_url": "https://api.github.com/users/santule/repos",
"events_url": "https://api.github.com/users/santule/events{/privacy}",
"received_events_url": "https://api.github.com/users/santule/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, @santule.\r\n\r\nWe have transferred all dataset scripts from GitHub to the Hugging Face Hub: https://huggingface.co./datasets\r\n- Concretely, you have \"rvl_cdip\" here: https://huggingface.co./datasets/rvl_cdip\r\n\r\nTo be able to load them, you should update your `datasets` library:\r\n```\r\npip install -U datasets\r\n```",
"thank you, it worked"
] | 2022-10-24T04:57:42 | 2022-10-24T12:23:47 | 2022-10-24T06:25:28 | NONE | null | null | null | Hi,
I am trying to use `load_dataset` to load the official "rvl_cdip" dataset but am getting an error.
dataset = load_dataset("rvl_cdip")
Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdip/rvl_cdip.py
Regards,
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5148/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5147/comments | https://api.github.com/repos/huggingface/datasets/issues/5147/events | https://github.com/huggingface/datasets/issues/5147 | 1,419,522,275 | I_kwDODunzps5UnDDj | 5,147 | Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting | {
"login": "falcaopetri",
"id": 8387736,
"node_id": "MDQ6VXNlcjgzODc3MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falcaopetri",
"html_url": "https://github.com/falcaopetri",
"followers_url": "https://api.github.com/users/falcaopetri/followers",
"following_url": "https://api.github.com/users/falcaopetri/following{/other_user}",
"gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions",
"organizations_url": "https://api.github.com/users/falcaopetri/orgs",
"repos_url": "https://api.github.com/users/falcaopetri/repos",
"events_url": "https://api.github.com/users/falcaopetri/events{/privacy}",
"received_events_url": "https://api.github.com/users/falcaopetri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\[email protected](ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```",
"Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.",
"> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !",
"> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n"
] | 2022-10-22T21:46:38 | 2022-11-01T22:19:07 | null | NONE | null | null | null | ### Feature request
`dataset.map` accepts a `fn_kwargs` argument that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint.
I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing.
Of course, users should take care to use this new feature properly, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700).
### Motivation
This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680.
Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output:
```python
def fn(example, verbose=False):
...
```
Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching.
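A minimal sketch of the resulting cache miss (the function and column names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

def fn(example, verbose=False):
    if verbose:
        print(example)
    return {"y": example["x"] + 1}

# Both calls produce the same rows, but the differing fn_kwargs change the
# fingerprint, so the second call is recomputed instead of reusing the cache:
ds.map(fn, fn_kwargs={"verbose": False})
ds.map(fn, fn_kwargs={"verbose": True})
```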
I'm not sure if other methods in the `Dataset` API could benefit from this feature.
### Your contribution
Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably another arg name). This would then be used by `fingerprint_transform.wrapper` to better/more flexibly hash the transformation.
I could contribute with a PR if this feature and approach look good to you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5147/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5146/comments | https://api.github.com/repos/huggingface/datasets/issues/5146/events | https://github.com/huggingface/datasets/pull/5146 | 1,418,331,282 | PR_kwDODunzps5BSUWW | 5,146 | Delete duplicate issue template file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-21T13:18:46 | 2022-10-21T13:52:30 | 2022-10-21T13:50:04 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5146",
"html_url": "https://github.com/huggingface/datasets/pull/5146",
"diff_url": "https://github.com/huggingface/datasets/pull/5146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5146.patch",
"merged_at": "2022-10-21T13:50:04"
} | A conflict between two PRs:
- #5116
- #5136
was not properly resolved, resulting in a duplicate issue template.
This PR removes the duplicate template. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5146/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5145/comments | https://api.github.com/repos/huggingface/datasets/issues/5145/events | https://github.com/huggingface/datasets/issues/5145 | 1,418,005,452 | I_kwDODunzps5UhQvM | 5,145 | Dataset order is not deterministic with ZIP archives and `iter_files` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co./datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] | 2022-10-21T09:00:03 | 2022-10-27T09:51:49 | 2022-10-27T09:51:10 | CONTRIBUTOR | null | null | null | ### Describe the bug
For the `beans` dataset (I did not try others), the order of samples is not the same on different machines. Tested on my local laptop, a GitHub Actions machine, and an EC2 instance: the three yield different orders.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior
The order of the images is the same on all machines.
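The fix that eventually landed is #5149 above; the general idea, sketched here without the library's actual internals, is to sort listings before yielding so traversal no longer depends on filesystem or archive order:

```python
import os

def iter_files_sorted(path):
    # os.scandir / ZIP member order is not guaranteed across machines,
    # so sort explicitly for a reproducible iteration order
    for root, dirs, files in os.walk(path):
        dirs.sort()  # os.walk honors in-place edits of dirs (topdown=True)
        for name in sorted(files):
            yield os.path.join(root, name)
```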
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5145/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5144/comments | https://api.github.com/repos/huggingface/datasets/issues/5144/events | https://github.com/huggingface/datasets/issues/5144 | 1,417,974,731 | I_kwDODunzps5UhJPL | 5,144 | Inconsistent documentation on map remove_columns | {
"login": "zhaowei-wang-nlp",
"id": 22047467,
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowei-wang-nlp",
"html_url": "https://github.com/zhaowei-wang-nlp",
"followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers",
"following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs",
"repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos",
"events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thanks for reporting, @zhaowei-wang-nlp.\r\n\r\nYou are right, the documentation is confusing on the behavior of `remove_columns`. We should better explain it. ",
"This is a duplicate of https://github.com/huggingface/datasets/issues/2343.",
"I'm closing this issue because as @mariosasko pointed out, it is a duplicate of:\r\n- #2343"
] | 2022-10-21T08:37:53 | 2022-11-15T14:15:10 | 2022-11-15T14:15:10 | NONE | null | null | null | ### Describe the bug
The page [process](https://huggingface.co./docs/datasets/process) says this about the parameter `remove_columns` of the function `map`:
When you remove a column, it is only removed after the example has been provided to the mapped function.
So it seems that the `remove_columns` parameter removes columns after the mapped function runs.
However, another page, [the documentation of the function map](https://huggingface.co./docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:
Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in remove_columns, these columns will be kept.
So one page says "after the mapped function" and another says "before the mapped function."
Is there something wrong?
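For reference, a minimal sketch of the actual behavior, which is consistent with both quotes: the column is still visible inside the mapped function, and a same-named column produced by the function is kept:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [10, 20]})

def fn(example):
    # "a" is still visible here despite remove_columns=["a"] ...
    return {"a": example["a"] * 100}

out = ds.map(fn, remove_columns=["a"])
print(out["a"])  # [100, 200] -> the re-added column is kept
```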
### Steps to reproduce the bug
Not about code.
### Expected behavior
Consistent descriptions of the behavior of the `remove_columns` parameter in the `map` function.
### Environment info
datasets V2.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5144/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5143/comments | https://api.github.com/repos/huggingface/datasets/issues/5143/events | https://github.com/huggingface/datasets/issues/5143 | 1,416,837,186 | I_kwDODunzps5UczhC | 5,143 | DownloadManager Git LFS support | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hey ! Actually it works, just pass the right URL ;)\r\nThe URL must be the one with “/resolve/”\r\n\r\ne.g. https://huggingface.co./datasets/imagenet-1k/resolve/main/data/test_images.tar.gz\r\n\r\nYou can even pass a relative path to the dl_manager instead, like `dl_manager.download(\"data/test_images.tar.gz\")`",
"Amazing it works, thanks!"
] | 2022-10-20T15:29:29 | 2022-10-20T17:17:10 | 2022-10-20T17:17:10 | CONTRIBUTOR | null | null | null | ### Feature request
Maybe I'm mistaken, but the `DownloadManager` does not support extracting Git LFS files out of the box, right?
Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns LFS pointer files, afaict.
Is there a good way to write a dataset loading script for a repo with LFS files?
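(A maintainer reply in this thread answers this: pass a `/resolve/...` URL, or a path relative to the repo.) A sketch of a loading script using that, with a hypothetical repo and file name:

```python
def _split_generators(self, dl_manager):
    # /resolve/ URLs serve the actual LFS content instead of the pointer file
    url = "https://huggingface.co/datasets/user/repo/resolve/main/data.tar.gz"
    archive_dir = dl_manager.download_and_extract(url)
    ...
```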
### Motivation
/
### Your contribution
/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5143/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5142/comments | https://api.github.com/repos/huggingface/datasets/issues/5142/events | https://github.com/huggingface/datasets/pull/5142 | 1,416,317,678 | PR_kwDODunzps5BLd90 | 5,142 | Deprecate num_proc parameter in DownloadManager.extract | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @mariosasko . Can you please help me with why the tests keep failing. I have reviewed the code changes multiple times but can't spot any mistakes. ",
"You can fix this failure by formatting your code with the `make style` command (run it from the root of the cloned repo).",
"hey @mariosasko ,i cant understand how to use the `make style` command .I searched for it on the internet but cant find any results. \r\nSo i formatted the code using vs-code document formatter. Hope this helps.",
"`make style` runs the \"style\" target defined here: https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n\r\nThis seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile",
"\r\n\r\n\r\n\r\n> `make style` runs the \"style\" target defined here:\r\n> \r\n> https://github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/Makefile#L12\r\n> \r\n> This seems to be a good tutorial on Makefiles: https://opensource.com/article/18/8/what-how-makefile\r\n\r\nThanks! I will look into this :relaxed: "
] | 2022-10-20T09:52:52 | 2022-10-25T18:06:56 | 2022-10-25T15:56:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5142",
"html_url": "https://github.com/huggingface/datasets/pull/5142",
"diff_url": "https://github.com/huggingface/datasets/pull/5142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5142.patch",
"merged_at": "2022-10-25T15:56:45"
} | fixes #5132: Deprecated the `num_proc` parameter in `DownloadManager.extract` by passing the `num_proc` parameter to `map_nested`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5142/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5141/comments | https://api.github.com/repos/huggingface/datasets/issues/5141/events | https://github.com/huggingface/datasets/pull/5141 | 1,415,479,438 | PR_kwDODunzps5BIp1l | 5,141 | Raise ImportError instead of OSError | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @mariosasko ,i commited the changes as you said.\r\n\r\n"
] | 2022-10-19T19:30:05 | 2022-10-25T15:59:25 | 2022-10-25T15:56:58 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5141",
"html_url": "https://github.com/huggingface/datasets/pull/5141",
"diff_url": "https://github.com/huggingface/datasets/pull/5141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5141.patch",
"merged_at": "2022-10-25T15:56:58"
} | fixes #5134: Replaced OSError with ImportError if the required extraction library is not installed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5141/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5140/comments | https://api.github.com/repos/huggingface/datasets/issues/5140/events | https://github.com/huggingface/datasets/pull/5140 | 1,415,075,530 | PR_kwDODunzps5BHTNq | 5,140 | Make the KeyHasher FIPS compliant | {
"login": "vvalouch",
"id": 22592860,
"node_id": "MDQ6VXNlcjIyNTkyODYw",
"avatar_url": "https://avatars.githubusercontent.com/u/22592860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvalouch",
"html_url": "https://github.com/vvalouch",
"followers_url": "https://api.github.com/users/vvalouch/followers",
"following_url": "https://api.github.com/users/vvalouch/following{/other_user}",
"gists_url": "https://api.github.com/users/vvalouch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvalouch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvalouch/subscriptions",
"organizations_url": "https://api.github.com/users/vvalouch/orgs",
"repos_url": "https://api.github.com/users/vvalouch/repos",
"events_url": "https://api.github.com/users/vvalouch/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvalouch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-10-19T14:25:52 | 2022-11-07T16:20:43 | 2022-11-07T16:20:43 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5140",
"html_url": "https://github.com/huggingface/datasets/pull/5140",
"diff_url": "https://github.com/huggingface/datasets/pull/5140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5140.patch",
"merged_at": null
} | MD5 is not FIPS compliant, thus I am proposing this minimal change to make the datasets package FIPS compliant | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5140/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5137/comments | https://api.github.com/repos/huggingface/datasets/issues/5137/events | https://github.com/huggingface/datasets/issues/5137 | 1,414,642,723 | I_kwDODunzps5UUbwj | 5,137 | Align task tags in dataset metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```",
"All Hub datasets are done.",
"great job! did you have feedback from Hub users/i.E. repo authors?",
"Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co./datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co./datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co./datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? [joelito/lextreme](https://huggingface.co./datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co./datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co./datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co./datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co./datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback",
"As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis",
"Thanks to you both for your feedback! super useful! cc'ing @osanseviero too 🙂\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co./datasets/viewer/ ?\r\n",
"Sorry, this one: https://huggingface.co./datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".",
"good feedback! we'll improve this",
"Super useful feedback, thanks a lot!",
"- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co./datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?",
"@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task."
] | 2022-10-19T09:41:42 | 2022-11-10T05:25:58 | 2022-10-25T06:17:00 | MEMBER | null | null | null | ## Describe
Once we have agreed on a common naming for task tags for all open source projects, we should align on them.
## Steps
- [x] Align task tags in canonical datasets
- [x] task_categories: 4 datasets
- [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
- [x] task_categories: 451 datasets
- [x] task_ids: 556 datasets
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5137/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5136/comments | https://api.github.com/repos/huggingface/datasets/issues/5136/events | https://github.com/huggingface/datasets/pull/5136 | 1,414,492,139 | PR_kwDODunzps5BFWMG | 5,136 | Update docs once dataset scripts transferred to the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-19T07:58:27 | 2022-10-20T08:12:21 | 2022-10-20T08:10:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5136",
"html_url": "https://github.com/huggingface/datasets/pull/5136",
"diff_url": "https://github.com/huggingface/datasets/pull/5136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5136.patch",
"merged_at": "2022-10-20T08:10:00"
} | Todo:
- [x] Update docs:
- [x] Datasets on GitHub (legacy)
- [x] Load: offline
- [x] About dataset load:
- [x] Maintaining integrity
- [x] Security
- [x] Update docstrings:
- [x] Inspect:
- [x] get_dataset_config_info
- [x] get_dataset_split_names
- [x] Load:
- [x] dataset_module_factory
- [x] load_dataset_builder
- [x] load_dataset
- [x] Remove `ADD_NEW_DATASET.md`
- [x] Update `.github/ISSUE_TEMPLATE/config.yml`
Fix #5135. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5135/comments | https://api.github.com/repos/huggingface/datasets/issues/5135/events | https://github.com/huggingface/datasets/issues/5135 | 1,414,413,519 | I_kwDODunzps5UTjzP | 5,135 | Update docs once dataset scripts transferred to the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-10-19T06:58:19 | 2022-10-20T08:10:01 | 2022-10-20T08:10:01 | MEMBER | null | null | null | ## Describe the bug
As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701
we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974
Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co./docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...
This PR complements the work of:
- #5067
This PR is a follow-up of PRs:
- #3777
CC: @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5135/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5134/comments | https://api.github.com/repos/huggingface/datasets/issues/5134/events | https://github.com/huggingface/datasets/issues/5134 | 1,413,623,687 | I_kwDODunzps5UQi-H | 5,134 | Raise ImportError instead of OSError if required extraction library is not installed | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"hey ,i would like to work on this issue . Please assign it to me.",
"hey @mariosasko , i made a pr for this issue. Could you please review it.\r\nAlso i found multiple `OSError` in `extract.py` file which i thought could be replaced too but wasn't sure about them.\r\nPlease do tell if that also needs to be done."
] | 2022-10-18T17:53:46 | 2022-10-25T15:56:59 | 2022-10-25T15:56:59 | CONTRIBUTOR | null | null | null | According to the official Python docs, `OSError` should be thrown in the following situations:
> This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors).
Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
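For illustration, a minimal sketch of the proposed pattern (the extractor class and the `lz4` dependency are only examples, not the exact code in `extract.py`):

```python
import importlib.util


class Lz4Extractor:
    @staticmethod
    def extract(input_path: str, output_path: str) -> None:
        # Raising ImportError (rather than OSError) signals a missing optional
        # dependency instead of a system-related failure, per the Python docs.
        if importlib.util.find_spec("lz4") is None:
            raise ImportError("Please pip install lz4 to be able to extract LZ4 files.")
        import lz4.frame

        with lz4.frame.open(input_path, "rb") as compressed, open(output_path, "wb") as extracted:
            extracted.write(compressed.read())
```
 | {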
"url": "https://api.github.com/repos/huggingface/datasets/issues/5134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5134/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5133/comments | https://api.github.com/repos/huggingface/datasets/issues/5133/events | https://github.com/huggingface/datasets/issues/5133 | 1,413,623,462 | I_kwDODunzps5UQi6m | 5,133 | Tensor operation not functioning in dataset mapping | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .",
"> Hi! The Torch ops in your snippet are not equivalent to the NumPy ones, hence the difference. You can get the same behavior by replacing the line `feature = torch.mean(feature, dim=1)` with `feature = feature.squeeze().mean(1)` .\r\n\r\nThank you. "
] | 2022-10-18T17:53:35 | 2022-10-19T04:15:45 | 2022-10-19T04:15:44 | NONE | null | null | null | ## Describe the bug
I'm doing a torch.mean() operation in data preprocessing, and it's not working.
## Steps to reproduce the bug
```python
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset
device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)
def extracted_data(examples):
# feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
# feature = torch.mean(feature, dim=1)
feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
print(feature.shape)
return {'feature': feature}
extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results
When running with torch.mean(), the printed shape is [16, seq_len, 768], which is exactly the same as before the operation, while the NumPy version works just fine and gives [16, 768].
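For reference, a sketch of the fix suggested in the comments above, assuming (as in the repro) that the pipeline output stacks to a single `[16, 1, seq_len, 768]` tensor:

```python
import torch

def extracted_data(examples):
    # feature_extraction returns one [1, seq_len, 768] array per sentence,
    # so the stacked tensor has shape [16, 1, seq_len, 768].
    feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
    # Drop the singleton axis first, then average over the sequence dimension,
    # mirroring the NumPy version: squeeze() -> [16, seq_len, 768], mean(1) -> [16, 768].
    feature = feature.squeeze().mean(1)
    return {'feature': feature}
```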
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5133/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5132/comments | https://api.github.com/repos/huggingface/datasets/issues/5132/events | https://github.com/huggingface/datasets/issues/5132 | 1,413,607,306 | I_kwDODunzps5UQe-K | 5,132 | Deprecate `num_proc` parameter in `DownloadManager.extract` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ayushthe1",
"id": 114604338,
"node_id": "U_kgDOBtS5Mg",
"avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushthe1",
"html_url": "https://github.com/ayushthe1",
"followers_url": "https://api.github.com/users/ayushthe1/followers",
"following_url": "https://api.github.com/users/ayushthe1/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions",
"organizations_url": "https://api.github.com/users/ayushthe1/orgs",
"repos_url": "https://api.github.com/users/ayushthe1/repos",
"events_url": "https://api.github.com/users/ayushthe1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushthe1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I can take this! #self-assign",
"#self-assign",
"@lazarust i'm already working on this issue :smile: ",
"#self-assign",
"hey @mariosasko , i made a pr for this issue. Could you please review it."
] | 2022-10-18T17:41:05 | 2022-10-25T15:56:46 | 2022-10-25T15:56:46 | CONTRIBUTOR | null | null | null | The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
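For illustration, a minimal sketch of the deprecation pattern described above (schematic, not the actual `datasets` source):

```python
import warnings


class DownloadManager:
    def extract(self, path_or_paths, num_proc="deprecated"):
        # Keep accepting the old argument, but steer users towards
        # DownloadConfig(num_proc=...), which both download managers can honor.
        if num_proc != "deprecated":
            warnings.warn(
                "'num_proc' was deprecated in favor of DownloadConfig's num_proc.",
                FutureWarning,
            )
        # ... the extraction itself would then go through map_nested, reading
        # self.download_config.num_proc, as DownloadManager.download already does.
```
 | {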
"url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5132/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5131/comments | https://api.github.com/repos/huggingface/datasets/issues/5131/events | https://github.com/huggingface/datasets/issues/5131 | 1,413,534,863 | I_kwDODunzps5UQNSP | 5,131 | WikiText 103 tokenizer hangs | {
"login": "TrentBrick",
"id": 12433427,
"node_id": "MDQ6VXNlcjEyNDMzNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/12433427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TrentBrick",
"html_url": "https://github.com/TrentBrick",
"followers_url": "https://api.github.com/users/TrentBrick/followers",
"following_url": "https://api.github.com/users/TrentBrick/following{/other_user}",
"gists_url": "https://api.github.com/users/TrentBrick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TrentBrick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TrentBrick/subscriptions",
"organizations_url": "https://api.github.com/users/TrentBrick/orgs",
"repos_url": "https://api.github.com/users/TrentBrick/repos",
"events_url": "https://api.github.com/users/TrentBrick/events{/privacy}",
"received_events_url": "https://api.github.com/users/TrentBrick/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-10-18T16:44:00 | 2023-07-21T14:41:51 | 2023-07-21T14:41:51 | NONE | null | null | null | See issue here: https://github.com/huggingface/transformers/issues/19702 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5131/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5130/comments | https://api.github.com/repos/huggingface/datasets/issues/5130/events | https://github.com/huggingface/datasets/pull/5130 | 1,413,435,000 | PR_kwDODunzps5BBxXX | 5,130 | Avoid extra cast in `class_encode_column` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-18T15:31:24 | 2022-10-19T11:53:02 | 2022-10-19T11:50:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5130",
"html_url": "https://github.com/huggingface/datasets/pull/5130",
"diff_url": "https://github.com/huggingface/datasets/pull/5130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5130.patch",
"merged_at": "2022-10-19T11:50:46"
} | Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.
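As a rough illustration of the idea (`features=` is a real `Dataset.map` argument, but this snippet is schematic, not the PR diff):

```python
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict({"text": ["a", "b"], "label": ["neg", "pos"]})

# Build the target features up front and hand them to map directly,
# instead of mapping first and casting the column afterwards.
label_feature = ClassLabel(names=sorted(set(ds["label"])))
features = Features({"text": Value("string"), "label": label_feature})
ds = ds.map(lambda ex: {"label": label_feature.str2int(ex["label"])}, features=features)
```
 | {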
"url": "https://api.github.com/repos/huggingface/datasets/issues/5130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5130/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5129/comments | https://api.github.com/repos/huggingface/datasets/issues/5129/events | https://github.com/huggingface/datasets/issues/5129 | 1,413,031,664 | I_kwDODunzps5UOSbw | 5,129 | unexpected `cast` or `class_encode_column` result after `rename_column` | {
"login": "quaeast",
"id": 35144675,
"node_id": "MDQ6VXNlcjM1MTQ0Njc1",
"avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quaeast",
"html_url": "https://github.com/quaeast",
"followers_url": "https://api.github.com/users/quaeast/followers",
"following_url": "https://api.github.com/users/quaeast/following{/other_user}",
"gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quaeast/subscriptions",
"organizations_url": "https://api.github.com/users/quaeast/orgs",
"repos_url": "https://api.github.com/users/quaeast/repos",
"events_url": "https://api.github.com/users/quaeast/events{/privacy}",
"received_events_url": "https://api.github.com/users/quaeast/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configuration (both datasets 2.5.2 and 2.6.1, python, pyarrow, pandas), but on Linux. The results seem to be the expected `{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}`.\r\nI don't have a Mac device. I can't verify whether this is a M1 chip-specific problem.",
"I've just tested the code on my M1 Mac, and it behaves as expected.",
"> Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...\r\n\r\nThank you for your attention and feel sorry to take your time. Since this is a bug of old version, I think mybe my problem is because `cast` operation directaly used cached data generated by older verion of `datasets`. I tried to deleted the cached data and I got expected result.\r\n"
] | 2022-10-18T11:15:24 | 2022-10-19T03:02:26 | 2022-10-19T03:02:26 | NONE | null | null | null | ## Describe the bug
When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in this column into one value. I also ran this script with version 2.5.2, and this bug does not appear. So I switched to the older version.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
[
"review_id",
"product_id",
"reviewer_id",
"review_title",
"language",
"product_category",
]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results
The last print should be:
{<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
## Actual results
But it outputs:
{<pyarrow.Int64Scalar: 0>}
## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
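For future readers: as the comment thread above explains, the wrong output came from cached data generated by an older `datasets` version; a sketch of clearing that cache before re-running the script (`cleanup_cache_files` is a real `Dataset` method):

```python
from datasets import load_dataset

dataset = load_dataset("amazon_reviews_multi", "en")
# Remove the cached Arrow files produced by earlier map/cast calls with an
# older `datasets` version, then re-run the steps above from scratch.
for split in dataset.values():
    split.cleanup_cache_files()
```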
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5129/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5128/comments | https://api.github.com/repos/huggingface/datasets/issues/5128/events | https://github.com/huggingface/datasets/pull/5128 | 1,412,783,855 | PR_kwDODunzps5A_k9s | 5,128 | Make filename matching more robust | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> I think we should also modify one of the metadata files in the `folder_based_builder` tests to make sure \"./\" is ignored now in the `file_name`\r\n\r\n@mariosasko what do you mean here? I'm not sure which metadata file I should modify here",
"You can modify this line for instance: https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/tests/packaged_modules/test_folder_based_builder.py#L135"
] | 2022-10-18T08:22:48 | 2022-10-28T13:07:38 | 2022-10-28T13:05:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5128",
"html_url": "https://github.com/huggingface/datasets/pull/5128",
"diff_url": "https://github.com/huggingface/datasets/pull/5128.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5128.patch",
"merged_at": "2022-10-28T13:05:06"
} | Fix #5046
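For context, the review comments above ask to make sure a leading "./" in `file_name` is ignored when matching metadata entries to files; a minimal sketch of that normalization (illustrative only, not the PR's exact code):

```python
from pathlib import PurePosixPath

def normalize_file_name(file_name: str) -> str:
    # "./audio/train/1.wav" and "audio/train/1.wav" should match the same file;
    # PurePosixPath collapses single "." components when parsing.
    return str(PurePosixPath(file_name))

assert normalize_file_name("./audio/train/1.wav") == "audio/train/1.wav"
```
 | {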
"url": "https://api.github.com/repos/huggingface/datasets/issues/5128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5128/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5127/comments | https://api.github.com/repos/huggingface/datasets/issues/5127/events | https://github.com/huggingface/datasets/pull/5127 | 1,411,897,544 | PR_kwDODunzps5A8m-Q | 5,127 | [WIP] WebDataset export | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5127). All of your documentation changes will be reflected on that endpoint."
] | 2022-10-17T16:50:22 | 2023-02-22T09:51:10 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5127",
"html_url": "https://github.com/huggingface/datasets/pull/5127",
"diff_url": "https://github.com/huggingface/datasets/pull/5127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5127.patch",
"merged_at": null
} | I added a first draft of the `IterableDataset.to_wds` method.
You can use it to save a dataset loaded in streaming mode as a WebDataset locally.
The API can be further improved to allow exporting to cloud storage like the HF Hub.
I also included sharding with a default max shard size of 500MB (uncompressed), and it is single-processed for now.
Choosing the number of shards is not implemented yet - though if we know the size of the `IterableDataset`, this is probably doable.
For example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
>>> ds.to_wds("output_dir", compress=True)
>>> import webdataset as wds
>>> ds = wds.WebDataset("output_dir/rotten_tomatoes-train-000000.tar.gz").decode()
>>> next(iter(ds))
{'__key__': '0',
'__url__': 'output_dir/rotten_tomatoes-train-000000.tar.gz',
'label.cls': 1,
'text.txt': 'the rock is destined to be the 21st century\'s new ..., jean-claud van damme or steven segal .'}
```
### Implementation details
The WebDataset format is made of TAR archives containing a series of files per example - for example, one pair of `image.jpg` and `label.cls` for image classification.
WebDataset automatically decodes serialized data based on the extension of the files, and outputs a dictionary. For example `{"image.png": np.array(...), "label.cls": 0}` if you choose the numpy decoding.
To use the automatic decoding, I store each field of each example as a file with its corresponding extension (jpg, json, cls, etc.), as in the sketch below.
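As a rough sketch of what that per-field layout looks like on disk, written with the stdlib `tarfile` (the PR's actual writer may differ):

```python
import io
import tarfile

def write_wds_shard(path, samples):
    """Write (key, {field_name_with_extension: bytes}) pairs as one WebDataset shard."""
    with tarfile.open(path, "w") as tar:
        for key, fields in samples:
            for field_name, payload in fields.items():
                # One file per column, named "<key>.<column>.<ext>", so WebDataset
                # groups files by key and decodes each one by its extension.
                info = tarfile.TarInfo(name=f"{key}.{field_name}")
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

write_wds_shard(
    "rotten_tomatoes-train-000000.tar",
    [("0", {"text.txt": b"some text", "label.cls": b"1"})],
)
```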
While this is useful to end up with a dictionary with one key per column and appropriate decoding, it can create huge TAR archives if the dataset is made of small samples of text - probably because of the TAR metadata overhead for each file. This also makes loading super slow: iterating on SQuAD takes 50sec vs 7sec using `datasets` in streaming mode.
I haven't taken a look at alternatives for text datasets made out of small samples, but for image datasets this can already be used to run some benchmarks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5126/comments | https://api.github.com/repos/huggingface/datasets/issues/5126/events | https://github.com/huggingface/datasets/pull/5126 | 1,411,757,124 | PR_kwDODunzps5A8Iw3 | 5,126 | Fix class name of symbolic link | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5126). All of your documentation changes will be reflected on that endpoint.",
"I have removed the reference to the Issue in the PR title, so that we avoid to have both references (to the issue and to the PR) in the merge commit to the main branch.\r\n\r\nInstead, it should be commented in the PR description, so that the PR is appropriately linked by GitHub to its corresponding Issue:\r\n\r\n> Fix #5098.",
"@albertvillanova What should I test in your opinion? Also, where should I save the test file and how should I name it? Thanks for your support",
"The regression test to be implemented should test what your PR fixes: that is, that `_resolve_single_pattern_locally` function does not resolve any symbolic link when passed a directory that does contain any.\r\n\r\nAs you are testing a function in `data_files.py`, the corresponding test should be in `tests/test_data_files.py`.\r\n\r\nYou could name the test something lilke: `test_resolve_single_pattern_locally_does_not_resolve_symbolic_links`\r\n\r\nYou could take inspiration from other tests there in that file."
] | 2022-10-17T15:11:02 | 2022-11-14T14:40:18 | 2022-11-14T14:40:18 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5126",
"html_url": "https://github.com/huggingface/datasets/pull/5126",
"diff_url": "https://github.com/huggingface/datasets/pull/5126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5126.patch",
"merged_at": "2022-11-14T14:40:18"
} | Fix #5098
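For context, the review comments above sketch the regression test to add; a hypothetical version (the `_resolve_single_pattern_locally` signature and return type are assumed here, so check `datasets/data_files.py` and the merged test for the real ones):

```python
import os

from datasets.data_files import _resolve_single_pattern_locally

def test_resolve_single_pattern_locally_does_not_resolve_symbolic_links(tmp_path):
    (tmp_path / "file.txt").write_text("data")
    os.symlink(tmp_path / "file.txt", tmp_path / "link.txt")
    resolved = _resolve_single_pattern_locally(str(tmp_path), "*.txt")  # assumed signature
    # The symlink should be listed under its own name, not resolved to its target.
    assert any(path.name == "link.txt" for path in resolved)
```
 | {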
"url": "https://api.github.com/repos/huggingface/datasets/issues/5126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5126/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5125/comments | https://api.github.com/repos/huggingface/datasets/issues/5125/events | https://github.com/huggingface/datasets/pull/5125 | 1,411,602,813 | PR_kwDODunzps5A7nr8 | 5,125 | Add `pyproject.toml` for `black` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-17T13:38:47 | 2022-10-17T14:23:27 | 2022-10-17T14:21:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5125",
"html_url": "https://github.com/huggingface/datasets/pull/5125",
"diff_url": "https://github.com/huggingface/datasets/pull/5125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5125.patch",
"merged_at": "2022-10-17T14:21:09"
} | Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
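For reference, such a config typically boils down to a short `[tool.black]` section like the one below (illustrative values; the PR diff has the exact settings):

```toml
[tool.black]
line-length = 119
target-version = ["py37"]
```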
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5125/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5124/comments | https://api.github.com/repos/huggingface/datasets/issues/5124/events | https://github.com/huggingface/datasets/pull/5124 | 1,411,159,725 | PR_kwDODunzps5A6HeL | 5,124 | Install tensorflow-macos dependency conditionally | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-17T08:45:08 | 2022-10-19T09:12:17 | 2022-10-19T09:10:06 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5124",
"html_url": "https://github.com/huggingface/datasets/pull/5124",
"diff_url": "https://github.com/huggingface/datasets/pull/5124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5124.patch",
"merged_at": "2022-10-19T09:10:06"
} | Fix #5118.
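For illustration, conditional dependencies like this are usually expressed with PEP 508 environment markers in `setup.py`; a hypothetical sketch (not necessarily the PR's exact change):

```python
# Select the right TensorFlow package per platform via environment markers.
TENSORFLOW_REQUIRE = [
    "tensorflow>=2.2.0; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```
 | {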
"url": "https://api.github.com/repos/huggingface/datasets/issues/5124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5124/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5123/comments | https://api.github.com/repos/huggingface/datasets/issues/5123/events | https://github.com/huggingface/datasets/issues/5123 | 1,410,828,756 | I_kwDODunzps5UF4nU | 5,123 | datasets freezes with streaming mode in multiple-gpu | {
"login": "jackfeinmann5",
"id": 59409879,
"node_id": "MDQ6VXNlcjU5NDA5ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackfeinmann5",
"html_url": "https://github.com/jackfeinmann5",
"followers_url": "https://api.github.com/users/jackfeinmann5/followers",
"following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}",
"gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions",
"organizations_url": "https://api.github.com/users/jackfeinmann5/orgs",
"repos_url": "https://api.github.com/users/jackfeinmann5/repos",
"events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackfeinmann5/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"@lhoestq I tested the script without accelerator, and I confirm this is due to datasets part as this gets similar results without accelerator.",
"Hi ! You said it works on 1 GPU but doesn't wortk without accelerator - what's the difference between running on 1 GPU and running without accelerator in your case ?",
"Hi @lhoestq \r\nthanks for coming back to me. Sorry for the confusion I made. I meant this works fine on 1 GPU, but on multi-gpu it is freezing. \"accelerator\" is not an issue as if you adapt the code without accelerator this still gets the same issue.\r\nIn order to test it. Please run \"accelerate config\", then use the setup for multi-gpu in one node.\r\nAfter that run \"accelerate launch code.py\" and then you would see the freezing occurs.",
"Hi @lhoestq \r\ncould you have the chance to reproduce the error by running the minimal example shared?\r\nthanks",
"I think you need to do `train_dataset = train_dataset.with_format(\"torch\")` to work with the DataLoader in a multiprocessing setup :)\r\n\r\nThe hang is probably caused by our streamign lib `fsspec` which doesn't work in multiprocessing out of the box - but we made it work with the PyTorch DataLoader when the dataset format is set to \"torch\"",
"Hi @lhoestq \r\nthanks for the response. I added the line suggested right before calling `with accelerator.main_process_first():` in the code above and I confirm this also freezes. to reproduce it please run \"accelerate launch code.py\". I was wondering if you could have more suggestions for me? I do not have an idea how to fix this or debug this freezing. many thanks.",
"Maybe the `fsspec` stuff need to be clearer even before - can you try to run this function at the very beginning of your script ?\r\n```python\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n_set_fsspec_for_multiprocess()\r\n```",
"Hi @lhoestq \r\nthank you. I tried it, I am getting `AttributeError: module 'fsspec' has no attribute 'asyn'`. which version of fsspect do you use?\r\nI am using \r\n```fsspec 2022.8.2 pypi_0 pypi```\r\nthank you.",
"Hi @lhoestq \r\nI solved `fsspec` error with this hack for now https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 but this is still freezing, I greatly appreciate if you could run this script on your side. Many thanks.\r\n\r\n```\r\nimport fsspec\r\n\r\ndef _set_fsspec_for_multiprocess() -> None:\r\n \"\"\"\r\n Clear reference to the loop and thread.\r\n This is necessary otherwise HTTPFileSystem hangs in the ML training loop.\r\n Only required for fsspec >= 0.9.0\r\n See https://github.com/fsspec/gcsfs/issues/379\r\n \"\"\"\r\n fsspec.asyn.iothread[0] = None\r\n fsspec.asyn.loop[0] = None\r\n\r\n\r\n_set_fsspec_for_multiprocess()\r\n\r\nfrom accelerate import Accelerator\r\nfrom accelerate.logging import get_logger\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data.dataloader import DataLoader\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nimport torch\r\nfrom accelerate.logging import get_logger\r\nfrom torch.utils.data import IterableDataset\r\nfrom torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe\r\n\r\n\r\nlogger = get_logger(__name__)\r\n\r\n\r\nclass ConstantLengthDataset(IterableDataset):\r\n \"\"\"\r\n Iterable dataset that returns constant length chunks of tokens from stream of text files.\r\n Args:\r\n tokenizer (Tokenizer): The processor used for proccessing the data.\r\n dataset (dataset.Dataset): Dataset with text files.\r\n infinite (bool): If True the iterator is reset after dataset reaches end else stops.\r\n max_seq_length (int): Length of token sequences to return.\r\n num_of_sequences (int): Number of token sequences to keep in buffer.\r\n chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.\r\n \"\"\"\r\n\r\n def __init__(\r\n self,\r\n tokenizer,\r\n dataset,\r\n infinite=False,\r\n max_seq_length=1024,\r\n num_of_sequences=1024,\r\n chars_per_token=3.6,\r\n ):\r\n self.tokenizer = tokenizer\r\n # self.concat_token_id = tokenizer.bos_token_id\r\n self.dataset = dataset\r\n self.max_seq_length = max_seq_length\r\n self.epoch = 0\r\n self.infinite = infinite\r\n self.current_size = 0\r\n self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences\r\n self.content_field = \"text\"\r\n\r\n def __iter__(self):\r\n iterator = iter(self.dataset)\r\n more_examples = True\r\n while more_examples:\r\n buffer, buffer_len = [], 0\r\n while True:\r\n if buffer_len >= self.max_buffer_size:\r\n break\r\n try:\r\n buffer.append(next(iterator)[self.content_field])\r\n buffer_len += len(buffer[-1])\r\n except StopIteration:\r\n if self.infinite:\r\n iterator = iter(self.dataset)\r\n self.epoch += 1\r\n logger.info(f\"Dataset epoch: {self.epoch}\")\r\n else:\r\n more_examples = False\r\n break\r\n tokenized_inputs = self.tokenizer(buffer, truncation=False)[\"input_ids\"]\r\n all_token_ids = []\r\n for tokenized_input in tokenized_inputs:\r\n all_token_ids.extend(tokenized_input)\r\n for i in range(0, len(all_token_ids), self.max_seq_length):\r\n input_ids = all_token_ids[i : i + self.max_seq_length]\r\n if len(input_ids) == self.max_seq_length:\r\n self.current_size += 1\r\n yield torch.tensor(input_ids)\r\n\r\n def shuffle(self, buffer_size=1000):\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n\r\n\r\ndef create_dataloaders(tokenizer, accelerator):\r\n ds_kwargs = {\"streaming\": True}\r\n # In distributed training, the load_dataset function gaurantees that only one process\r\n 
# can concurrently download the dataset.\r\n datasets = load_dataset(\r\n \"c4\",\r\n \"en\",\r\n cache_dir=\"cache_dir\",\r\n **ds_kwargs,\r\n )\r\n train_data, valid_data = datasets[\"train\"], datasets[\"validation\"]\r\n with accelerator.main_process_first():\r\n train_data = train_data.shuffle(buffer_size=10000, seed=None)\r\n train_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n train_data,\r\n infinite=True,\r\n max_seq_length=256,\r\n )\r\n valid_dataset = ConstantLengthDataset(\r\n tokenizer,\r\n valid_data,\r\n infinite=False,\r\n max_seq_length=256,\r\n )\r\n train_dataset = train_dataset.shuffle(buffer_size=10000)\r\n train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)\r\n eval_dataloader = DataLoader(valid_dataset, batch_size=160)\r\n return train_dataloader, eval_dataloader\r\n\r\n\r\ndef main():\r\n # Accelerator.\r\n logging_dir = \"data_save_dir/log\"\r\n accelerator = Accelerator(\r\n gradient_accumulation_steps=1,\r\n mixed_precision=\"bf16\",\r\n log_with=\"tensorboard\",\r\n logging_dir=logging_dir,\r\n )\r\n # We need to initialize the trackers we use, and also store our configuration.\r\n # The trackers initializes automatically on the main process.\r\n if accelerator.is_main_process:\r\n accelerator.init_trackers(\"test\")\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n # Load datasets and create dataloaders.\r\n train_dataloader, _ = create_dataloaders(tokenizer, accelerator)\r\n\r\n train_dataloader = accelerator.prepare(train_dataloader)\r\n for step, batch in enumerate(train_dataloader, start=1):\r\n print(step)\r\n accelerator.end_training()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line: \r\n```\r\n return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n```\r\n`ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.",
"> Are you using `Pytorch 1.11`? Otherwise the script freezes because of the shuffling in this line:\r\n> \r\n> ```\r\n> return ShufflerIterDataPipe(self, buffer_size=buffer_size)\r\n> ```\r\n> \r\n> `ShufflerIterDataPipe` behavior must have changed for newer Pytorch versions. But this doesn't change whether you're using streaming or not in `datasets`, so probably not the same issue, but something to try.\r\n\r\nI met the same issue for pytorch 1.12 and 1.13, is there a way to work around for this function for newer pytorch versions?"
] | 2022-10-17T03:28:16 | 2023-05-14T06:55:20 | null | NONE | null | null | null | ## Describe the bug
Hi. I am using this dataloader, taken from one of the Hugging Face examples, for processing large datasets in streaming mode. I am using it to read C4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22
When using multiple GPUs with Accelerate on one node, the code freezes, although it works on a single GPU:
```
10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01
Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0
```
# Code to reproduce
please run this code with `accelerate launch code.py`
```
from accelerate import Accelerator
from accelerate.logging import get_logger
from datasets import load_dataset
from torch.utils.data.dataloader import DataLoader
import torch
from datasets import load_dataset
from transformers import AutoTokenizer
import torch
from accelerate.logging import get_logger
from torch.utils.data import IterableDataset
from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe
logger = get_logger(__name__)
class ConstantLengthDataset(IterableDataset):
"""
Iterable dataset that returns constant length chunks of tokens from stream of text files.
Args:
        tokenizer (Tokenizer): The processor used for processing the data.
dataset (dataset.Dataset): Dataset with text files.
        infinite (bool): If True, the iterator restarts once the dataset is exhausted; otherwise it stops.
max_seq_length (int): Length of token sequences to return.
num_of_sequences (int): Number of token sequences to keep in buffer.
chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer.
"""
def __init__(
self,
tokenizer,
dataset,
infinite=False,
max_seq_length=1024,
num_of_sequences=1024,
chars_per_token=3.6,
):
self.tokenizer = tokenizer
# self.concat_token_id = tokenizer.bos_token_id
self.dataset = dataset
self.max_seq_length = max_seq_length
self.epoch = 0
self.infinite = infinite
self.current_size = 0
self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences
self.content_field = "text"
def __iter__(self):
iterator = iter(self.dataset)
more_examples = True
while more_examples:
buffer, buffer_len = [], 0
while True:
if buffer_len >= self.max_buffer_size:
break
try:
buffer.append(next(iterator)[self.content_field])
buffer_len += len(buffer[-1])
except StopIteration:
if self.infinite:
iterator = iter(self.dataset)
self.epoch += 1
logger.info(f"Dataset epoch: {self.epoch}")
else:
more_examples = False
break
tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"]
all_token_ids = []
for tokenized_input in tokenized_inputs:
all_token_ids.extend(tokenized_input)
for i in range(0, len(all_token_ids), self.max_seq_length):
input_ids = all_token_ids[i : i + self.max_seq_length]
if len(input_ids) == self.max_seq_length:
self.current_size += 1
yield torch.tensor(input_ids)
def shuffle(self, buffer_size=1000):
return ShufflerIterDataPipe(self, buffer_size=buffer_size)
def create_dataloaders(tokenizer, accelerator):
ds_kwargs = {"streaming": True}
    # In distributed training, the load_dataset function guarantees that only one process
# can concurrently download the dataset.
datasets = load_dataset(
"c4",
"en",
cache_dir="cache_dir",
**ds_kwargs,
)
train_data, valid_data = datasets["train"], datasets["validation"]
with accelerator.main_process_first():
train_data = train_data.shuffle(buffer_size=10000, seed=None)
train_dataset = ConstantLengthDataset(
tokenizer,
train_data,
infinite=True,
max_seq_length=256,
)
valid_dataset = ConstantLengthDataset(
tokenizer,
valid_data,
infinite=False,
max_seq_length=256,
)
train_dataset = train_dataset.shuffle(buffer_size=10000)
train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True)
eval_dataloader = DataLoader(valid_dataset, batch_size=160)
return train_dataloader, eval_dataloader
def main():
# Accelerator.
logging_dir = "data_save_dir/log"
accelerator = Accelerator(
gradient_accumulation_steps=1,
mixed_precision="bf16",
log_with="tensorboard",
logging_dir=logging_dir,
)
# We need to initialize the trackers we use, and also store our configuration.
    # The trackers initialize automatically on the main process.
if accelerator.is_main_process:
accelerator.init_trackers("test")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Load datasets and create dataloaders.
train_dataloader, _ = create_dataloaders(tokenizer, accelerator)
train_dataloader = accelerator.prepare(train_dataloader)
for step, batch in enumerate(train_dataloader, start=1):
print(step)
accelerator.end_training()
if __name__ == "__main__":
main()
```
## Expected results
Being able to run the code for streaming datasets with multiple GPUs.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: linux
- Python version: 3.9.12
- PyArrow version: 9.0.0
@lhoestq I do not have any idea why this freezing happens. When I removed the streaming mode, everything worked fine, so I know this is caused by the streaming mode of the dataloader not working well in the multi-GPU setting. Since the datasets are large, I hope to keep the streaming mode. I very much appreciate your help.
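For reference, here is a minimal sketch combining the two workarounds suggested in the comments above (resetting the `fsspec` event loop for multiprocessing, and setting the dataset format to "torch"); these are things to try, not a confirmed fix:
```python
import fsspec.asyn  # importing the submodule explicitly avoids `AttributeError: module 'fsspec' has no attribute 'asyn'`

# Clear fsspec's cached event loop and IO thread (needed for fsspec >= 0.9.0),
# otherwise HTTPFileSystem can hang once the DataLoader spawns worker processes.
fsspec.asyn.iothread[0] = None
fsspec.asyn.loop[0] = None

# ...and after load_dataset(..., streaming=True):
# train_data = train_data.with_format("torch")  # lets the PyTorch DataLoader handle the stream in worker processes
```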
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5123/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5122/comments | https://api.github.com/repos/huggingface/datasets/issues/5122/events | https://github.com/huggingface/datasets/pull/5122 | 1,410,732,403 | PR_kwDODunzps5A4rWn | 5,122 | Add warning | {
"login": "Salehbigdeli",
"id": 34204311,
"node_id": "MDQ6VXNlcjM0MjA0MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Salehbigdeli",
"html_url": "https://github.com/Salehbigdeli",
"followers_url": "https://api.github.com/users/Salehbigdeli/followers",
"following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}",
"gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions",
"organizations_url": "https://api.github.com/users/Salehbigdeli/orgs",
"repos_url": "https://api.github.com/users/Salehbigdeli/repos",
"events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/Salehbigdeli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"As mentioned in https://github.com/huggingface/datasets/issues/5105 I think we just need to keep the existing files instead of deleting them.\r\nThe `dataset_info.json` file contains the split names anyway, so we know which files belong to the dataset, and which ones don't."
] | 2022-10-17T01:30:37 | 2022-11-05T12:23:53 | 2022-11-05T12:23:53 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5122",
"html_url": "https://github.com/huggingface/datasets/pull/5122",
"diff_url": "https://github.com/huggingface/datasets/pull/5122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5122.patch",
"merged_at": null
} | Fixes: #5105
I think removing the directory with a warning is a better solution for this issue, because if we decided to keep the existing files in the directory, we would have to handle the case where the same directory is provided for several datasets, which we know is not possible since `dataset_info.json` exists in that directory. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5122/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5121/comments | https://api.github.com/repos/huggingface/datasets/issues/5121/events | https://github.com/huggingface/datasets/pull/5121 | 1,410,681,067 | PR_kwDODunzps5A4gUB | 5,121 | Bugfix ignore function when creating new_fingerprint for caching | {
"login": "Salehbigdeli",
"id": 34204311,
"node_id": "MDQ6VXNlcjM0MjA0MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Salehbigdeli",
"html_url": "https://github.com/Salehbigdeli",
"followers_url": "https://api.github.com/users/Salehbigdeli/followers",
"following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}",
"gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions",
"organizations_url": "https://api.github.com/users/Salehbigdeli/orgs",
"repos_url": "https://api.github.com/users/Salehbigdeli/repos",
"events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/Salehbigdeli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Adding \"function\" to the kwargs to ignore when computing the fingerprint will break `map` caching. Indeed passing two different function would result in two different datasets that have the same fingerprint - and the cache wouldn't be able to distinguish them.\r\n\r\nE.g this code would reload ds1 from the cache insetad of computing the dataset for ds2\r\n```python\r\nds = Dataset.from_dict({\"a\": [1, 2, 3]})\r\nds1 = ds.map(lambda x: {\"b\": 1})\r\nds2 = ds.map(lambda x: {\"b\": 2})\r\n```"
] | 2022-10-17T00:03:43 | 2022-10-17T12:39:36 | 2022-10-17T12:39:36 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5121",
"html_url": "https://github.com/huggingface/datasets/pull/5121",
"diff_url": "https://github.com/huggingface/datasets/pull/5121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5121.patch",
"merged_at": null
} | maybe fixes: #5109 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5120/comments | https://api.github.com/repos/huggingface/datasets/issues/5120/events | https://github.com/huggingface/datasets/pull/5120 | 1,410,641,221 | PR_kwDODunzps5A4X10 | 5,120 | Fix `tqdm` zip bug | {
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.",
"@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updated the PR with this solution. Let me know what you think.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova Done :) Let me know what you think.",
"@albertvillanova Thanks :) I also don't see an easy way to test this. This was just a problem in the way `tqdm` was used. I'm not sure we should cover it in tests.",
"Hi, \r\n\r\nFirst of all, thanks for this PR. \r\nIt's the first time I join a discussion on GitHUB on problem resolution in libraries such as transformers, so I hope I comply to the best practices for an efficient communication...\r\n\r\nI am running `AutoTokenizer.from_pretrained` in a Google Colab notebook for using with BERT base. \r\nI am experiencing issue [5117](https://github.com/huggingface/datasets/issues/5117).\r\n\r\nEach time I run my notebook, I do:\r\n\r\n`! pip install transformers \r\n! pip install datasets \r\n! pip install huggingface_hub`\r\n\r\nAs I understand, the issue has been resolved and the solution merged to the released version of the code?\r\nSo I expect that the bug is resolved in my notebook, however this is not the case.\r\n\r\nDo I get something wrong? \r\nDo I have to implement some change in the source code myself?\r\n\r\nThanks in advance for your help!",
"@Cochonaki Hi :) The problem was fixed but there wasn't a release since then. I believe a new release should come out in the upcoming weeks. Maybe someone from the core maintainers can answer that :)\r\n\r\ncc: @albertvillanova ",
"Baby Haiti Coffee SE is born\n\nNH watch\n\nOn Sun, Oct 23, 2022 at 02:39 Dudu Lasry ***@***.***> wrote:\n\n> @Cochonaki <https://github.com/Cochonaki> Hi :) The problem was fixed but\n> there wasn't a release since then. I believe a new release should come out\n> in the upcoming weeks. Maybe someone from the core maintainers can answer\n> that :)\n>\n> cc: @albertvillanova <https://github.com/albertvillanova>\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5120#issuecomment-1288024546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAB4E2NCT7QO7W3PTQGDIKDWETMQ7ANCNFSM6AAAAAARGRBY2M>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"Hi, @Cochonaki.\r\n\r\nAs @david1542 pointed out, we have not made a release since this bug was fixed. We will make one in the following weeks.\r\n\r\nIn the meantime, if you would like to incorporate the bug fix, you can install `datasets` from this repo main branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks a lot @albertvillanova and @david1542, it works now!\r\nI am really thankful for your help, that encourages me to participate more in this community.\r\nSee you around!",
"Welcome!!! 🤗"
] | 2022-10-16T22:19:18 | 2022-10-23T10:27:53 | 2022-10-19T08:53:17 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5120",
"html_url": "https://github.com/huggingface/datasets/pull/5120",
"diff_url": "https://github.com/huggingface/datasets/pull/5120.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5120.patch",
"merged_at": "2022-10-19T08:53:17"
} | This PR solves #5117 by wrapping the entire `zip` clause in `tqdm`.
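As a rough illustration of the pattern (a simplified sketch, not the actual code from this PR):
```python
from tqdm import tqdm

names = ["a", "b", "c"]
sizes = [1, 2]  # deliberately shorter than `names`

# Buggy pattern: only one iterable is wrapped. When `sizes` runs out, the
# tqdm generator is abandoned mid-iteration, so the bar is never closed and
# notebooks typically render it as a red, incomplete bar.
for name, size in zip(tqdm(names), sizes):
    pass

# Fixed pattern: wrap the whole zip in tqdm. `total` must be given because
# zip has no __len__ for tqdm to infer a length from.
for name, size in tqdm(zip(names, sizes), total=min(len(names), len(sizes))):
    pass
```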
For more information, please check out this Stack Overflow thread:
https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5120/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5119/comments | https://api.github.com/repos/huggingface/datasets/issues/5119/events | https://github.com/huggingface/datasets/pull/5119 | 1,410,561,363 | PR_kwDODunzps5A4IQp | 5,119 | [TYPO] Update new_dataset_script.py | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-16T17:36:49 | 2022-10-19T09:48:19 | 2022-10-19T09:45:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5119",
"html_url": "https://github.com/huggingface/datasets/pull/5119",
"diff_url": "https://github.com/huggingface/datasets/pull/5119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5119.patch",
"merged_at": "2022-10-19T09:45:59"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5118/comments | https://api.github.com/repos/huggingface/datasets/issues/5118/events | https://github.com/huggingface/datasets/issues/5118 | 1,410,547,373 | I_kwDODunzps5UEz6t | 5,118 | Installing `datasets` on M1 computers | {
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @david1542."
] | 2022-10-16T16:50:08 | 2022-10-19T09:10:08 | 2022-10-19T09:10:08 | CONTRIBUTOR | null | null | null | ## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1?
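For example, a hypothetical sketch using PEP 508 environment markers (the exact marker expression and the minimum `tensorflow-macos` version would need to be verified):
```python
# setup.py (sketch, not the actual datasets setup.py)
install_requires = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos>=2.5; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```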
## Steps to reproduce the bug
Fresh clone this project (on m1), create a virtualenv and run this:
```python
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5118/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5117/comments | https://api.github.com/repos/huggingface/datasets/issues/5117/events | https://github.com/huggingface/datasets/issues/5117 | 1,409,571,346 | I_kwDODunzps5UBFoS | 5,117 | Progress bars have color red and never completed to 100% | {
"login": "echatzikyriakidis",
"id": 63857529,
"node_id": "MDQ6VXNlcjYzODU3NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/63857529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echatzikyriakidis",
"html_url": "https://github.com/echatzikyriakidis",
"followers_url": "https://api.github.com/users/echatzikyriakidis/followers",
"following_url": "https://api.github.com/users/echatzikyriakidis/following{/other_user}",
"gists_url": "https://api.github.com/users/echatzikyriakidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echatzikyriakidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echatzikyriakidis/subscriptions",
"organizations_url": "https://api.github.com/users/echatzikyriakidis/orgs",
"repos_url": "https://api.github.com/users/echatzikyriakidis/repos",
"events_url": "https://api.github.com/users/echatzikyriakidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/echatzikyriakidis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "david1542",
"id": 9879252,
"node_id": "MDQ6VXNlcjk4NzkyNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david1542",
"html_url": "https://github.com/david1542",
"followers_url": "https://api.github.com/users/david1542/followers",
"following_url": "https://api.github.com/users/david1542/following{/other_user}",
"gists_url": "https://api.github.com/users/david1542/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david1542/subscriptions",
"organizations_url": "https://api.github.com/users/david1542/orgs",
"repos_url": "https://api.github.com/users/david1542/repos",
"events_url": "https://api.github.com/users/david1542/events{/privacy}",
"received_events_url": "https://api.github.com/users/david1542/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @echatzikyriakidis, thanks for submitting the issue.\r\nWhich shell are you using exactly? I tried to run the command you sent, but I don't see colors at all 🧐\r\n\r\nI tried from bash and zsh as well.",
"Hi @david1542 ,\r\n\r\nI use Google Colab.\r\n",
"Got it. I [created a PR](https://github.com/huggingface/datasets/pull/5120) that fixes this issue. Turns out that the wrapping logic for the inner loop was slightly incorrect.",
"Thank you!"
] | 2022-10-14T16:12:30 | 2022-10-23T12:58:41 | 2022-10-23T12:58:41 | NONE | null | null | null | ## Describe the bug
Progress bars after transformative operations turn red and never complete to 100%.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('rotten_tomatoes', split='test').filter(lambda o: True)
```
## Expected results
The progress bar should reach 100% and be green.
## Actual results
The progress bar turns red and never completes to 100%.
## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5117/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5116/comments | https://api.github.com/repos/huggingface/datasets/issues/5116/events | https://github.com/huggingface/datasets/pull/5116 | 1,409,549,471 | PR_kwDODunzps5A09sk | 5,116 | Use yaml for issue templates + revamp | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-14T15:53:13 | 2022-10-19T13:05:49 | 2022-10-19T13:03:22 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5116",
"html_url": "https://github.com/huggingface/datasets/pull/5116",
"diff_url": "https://github.com/huggingface/datasets/pull/5116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5116.patch",
"merged_at": "2022-10-19T13:03:22"
} | Use YAML instead of markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers.
PS: also removes the "add_dataset" PR template, as we no longer accept such PRs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5116/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5115/comments | https://api.github.com/repos/huggingface/datasets/issues/5115/events | https://github.com/huggingface/datasets/pull/5115 | 1,409,250,020 | PR_kwDODunzps5Az9Pm | 5,115 | Fix iter_batches | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I also ran the code in https://github.com/huggingface/datasets/issues/5111 and it works fine now :)",
"This is ready for review :)"
] | 2022-10-14T12:06:14 | 2022-10-14T15:02:15 | 2022-10-14T14:59:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5115",
"html_url": "https://github.com/huggingface/datasets/pull/5115",
"diff_url": "https://github.com/huggingface/datasets/pull/5115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5115.patch",
"merged_at": "2022-10-14T14:59:58"
} | The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, so `iter_batches` can return batches smaller than the `batch_size` specified by the user.
As a result, batched `map` could not always use batches of the requested size; e.g., this fails because it runs on only one batch containing a single element:
```python
from datasets import Dataset, concatenate_datasets
ds = concatenate_datasets([Dataset.from_dict({"a": [i]}) for i in range(10)])
ds2 = ds.map(lambda _: {}, batched=True)
assert list(ds2) == list(ds)
```
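A rough sketch of the general re-batching idea (an assumption about the approach, not necessarily the exact implementation in this PR):
```python
import pyarrow as pa

def iter_batches(table: pa.Table, batch_size: int):
    # to_reader() may yield chunks smaller than max_chunksize, so buffer
    # rows and re-slice them until a full batch of `batch_size` is available.
    buffered = []
    buffered_rows = 0
    for chunk in table.to_reader(max_chunksize=batch_size):
        buffered.append(chunk)
        buffered_rows += chunk.num_rows
        while buffered_rows >= batch_size:
            combined = pa.Table.from_batches(buffered)
            yield combined.slice(0, batch_size)
            rest = combined.slice(batch_size)
            buffered = rest.to_batches()
            buffered_rows = rest.num_rows
    if buffered_rows:
        yield pa.Table.from_batches(buffered)
```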
This was introduced in https://github.com/huggingface/datasets/pull/5030
Close https://github.com/huggingface/datasets/issues/5111
This will require a patch release along with https://github.com/huggingface/datasets/pull/5113
TODO:
- [x] fix tests
- [x] add more tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5115/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5114/comments | https://api.github.com/repos/huggingface/datasets/issues/5114/events | https://github.com/huggingface/datasets/issues/5114 | 1,409,236,738 | I_kwDODunzps5T_z8C | 5,114 | load_from_disk with remote filesystem fails due to a wrong temporary local folder path | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi Hubert! Could you please probably create a publicly available `gs://` dataset link? I think this would be easier for others to directly start to debug.",
"What seems to work is to change the line to:\r\n```\r\nfs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)\r\n```"
] | 2022-10-14T11:54:53 | 2022-11-19T07:13:10 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
The function load_from_disk fails when using a remote filesystem because of a wrong temporary path generation in the load_from_disk method of arrow_dataset.py:
```python
if is_remote_filesystem(fs):
src_dataset_path = extract_path_from_uri(dataset_path)
dataset_path = Dataset._build_local_temp_path(src_dataset_path)
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
```
If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train`
Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice that `train` appears twice)
Instead of downloading the remote folder itself, we should download all the files inside the folder so that the resulting path is correct:
```python
fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True)
```
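Alternatively, based on the workaround reported in the comments above, keeping the source path as-is and downloading into the parent directory also seems to avoid the duplicated folder name (untested here, just a possibility):
```python
fs.download(src_dataset_path, dataset_path.parent.as_posix(), recursive=True)
```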
## Steps to reproduce the bug
```python
fs = gcsfs.GCSFileSystem(**storage_options)
dataset = load_from_disk("common_voice_processed") # loading local dataset previously saved locally, works fine
dataset.save_to_disk(output_dir, fs=fs) #works fine
dataset = load_from_disk(output_dir, fs=fs) # crashes
```
## Expected results
The dataset is loaded
## Actual results
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-2.6.1.dev0
- Platform: macOS Monterey 12.5.1
- Python version: 3.8.13
- PyArrow version: pyarrow==9.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5114/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5113/comments | https://api.github.com/repos/huggingface/datasets/issues/5113/events | https://github.com/huggingface/datasets/pull/5113 | 1,409,207,607 | PR_kwDODunzps5Az0Ei | 5,113 | Fix filter indices when batched | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think a patch release will be necessary.",
"I'm also fixing https://github.com/huggingface/datasets/issues/5111 which will lalso require a patch release"
] | 2022-10-14T11:30:03 | 2022-10-24T06:21:09 | 2022-10-14T12:11:44 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5113",
"html_url": "https://github.com/huggingface/datasets/pull/5113",
"diff_url": "https://github.com/huggingface/datasets/pull/5113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5113.patch",
"merged_at": "2022-10-14T12:11:44"
} | This PR fixes a bug introduced by:
- #5030
Fix #5112. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5113/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5113/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5112/comments | https://api.github.com/repos/huggingface/datasets/issues/5112/events | https://github.com/huggingface/datasets/issues/5112 | 1,409,143,409 | I_kwDODunzps5T_dJx | 5,112 | Bug with filtered indices | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The issue is here:\r\nhttps://github.com/huggingface/datasets/blob/3ad9644b9a2e4558dd1d0f1e43c67658674e6228/src/datasets/arrow_dataset.py#L2964",
"@PartiallyTyped, @Muennighoff: the issue is fixed.\r\n\r\nWe are planning to make a patch release today.",
"Thanks a lot for the swift response! For a brief moment yesterday I thought I had gone insane 🤣On 14 Oct 2022, at 15:44, Albert Villanova del Moral ***@***.***> wrote:\n@PartiallyTyped, @Muennighoff: the issue is fixed.\nWe are planning to make a patch release today.\n\n—Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you were mentioned.Message ID: ***@***.***>"
] | 2022-10-14T10:35:47 | 2022-10-14T13:55:03 | 2022-10-14T12:11:45 | MEMBER | null | null | null | ## Describe the bug
As reported by @PartiallyTyped (and by @Muennighoff):
- https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524
There is an issue with the indices of a filtered dataset.
## Steps to reproduce the bug
```python
ds = Dataset.from_dict({"num": [0, 1, 2, 3]})
ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2)
assert all(item["num"] % 2 == 0 for item in ds)
```
## Expected results
The indices of the filtered dataset should correspond only to the examples that satisfy the filter condition (in the snippet above, even values of "num").
## Actual results
Indices of items that do not satisfy the filter condition are included in the filtered dataset indices.
## Preliminary investigation
It seems to be a bug introduced by:
- #5030
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5112/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5112/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5111/comments | https://api.github.com/repos/huggingface/datasets/issues/5111/events | https://github.com/huggingface/datasets/issues/5111 | 1,408,143,170 | I_kwDODunzps5T7o9C | 5,111 | map and filter not working properly in multiprocessing with the new release 2.6.0 | {
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ",
"Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\nDataset({\r\n features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],\r\n num_rows: 10\r\n})\r\n```\r\nCC: @huggingface/datasets can anybody reproduce this?",
"This is the minimum reproducible example. I ran this on the premium instances of colab.\r\n\r\n```\r\n# !pip install datasets\r\nimport datasets\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"copenlu/answerable_tydiqa\").filter(\"english\".__eq__, input_columns=\"language\")\r\nassert all(map(\"english\".__eq__, ds[\"train\"][\"language\"]))\r\n```\r\n\r\nIn my case, the number of samples is correct, however, the samples selected when indexing are wrong.\r\n\r\n```python\r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 990\r\n })\r\n train: Dataset({\r\n features: ['question_text', 'document_title', 'language', 'annotations', 'document_plaintext', 'document_url'],\r\n num_rows: 7389\r\n })\r\n})\r\n```\r\n\r\nThe number of rows is indeed correct, and i have checked it with a version that works.",
"I can reproduce the issue on my mac too \r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: macOS-12.2.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.4.3\r\n```\r\nBut not on Colab with python 3.7, maybe related to python version? (didn't manage to install python 3.9)\r\n```\r\n- `datasets` version: 2.6.0\r\n- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.14\r\n- PyArrow version: 9.0.0\r\n- Pandas version: 1.3.5\r\n```",
"I have the same issue, here's a simple notebook to reproduce: https://colab.research.google.com/drive/1Lvo9fg5DSpGUUgXW5JAutZ0bFsR-WV--?usp=sharing\r\n\r\n\r\n\r\n",
"I think there are 2 different issues here:\r\n- the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n- the issue reported by @PartiallyTyped is related just to \"filter\" (without multiprocessing) and I can reproduce it.",
"Could you create another issue for the @PartiallyTyped one please ?\r\n\r\nRegarding the OP issue, I also tried on colab or locally on py3.7 or py3.10 but didn't reproduce",
"I have created another issue for the one reported by @PartiallyTyped: \r\n- #5112 ",
"I managed to reproduce your issue @loubnabnl on colab by upgrading pyarrow to 9.0.0 instead of 6.0.1",
"I managed to have a _super_ minimal reproducible example:\r\n```python\r\n\r\nfrom datasets import Dataset, concatenate_datasets\r\n\r\nds = concatenate_datasets([Dataset.from_dict({\"a\": [i]}) for i in range(10)])\r\nds2 = ds.map(lambda _: {}, batched=True)\r\nassert list(ds2) == list(ds)\r\n```\r\n(filter uses a batched `map` under the hood)",
"> the one reported by @loubnabnl is related to multiprocessing in map and then filter; we should reproduce it first: I have tried with Python version 3.9.7 and I can't reproduce it either; maybe it is related to the version of PyArrow? To be checked.\r\n\r\nSo finally it was related to PyArrow version! :+1: ",
"Doing a patch release asap :)",
"Did the patch release yesterday, lmk if you still have issues",
"It works now, thanks!\r\n"
] | 2022-10-13T17:00:55 | 2022-10-17T08:26:59 | 2022-10-14T14:59:59 | NONE | null | null | null | ## Describe the bug
When mapping is used on a dataset with more than one process, there is weird behavior when trying to use `filter`: it's as if only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2.
In the code below the data is filtered differently when we increase the `num_proc` used in `map`, although the datasets before and after mapping have identical elements.
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
def preprocess(example):
return example
ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)])
ds1 = ds.map(preprocess, num_proc=2)
ds2 = ds.map(preprocess)
# the datasets elements are the same
for i in range(len(ds1)):
assert ds1[i]==ds2[i]
print(f'Target column before filtering {ds1["autogenerated"]}')
print(f'Target column before filtering {ds2["autogenerated"]}')
print(f"datasets version {datasets.__version__}")
ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"])
ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"])
# all elements in the Target column are False, so they should all be kept; but for ds1 (mapped with num_proc=2) only the first 5 = num_samples/num_proc are kept
print(ds_filtered_1)
print(ds_filtered_2)
```
```
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Target column before filtering [False, False, False, False, False, False, False, False, False, False]
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 5
})
Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 10
})
```
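A workaround sketch consistent with the description above (the variable names reuse the snippet; the proper fix is the patch release mentioned in the comments): pass the same `num_proc` to `filter` as was used in `map`.
```python
# Workaround sketch (assumption drawn from the report above, not an official fix):
# filtering with the same num_proc as the earlier map keeps all 10 rows.
ds_filtered_1_fixed = ds1.filter(lambda x: not x["autogenerated"], num_proc=2)
print(ds_filtered_1_fixed)  # expected: num_rows: 10
```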
## Expected results
Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this didn't happen.
## Actual results
Filtering doesn't work properly when we increase `num_proc` in `map` but do not pass it to `filter`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5111/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5109/comments | https://api.github.com/repos/huggingface/datasets/issues/5109/events | https://github.com/huggingface/datasets/issues/5109 | 1,407,434,706 | I_kwDODunzps5T47_S | 5,109 | Map caching not working for some class methods | {
"login": "Mouhanedg56",
"id": 23029765,
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mouhanedg56",
"html_url": "https://github.com/Mouhanedg56",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"The hash used for caching is computed by pickling recursively the function passed to `map`. Maybe some objects don't have the same hash across sessions. In particular you can check the hash of your model using\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nobj = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nprint(Hasher.hash(obj))\r\n```\r\n\r\nYou can find mode info here: https://huggingface.co./docs/datasets/about_cache\r\n\r\nYou can also provide your own unique hash in `map` if you want, with the `new_fingerprint` argument",
"Indeed, the hash is changing. The `dumps` function serialize the model object in different ways because the model object is not deterministic\r\n```python\r\nfrom datasets.utils.py_utils import dumps\r\nobj1 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\nobj2 = AutoModel.from_config(config=config, add_pooling_layer=False)\r\n\r\ndumps(bert) == dumps(bert2). # False\r\n```\r\n\r\n> You can find mode info here: https://huggingface.co./docs/datasets/about_cache\r\n> \r\n> You can also provide your own unique hash in map if you want, with the new_fingerprint argument\r\n\r\n\r\nThanks, the doc is so helpful. Indeed, we can fix the hash and get cache hit using `new_fingerprint`. Closing the issue."
] | 2022-10-13T09:12:58 | 2022-10-17T10:38:45 | 2022-10-17T10:38:45 | CONTRIBUTOR | null | null | null | ## Describe the bug
The cache loading is not working as expected for some class methods with a model stored in an attribute.
The new fingerprint for `_map_single` is not the same at each run. The hasher generates a different hash for the class method.
This comes from the `dumps` function in `datasets.utils.py_utils`, which generates a different dump at each run.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from transformers import AutoConfig, AutoModel, AutoTokenizer
dataset = load_dataset("ethos", "binary")
BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2"
class Object:
def __init__(self):
config = AutoConfig.from_pretrained(BASE_MODELNAME)
self.bert = AutoModel.from_config(config=config, add_pooling_layer=False)
self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME)
def tokenize(self, examples):
tokenized_texts = self.tok(
examples["text"],
padding="max_length",
truncation=True,
max_length=256,
)
return tokenized_texts
instance = Object()
result = dict()
for phase in ["train"]:
result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2)
```
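A sketch of the workaround later confirmed in the comments: pass a deterministic `new_fingerprint` to `map` so the cache key no longer depends on hashing the model attribute. The fingerprint string is illustrative, and `num_proc` is omitted here for simplicity.
```python
# Sketch: a stable, user-chosen fingerprint sidesteps hashing the non-deterministic
# model attribute. Any string that is unique per transform works.
for phase in ["train"]:
    result[phase] = dataset[phase].map(
        instance.tokenize,
        batched=True,
        load_from_cache_file=True,
        new_fingerprint=f"ethos-tokenize-{phase}",
    )
```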
## Expected results
Load the cache instead of recomputing the result.
## Actual results
Result recomputed from scratch at each run.
The cache works fine when deleting the `bert` attribute.
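A quick way to confirm the non-determinism, per the maintainer's suggestion in the comments (a sketch; `Object` is the class defined in the snippet above):
```python
from datasets.fingerprint import Hasher

# Two identically-constructed objects hash differently because the model weights
# are randomly initialized, so the cache key computed by map changes every session.
print(Hasher.hash(Object()) == Hasher.hash(Object()))  # False on the affected setup
```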
## Environment info
- `datasets` version: 2.5.3.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5109/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5108/comments | https://api.github.com/repos/huggingface/datasets/issues/5108/events | https://github.com/huggingface/datasets/pull/5108 | 1,407,044,107 | PR_kwDODunzps5AskeK | 5,108 | Fix a typo in arrow_dataset.py | {
"login": "yangky11",
"id": 5431913,
"node_id": "MDQ6VXNlcjU0MzE5MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5431913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangky11",
"html_url": "https://github.com/yangky11",
"followers_url": "https://api.github.com/users/yangky11/followers",
"following_url": "https://api.github.com/users/yangky11/following{/other_user}",
"gists_url": "https://api.github.com/users/yangky11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangky11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangky11/subscriptions",
"organizations_url": "https://api.github.com/users/yangky11/orgs",
"repos_url": "https://api.github.com/users/yangky11/repos",
"events_url": "https://api.github.com/users/yangky11/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangky11/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-10-13T02:33:55 | 2022-10-14T09:47:28 | 2022-10-14T09:47:27 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5108",
"html_url": "https://github.com/huggingface/datasets/pull/5108",
"diff_url": "https://github.com/huggingface/datasets/pull/5108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5108.patch",
"merged_at": "2022-10-14T09:47:27"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5108/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5107/comments | https://api.github.com/repos/huggingface/datasets/issues/5107/events | https://github.com/huggingface/datasets/pull/5107 | 1,406,736,710 | PR_kwDODunzps5ArjCZ | 5,107 | Multiprocessed dataset builder | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I would also like to add a test, but am not sure whether it should go into `test_builder` (more natural imo) or `test_load` (which already contains a lot of the things I have to import to run my current testing setup). For reference, what I run to test that it works looks like:\r\n\r\n```\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\n\r\nimport datasets\r\nfrom datasets.builder import DatasetBuilder\r\nfrom datasets.features import Features, Value\r\n\r\nDATASET_LOADING_SCRIPT_NAME = \"__dummy_dataset1__\"\r\n\r\nDATASET_LOADING_SCRIPT_CODE = \"\"\"\r\nimport os\r\n\r\nimport datasets\r\nfrom datasets import DatasetInfo, Features, Split, SplitGenerator, Value\r\n\r\n\r\nclass __DummyDataset1__(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self) -> DatasetInfo:\r\n return DatasetInfo(features=Features({\"text\": Value(\"string\")}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [\r\n SplitGenerator(Split.TRAIN, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"train1.txt\"), os.path.join(dl_manager.manual_dir, \"train2.txt\")]}),\r\n SplitGenerator(Split.TEST, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"test.txt\")]}),\r\n ]\r\n\r\n def _generate_examples(self, filepaths, **kwargs):\r\n idx = 0\r\n for filepath in filepaths:\r\n with open(filepath, \"r\", encoding=\"utf-8\") as f:\r\n for line in f:\r\n yield idx, {\"text\": line.strip()}\r\n idx += 1\r\n\"\"\"\r\n\r\n\r\ndef dataset_loading_script_dir(tmp_path):\r\n script_name = DATASET_LOADING_SCRIPT_NAME\r\n script_dir = tmp_path / script_name\r\n script_dir.mkdir()\r\n script_path = script_dir / f\"{script_name}.py\"\r\n with open(script_path, \"w\") as f:\r\n f.write(DATASET_LOADING_SCRIPT_CODE)\r\n return str(script_dir)\r\n\r\n\r\ndef data_dir(tmp_path):\r\n data_dir = tmp_path / \"data_dir\"\r\n data_dir.mkdir()\r\n with open(data_dir / \"train1.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"train2.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"test.txt\", \"w\") as f:\r\n f.write(\"bar\\n\" * 10)\r\n return str(data_dir)\r\n\r\n\r\ndef load_dataset_builder_multiprocessed(tmp_path):\r\n builder = datasets.load_dataset_builder(\r\n os.path.join(dataset_loading_script_dir(tmp_path), DATASET_LOADING_SCRIPT_NAME + \".py\"),\r\n data_dir=data_dir(tmp_path),\r\n )\r\n assert isinstance(builder, DatasetBuilder)\r\n assert builder.name == DATASET_LOADING_SCRIPT_NAME\r\n assert builder.info.features == Features({\"text\": Value(\"string\")})\r\n builder.download_and_prepare(tmp_path / \"prepare_target\", max_shard_size=500, num_proc=2)\r\n\r\nif __name__ == \"__main__\":\r\n tmp_path = \"tmp\"\r\n if os.path.exists(tmp_path):\r\n raise FileExistsError(f\"path {tmp_path} already exists\")\r\n os.makedirs(tmp_path)\r\n try:\r\n load_dataset_builder_multiprocessed(Path(tmp_path))\r\n finally:\r\n # pass\r\n shutil.rmtree(tmp_path)\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5107). All of your documentation changes will be reflected on that endpoint.",
"Nice ! I think the test can go in `test_builder.py` :)",
"I've added sharded arrow dataset loading. Two WIP items in the PR:\r\n- ~~Order is not conserved (it seems like the sharded files are read in the wrong order)~~\r\n- the tqdm for preparing the splits is wrong (it compares against the size of the whole split rather than against the size of the multiprocessing shard, but I am not sure how to access the latter)\r\n\r\nAlso `naming.filenames_for_dataset_split` is not very elegant imo.\r\n\r\n@lvwerra if you don't care about order, as I do, it's functional for now but I'd still quite like to get to the bottom of this.",
"Found the ordering bug ! (`glob.glob` returning stuff in arbitrary order)",
"I fixed the tqdm to be less misleading, but it can't tell where to stop. I am a bit hesitant to add a top-level tqdm (on the shard iterator) since for most intents it will do 0 -> N shards straight, but I am not sure what is the best way to present that info here.",
"I'm continuing the PR :)",
"Did a few changes:\r\n- make shards naming consistent:\r\n - use `{builder_name}-{split_name}.{file_format}` when there's only 1 shard\r\n - otherwise use `{builder_name}-{split_name}-{shard_idx:05d}-of-{num_shards:05d}.{file_format}`\r\n- update the reader to support reading several shards\r\n - added a new `shard_lengths` field in `SplitInfo` (FYI it is saved in `dataset_info.json` next to the shards as usual)\r\n - it's None when there's only 1 shard\r\n - otherwise it's a list of integers that correspond to the number of rows per shard\r\n - implemented partial reading to only memory map the required shards\r\n - e.g. when someone asks for a partial split like `train[:10%]`\r\n- align the sharding for beam datasets\r\n - no more combining into 1 big arrow file\r\n- added a tqdm bar\r\n - only one single bar, handled by the main process\r\n - gathers progress updates from other processes using `iflatmap_unordered`\r\n - shows the number of examples (even for datasets prepared by generating arrow tables)\r\n- disabled multiprocessing by default - users must pass `num_proc` explicitly\r\n- tests\r\n- docs",
"Alright this is ready for review - sorry it ended up so big ^^'\r\n\r\nIf I can do anything to make it easier for your to review this PR @mariosasko let me know",
"Multiprocessing is disabled by default but we may show a warning to encourage users to pass `num_proc` if the dataset is split in many files. Let me know what you think",
"Hey, is this error seems to you guys natural? \r\n\r\nThe package built from `0d4e3907` commit tag, and here is the version displayed from the import ... \r\n```bash\r\n>>> datasets.__version__\r\n'2.6.1.dev0'\r\n>>> \r\n```\r\n\r\n```bash\r\n>>> data = load_dataset('dataset_loaders/rfw2latentplay', num_proc=14)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1523, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 1292, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 303, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 456, in _create_builder_config\r\n builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'num_proc'\r\n```\r\n\r\nLet me know if I can help fixing this ... \r\n",
"> Do we have some benchmarks to see the speed-up?\r\n\r\nOn my machine running `load_dataset(\"oscar-corpus/OSCAR-2201\", \"br\")` (which is split in shards) I go from 2-3k examples per sec to 4-5k examples per sec with num_proc=2 😉",
"> Hey, is this error seems to you guys natural?\r\n>\r\n> The package built from 0d4e3907 commit tag, and here is the version displayed from the import ...\r\n\r\nI don't know where you got the `0d4e3907` commit tag from, it doesn't seem to be in this PR. You should try installing from this PR, or wait for it to be merged on `main`",
"## Splits vs Shards\r\n\r\nMaybe it's a good idea to add some documentation on the `sharding` that can be achieved by passing `list` based arguments to the `SplitGenerator`s `gen_kwargs` ... \r\n\r\nI had to read the whole dataset generation source code to find this out ... \r\n\r\n\r\n",
"> Maybe it's a good idea to add some documentation on the sharding that can be achieved by passing list based arguments to the SplitGenerators gen_kwargs ...\r\n\r\nThis is part of this PR :) you can check the changes in docs/source/dataset_script.mdx",
"I took your comments into account @mariosasko thanks !\r\nLet me know if it's good for you now ;)",
"The doc CI should be fixed by now hopefully, merging !"
] | 2022-10-12T19:59:17 | 2022-12-01T15:37:09 | 2022-11-09T17:11:43 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5107",
"html_url": "https://github.com/huggingface/datasets/pull/5107",
"diff_url": "https://github.com/huggingface/datasets/pull/5107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5107.patch",
"merged_at": "2022-11-09T17:11:43"
} | This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other side, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5107/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5106/comments | https://api.github.com/repos/huggingface/datasets/issues/5106/events | https://github.com/huggingface/datasets/pull/5106 | 1,406,635,758 | PR_kwDODunzps5ArM6G | 5,106 | Fix task template reload from dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the DatasetInfo attributes like features, tasks etc. and they all support dict conversion properly now\r\n\r\n> And a question: but this information about the tasks is no longer being saved as YAML tags in the dataset card; won't be a problem with current datasets using task templates (with this information in their metadata JSON) once we replace the JSON by the YAML tags (which do not have this information about the task templates)?\r\n\r\nIn the long run we'll use the train_eval_index YAML tags instead, but I agree when removing the JSON files we should try to not break existing code that may rely on this"
] | 2022-10-12T18:33:49 | 2022-10-13T09:59:07 | 2022-10-13T09:56:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5106",
"html_url": "https://github.com/huggingface/datasets/pull/5106",
"diff_url": "https://github.com/huggingface/datasets/pull/5106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5106.patch",
"merged_at": "2022-10-13T09:56:51"
} | Since #4926 the JSON dumps are simplified and it made task template dicts empty by default.
I fixed this by always including the task name which is needed to reload a task from a dict | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5106/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5105/comments | https://api.github.com/repos/huggingface/datasets/issues/5105/events | https://github.com/huggingface/datasets/issues/5105 | 1,406,078,357 | I_kwDODunzps5Tzw2V | 5,105 | Specifying an existing folder in download_and_prepare deletes everything in it | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"cc @lhoestq ",
"Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...",
"`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar",
"Thank you both for your feedback!\r\n\r\n@albertvillanova I think I might have have the wrong mental model of what the function was meant to do. I thought it would be an API similar to the pandas `to_XX` write methods (Like the one @lhoestq mentions) so I just assumed it would download the dataframe to whichever folder I specififed (`\"./\"` in my case) so I could load it into a dask dataframe. I absolutely did not expect it to delete everything in my local directory, including the script where I called it from :smile: \r\n\r\nI think Quentin's proposed solution sounds like a reasonable feature!",
"actually there's already a `download_mode` parameter that defaults to `REUSE_DATASET_IF_EXISTS` - so I guess it's just a matter of not deleting files unrelated to the dataset, and to overwrite existing dataset files if the download mode is `REUSE_CACHE_IF_EXISTS` or `FORCE_REDOWNLOAD`"
] | 2022-10-12T11:53:33 | 2022-10-20T11:53:59 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder already exists, everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current directory, but also leads to **another bug**, whose traceback is the following:
```
Traceback (most recent call last):
Input In [11], in <cell line: 1>()
----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet")
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback)
122 if type is None:
123 try:
--> 124 next(self.gen)
125 except StopIteration:
126 return False
File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname)
File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror)
720 os.rmdir(path)
721 except OSError:
--> 722 onerror(os.rmdir, path, sys.exc_info())
723 else:
724 try:
725 # symlinks to directories are forbidden, see bug #1669
File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror)
718 _rmtree_safe_fd(fd, path, onerror)
719 try:
--> 720 os.rmdir(path)
721 except OSError:
722 onerror(os.rmdir, path, sys.exc_info())
OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.'
```
## Steps to reproduce the bug
```python
from datasets import load_dataset_builder

rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes")
rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet")
```
If `test_folder` contains any files, they will all be deleted.
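Until this is fixed, a defensive guard on the caller's side can avoid losing data (a user-side sketch, not part of the library):
```python
import os

output_dir = "./test_folder"
# Refuse to run against a non-empty directory so nothing gets wiped.
if os.path.isdir(output_dir) and os.listdir(output_dir):
    raise RuntimeError(f"{output_dir} is not empty; refusing to overwrite it")
rotten_tomatoes_builder.download_and_prepare(
    output_dir=output_dir, max_shard_size="200MB", file_format="parquet"
)
```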
## Expected results
Either a warning that all files will be deleted, or, preferably, that existing files not be deleted at all.
## Actual results
N/A
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5105/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5104/comments | https://api.github.com/repos/huggingface/datasets/issues/5104/events | https://github.com/huggingface/datasets/pull/5104 | 1,405,973,102 | PR_kwDODunzps5Ao9Mq | 5,104 | Fix loading how to guide (#5102) | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-12T10:34:42 | 2022-10-12T11:34:07 | 2022-10-12T11:31:55 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5104",
"html_url": "https://github.com/huggingface/datasets/pull/5104",
"diff_url": "https://github.com/huggingface/datasets/pull/5104.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5104.patch",
"merged_at": "2022-10-12T11:31:55"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5104/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5103/comments | https://api.github.com/repos/huggingface/datasets/issues/5103/events | https://github.com/huggingface/datasets/pull/5103 | 1,405,956,311 | PR_kwDODunzps5Ao5gI | 5,103 | url encode hub url (#5099) | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-12T10:22:12 | 2022-10-12T15:27:24 | 2022-10-12T15:24:47 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5103",
"html_url": "https://github.com/huggingface/datasets/pull/5103",
"diff_url": "https://github.com/huggingface/datasets/pull/5103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5103.patch",
"merged_at": "2022-10-12T15:24:47"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5103/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5102/comments | https://api.github.com/repos/huggingface/datasets/issues/5102/events | https://github.com/huggingface/datasets/issues/5102 | 1,404,746,554 | I_kwDODunzps5Turs6 | 5,102 | Error in creating a dataset from a Python generator | {
"login": "yangxuhui",
"id": 9004682,
"node_id": "MDQ6VXNlcjkwMDQ2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9004682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangxuhui",
"html_url": "https://github.com/yangxuhui",
"followers_url": "https://api.github.com/users/yangxuhui/followers",
"following_url": "https://api.github.com/users/yangxuhui/following{/other_user}",
"gists_url": "https://api.github.com/users/yangxuhui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangxuhui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangxuhui/subscriptions",
"organizations_url": "https://api.github.com/users/yangxuhui/orgs",
"repos_url": "https://api.github.com/users/yangxuhui/repos",
"events_url": "https://api.github.com/users/yangxuhui/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangxuhui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.",
"Can I work on this one?"
] | 2022-10-11T14:28:58 | 2022-10-12T11:31:56 | 2022-10-12T11:31:56 | NONE | null | null | null | ## Describe the bug
In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co./docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in.
```Python
>>> from datasets import Dataset
>>> def my_gen():
... for i in range(1, 4):
... yield {"a": i}
>>> dataset = Dataset.from_generator(my_dict)
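>>> # Note: `my_dict` is undefined at this point; as confirmed in the issue comments,
>>> # the last line should read: dataset = Dataset.from_generator(my_gen)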
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5102/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5101/comments | https://api.github.com/repos/huggingface/datasets/issues/5101/events | https://github.com/huggingface/datasets/pull/5101 | 1,404,513,085 | PR_kwDODunzps5AkHJc | 5,101 | Free the "hf" filesystem protocol for `hffs` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-11T11:57:21 | 2022-10-12T15:32:59 | 2022-10-12T15:30:38 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5101",
"html_url": "https://github.com/huggingface/datasets/pull/5101",
"diff_url": "https://github.com/huggingface/datasets/pull/5101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5101.patch",
"merged_at": "2022-10-12T15:30:38"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5101/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5101/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5100/comments | https://api.github.com/repos/huggingface/datasets/issues/5100/events | https://github.com/huggingface/datasets/issues/5100 | 1,404,458,586 | I_kwDODunzps5TtlZa | 5,100 | datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method | {
"login": "jagochi",
"id": 115545475,
"node_id": "U_kgDOBuMVgw",
"avatar_url": "https://avatars.githubusercontent.com/u/115545475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jagochi",
"html_url": "https://github.com/jagochi",
"followers_url": "https://api.github.com/users/jagochi/followers",
"following_url": "https://api.github.com/users/jagochi/following{/other_user}",
"gists_url": "https://api.github.com/users/jagochi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jagochi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jagochi/subscriptions",
"organizations_url": "https://api.github.com/users/jagochi/orgs",
"repos_url": "https://api.github.com/users/jagochi/repos",
"events_url": "https://api.github.com/users/jagochi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jagochi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-10-11T11:16:31 | 2022-10-11T13:48:26 | 2022-10-11T13:48:26 | NONE | null | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5100/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5099/comments | https://api.github.com/repos/huggingface/datasets/issues/5099/events | https://github.com/huggingface/datasets/issues/5099 | 1,404,370,191 | I_kwDODunzps5TtP0P | 5,099 | datasets doesn't support # in data paths | {
"login": "loubnabnl",
"id": 44069155,
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loubnabnl",
"html_url": "https://github.com/loubnabnl",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hub_url(\"loubnabnl/bigcode_csharp\", \"data/c#/data_0003.jsonl\")\r\nprint(url)\r\n# Currently returns\r\n# https://huggingface.co./datasets/loubnabnl/bigcode_csharp/resolve/main/data/c#/data_0003.jsonl\r\n# while it should be \r\n# https://huggingface.co./datasets/loubnabnl/bigcode_csharp/resolve/main/data/c%23/data_0003.jsonl\r\n```",
"I'll work on this :)",
"@loubnabnl The dataset you linked in the description of the bug does not work and returns a 404. Where can I find the dataset to reproduce the bug?",
"I think you can create a dataset repository on the Hub with a dummy file containing a `#`",
"Ah sorry it was private I just made it public, I can also help with this if needed",
"@lhoestq Should I url encode also repo_id and revision parameters? I'm not sure what are the valid characters there.\r\n\r\nPersonally, I would be cautious and only url encode the path parameter.",
"These are possible solutions (assuming `from urllib.parse import quote`):\r\n\r\n1) url encode only the path parameter:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=repo_id, path=quote(path), revision=revision)\r\n```\r\n2) url encode all parameters:\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HUB_DATASETS_URL.format(repo_id=quote(repo_id), path=quote(path), revision=quote(revision))\r\n```\r\n3) url encode the whole url:\r\n```\r\n# src/datasets/config.py\r\nHUB_DATASETS_PATH = \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nHUB_DATASETS_URL = HF_ENDPOINT + HUB_DATASETS_PATH\r\n```\r\n```\r\n# src/datasets/utils/file_utils.py\r\ndef hf_hub_url(repo_id: str, path: str, revision: Optional[str] = None) -> str:\r\n revision = revision or config.HUB_DEFAULT_VERSION\r\n return config.HF_ENDPOINT + quote(config.HUB_DATASETS_PATH.format(repo_id=repo_id, path=path, revision=revision))\r\n```",
"repo_id can only contain alphanumeric characters and _- so it doesn't need to be encoded.\r\n\r\nHowever I agree it's a good idea to also apply `quote` to the revision as well as in 2. !",
"Should be fixed by https://github.com/huggingface/datasets/issues/5099 - we'll do a release later today"
] | 2022-10-11T10:05:32 | 2022-10-13T13:14:20 | 2022-10-13T13:14:20 | NONE | null | null | null | ## Describe the bug
Dataset files with a `#` symbol in their paths aren't read correctly.
## Steps to reproduce the bug
The data in the folder `c#` of this [dataset](https://huggingface.co./datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly.
```python
from datasets import load_dataset

ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"])
```
```
FileNotFoundError: Couldn't find file at https://huggingface.co./datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl
```
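An illustrative sketch of the likely root cause (my assumption, consistent with the error above): an unencoded `#` starts a URL fragment, so the path component has to be percent-encoded when building the resolve URL.
```python
from urllib.parse import quote

path = "data/c#/data_0003.jsonl"
# '#' must become '%23' for the Hub resolve endpoint to receive the full path.
print(quote(path))  # data/c%23/data_0003.jsonl
```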
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5099/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5098/comments | https://api.github.com/repos/huggingface/datasets/issues/5098/events | https://github.com/huggingface/datasets/issues/5098 | 1,404,058,518 | I_kwDODunzps5TsDuW | 5,098 | Class label error when loading symbolic links using imagefolder | {
"login": "horizon86",
"id": 49552732,
"node_id": "MDQ6VXNlcjQ5NTUyNzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/49552732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/horizon86",
"html_url": "https://github.com/horizon86",
"followers_url": "https://api.github.com/users/horizon86/followers",
"following_url": "https://api.github.com/users/horizon86/following{/other_user}",
"gists_url": "https://api.github.com/users/horizon86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/horizon86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/horizon86/subscriptions",
"organizations_url": "https://api.github.com/users/horizon86/orgs",
"repos_url": "https://api.github.com/users/horizon86/repos",
"events_url": "https://api.github.com/users/horizon86/events{/privacy}",
"received_events_url": "https://api.github.com/users/horizon86/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.",
"> Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `Path(os.path.abspath(filepath))`.\r\n\r\nThanks for your reply!"
] | 2022-10-11T06:10:58 | 2022-11-14T14:40:20 | 2022-11-14T14:40:20 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** is used as the class name instead of that of the symbolic link itself. Could an option be added to decide whether symbolic links are followed?
This is inconsistent with the `torchvision.datasets.ImageFolder` behavior.
For example:
![image](https://user-images.githubusercontent.com/49552732/195008591-3cce644e-aabe-4f39-90b9-832861cadb3d.png)
![image](https://user-images.githubusercontent.com/49552732/195008841-0b0c2289-eb7f-411a-977b-37426f23a277.png)
It uses `others` (in the green circle) as the class label instead of `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label (see the sketch below).
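As suggested in the comments, the difference comes down to how the path is normalized; a minimal sketch under a hypothetical layout where `data/abnormal/img.png` is a symlink to `/real/others/img.png`:
```python
import os
from pathlib import Path

p = "data/abnormal/img.png"  # hypothetical symlink to /real/others/img.png
print(Path(p).resolve().parent.name)         # 'others'   -- resolve() follows the symlink
print(Path(os.path.abspath(p)).parent.name)  # 'abnormal' -- abspath() keeps the link's own folder
```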
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5098/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5097/comments | https://api.github.com/repos/huggingface/datasets/issues/5097/events | https://github.com/huggingface/datasets/issues/5097 | 1,403,679,353 | I_kwDODunzps5TqnJ5 | 5,097 | Fatal error with pyarrow/libarrow.so | {
"login": "catalys1",
"id": 11340846,
"node_id": "MDQ6VXNlcjExMzQwODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11340846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catalys1",
"html_url": "https://github.com/catalys1",
"followers_url": "https://api.github.com/users/catalys1/followers",
"following_url": "https://api.github.com/users/catalys1/following{/other_user}",
"gists_url": "https://api.github.com/users/catalys1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/catalys1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catalys1/subscriptions",
"organizations_url": "https://api.github.com/users/catalys1/orgs",
"repos_url": "https://api.github.com/users/catalys1/repos",
"events_url": "https://api.github.com/users/catalys1/events{/privacy}",
"received_events_url": "https://api.github.com/users/catalys1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-17501)\r\n\r\nThe bug in their dependency is still unresolved:\r\n- https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nApparently, the `aws-sdk-cpp` PyArrow dependency needs to be pinned at version `1.8.186` if using conda. Have you updated it after installing PyArrow?\r\n```shell\r\nconda list aws-sdk-cpp\r\n```\r\n\r\nMaybe you should try to downgrade it to that version:\r\n```shell\r\nconda install -c conda-forge aws-sdk-cpp=1.8.186\r\n```"
] | 2022-10-10T20:29:04 | 2022-10-11T06:56:01 | 2022-10-11T06:56:00 | NONE | null | null | null | ## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reproduce the problem:
```bash
python -c "import datasets"
```
## Expected results
Program should run to completion without an error.
## Actual results
```bash
Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
################################################################################
Stack trace:
################################################################################
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a]
/lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c]
/lib64/libc.so.6(on_exit+0) [0x150e15eadc40]
/u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18]
/u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b]
/u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90]
/u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6]
/u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4]
/u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd]
/u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9]
/lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493]
/u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4]
Aborted (core dumped)
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5097/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5096/comments | https://api.github.com/repos/huggingface/datasets/issues/5096/events | https://github.com/huggingface/datasets/issues/5096 | 1,403,379,816 | I_kwDODunzps5TpeBo | 5,096 | Transfer some canonical datasets under an organization namespace | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```",
"Cool ! 🚀 "
] | 2022-10-10T15:44:31 | 2023-06-07T07:51:54 | null | MEMBER | null | null | null | As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test it using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co./datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co./dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
- It already exists "facebook/multilingual_librispeech"
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5096/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5095/comments | https://api.github.com/repos/huggingface/datasets/issues/5095/events | https://github.com/huggingface/datasets/pull/5095 | 1,403,221,408 | PR_kwDODunzps5Afzsq | 5,095 | Fix tutorial (#5093) | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oops I merged without linking to the hacktoberfest issue - not sure if it counts in this case\r\n\r\nsorry about that..\r\n\r\nNext time you can just mention \"Close #XXXX\" in your issue to link it",
"It should :) (the `hacktoberfest` repo topic is all that matters)"
] | 2022-10-10T13:55:15 | 2022-10-10T17:50:52 | 2022-10-10T15:32:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5095",
"html_url": "https://github.com/huggingface/datasets/pull/5095",
"diff_url": "https://github.com/huggingface/datasets/pull/5095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5095.patch",
"merged_at": "2022-10-10T15:32:20"
} | Close #5093 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5095/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5094/comments | https://api.github.com/repos/huggingface/datasets/issues/5094/events | https://github.com/huggingface/datasets/issues/5094 | 1,403,214,950 | I_kwDODunzps5To1xm | 5,094 | Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock | {
"login": "RR-28023",
"id": 36822895,
"node_id": "MDQ6VXNlcjM2ODIyODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/36822895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RR-28023",
"html_url": "https://github.com/RR-28023",
"followers_url": "https://api.github.com/users/RR-28023/followers",
"following_url": "https://api.github.com/users/RR-28023/following{/other_user}",
"gists_url": "https://api.github.com/users/RR-28023/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RR-28023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RR-28023/subscriptions",
"organizations_url": "https://api.github.com/users/RR-28023/orgs",
"repos_url": "https://api.github.com/users/RR-28023/repos",
"events_url": "https://api.github.com/users/RR-28023/events{/privacy}",
"received_events_url": "https://api.github.com/users/RR-28023/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?",
"Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not be that demanding in terms of memory, right? (I have 32GB of RAM). ",
"Indeed it should be fine. I couldn't reproduce the error though - I ran your script on my side and it works fine. What version of pytorch are you using ?",
"Interesting.. I'm using `torch 1.12.1`",
"I also tried on colab and it works fine 🤔 \r\nMaybe something is wrong with your installation of pytorch ?",
"Oh actually I just saw that you're using python 3.9\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/4113\r\n\r\nWe'll fix that as soon as we can, in the meantime you can try to use use single process, or use an older version of python maybe ?",
"I tried with python 3.7 and the issue persists. In collab, which also uses 3.7 I don't get the issue, so yes I guess is something on mu side... will post it here if I manage to fix it",
"Hi! Which version of transformers are you using? I test the code on Colab (so python 3.7) with transformers 4.23.1, torch 1.12.1 and pyarrow 9.0.0 (also 6.x), it worked without stuck.",
"Hi, I have the same problem in use **datasets.IterableDatasetDict.map()**\r\nmy pytorch is 2.0.0a0+gitc263bd4\r\nmy python is 3.8.16(default, Jun 12 2023, 17:37:21)\r\nwork on aarch64 in 16 node, each node with 4*nVidia-A100-40G\r\nevery node have 4 process execute code as ↓\r\n\r\n```\r\nfrom datasets import load_dataset, interleave_datasets, IterableDatasetDict, concatenate_datasets\r\n```\r\n...\r\n```\r\n model_args.cache_dir = '/home/scx/.cache'\r\n for dataset_name in data_args.datasets_name:\r\n train_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='train'\r\n ).select_columns('text')\r\n )\r\n valid_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='validation'\r\n ).select_columns('text')\r\n )\r\n train_dataset = interleave_datasets(train_datasets,\r\n probabilities=data_args.datasets_probabilities, \r\n seed=training_args.seed,\r\n stopping_strategy='all_exhausted')\r\n raw_datasets = IterableDatasetDict({'train': train_dataset, 'validation': valid_dataset})\r\n```\r\n...\r\n\r\n```\r\n tokenized_datasets = None\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n if not data_args.streaming:\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n remove_columns=column_names,\r\n )\r\n else:\r\n #TODO 20230722\r\n logger.info('{}: {}'.format(__file__, 'tokenized_datasets = raw_datasets.map('))\r\n logger.info('len raw_datasets: {}'.format(len(raw_datasets.items())))\r\n logger.info('raw_datasets:{}'.format(raw_datasets.items()))\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n batch_size=1000,\r\n remove_columns=column_names\r\n )\r\n logger.info('map ok!')\r\n logger.info('show train: {}'.format(next(iter(tokenized_datasets['train']))))\r\n logger.info('ok')\r\n # ### RAW CODE ###\r\n # tokenized_datasets = raw_datasets.map(\r\n # tokenize_function,\r\n # batched=True,\r\n # batch_size=1000,\r\n # remove_columns=column_names\r\n # )\r\n #TODO 20230722\r\n logger.info(\"Finish tokenization\")\r\n```\r\nthe output of my code is\r\n```\r\n07/22/2023 21:57:09 - INFO - __main__ - /demo/run_blue_space.py: tokenized_datasets = raw_datasets.map(\r\n07/22/2023 21:57:09 - INFO - __main__ - len raw_datasets: 2\r\n07/22/2023 21:57:09 - INFO - __main__ - raw_datasets:dict_items([('train', <datasets.iterable_dataset.IterableDataset object at 0x4005ee301190>), ('validation', <datasets.iterable_dataset.IterableDataset object at 0x4005ee5427f0>)])\r\n07/22/2023 21:57:09 - INFO - __main__ - map ok!\r\n07/22/2023 22:01:07 - INFO - __main__ - show train: {'input_ids': [14608, 26797, 31891, 34260, 12227, 33207, 5, 5, 31632, 26797, 31891, 34260, 12227, 33207, 7398, 28561, 31236, 31177, 31253, 33558, 31556, 31377, 72, 20732, 32383, 32295, 14027, 31178, 53, 61, 53, 55, 31189, 31146, 31321, 31235, 53, 61, 56, 58, 31189, 31145, 72, 53, 61, 58, 54, 31189, 54, 31245, 53, 60, 31224, 31896, 31178, 28561, 29331, 20732, 31888, 32637, 4426, 2824, 72, 53, 61, 60, 55, 31189, 53, 54, 31245, 53, 31224, 31896, 31178, 28561, 29331, 26137, 20732, 4426, 2824, 73, 54, 52, 52, 52, 
31189, 61, 31245, 59, 31224, 31896, 31178, 29331, 28561, 20732, 4426, 2824, 73, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n07/22/2023 22:01:07 - INFO - __main__ - ok\r\n```\r\n\r\n",
"@bio-punk `IterableDatasetDict.map` does not support multiprocessing (only `DatasetDict.map` and `Dataset.map` do), so please open a new issue as this doesn't seem to be related to the original issue. ",
"Closing as this issue doesn't seem to be related to `datasets`."
] | 2022-10-10T13:50:56 | 2023-07-24T15:29:13 | 2023-07-24T15:29:13 | NONE | null | null | null | ## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever.
## Steps to reproduce the bug
The below code goes into deadlock when `NUMBER_OF_PROCESSES` is greater than one.
```python
NUMBER_OF_PROCESSES = 2
from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset
dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model.to("cpu")
def cls_pooling(model_output):
return model_output.last_hidden_state[:, 0]
def generate_embeddings_batched(examples):
sentences_batch = list(examples['sentence1'])
encoded_input = tokenizer(
sentences_batch, padding=True, truncation=True, return_tensors="pt"
)
encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()}
model_output = model(**encoded_input)
embeddings = cls_pooling(model_output)
examples['embeddings'] = embeddings.detach().cpu().numpy() # 64, 384
return examples
embeddings_dataset = dataset.map(
generate_embeddings_batched,
batched=True,
batch_size=10,
num_proc=NUMBER_OF_PROCESSES
)
```
While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward` but some testing shows that the same happens with other functions from `torch.nn`.
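Not a confirmed fix for this thread, but a common mitigation for fork-plus-torch deadlocks worth trying (a sketch; `torch.set_num_threads` must run before `map` forks its workers):
```python
import torch

torch.set_num_threads(1)  # children forked by map() inherit this, avoiding a locked thread pool

embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,
    num_proc=NUMBER_OF_PROCESSES,
)
```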
## Environment info
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31
- Python version: 3.9.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
Not sure if this is an HF problem, a PyTorch problem, or something I'm doing wrong.
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5094/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5093/comments | https://api.github.com/repos/huggingface/datasets/issues/5093/events | https://github.com/huggingface/datasets/issues/5093 | 1,402,939,660 | I_kwDODunzps5TnykM | 5,093 | Mismatch between tutorial and doc | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "riccardobucco",
"id": 9295277,
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riccardobucco",
"html_url": "https://github.com/riccardobucco",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://github.com/huggingface/datasets/pull/5095"
] | 2022-10-10T10:23:53 | 2022-10-10T17:51:15 | 2022-10-10T17:51:14 | CONTRIBUTOR | null | null | null | ## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co./docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co./docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work.
## Steps to reproduce the bug
MWE:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt")
```
## Expected results
return_tensors to be a valid kwarg :smiley:
## Actual results
```python
>> TypeError: map() got an unexpected keyword argument 'return_tensors'
```
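Per the fix suggested in the comments above, a sketch of the working form — `return_tensors` belongs inside the tokenizer call, not in `map`:
```python
dataset = dataset.map(
    lambda examples: tokenizer(examples["review"], return_tensors="np"),
    batched=True,
)
```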
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5093/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5092/comments | https://api.github.com/repos/huggingface/datasets/issues/5092/events | https://github.com/huggingface/datasets/pull/5092 | 1,402,713,517 | PR_kwDODunzps5AeIsS | 5,092 | Use HTML relative paths for tiles in the docs | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good catch, @lewtun. Thanks for the fix.\r\n> \r\n> Do you know if there are other absolute paths in the docs that should be fixed as well?\r\n\r\nI found a few more in [0d4796b](https://github.com/huggingface/datasets/pull/5092/commits/0d4796b747e6620d9fcc17a8f74acc5cf4bba7be).\r\n\r\nHowever, I noticed that none of the cross-references (e.g. to API classes / methods) work locally, but that is probably just a limitation of the local build",
"Thanks."
] | 2022-10-10T07:24:27 | 2022-10-11T13:25:45 | 2022-10-11T13:23:23 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5092",
"html_url": "https://github.com/huggingface/datasets/pull/5092",
"diff_url": "https://github.com/huggingface/datasets/pull/5092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5092.patch",
"merged_at": "2022-10-11T13:23:23"
} | This PR replaces the absolute paths in the landing page tiles with relative ones so that one can test navigation both locally in and in future PRs (see [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084/en/index) for an example PR where the links don't work).
I encountered this while working on the `optimum` docs and figured I'd fix it elsewhere too :)
Internal Slack thread: https://huggingface.slack.com/archives/C02GLJ5S0E9/p1665129710176619 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5092/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5091/comments | https://api.github.com/repos/huggingface/datasets/issues/5091/events | https://github.com/huggingface/datasets/pull/5091 | 1,401,112,552 | PR_kwDODunzps5AZCm9 | 5,091 | Allow connection objects in `from_sql` + small doc improvement | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-07T12:39:44 | 2022-10-09T13:19:15 | 2022-10-09T13:16:57 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5091",
"html_url": "https://github.com/huggingface/datasets/pull/5091",
"diff_url": "https://github.com/huggingface/datasets/pull/5091.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5091.patch",
"merged_at": "2022-10-09T13:16:57"
} | Allow connection objects in `from_sql` (emit a warning that they are cachable) and add a tip that explains the format of the con parameter when provided as a URI string.
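A minimal sketch of the two call forms this covers (the table and file names are made up):
```python
import sqlite3
from datasets import Dataset

# con as a URI string -- hashable, so the result can be cached
ds = Dataset.from_sql("SELECT * FROM items", "sqlite:///data.db")

# con as a connection object -- now accepted, with a warning about caching
con = sqlite3.connect("data.db")
ds = Dataset.from_sql("SELECT * FROM items", con)
```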
PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~~ Done! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5091/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5090/comments | https://api.github.com/repos/huggingface/datasets/issues/5090/events | https://github.com/huggingface/datasets/issues/5090 | 1,401,102,407 | I_kwDODunzps5TgyBH | 5,090 | Review sync issues from GitHub to Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Nice!!"
] | 2022-10-07T12:31:56 | 2022-10-08T07:07:36 | 2022-10-08T07:07:36 | MEMBER | null | null | null | ## Describe the bug
We have discovered that there are sometimes sync issues between GitHub and Hub datasets after a merge commit to the main branch.
For example:
- this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b
- was not properly synced with the Hub: https://github.com/huggingface/datasets/actions/runs/3002495269/jobs/4819769684
```
[main 9e641de] Add Papers with Code ID to scifact dataset (#4941)
Author: Albert Villanova del Moral <[email protected]>
1 file changed, 42 insertions(+), 14 deletions(-)
push failed !
GitCommandError(['git', 'push'], 1, b'remote: ---------------------------------------------------------- \nremote: Sorry, your push was rejected during YAML metadata verification: \nremote: - Error: "license" does not match any of the allowed types \nremote: ---------------------------------------------------------- \nremote: Please find the documentation at: \nremote: https://huggingface.co./docs/hub/models-cards#model-card-metadata \nremote: ---------------------------------------------------------- \nTo [https://huggingface.co./datasets/scifact.git\n](https://huggingface.co./datasets/scifact.git/n) ! [remote rejected] main -> main (pre-receive hook declined)\nerror: failed to push some refs to \'[https://huggingface.co./datasets/scifact.git\](https://huggingface.co./datasets/scifact.git/)'', b'')
```
We are reviewing sync issues in previous commits to recover them and re-pushing them to the Hub.
TODO: Review
- [x] #4941
- scifact
- [x] #4931
- scifact
- [x] #4753
- wikipedia
- [x] #4554
- wmt17, wmt19, wmt_t2t
- Fixed with "Release 2.4.0" commit: https://github.com/huggingface/datasets/commit/401d4c4f9b9594cb6527c599c0e7a72ce1a0ea49
- https://huggingface.co./datasets/wmt17/commit/5c0afa83fbbd3508ff7627c07f1b27756d1379ea
- https://huggingface.co./datasets/wmt19/commit/b8ad5bf1960208a376a0ab20bc8eac9638f7b400
- https://huggingface.co./datasets/wmt_t2t/commit/b6d67191804dd0933476fede36754a436b48d1fc
- [x] #4607
- [x] #4416
- lccc
- Fixed with "Release 2.3.0" commit: https://huggingface.co./datasets/lccc/commit/8b1f8cf425b5653a0a4357a53205aac82ce038d1
- [x] #4367
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5090/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5089/comments | https://api.github.com/repos/huggingface/datasets/issues/5089/events | https://github.com/huggingface/datasets/issues/5089 | 1,400,788,486 | I_kwDODunzps5TflYG | 5,089 | Resume failed process | {
"login": "felix-schneider",
"id": 208336,
"node_id": "MDQ6VXNlcjIwODMzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felix-schneider",
"html_url": "https://github.com/felix-schneider",
"followers_url": "https://api.github.com/users/felix-schneider/followers",
"following_url": "https://api.github.com/users/felix-schneider/following{/other_user}",
"gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions",
"organizations_url": "https://api.github.com/users/felix-schneider/orgs",
"repos_url": "https://api.github.com/users/felix-schneider/repos",
"events_url": "https://api.github.com/users/felix-schneider/events{/privacy}",
"received_events_url": "https://api.github.com/users/felix-schneider/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-10-07T08:07:03 | 2022-10-07T08:07:03 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress.
**Describe the solution you'd like**
It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart where it left off.
**Describe alternatives you've considered**
Doing the processing outside of `datasets`, by writing the dataset to JSON files and building a restart mechanism myself (a sketch of such a mechanism follows).
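A rough sketch of such a mechanism built on `datasets` itself (the `dataset` object and `process_fn` are placeholders): shard the dataset, persist each processed shard, and skip shards that finished before the crash.
```python
from pathlib import Path
from datasets import concatenate_datasets, load_from_disk

NUM_SHARDS = 16
shards = []
for i in range(NUM_SHARDS):
    out = Path(f"checkpoints/shard_{i}")
    if not out.exists():  # only redo the work lost in the crash
        dataset.shard(num_shards=NUM_SHARDS, index=i).map(process_fn).save_to_disk(str(out))
    shards.append(load_from_disk(str(out)))

result = concatenate_datasets(shards)
```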
**Additional context**
N/A
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5089/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5088/comments | https://api.github.com/repos/huggingface/datasets/issues/5088/events | https://github.com/huggingface/datasets/issues/5088 | 1,400,530,412 | I_kwDODunzps5TemXs | 5,088 | load_datasets("json", ...) don't read local .json.gz properly | {
"login": "junwang-wish",
"id": 112650299,
"node_id": "U_kgDOBrboOw",
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junwang-wish",
"html_url": "https://github.com/junwang-wish",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ",
"Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and tested copying the `json.gz` to a different directory and my mind was blown:\r\n\r\n```python\r\nfpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nproduces \r\n```python\r\nUsing custom data configuration default-0e6cf24134163e8b\r\nFound cached dataset json (/data/junwang/.cache/huggingface/datasets/json/default-0e6cf24134163e8b/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab)\r\n(1, 0)\r\n```\r\nbut then I ran below command to see if the same file in a different directory leads to same discrepancy\r\n```shell\r\ncp /data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz tmp_test.json.gz\r\n```\r\nand so I ran\r\n```python\r\nfpath = 'tmp_test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nand behold, I get \r\n```python\r\nUsing custom data configuration default-f679b32ab0008520\r\nDownloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n(1, 1)\r\n```\r\nThey match now !\r\n\r\nThis problem happens regardless of the shell I use (VScode jupyter extension or plain old Python REPL). \r\n\r\nI attached the `json.gz` here for reference: [test.json.gz](https://github.com/huggingface/datasets/files/9734843/test.json.gz)\r\n\r\n"
] | 2022-10-07T02:16:58 | 2022-10-07T14:43:16 | null | NONE | null | null | null | ## Describe the bug
I have a local `*.json.gz` file that can be read by `pandas.read_json(lines=True)`, but it cannot be read by `load_dataset("json")` (the resulting split has 0 rows).
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import Dataset, DatasetDict, Features, Value, load_dataset

fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = DatasetDict(
test=Dataset.from_pandas(
pd.read_json(fpath, lines=True)
)
)
ds_direct = load_dataset(
'json', data_files={
'test': fpath
}, features=Features(
text_input=Value(dtype="string", id=None),
text_output=Value(dtype="string", id=None)
)
)
len(ds_panda['test']), len(ds_direct['test'])
```
## Expected results
The row counts of `ds_panda['test']` and `ds_direct['test']` should match.
## Actual results
```
Using custom data configuration default-c0ef2598760968aa
Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...
Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.
(62087, 0)
```
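In the follow-up comments, re-reading the same file from a different path produced matching counts, which points at the cached Arrow build. A workaround sketch that forces a rebuild, reusing `fpath` from the snippet above (`download_mode` is a standard `load_dataset` argument):
```python
ds_direct = load_dataset(
    "json",
    data_files={"test": fpath},
    download_mode="force_redownload",  # rebuild rather than reuse a stale cached Arrow file
)
```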
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.8.13
- PyArrow version: 9.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5088/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5087/comments | https://api.github.com/repos/huggingface/datasets/issues/5087/events | https://github.com/huggingface/datasets/pull/5087 | 1,400,487,967 | PR_kwDODunzps5AW-N9 | 5,087 | Fix filter with empty indices | {
"login": "Mouhanedg56",
"id": 23029765,
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mouhanedg56",
"html_url": "https://github.com/Mouhanedg56",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-07T01:07:00 | 2022-10-07T18:43:03 | 2022-10-07T18:40:26 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5087",
"html_url": "https://github.com/huggingface/datasets/pull/5087",
"diff_url": "https://github.com/huggingface/datasets/pull/5087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5087.patch",
"merged_at": "2022-10-07T18:40:26"
} | Fix #5085 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5087/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5086/comments | https://api.github.com/repos/huggingface/datasets/issues/5086/events | https://github.com/huggingface/datasets/issues/5086 | 1,400,216,975 | I_kwDODunzps5TdZ2P | 5,086 | HTTPError: 404 Client Error: Not Found for url | {
"login": "km5ar",
"id": 54015474,
"node_id": "MDQ6VXNlcjU0MDE1NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/km5ar",
"html_url": "https://github.com/km5ar",
"followers_url": "https://api.github.com/users/km5ar/followers",
"following_url": "https://api.github.com/users/km5ar/following{/other_user}",
"gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/km5ar/subscriptions",
"organizations_url": "https://api.github.com/users/km5ar/orgs",
"repos_url": "https://api.github.com/users/km5ar/repos",
"events_url": "https://api.github.com/users/km5ar/events{/privacy}",
"received_events_url": "https://api.github.com/users/km5ar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"FYI @lewtun ",
"Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co./datasets/lewtun/github-issues/tree/main\r\n\r\nAnyway, depending on your version of `datasets`, you can now use:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"lewtun/github-issues\")\r\nissues_dataset\r\n```\r\ninstead of:\r\n```python\r\nfrom huggingface_hub import hf_hub_url\r\n\r\ndata_files = hf_hub_url(\r\n repo_id=\"lewtun/github-issues\",\r\n filename=\"datasets-issues-with-hf-doc-builder.jsonl\",\r\n repo_type=\"dataset\",\r\n)\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\nissues_dataset\r\n```\r\n\r\nOutput:\r\n```python\r\nIn [25]: ds = load_dataset(\"lewtun/github-issues\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.5k/10.5k [00:00<00:00, 5.75MB/s]\r\nUsing custom data configuration lewtun--github-issues-cff5093ecc410ea2\r\nDownloading and preparing dataset json/lewtun--github-issues to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2M/12.2M [00:00<00:00, 26.5MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.70s/it]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1589.96it/s]\r\nDataset json downloaded and prepared to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 133.95it/s]\r\n\r\nIn [26]: ds\r\nOut[26]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app', 'is_pull_request'],\r\n num_rows: 3019\r\n })\r\n})\r\n```",
"Thanks for reporting @km5ar and thank you @albertvillanova for the quick solution! I'll post a fix on the source too"
] | 2022-10-06T19:48:58 | 2022-10-07T15:12:01 | 2022-10-07T15:12:01 | NONE | null | null | null | ## Describe the bug
I was following Chapter 5 of the Hugging Face course: https://huggingface.co./course/chapter5/6?fw=tf
However, I'm not able to download the dataset; the request fails with a 404 error.
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png">
## Steps to reproduce the bug
```python
from huggingface_hub import hf_hub_url
from datasets import load_dataset

data_files = hf_hub_url(
    repo_id="lewtun/github-issues",
    filename="datasets-issues-with-hf-doc-builder.jsonl",
    repo_type="dataset",
)
issues_dataset = load_dataset("json", data_files=data_files, split="train")
issues_dataset
```
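Following the maintainer's reply in the comments — the file was renamed to `datasets-issues-with-comments.jsonl`, and the repo can now be loaded directly — a sketch of the corrected call:
```python
from datasets import load_dataset

issues_dataset = load_dataset("lewtun/github-issues", split="train")
```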
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5086/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5085/comments | https://api.github.com/repos/huggingface/datasets/issues/5085/events | https://github.com/huggingface/datasets/issues/5085 | 1,400,113,569 | I_kwDODunzps5TdAmh | 5,085 | Filtering on an empty dataset returns a corrupted dataset. | {
"login": "gabegma",
"id": 36087158,
"node_id": "MDQ6VXNlcjM2MDg3MTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabegma",
"html_url": "https://github.com/gabegma",
"followers_url": "https://api.github.com/users/gabegma/followers",
"following_url": "https://api.github.com/users/gabegma/following{/other_user}",
"gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabegma/subscriptions",
"organizations_url": "https://api.github.com/users/gabegma/orgs",
"repos_url": "https://api.github.com/users/gabegma/repos",
"events_url": "https://api.github.com/users/gabegma/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabegma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "Mouhanedg56",
"id": 23029765,
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mouhanedg56",
"html_url": "https://github.com/Mouhanedg56",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Mouhanedg56",
"id": 23029765,
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mouhanedg56",
"html_url": "https://github.com/Mouhanedg56",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.",
"#self-assign",
"Thank you for solving this amazingly quickly!"
] | 2022-10-06T18:18:49 | 2022-10-07T19:06:02 | 2022-10-07T18:40:26 | NONE | null | null | null | ## Describe the bug
When a dataset is filtered twice and the first filter yields an empty dataset, the second filtered dataset appears corrupted.
## Steps to reproduce the bug
```python
from datasets import load_dataset

datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset
assert ds_filter_1.num_rows == 0
sentences = ds_filter_1['sentence']
assert len(sentences) == 0
ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition
assert ds_filter_2.num_rows == 0
assert 'sentence' in ds_filter_2.column_names
sentences = ds_filter_2['sentence']
```
## Expected results
The last line should return an empty list, just as the identical call four lines above does.
## Actual results
The last line currently raises `IndexError: index out of bounds`.
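Per the comments, the failure comes from looking up a column through an empty indices mapping. A minimal caller-side guard (a hypothetical workaround, not the merged fix in #5087):
```python
def safe_column(ds, name):
    # Avoid touching the (empty) indices table of a zero-row dataset
    return [] if ds.num_rows == 0 else ds[name]

sentences = safe_column(ds_filter_2, "sentence")  # returns []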
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-11.6.6-x86_64-i386-64bit
- Python version: 3.9.11
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5085/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5084/comments | https://api.github.com/repos/huggingface/datasets/issues/5084/events | https://github.com/huggingface/datasets/pull/5084 | 1,400,016,229 | PR_kwDODunzps5AVXwm | 5,084 | IterableDataset formatting in numpy/torch/tf/jax | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084). All of your documentation changes will be reflected on that endpoint.",
"Actually I'm not happy with this implementation. It always require the iterable dataset to have definite `features`, which removes a lot of flexibility. So I think we need an actual formatting from python objects, not from arrow data.",
"Closing this one since it has too many conflicts and still require some work - it will be easier to open a new PR"
] | 2022-10-06T16:53:38 | 2022-12-20T17:19:52 | 2022-12-20T17:19:52 | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5084",
"html_url": "https://github.com/huggingface/datasets/pull/5084",
"diff_url": "https://github.com/huggingface/datasets/pull/5084.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5084.patch",
"merged_at": null
} | This code now returns a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
It also works with "arrow", "pandas", "torch", "tf", and "jax".
### Implementation details:
I'm using the existing code to format an Arrow Table to the right output format for simplicity.
Therefore it's probably not the most optimized approach.
For example, to output PyTorch tensors it does this for every example:
`python data -> arrow table -> numpy extracted data -> pytorch formatted data`
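A toy sketch of that per-example chain (illustrative only — these are not the internal `datasets` APIs):
```python
import numpy as np
import pyarrow as pa
import torch

example = {"values": [1, 2, 3]}                          # python data
table = pa.table({k: [v] for k, v in example.items()})   # -> arrow table
array = np.array(table.column("values")[0].as_py())      # -> numpy extracted data
tensor = torch.as_tensor(array)                          # -> pytorch formatted data
```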
### Releasing this feature
Even though I consider this a bug/inconsistency, it is a breaking change.
I'm sure some users were relying on the torch iterable dataset returning PIL Images and used data collators to convert them to PyTorch tensors.
So I guess this is `datasets` 3.0?
### TODO
- [x] merge https://github.com/huggingface/datasets/pull/5072
- [ ] docs
- [ ] tests
Closes https://github.com/huggingface/datasets/issues/5083 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5084/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5083/comments | https://api.github.com/repos/huggingface/datasets/issues/5083/events | https://github.com/huggingface/datasets/issues/5083 | 1,399,842,514 | I_kwDODunzps5Tb-bS | 5,083 | Support numpy/torch/tf/jax formatting for IterableDataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-10-06T15:14:58 | 2023-02-17T14:10:01 | null | MEMBER | null | null | null | Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
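A quick check of the returned type (reusing `ds` from the snippet above):
```python
sample = next(iter(ds))
print(type(sample["image"]))  # PIL.Image.Image today, numpy.ndarray expected
```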
Setting `streaming=False` does return a numpy array after #5072 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5083/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5082/comments | https://api.github.com/repos/huggingface/datasets/issues/5082/events | https://github.com/huggingface/datasets/pull/5082 | 1,399,379,777 | PR_kwDODunzps5ATJv- | 5,082 | adding keep in memory | {
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"node_id": "MDQ6VXNlcjY2Nzk5NDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/66799406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mustapha-AJEGHRIR",
"html_url": "https://github.com/Mustapha-AJEGHRIR",
"followers_url": "https://api.github.com/users/Mustapha-AJEGHRIR/followers",
"following_url": "https://api.github.com/users/Mustapha-AJEGHRIR/following{/other_user}",
"gists_url": "https://api.github.com/users/Mustapha-AJEGHRIR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mustapha-AJEGHRIR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mustapha-AJEGHRIR/subscriptions",
"organizations_url": "https://api.github.com/users/Mustapha-AJEGHRIR/orgs",
"repos_url": "https://api.github.com/users/Mustapha-AJEGHRIR/repos",
"events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mustapha-AJEGHRIR/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mariosasko , I have added a test for the `keep_in_memory` version. I have also removed the `Compatible with temp_seed` part in the scope of `dset_shuffled`, please verify if that makes sense."
] | 2022-10-06T11:10:46 | 2022-10-07T14:35:34 | 2022-10-07T14:32:54 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5082",
"html_url": "https://github.com/huggingface/datasets/pull/5082",
"diff_url": "https://github.com/huggingface/datasets/pull/5082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5082.patch",
"merged_at": "2022-10-07T14:32:54"
} | Fixes #514.
Hello @mariosasko 👋, I have implemented what you recommended for fixing the keep-in-memory problem for `shuffle` in issue #514. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5082/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5081/comments | https://api.github.com/repos/huggingface/datasets/issues/5081/events | https://github.com/huggingface/datasets/issues/5081 | 1,399,340,050 | I_kwDODunzps5TaDwS | 5,081 | Bug loading `sentence-transformers/parallel-sentences` | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"tagging @nreimers ",
"The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.",
"Since the dataset is a bunch of TSVs we should not need a dataset script I think.\r\n\r\nBy default it tries to load all the TSVs at once, which fails here because they don't all have the same columns (pd.read_csv uses the first line as header by default). But those files have no header ! So, to properly load any TSV file in this repo, one has to pass `names=[...]` for pd.read_csv to know which column names to use.\r\n\r\nTo fix this situation, we can either do\r\n1. replace the TSVs by TSV with column names\r\n2. OR specify the pd.read_csv kwargs as YAML in the dataset card - and `datasets` would use that by default\r\n\r\nWDTY ?",
"There are more issues in the dataset.\r\nTo load OpenSubtitles I have to provide this (see `skiprows`):\r\n\r\n```python\r\ndf_os = pd.read_csv(\r\n \"./parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz\", \r\n sep=\"\\t\", \r\n quoting=csv.QUOTE_NONE,\r\n header=None,\r\n names=[\"en\", \"de\"],\r\n skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],\r\n)\r\n```",
"What's wrong with those lines exactly ?\r\nMaybe passing `error_bad_lines=False` (and maybe `warn_bad_lines=True`) can be helpful",
"> What's wrong with those lines exactly ? \r\n\r\nStuff like this: `ParserError: Error tokenizing data. C error: Expected 2 fields in line 540345, saw 3`\r\n\r\n",
"> Maybe passing error_bad_lines=False (and maybe warn_bad_lines=True) can be helpful\r\n\r\nYes. That would hide the issue but not solve it.",
"@nreimers WDYT about the two options mentioned above ?"
] | 2022-10-06T10:47:51 | 2022-10-11T10:00:48 | null | CONTRIBUTOR | null | null | null | ## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
Loading the dataset raises:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [4], line 1
----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train")
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1692 # Download and prepare data
-> 1693 builder_instance.download_and_prepare(
1694 download_config=download_config,
1695 download_mode=download_mode,
1696 ignore_verifications=ignore_verifications,
1697 try_from_hf_gcs=try_from_hf_gcs,
1698 use_auth_token=use_auth_token,
1699 )
1701 # Build dataset for splits
1702 keep_in_memory = (
1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1704 )
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
801 if not downloaded_from_gcs:
802 prepare_split_kwargs = {
803 "file_format": file_format,
804 "max_shard_size": max_shard_size,
805 **download_and_prepare_kwargs,
806 }
--> 807 self._download_and_prepare(
808 dl_manager=dl_manager,
809 verify_infos=verify_infos,
810 **prepare_split_kwargs,
811 **download_and_prepare_kwargs,
812 )
813 # Sync info
814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
894 split_dict.add(split_generator.split_info)
896 try:
897 # Prepare split will record examples associated to the split
--> 898 self._prepare_split(split_generator, **prepare_split_kwargs)
899 except OSError as e:
900 raise OSError(
901 "Cannot find data file. "
902 + (self.manual_download_instructions or "")
903 + "\nOriginal error:\n"
904 + str(e)
905 ) from None
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)
1506 shard_id += 1
1507 writer = writer_class(
1508 features=writer._features,
1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"),
1510 storage_options=self._fs.storage_options,
1511 embed_local_files=embed_local_files,
1512 )
-> 1513 writer.write_table(table)
1514 finally:
1515 num_shards = shard_id + 1
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
538 if self.pa_writer is None:
539 self._build_writer(inferred_schema=pa_table.schema)
--> 540 pa_table = table_cast(pa_table, self._schema)
541 if self.embed_local_files:
542 pa_table = embed_table_storage(pa_table)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema)
2032 """Improved version of pa.Table.cast.
2033
2034 It supports casting to feature types stored in the schema metadata.
(...)
2041 table (:obj:`pyarrow.Table`): the casted table
2042 """
2043 if table.schema != schema:
-> 2044 return cast_table_to_schema(table, schema)
2045 elif table.schema.metadata != schema.metadata:
2046 return table.replace_schema_metadata(schema.metadata)
File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema)
2003 features = Features.from_arrow_schema(schema)
2004 if sorted(table.column_names) != sorted(features):
-> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
2007 return pa.Table.from_arrays(arrays, schema=schema)
ValueError: Couldn't cast
Action taken on Parliament's resolutions: see Minutes: string
Následný postup na základě usnesení Parlamentu: viz zápis: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742
to
{'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Състав на Парламента: вж. протоколи': Value(dtype='string', id=None)}
because column names don't match
```
## Expected results
No error should be raised.
## Actual results
Loading fails with the `ValueError` shown above.
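A per-file workaround drawn from the comments (the column names and `skiprows` values below come from that discussion and are specific to the OpenSubtitles file):
```python
import csv
import pandas as pd

df = pd.read_csv(
    "parallel-sentences/OpenSubtitles/OpenSubtitles-en-de-train.tsv.gz",
    sep="\t",
    quoting=csv.QUOTE_NONE,
    header=None,          # the TSVs ship without a header row
    names=["en", "de"],   # so column names must be supplied explicitly
    skiprows=[540344, 9151700, 10040173, 10040199, 11314673, 11338258, 11869223, 12159297, 12251078, 12303334],  # malformed rows reported in the thread
)
```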
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.13
- PyArrow version: pyarrow 9.0.0
- transformers 4.22.2
- datasets 2.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5081/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5080/comments | https://api.github.com/repos/huggingface/datasets/issues/5080/events | https://github.com/huggingface/datasets/issues/5080 | 1,398,849,565 | I_kwDODunzps5TYMAd | 5,080 | Use hfh for caching | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] | 2022-10-06T05:51:58 | 2022-10-06T14:26:05 | null | MEMBER | null | null | null | ## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed at our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would propose adopting the `hfh` caching system in stages.
First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)
Second, we could also use `hfh` caching for data files downloaded from the Hub.
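For the first stage, a minimal sketch of what fetching a dataset script through the `hfh` cache could look like (the repo and filename below are illustrative):
```python
from huggingface_hub import hf_hub_download

script_path = hf_hub_download(
    repo_id="squad",       # illustrative dataset repo
    filename="squad.py",   # the dataset loading script
    repo_type="dataset",
)
# the returned path points inside hfh's revision-aware cache
```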
Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files
## Additional context
Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co./docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co./docs/huggingface_hub/main/en/package_reference/cache)
The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5080/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5079/comments | https://api.github.com/repos/huggingface/datasets/issues/5079/events | https://github.com/huggingface/datasets/pull/5079 | 1,398,609,305 | PR_kwDODunzps5AQemi | 5,079 | refactor: replace AssertionError with more meaningful exceptions (#5074) | {
"login": "galbwe",
"id": 20004072,
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galbwe",
"html_url": "https://github.com/galbwe",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"repos_url": "https://api.github.com/users/galbwe/repos",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-06T01:39:35 | 2022-10-07T14:35:43 | 2022-10-07T14:33:10 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5079",
"html_url": "https://github.com/huggingface/datasets/pull/5079",
"diff_url": "https://github.com/huggingface/datasets/pull/5079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5079.patch",
"merged_at": "2022-10-07T14:33:10"
} | Closes #5074
Replaces `AssertionError` in the following files with more descriptive exceptions:
- `src/datasets/arrow_reader.py`
- `src/datasets/builder.py`
- `src/datasets/utils/version.py`
The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` directory, which was removed when #4974 was merged. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5079/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5078/comments | https://api.github.com/repos/huggingface/datasets/issues/5078/events | https://github.com/huggingface/datasets/pull/5078 | 1,398,335,148 | PR_kwDODunzps5APjkH | 5,078 | Fix header level in Audio docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T20:22:44 | 2022-10-06T08:12:23 | 2022-10-06T08:09:41 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5078",
"html_url": "https://github.com/huggingface/datasets/pull/5078",
"diff_url": "https://github.com/huggingface/datasets/pull/5078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5078.patch",
"merged_at": "2022-10-06T08:09:41"
} | Fixes header level so `Dataset features` is the doc title instead of `The Audio type`:
![Screen Shot 2022-10-05 at 1 22 02 PM](https://user-images.githubusercontent.com/59462357/194155840-eeb5d62f-f4eb-411e-b281-8494c5fffdce.png) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5078/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5077/comments | https://api.github.com/repos/huggingface/datasets/issues/5077/events | https://github.com/huggingface/datasets/pull/5077 | 1,398,080,859 | PR_kwDODunzps5AOs9L | 5,077 | Fix passed download_config in HubDatasetModuleFactoryWithoutScript | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T16:42:36 | 2022-10-06T05:31:22 | 2022-10-06T05:29:06 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5077",
"html_url": "https://github.com/huggingface/datasets/pull/5077",
"diff_url": "https://github.com/huggingface/datasets/pull/5077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5077.patch",
"merged_at": "2022-10-06T05:29:06"
} | Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5077/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5076/comments | https://api.github.com/repos/huggingface/datasets/issues/5076/events | https://github.com/huggingface/datasets/pull/5076 | 1,397,918,092 | PR_kwDODunzps5AOJp7 | 5,076 | fix: update exception throw from OSError to EnvironmentError in `push… | {
"login": "rahulXs",
"id": 29496999,
"node_id": "MDQ6VXNlcjI5NDk2OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29496999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahulXs",
"html_url": "https://github.com/rahulXs",
"followers_url": "https://api.github.com/users/rahulXs/followers",
"following_url": "https://api.github.com/users/rahulXs/following{/other_user}",
"gists_url": "https://api.github.com/users/rahulXs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahulXs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahulXs/subscriptions",
"organizations_url": "https://api.github.com/users/rahulXs/orgs",
"repos_url": "https://api.github.com/users/rahulXs/repos",
"events_url": "https://api.github.com/users/rahulXs/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahulXs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T14:46:29 | 2022-10-07T14:35:57 | 2022-10-07T14:33:27 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5076",
"html_url": "https://github.com/huggingface/datasets/pull/5076",
"diff_url": "https://github.com/huggingface/datasets/pull/5076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5076.patch",
"merged_at": "2022-10-07T14:33:27"
} | Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5076/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5075/comments | https://api.github.com/repos/huggingface/datasets/issues/5075/events | https://github.com/huggingface/datasets/issues/5075 | 1,397,865,501 | I_kwDODunzps5TUbwd | 5,075 | Throw EnvironmentError when token is not present | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] | 2022-10-05T14:14:18 | 2022-10-07T14:33:28 | 2022-10-07T14:33:28 | CONTRIBUTOR | null | null | null | Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5075/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5074/comments | https://api.github.com/repos/huggingface/datasets/issues/5074/events | https://github.com/huggingface/datasets/issues/5074 | 1,397,850,352 | I_kwDODunzps5TUYDw | 5,074 | Replace AssertionErrors with more meaningful errors | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4614514401,
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest",
"name": "hacktoberfest",
"color": "DF8D62",
"default": false,
"description": ""
}
] | closed | false | {
"login": "galbwe",
"id": 20004072,
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galbwe",
"html_url": "https://github.com/galbwe",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"repos_url": "https://api.github.com/users/galbwe/repos",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "galbwe",
"id": 20004072,
"node_id": "MDQ6VXNlcjIwMDA0MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galbwe",
"html_url": "https://github.com/galbwe",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galbwe/subscriptions",
"organizations_url": "https://api.github.com/users/galbwe/orgs",
"repos_url": "https://api.github.com/users/galbwe/repos",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"received_events_url": "https://api.github.com/users/galbwe/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | 2022-10-05T14:03:55 | 2022-10-07T14:33:11 | 2022-10-07T14:33:11 | CONTRIBUTOR | null | null | null | Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5074/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5073/comments | https://api.github.com/repos/huggingface/datasets/issues/5073/events | https://github.com/huggingface/datasets/pull/5073 | 1,397,832,183 | PR_kwDODunzps5AN3Gn | 5,073 | Restore saved format state in `load_from_disk` | {
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T13:51:47 | 2022-10-11T16:55:07 | 2022-10-11T16:49:23 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5073",
"html_url": "https://github.com/huggingface/datasets/pull/5073",
"diff_url": "https://github.com/huggingface/datasets/pull/5073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5073.patch",
"merged_at": "2022-10-11T16:49:23"
} | Hello! @mariosasko
This pull request relates to issue #5050 and intends to restore the saved format state of datasets loaded from disk.
All I did was add a `set_format` call in `Dataset.load_from_disk`, as `DatasetDict.load_from_disk` relies on the former.
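For reference, a minimal sketch of the behavior this restores (the on-disk path is illustrative):
```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.set_format("numpy")
ds.save_to_disk("tmp_ds")  # illustrative path

reloaded = load_from_disk("tmp_ds")
print(reloaded.format["type"])  # with this change: "numpy" (previously: None)
```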
I don't know if I should add a test and where, so let me know if I should and I can work on that as well!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5073/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5072/comments | https://api.github.com/repos/huggingface/datasets/issues/5072/events | https://github.com/huggingface/datasets/pull/5072 | 1,397,765,531 | PR_kwDODunzps5ANoo5 | 5,072 | Image & Audio formatting for numpy/torch/tf/jax | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just added a consolidation step so that numpy arrays or tensors of images are stacked together if the shapes match, instead of having lists of tensors\r\n\r\nFeel free to review @mariosasko :)",
"I added a few lines in the docs and reverted the ragged numpy array change :)\r\n\r\nready for another review @mariosasko !"
] | 2022-10-05T13:07:03 | 2022-10-10T13:24:10 | 2022-10-10T13:21:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5072",
"html_url": "https://github.com/huggingface/datasets/pull/5072",
"diff_url": "https://github.com/huggingface/datasets/pull/5072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5072.patch",
"merged_at": "2022-10-10T13:21:32"
} | Added support for image and audio formatting for numpy, torch, tf and jax.
For images, the dtype used is that of the image itself (the one returned by PIL.Image), e.g. uint8
I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously this raised an error because you can't create a tensor of strings) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5072/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5071/comments | https://api.github.com/repos/huggingface/datasets/issues/5071/events | https://github.com/huggingface/datasets/pull/5071 | 1,397,301,270 | PR_kwDODunzps5AMG3g | 5,071 | Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Super, thanks a lot for adding this support, Albert!"
] | 2022-10-05T06:28:39 | 2022-10-06T14:43:12 | 2022-10-06T14:40:26 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5071",
"html_url": "https://github.com/huggingface/datasets/pull/5071",
"diff_url": "https://github.com/huggingface/datasets/pull/5071.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5071.patch",
"merged_at": "2022-10-06T14:40:25"
} | This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5071/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5071/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5070/comments | https://api.github.com/repos/huggingface/datasets/issues/5070/events | https://github.com/huggingface/datasets/issues/5070 | 1,396,765,647 | I_kwDODunzps5TQPPP | 5,070 | Support default config name when no builder configs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co./datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co./datasets/HuggingFaceM4/cm4-synthetic-testing/discussions/2/files\r\n\r\nbut which I had to revert since providing no split breaks at run time.\r\n"
] | 2022-10-04T19:49:35 | 2022-10-06T14:40:26 | 2022-10-06T14:40:26 | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set.
However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
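A minimal sketch of the kind of loading script this would enable (the class and config names are hypothetical):
```python
import datasets

class MySyntheticDataset(datasets.GeneratorBasedBuilder):
    # no BUILDER_CONFIGS: any config name passed to load_dataset is built on the fly
    DEFAULT_CONFIG_NAME = "default"  # hypothetical; currently ignored without BUILDER_CONFIGS
```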
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5070/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5070/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5067/comments | https://api.github.com/repos/huggingface/datasets/issues/5067/events | https://github.com/huggingface/datasets/pull/5067 | 1,396,361,768 | PR_kwDODunzps5AI86d | 5,067 | Fix CONTRIBUTING once dataset scripts transferred to Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T14:16:05 | 2022-10-06T06:14:43 | 2022-10-06T06:12:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5067",
"html_url": "https://github.com/huggingface/datasets/pull/5067",
"diff_url": "https://github.com/huggingface/datasets/pull/5067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5067.patch",
"merged_at": "2022-10-06T06:12:12"
This PR updates the `CONTRIBUTING.md` guide now that all dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by some previous mistake was CRLF instead of LF. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5067/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5066/comments | https://api.github.com/repos/huggingface/datasets/issues/5066/events | https://github.com/huggingface/datasets/pull/5066 | 1,396,086,745 | PR_kwDODunzps5AIDWj | 5,066 | Support streaming gzip.open | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T11:20:05 | 2022-10-06T15:13:51 | 2022-10-06T15:11:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5066",
"html_url": "https://github.com/huggingface/datasets/pull/5066",
"diff_url": "https://github.com/huggingface/datasets/pull/5066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5066.patch",
"merged_at": "2022-10-06T15:11:29"
} | This PR implements out-of-the-box streaming support for dataset scripts that contain `gzip.open`.
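Concretely, a dataset-script method like the following sketch should now also work in streaming mode (the field name is illustrative):
```python
import gzip

def _generate_examples(self, filepath):
    # with this PR, `gzip.open` on a streamed path is supported out of the box,
    # just like the `open` that `datasets` already extends in dataset scripts
    with gzip.open(filepath, mode="rt") as f:
        for idx, line in enumerate(f):
            yield idx, {"text": line.rstrip("\n")}
```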
This has been a recurring issue. See, e.g.:
- #5060
- #3191 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5066/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5065/comments | https://api.github.com/repos/huggingface/datasets/issues/5065/events | https://github.com/huggingface/datasets/pull/5065 | 1,396,003,362 | PR_kwDODunzps5AHxlQ | 5,065 | Ci py3.10 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it sound good to you @albertvillanova ?"
] | 2022-10-04T10:13:51 | 2022-11-29T15:28:05 | 2022-11-29T15:25:26 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5065",
"html_url": "https://github.com/huggingface/datasets/pull/5065",
"diff_url": "https://github.com/huggingface/datasets/pull/5065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5065.patch",
"merged_at": "2022-11-29T15:25:26"
} | Added a CI job for python 3.10
Some dependencies, like Apache Beam, don't work on 3.10, so I removed them from the extras in that case.
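For example, something along these lines in `setup.py` (a sketch; the actual extras layout may differ):
```python
import sys

TESTS_REQUIRE = ["pytest"]  # illustrative
if sys.version_info < (3, 10):
    # apache-beam doesn't support Python 3.10 yet
    TESTS_REQUIRE.append("apache-beam")
```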
I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5065/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5064/comments | https://api.github.com/repos/huggingface/datasets/issues/5064/events | https://github.com/huggingface/datasets/pull/5064 | 1,395,978,143 | PR_kwDODunzps5AHsP0 | 5,064 | Align signature of create/delete_repo with latest hfh | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T09:54:53 | 2022-10-07T17:02:11 | 2022-10-07T16:59:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5064",
"html_url": "https://github.com/huggingface/datasets/pull/5064",
"diff_url": "https://github.com/huggingface/datasets/pull/5064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5064.patch",
"merged_at": "2022-10-07T16:59:30"
} | This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead.
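For example (repo names are illustrative):
```python
from huggingface_hub import HfApi

api = HfApi()
# before (deprecated in hfh)
api.create_repo(name="my-dataset", organization="my-org", repo_type="dataset")
# after
api.create_repo(repo_id="my-org/my-dataset", repo_type="dataset")
```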
Related to:
- #5063
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5064/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5063/comments | https://api.github.com/repos/huggingface/datasets/issues/5063/events | https://github.com/huggingface/datasets/pull/5063 | 1,395,895,463 | PR_kwDODunzps5AHasG | 5,063 | Align signature of list_repo_files with latest hfh | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T08:51:46 | 2022-10-07T16:42:57 | 2022-10-07T16:40:16 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5063",
"html_url": "https://github.com/huggingface/datasets/pull/5063",
"diff_url": "https://github.com/huggingface/datasets/pull/5063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5063.patch",
"merged_at": "2022-10-07T16:40:16"
} | This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
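For example (repo name and token are placeholders):
```python
from huggingface_hub import HfApi

api = HfApi()
token = "hf_..."  # placeholder
# before (deprecated `token` argument)
api.list_repo_files("user/dataset", repo_type="dataset", token=token)
# after, aligned with hfh
api.list_repo_files("user/dataset", repo_type="dataset", use_auth_token=token)
```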
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5063/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5062/comments | https://api.github.com/repos/huggingface/datasets/issues/5062/events | https://github.com/huggingface/datasets/pull/5062 | 1,395,739,417 | PR_kwDODunzps5AG6SA | 5,062 | Fix CI hfh token warning | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"good catch !"
] | 2022-10-04T06:36:54 | 2022-10-04T08:58:15 | 2022-10-04T08:42:31 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5062",
"html_url": "https://github.com/huggingface/datasets/pull/5062",
"diff_url": "https://github.com/huggingface/datasets/pull/5062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5062.patch",
"merged_at": "2022-10-04T08:42:31"
} | In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\huggingface_hub\utils\_deprecation.py:97: FutureWarning: Deprecated argument(s) used in 'dataset_info': token. Will not be supported from version '0.12'.
warnings.warn(message, FutureWarning)
```
This PR fixes the tests in `TestPushToHub` so that these warnings no longer appear.
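The gist of the change, sketched with illustrative names:
```python
from huggingface_hub import HfApi

api = HfApi(endpoint="https://hub-ci.huggingface.co")  # CI endpoint, as in the warnings above
token = "hf_..."  # placeholder
# before: emits the FutureWarning
api.dataset_info("user/dataset", token=token)
# after: aligned with current hfh
api.dataset_info("user/dataset", use_auth_token=token)
```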
Continuation of:
- #5031
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5062/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5061/comments | https://api.github.com/repos/huggingface/datasets/issues/5061/events | https://github.com/huggingface/datasets/issues/5061 | 1,395,476,770 | I_kwDODunzps5TLUki | 5,061 | `_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess==0.70.12.2, dill==0.3.4`: works\r\n- `multiprocess==0.70.12.2, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.4`: can't test, `multiprocess==0.70.13` requires `dill>=0.3.5.1`\r\n\r\nI will pin their versions on my end. I don't have enough knowledge of how python multiprocessing works to debug this, but ideally there could be a fix. It's also possible that I'm doing something wrong in my code, but again the `.name` of the logger that failed to pickle is `datasets.fingerprint`, which I'm not using directly.",
"Do you know which logger fails at being pickled ?",
"I'm not 100% sure how to figure it out -- the stack trace above doesn't clearly give me a place where I can print out who owns the logger, etc. I only found out its `.name` is `datasets.fingerprint` by printing right before\r\n```\r\n File \".../logging/__init__.py\", line 1774, in __reduce__\r\n raise pickle.PicklingError('logger cannot be pickled')\r\n```\r\nIf you have any idea on how to find it out, please let me know.",
"Ok I see, not sure why it triggers this error though, in `logging.py` the code is\r\n\r\nhttps://github.com/python/cpython/blob/c9da063e32725a66495e4047b8a5ed13e72d9e8e/Lib/logging/__init__.py#L1769-L1775\r\n\r\nand on my side it works on 3.10 with dill 0.3.5.1 and multiprocess 0.70.13\r\n```python\r\n>>> datasets.fingerprint.logger.__reduce__() \r\n(<function logging.getLogger(name=None)>, ('datasets.fingerprint',))\r\n```\r\nCould you try to run this code ?\r\n\r\nAre you in an environment where the loggers are instantiated differently ? Can you check the source code of `logging.Logger.__reduce__` in `\".../logging/__init__.py\", line 1774` ?",
"Closing due to inactivity."
] | 2022-10-03T23:51:38 | 2023-07-21T14:43:35 | 2023-07-21T14:43:34 | NONE | null | null | null | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
self.save(obj)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 578, in save
rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```
## Steps to reproduce the bug
Sorry, I failed to come up with a minimal reproducible example, but the offending line on my end is
```python
dataset.map(
lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda
batched=True,
num_proc=4,
)
```
This does work when `num_proc=1`, so it's likely a multiprocessing thing.
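A self-contained approximation of the failing call (the mapped function doesn't seem to matter):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})
ds = ds.map(
    lambda examples: {"n_chars": [len(t) for t in examples["text"]]},
    batched=True,
    num_proc=4,
)
```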
## Expected results
`map` succeeds
## Actual results
The error trace above.
## Environment info
- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5061/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5060/comments | https://api.github.com/repos/huggingface/datasets/issues/5060/events | https://github.com/huggingface/datasets/issues/5060 | 1,395,382,940 | I_kwDODunzps5TK9qc | 5,060 | Unable to Use Custom Dataset Locally | {
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co./datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more",
"`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works",
"Closing this one - feel free to reopen if you have more questions"
] | 2022-10-03T21:55:16 | 2022-10-06T14:29:18 | 2022-10-06T14:29:17 | CONTRIBUTOR | null | null | null | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co./datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co./docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co./datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co./datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co./datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
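Edit: following the suggestion in the comments, the pattern below (a sketch) works, because the script-level `open` is the one `datasets` extends for streaming:
```python
import gzip
import json

def _generate_examples(self, filepath):
    # the `open` inside a dataset script is extended by `datasets`, so it also
    # handles streamed URLs; gzip can then wrap the returned file object
    with open(filepath, "rb") as f:
        with gzip.open(f, mode="rt") as g:
            for idx, line in enumerate(g):
                yield idx, json.loads(line)
```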
## Expected results
The dataset loads and streams without errors.
## Actual results
The `FileNotFoundError` traceback shown above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5060/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5059/comments | https://api.github.com/repos/huggingface/datasets/issues/5059/events | https://github.com/huggingface/datasets/pull/5059 | 1,395,050,876 | PR_kwDODunzps5AEoX7 | 5,059 | Fix typo | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T17:05:25 | 2022-10-03T17:34:40 | 2022-10-03T17:32:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5059",
"html_url": "https://github.com/huggingface/datasets/pull/5059",
"diff_url": "https://github.com/huggingface/datasets/pull/5059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5059.patch",
"merged_at": "2022-10-03T17:32:27"
} | Fixes a small typo :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5059/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5058/comments | https://api.github.com/repos/huggingface/datasets/issues/5058/events | https://github.com/huggingface/datasets/pull/5058 | 1,394,962,424 | PR_kwDODunzps5AEVWn | 5,058 | Mark CI tests as xfail when 502 error | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T15:53:55 | 2022-10-04T10:03:23 | 2022-10-04T10:01:23 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5058",
"html_url": "https://github.com/huggingface/datasets/pull/5058",
"diff_url": "https://github.com/huggingface/datasets/pull/5058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5058.patch",
"merged_at": "2022-10-04T10:01:23"
} | To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error):
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files
- https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16648055339047.git/info/lfs/objects/batch
```
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_overwrite_files
- https://github.com/huggingface/datasets/actions/runs/3145587033/jobs/5113074889
```
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16643866807322.git/info/lfs/objects/verify
```
Currently, we only mark tests as xfail on 500 errors:
- #4845 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5058/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5057/comments | https://api.github.com/repos/huggingface/datasets/issues/5057/events | https://github.com/huggingface/datasets/pull/5057 | 1,394,827,216 | PR_kwDODunzps5AD4c6 | 5,057 | Support `converters` in `CsvBuilder` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T14:23:21 | 2022-10-04T11:19:28 | 2022-10-04T11:17:32 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5057",
"html_url": "https://github.com/huggingface/datasets/pull/5057",
"diff_url": "https://github.com/huggingface/datasets/pull/5057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5057.patch",
"merged_at": "2022-10-04T11:17:32"
} | Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
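For example (the data file and converter are illustrative; `converters` follows `pandas.read_csv` semantics, so each callable receives the raw cell string):
```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="data.csv",  # hypothetical file with a stringified list column
    converters={"tokens": lambda s: s.strip("[]").replace("'", "").split(", ")},
)
```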
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5057/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5056/comments | https://api.github.com/repos/huggingface/datasets/issues/5056/events | https://github.com/huggingface/datasets/pull/5056 | 1,394,713,173 | PR_kwDODunzps5ADfxN | 5,056 | Fix broken URL's (GEM) | {
"login": "manandey",
"id": 6687858,
"node_id": "MDQ6VXNlcjY2ODc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manandey",
"html_url": "https://github.com/manandey",
"followers_url": "https://api.github.com/users/manandey/followers",
"following_url": "https://api.github.com/users/manandey/following{/other_user}",
"gists_url": "https://api.github.com/users/manandey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manandey/subscriptions",
"organizations_url": "https://api.github.com/users/manandey/orgs",
"repos_url": "https://api.github.com/users/manandey/repos",
"events_url": "https://api.github.com/users/manandey/events{/privacy}",
"received_events_url": "https://api.github.com/users/manandey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub."
] | 2022-10-03T13:13:22 | 2022-10-04T13:49:00 | 2022-10-04T13:48:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5056",
"html_url": "https://github.com/huggingface/datasets/pull/5056",
"diff_url": "https://github.com/huggingface/datasets/pull/5056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5056.patch",
"merged_at": null
} | This PR fixes the broken URLs in GEM. cc @lhoestq, @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5056/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5055/comments | https://api.github.com/repos/huggingface/datasets/issues/5055/events | https://github.com/huggingface/datasets/pull/5055 | 1,394,503,844 | PR_kwDODunzps5ACyVU | 5,055 | Fix backward compatibility for dataset_infos.json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T10:30:14 | 2022-10-03T13:43:55 | 2022-10-03T13:41:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"merged_at": "2022-10-03T13:41:32"
} | While working on https://github.com/huggingface/datasets/pull/5018, I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json.
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored only if the README.md has a dataset_info field, which has precedence over the data in the JSON file.
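A rough sketch of the intended precedence, just to make the expected behavior explicit (the helper and variable names here are illustrative, not the actual implementation):
```python
def resolve_dataset_info(readme_metadata, dataset_infos_json):
    # Hypothetical helper: metadata from README.md wins only when it
    # actually contains a dataset_info field; otherwise the legacy
    # dataset_infos.json data should still be used.
    if readme_metadata and readme_metadata.get("dataset_info"):
        return readme_metadata["dataset_info"]
    return dataset_infos_json
```
 | {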
"url": "https://api.github.com/repos/huggingface/datasets/issues/5055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5055/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5054/comments | https://api.github.com/repos/huggingface/datasets/issues/5054/events | https://github.com/huggingface/datasets/pull/5054 | 1,394,152,728 | PR_kwDODunzps5ABnd3 | 5,054 | Fix license/citation information of squadshifts dataset card | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T05:19:13 | 2022-10-03T09:26:49 | 2022-10-03T09:24:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5054",
"html_url": "https://github.com/huggingface/datasets/pull/5054",
"diff_url": "https://github.com/huggingface/datasets/pull/5054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5054.patch",
"merged_at": "2022-10-03T09:24:30"
} | This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention on their website to refer to our `datasets` library (they were referring to the old name `nlp`):
- https://github.com/modestyachts/squadshifts-website/pull/2#event-7500953009 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5054/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5053/comments | https://api.github.com/repos/huggingface/datasets/issues/5053/events | https://github.com/huggingface/datasets/issues/5053 | 1,393,739,882 | I_kwDODunzps5TEshq | 5,053 | Intermittent JSON parse error when streaming the Pile | {
"login": "neelnanda-io",
"id": 77788841,
"node_id": "MDQ6VXNlcjc3Nzg4ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/77788841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neelnanda-io",
"html_url": "https://github.com/neelnanda-io",
"followers_url": "https://api.github.com/users/neelnanda-io/followers",
"following_url": "https://api.github.com/users/neelnanda-io/following{/other_user}",
"gists_url": "https://api.github.com/users/neelnanda-io/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neelnanda-io/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neelnanda-io/subscriptions",
"organizations_url": "https://api.github.com/users/neelnanda-io/orgs",
"repos_url": "https://api.github.com/users/neelnanda-io/repos",
"events_url": "https://api.github.com/users/neelnanda-io/events{/privacy}",
"received_events_url": "https://api.github.com/users/neelnanda-io/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co./datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n```",
"Ah, thanks! I did get errors like that. Sad that PR wasn't merged in! \r\n\r\nI'm currently just downloading 200GB of the Pile locally to avoid streaming (I have space and it's faster anyway), but that's really useful! I can probably apply the dumb patch of just commenting out the bits that raise the JSON Parse Error lol, based on your code - if I continue the loop should it be fine?",
"Yup you can get some inspiration from this PR. It simply ignores the bad chunks (a chunk is ~a few MBs of data).\r\nWe'll try to merge this PR soon"
] | 2022-10-02T11:56:46 | 2022-10-04T17:59:03 | null | NONE | null | null | null | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tokens and 4 days into a training run, and now just happened 2 minutes into one, but I can't reliably reproduce it.
I'm using a remote machine with 8 A6000 GPUs via runpod.io
## Expected results
I have a DataLoader which can iterate through the whole Pile
## Actual results
Stack trace:
```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```
I'm currently using Hugging Face Accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's related to parallelisation:
```
Traceback (most recent call last):
File "ddp_script.py", line 1258, in <module>
main()
File "ddp_script.py", line 1143, in main
for c, batch in tqdm.tqdm(enumerate(data_iter)):
File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
broadcast_object_list(batch_info)
File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
torch.distributed.broadcast_object_list(object_list, src=from_process)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader  # import was missing in the original snippet

# cfg is my config dict, e.g. {"dataset_name": "the_pile", "batch_size": 8}
dataset = load_dataset(
    cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)  # defined below
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(
    dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
    continue
```
`tokenize_and_concatenate` is a custom function I defined on the GPT-NeoX tokenizer: it tokenizes the text, separates documents with end-of-text tokens, and reshapes the result into rows of fixed sequence length. I don't think this is related to tokenization:
```python
import numpy as np
import einops
import torch  # imported in my script but unused in this function

# Note: `tokenizer` (the GPT-NeoX tokenizer) and `seq_len` are defined
# elsewhere in my script.
def tokenize_and_concatenate(examples):
    texts = examples["text"]
    # Join the batch of documents into one string, separated by EOS tokens
    full_text = tokenizer.eos_token.join(texts)
    # Split into 20 chunks so the tokenizer can process them as a padded batch
    div = 20
    length = len(full_text) // div
    text_list = [full_text[i * length: (i + 1) * length]
                 for i in range(div)]
    tokens = tokenizer(text_list, return_tensors="np", padding=True)[
        "input_ids"
    ].flatten()
    # Drop the padding tokens, then reshape into (batch_size, seq_len - 1)
    tokens = tokens[tokens != tokenizer.pad_token_id]
    n = len(tokens)
    curr_batch_size = n // (seq_len - 1)
    tokens = tokens[: (seq_len - 1) * curr_batch_size]
    tokens = einops.rearrange(
        tokens,
        "(batch_size seq) -> batch_size seq",
        batch_size=curr_batch_size,
        seq=seq_len - 1,
    )
    # Prepend a BOS token to every row
    prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * \
        tokenizer.bos_token_id
    return {
        "text": np.concatenate([prefix, tokens], axis=1)
    }
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
ZStandard data:
Version: 0.18.0
Summary: Zstandard bindings for Python
Home-page: https://github.com/indygreg/python-zstandard
Author: Gregory Szorc
Author-email: [email protected]
License: BSD
Location: /opt/conda/lib/python3.7/site-packages
Requires:
Required-by: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5053/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5052/comments | https://api.github.com/repos/huggingface/datasets/issues/5052/events | https://github.com/huggingface/datasets/pull/5052 | 1,393,076,765 | PR_kwDODunzps4_-PZw | 5,052 | added from_generator method to IterableDataset class. | {
"login": "hamid-vakilzadeh",
"id": 56002455,
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamid-vakilzadeh",
"html_url": "https://github.com/hamid-vakilzadeh",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added a test and moved the `streaming` param from `read` to `__init_`. Then, I also decided to update the `read` method of the rest of the packaged modules to account for this param. \r\n\r\n@hamid-vakilzadeh Are you OK with these changes? ",
"@mariosasko these all look great! Thanks for the updates."
] | 2022-09-30T22:14:05 | 2022-10-05T12:51:48 | 2022-10-05T12:10:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5052",
"html_url": "https://github.com/huggingface/datasets/pull/5052",
"diff_url": "https://github.com/huggingface/datasets/pull/5052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5052.patch",
"merged_at": "2022-10-05T12:10:48"
} | Hello,
This resolves issue #4988.
I added a method `from_generator` to class `IterableDataset`.
I modified the `read` method of the generator input stream to also return an `IterableDataset`.
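From the user side, the new API looks roughly like this (dummy generator for illustration):
```python
from datasets import IterableDataset

def gen():
    # Dummy examples, just to show the call
    yield {"text": "Good", "label": 0}
    yield {"text": "Bad", "label": 1}

ds = IterableDataset.from_generator(gen)
for example in ds:
    print(example)
```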
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5052/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5051/comments | https://api.github.com/repos/huggingface/datasets/issues/5051/events | https://github.com/huggingface/datasets/pull/5051 | 1,392,559,503 | PR_kwDODunzps4_8drw | 5,051 | Revert task removal in folder-based builders | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T14:50:03 | 2022-10-03T12:23:35 | 2022-10-03T12:21:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5051",
"html_url": "https://github.com/huggingface/datasets/pull/5051",
"diff_url": "https://github.com/huggingface/datasets/pull/5051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5051.patch",
"merged_at": "2022-10-03T12:21:31"
} | Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassification` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to integrate the `train eval index` API), but we need to update the Transformers examples before that so we don't break them.
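For reference, a sketch of how the restored templates are typically consumed downstream (assuming an audio dataset loaded with the `audiofolder` builder; `prepare_for_task` is part of the current task API):
```python
from datasets import load_dataset

# Hypothetical local folder with one subdirectory per class
ds = load_dataset("audiofolder", data_dir="path/to/folder", split="train")
# The builder attaches an AudioClassification template again, so column
# renaming/casting can be driven by the task name:
ds = ds.prepare_for_task("audio-classification")
```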
cc @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5051/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5050/comments | https://api.github.com/repos/huggingface/datasets/issues/5050/events | https://github.com/huggingface/datasets/issues/5050 | 1,392,381,882 | I_kwDODunzps5S_g-6 | 5,050 | Restore saved format state in `load_from_disk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | 2022-09-30T12:40:07 | 2022-10-11T16:49:24 | 2022-10-11T16:49:24 | CONTRIBUTOR | null | null | null | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
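A minimal reproduction of the current behavior (any small dataset and a writable local path will do):
```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.set_format("torch")
ds.save_to_disk("tmp_ds")

reloaded = load_from_disk("tmp_ds")
print(ds.format["type"])        # "torch"
print(reloaded.format["type"])  # currently None - the saved format is lost
```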
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5050/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5049/comments | https://api.github.com/repos/huggingface/datasets/issues/5049/events | https://github.com/huggingface/datasets/pull/5049 | 1,392,361,381 | PR_kwDODunzps4_7zOY | 5,049 | Add `kwargs` to `Dataset.from_generator` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T12:24:27 | 2022-10-03T11:00:11 | 2022-10-03T10:58:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"merged_at": "2022-10-03T10:58:15"
} | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing a custom `writer_batch_size`, for instance).
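For example (a sketch of the kind of call this enables):
```python
from datasets import Dataset

def gen():
    for i in range(1_000_000):
        yield {"idx": i}

# `writer_batch_size` is forwarded to the underlying builder/writer,
# controlling how many examples are buffered per Arrow record batch:
ds = Dataset.from_generator(gen, writer_batch_size=10_000)
```
 | {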
"url": "https://api.github.com/repos/huggingface/datasets/issues/5049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5049/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5048/comments | https://api.github.com/repos/huggingface/datasets/issues/5048/events | https://github.com/huggingface/datasets/pull/5048 | 1,392,170,680 | PR_kwDODunzps4_7KI2 | 5,048 | Fix bug with labels of eurlex config of lex_glue dataset | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we made to our library:\r\n- #4059"
] | 2022-09-30T09:47:12 | 2022-09-30T16:30:25 | 2022-09-30T16:21:41 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"merged_at": "2022-09-30T16:21:41"
} | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequent concepts from level 2 [...]"._ The current label list has all 127 labels, which leads to different (lower) results, as communicated by users.
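A quick sanity check one could run after this fix (assuming the `labels` column is a `Sequence` of `ClassLabel`, as in the script):
```python
from datasets import load_dataset

ds = load_dataset("lex_glue", "eurlex", split="test")
print(len(ds.features["labels"].feature.names))  # should now be 100, not 127
```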
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5048/timeline | null | null | true |