| Column | Type | Length / value range |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.18B–2.35B |
| node_id | stringlengths | 18–19 |
| number | int64 | 3.98k–6.97k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–3 |
| milestone | dict | |
| comments | sequencelengths | 0–12 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 4 values |
| active_lock_reason | null | |
| body | stringlengths | 1–33.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
{ "login": "virilo", "id": 3381112, "node_id": "MDQ6VXNlcjMzODExMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/virilo", "html_url": "https://github.com/virilo", "followers_url": "https://api.github.com/users/virilo/followers", "following_url": "https://api.github.com/users/virilo/following{/other_user}", "gists_url": "https://api.github.com/users/virilo/gists{/gist_id}", "starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/virilo/subscriptions", "organizations_url": "https://api.github.com/users/virilo/orgs", "repos_url": "https://api.github.com/users/virilo/repos", "events_url": "https://api.github.com/users/virilo/events{/privacy}", "received_events_url": "https://api.github.com/users/virilo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-04-02T12:40:10
2022-09-17T02:18:03
2022-04-04T06:44:34
NONE
null
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled' ## Environment info datasets version 2
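For readers hitting the same AttributeError, a hedged workaround sketch follows; the replacement helpers and their import path are an assumption based on later datasets releases, not something confirmed in this issue:

```python
# Hedged sketch: `set_progress_bar_enabled` was removed in datasets v2; newer
# releases expose enable/disable helpers instead. The import path below is an
# assumption and may differ between versions.
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()  # turn progress bars off globally
# ... run dataset operations without tqdm output ...
enable_progress_bar()   # turn them back on if needed
```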
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4084/comments
https://api.github.com/repos/huggingface/datasets/issues/4084/events
https://github.com/huggingface/datasets/issues/4084
1,190,060,415
I_kwDODunzps5G7uF_
4,084
Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
{ "login": "blackhat-coder", "id": 57095771, "node_id": "MDQ6VXNlcjU3MDk1Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blackhat-coder", "html_url": "https://github.com/blackhat-coder", "followers_url": "https://api.github.com/users/blackhat-coder/followers", "following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}", "gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}", "starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions", "organizations_url": "https://api.github.com/users/blackhat-coder/orgs", "repos_url": "https://api.github.com/users/blackhat-coder/repos", "events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}", "received_events_url": "https://api.github.com/users/blackhat-coder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-04-01T17:02:47
2022-04-04T07:24:37
2022-04-04T07:21:31
NONE
null
## Describe the bug Hi ### Error 1 Running the Tensorflow code on [Huggingface](https://huggingface.co./docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' ### Error 2 `DataCollatorWithPadding` isn't imported ## Steps to reproduce the bug ```python import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer dataset = load_dataset('glue', 'mrpc', split='train') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], shuffle=True, batch_size=16, collate_fn=data_collator, ) ``` This is the same code as on Huggingface.co ## Actual results TypeError: __init__() got an unexpected keyword argument 'return_tensors' ## Environment info - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.7 - PyArrow version: 6.0.0 - Pandas version: 1.4.1
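For reference, here is a sketch of that snippet with the missing import added; it assumes a transformers release whose data collators accept `return_tensors`, and it is an illustration rather than the official documentation fix:

```python
# Sketch of the reported snippet with the two problems addressed:
#  * import DataCollatorWithPadding (it was missing), and
#  * assume a transformers release whose data collators accept `return_tensors`.
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = dataset.map(
    lambda e: tokenizer(e["sentence1"], truncation=True, padding="max_length"),
    batched=True,
)

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

# `split="train"` already returns a single Dataset, so call to_tf_dataset on it
# directly instead of indexing with dataset["train"].
train_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask", "label"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
```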
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4084/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4083/comments
https://api.github.com/repos/huggingface/datasets/issues/4083/events
https://github.com/huggingface/datasets/pull/4083
1,190,025,878
PR_kwDODunzps41gEbu
4,083
Add SacreBLEU Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-04-01T16:24:56
2022-04-12T20:45:00
2022-04-12T20:38:40
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4083/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4083", "html_url": "https://github.com/huggingface/datasets/pull/4083", "diff_url": "https://github.com/huggingface/datasets/pull/4083.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4083.patch", "merged_at": "2022-04-12T20:38:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/4082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4082/comments
https://api.github.com/repos/huggingface/datasets/issues/4082/events
https://github.com/huggingface/datasets/pull/4082
1,189,965,845
PR_kwDODunzps41f3fb
4,082
Add chrF(++) Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-04-01T15:32:12
2022-04-12T20:43:55
2022-04-12T20:38:06
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4082/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4082", "html_url": "https://github.com/huggingface/datasets/pull/4082", "diff_url": "https://github.com/huggingface/datasets/pull/4082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4082.patch", "merged_at": "2022-04-12T20:38:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/4081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4081/comments
https://api.github.com/repos/huggingface/datasets/issues/4081/events
https://github.com/huggingface/datasets/pull/4081
1,189,916,472
PR_kwDODunzps41fsxW
4,081
Close parquet writer properly in `push_to_hub`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-04-01T14:58:50
2022-07-14T19:22:06
2022-04-01T16:16:19
MEMBER
null
We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer. I fixed this by explicitly closing the parquet writer. Close https://github.com/huggingface/datasets/issues/4077.
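The footer behaviour this description relies on is easy to see with plain pyarrow; the following is a minimal illustration of that failure mode (file name made up), not the code changed in this PR:

```python
# Minimal illustration: pyarrow's ParquetWriter only writes the footer
# (including the closing magic bytes) when the writer is closed, so a file
# uploaded before close() cannot be read back as Parquet.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"text": ["a", "b"], "label": [0, 1]})
writer = pq.ParquetWriter("shard.parquet", table.schema)
writer.write_table(table)
writer.close()  # flushes the footer; only upload the file after this point
```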
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4081/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4081", "html_url": "https://github.com/huggingface/datasets/pull/4081", "diff_url": "https://github.com/huggingface/datasets/pull/4081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4081.patch", "merged_at": "2022-04-01T16:16:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/4080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4080/comments
https://api.github.com/repos/huggingface/datasets/issues/4080/events
https://github.com/huggingface/datasets/issues/4080
1,189,667,296
I_kwDODunzps5G6OHg
4,080
NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-04-01T11:34:28
2022-04-01T13:59:10
2022-04-01T13:59:10
CONTRIBUTOR
null
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10.4MB/s] Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ... Traceback (most recent call last): File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module> train() File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train trainer.fit(model, datamodule=dm) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit self._call_and_handle_interrupt( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run self._data_connector.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in prepare_data self.trainer.datamodule.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn fn(*args, **kwargs) File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data raw_dsets = datasets.load_dataset(**load_dataset_kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ``` ## Environment info - `datasets` version: 2.0.0
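Two common first checks for a NonMatchingChecksumError are sketched below; they only help when the mismatch comes from a stale local cache, not when the upstream file itself changed (which the "dataset bug" label on this issue suggests), and the commented-out flag is a 2.0-era assumption:

```python
# Hedged sketch of common first checks for a checksum mismatch: force a fresh
# download, or skip verification (datasets 2.0-era flag). Neither fixes a
# dataset whose upstream file genuinely changed; that needs a script update.
import datasets

ds = datasets.load_dataset(
    "conll2012_ontonotesv5",
    "english_v12",
    download_mode="force_redownload",  # bypass a possibly stale cache
    # ignore_verifications=True,       # escape hatch; use with care
)
```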
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4080/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4079/comments
https://api.github.com/repos/huggingface/datasets/issues/4079/events
https://github.com/huggingface/datasets/pull/4079
1,189,521,576
PR_kwDODunzps41eYRC
4,079
Increase max retries for GitHub datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-04-01T09:34:03
2022-04-01T15:32:40
2022-04-01T15:27:11
MEMBER
null
Because GitHub recurrently has connectivity issues, this PR increases the maximum number of retries when requesting GitHub datasets, as previously done for GitHub metrics: - #4063 Note that this is a temporary solution while we decide when and how to load GitHub datasets from the Hub: - #4059 Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036 CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4079/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4079", "html_url": "https://github.com/huggingface/datasets/pull/4079", "diff_url": "https://github.com/huggingface/datasets/pull/4079.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4079.patch", "merged_at": "2022-04-01T15:27:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/4078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4078/comments
https://api.github.com/repos/huggingface/datasets/issues/4078/events
https://github.com/huggingface/datasets/pull/4078
1,189,513,572
PR_kwDODunzps41eWnl
4,078
Fix GithubMetricModuleFactory instantiation with None download_config
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-04-01T09:26:58
2022-04-01T14:44:51
2022-04-01T14:39:27
MEMBER
null
Recent PR: - #4063 introduced a potential bug if `GithubMetricModuleFactory` is instantiated with a None `download_config`. This PR adds instantiation tests and fixes that potential issue. CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4078/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4078", "html_url": "https://github.com/huggingface/datasets/pull/4078", "diff_url": "https://github.com/huggingface/datasets/pull/4078.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4078.patch", "merged_at": "2022-04-01T14:39:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/4077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4077/comments
https://api.github.com/repos/huggingface/datasets/issues/4077/events
https://github.com/huggingface/datasets/issues/4077
1,189,467,585
I_kwDODunzps5G5dXB
4,077
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
null
2022-04-01T08:49:13
2022-04-01T16:16:19
2022-04-01T16:16:19
CONTRIBUTOR
null
## Describe the bug When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine. Basically, I do: ``` from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="path_to_my_files") dataset.push_to_hub("dataset_name") # works fine, no errors reloaded_dataset = load_dataset("dataset_name") ``` and it returns: ``` /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4077/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4076/comments
https://api.github.com/repos/huggingface/datasets/issues/4076/events
https://github.com/huggingface/datasets/pull/4076
1,188,478,867
PR_kwDODunzps41a1n2
4,076
Add ROUGE Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-31T18:34:34
2022-04-12T20:43:45
2022-04-12T20:37:38
CONTRIBUTOR
null
Add ROUGE metric card. I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4076/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4076", "html_url": "https://github.com/huggingface/datasets/pull/4076", "diff_url": "https://github.com/huggingface/datasets/pull/4076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4076.patch", "merged_at": "2022-04-12T20:37:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/4075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4075/comments
https://api.github.com/repos/huggingface/datasets/issues/4075/events
https://github.com/huggingface/datasets/issues/4075
1,188,462,162
I_kwDODunzps5G1n5S
4,075
Add CCAgT dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-31T18:20:28
2022-07-06T19:03:42
2022-07-06T19:03:42
NONE
null
## Adding a Dataset - **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique - **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample cervical slide, colored with a silver stain, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR). - **Paper:** https://doi.org/10.1109/cbms49503.2020.00110 - **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0 - **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Hi, this is a public version of the dataset that I have been working on; soon we will have another version of this dataset. But until this new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4075/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4074/comments
https://api.github.com/repos/huggingface/datasets/issues/4074/events
https://github.com/huggingface/datasets/issues/4074
1,188,449,142
I_kwDODunzps5G1kt2
4,074
Error in google/xtreme_s dataset card
{ "login": "wranai", "id": 1048544, "node_id": "MDQ6VXNlcjEwNDg1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wranai", "html_url": "https://github.com/wranai", "followers_url": "https://api.github.com/users/wranai/followers", "following_url": "https://api.github.com/users/wranai/following{/other_user}", "gists_url": "https://api.github.com/users/wranai/gists{/gist_id}", "starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wranai/subscriptions", "organizations_url": "https://api.github.com/users/wranai/orgs", "repos_url": "https://api.github.com/users/wranai/repos", "events_url": "https://api.github.com/users/wranai/events{/privacy}", "received_events_url": "https://api.github.com/users/wranai/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
null
2022-03-31T18:07:45
2022-04-01T08:12:56
2022-04-01T08:12:56
NONE
null
**Link:** https://huggingface.co./datasets/google/xtreme_s Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4074/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4073/comments
https://api.github.com/repos/huggingface/datasets/issues/4073/events
https://github.com/huggingface/datasets/pull/4073
1,188,364,711
PR_kwDODunzps41adPA
4,073
Create a metric card for Competition MATH
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-31T16:48:59
2022-04-01T19:02:39
2022-04-01T18:57:13
NONE
null
Proposing metric card for Competition MATH
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4073/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4073", "html_url": "https://github.com/huggingface/datasets/pull/4073", "diff_url": "https://github.com/huggingface/datasets/pull/4073.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4073.patch", "merged_at": "2022-04-01T18:57:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/4072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4072/comments
https://api.github.com/repos/huggingface/datasets/issues/4072/events
https://github.com/huggingface/datasets/pull/4072
1,188,266,410
PR_kwDODunzps41aIUG
4,072
Add installation instructions to image_process doc
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-31T15:29:37
2022-03-31T17:05:46
2022-03-31T17:00:19
COLLABORATOR
null
This PR adds the installation instructions for the Image feature to the image process doc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4072/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4072", "html_url": "https://github.com/huggingface/datasets/pull/4072", "diff_url": "https://github.com/huggingface/datasets/pull/4072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4072.patch", "merged_at": "2022-03-31T17:00:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/4071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4071/comments
https://api.github.com/repos/huggingface/datasets/issues/4071/events
https://github.com/huggingface/datasets/issues/4071
1,187,587,683
I_kwDODunzps5GySZj
4,071
Loading issue for xuyeliu/notebookCDG dataset
{ "login": "Jun-jie-Huang", "id": 46160972, "node_id": "MDQ6VXNlcjQ2MTYwOTcy", "avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jun-jie-Huang", "html_url": "https://github.com/Jun-jie-Huang", "followers_url": "https://api.github.com/users/Jun-jie-Huang/followers", "following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}", "gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions", "organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs", "repos_url": "https://api.github.com/users/Jun-jie-Huang/repos", "events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}", "received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
null
2022-03-31T06:36:29
2022-03-31T08:17:01
2022-03-31T08:16:16
NONE
null
## Dataset viewer issue for '*xuyeliu/notebookCDG*' **Link:** *[link to the dataset viewer page](https://huggingface.co./datasets/xuyeliu/notebookCDG)* *Couldn't load the xuyeliu/notebookCDG with provided scripts: * ``` from datasets import load_dataset dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl") ``` I get an error message as follows: FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] Am I the one who added this dataset ? No
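As a hedged workaround for this particular repo layout, the raw file can be fetched directly from the Hub instead of going through `load_dataset`; the filename comes from the snippet above, and everything else is an assumption about the repo contents:

```python
# Hedged sketch: fetch the raw file from the Hub instead of load_dataset(),
# since the repo has no loading script or supported data files.
# The filename is taken from the report above and is otherwise an assumption.
import pickle

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="xuyeliu/notebookCDG",
    filename="dataset_notebook.pkl",
    repo_type="dataset",
)
with open(path, "rb") as f:
    notebook_data = pickle.load(f)
```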
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4071/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4070/comments
https://api.github.com/repos/huggingface/datasets/issues/4070/events
https://github.com/huggingface/datasets/pull/4070
1,186,810,205
PR_kwDODunzps41VMYq
4,070
Create metric card for seqeval
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T18:08:01
2022-04-01T19:02:58
2022-04-01T18:57:25
NONE
null
Proposing metric card for seqeval. Not sure which values to report for Popular papers though.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4070/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4070", "html_url": "https://github.com/huggingface/datasets/pull/4070", "diff_url": "https://github.com/huggingface/datasets/pull/4070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4070.patch", "merged_at": "2022-04-01T18:57:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/4069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4069/comments
https://api.github.com/repos/huggingface/datasets/issues/4069/events
https://github.com/huggingface/datasets/pull/4069
1,186,790,578
PR_kwDODunzps41VIMJ
4,069
Add support for metadata files to `imagefolder`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T17:47:51
2022-05-03T12:49:00
2022-05-03T12:42:16
COLLABORATOR
null
This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset. To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure: ``` image_id,some_col1_name,some_col2_name rel/path/to/image1.jpg,image1_col1_value,image1_col2_value rel/path/to/image2.jpg,image2_col1_value,image2_col2_value ... ``` This is how the resolution works: ``` - path/to/imagefolder/directory - info.csv - 10.jpg # referenced as 10.jpg in "info.csv" - Cat - 0.jpg # referenced as Cat/0.jpg in "info.csv" - 1.jpg # referenced as Cat/1.jpg in "info.csv" - Dog - 0.jpg # referenced as Dog/0.jpg in "info.csv" - 1.jpg # referenced as Dog/1.jpg in "info.csv" ``` Open questions: 1. IMO it makes more sense to store image metadata as JSON Lines than CSV. CSV is sufficient for textual metadata but not the best for representing bounding boxes, for instance. Also, JSON Lines is more strict, which is good in this case (CSV supports various delimiters, the header line is optional, etc., so it's easier to enforce rules on JSON Lines that it's on CSV) 2. A better name for the `image_id` column, which contains image identifiers? Maybe `image_file` or `image_filename`? 3. WDYT about making `with_metadata=True` the default behavior if the loaded repo/directory contains an `info.csv` file? An example repository: https://huggingface.co./datasets/mariosasko/PetImages. Can be loaded by installing `datasets` from the PR branch and running `load_dataset("mariosasko/PetImages", with_metadata=True)`. cc: @abhishekkrthakur (this PR should address https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) TODOs: - [x] Test - [x] Metadata file nesting ``` - path/to/imagefolder/directory - info.csv - 10.jpg - Cat - info.csv # should have higher precedence in this directory than the top-level info.csv, but we choose the first "eligible" metadata file currently - 0.jpg - 1.jpg ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4069/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4069", "html_url": "https://github.com/huggingface/datasets/pull/4069", "diff_url": "https://github.com/huggingface/datasets/pull/4069.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4069.patch", "merged_at": "2022-05-03T12:42:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/4068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4068/comments
https://api.github.com/repos/huggingface/datasets/issues/4068/events
https://github.com/huggingface/datasets/pull/4068
1,186,765,422
PR_kwDODunzps41VC0I
4,068
Improve out of bounds error message
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T17:22:10
2022-03-31T08:39:08
2022-03-31T08:33:57
MEMBER
null
In 1.18.4, with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users calling `select` with out-of-bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case. I replaced it with a message that is very similar to the one you get when you try to access a list with an out-of-range index.
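A short illustration of the situation the improved message targets, using the public `Dataset.from_dict` and `Dataset.select` APIs; the exact error wording is the one set by this PR.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.select([0, 2])  # works: indices are within range
ds.select([5])     # raises an error similar to a list's "index out of range"
```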
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4068/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4068", "html_url": "https://github.com/huggingface/datasets/pull/4068", "diff_url": "https://github.com/huggingface/datasets/pull/4068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4068.patch", "merged_at": "2022-03-31T08:33:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/4067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4067/comments
https://api.github.com/repos/huggingface/datasets/issues/4067/events
https://github.com/huggingface/datasets/pull/4067
1,186,731,905
PR_kwDODunzps41U7qc
4,067
Update datasets task tags to align tags with models
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T16:49:32
2022-04-13T17:37:27
2022-04-13T17:31:11
MEMBER
null
**Requires https://github.com/huggingface/datasets/pull/4066 to be merged first** Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too much time on it. Note that the CI will never be green for this PR, because many dataset cards have missing tags or sections, and fixing them is out of scope of this PR (the CI on master will be green anyway).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4067/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4067", "html_url": "https://github.com/huggingface/datasets/pull/4067", "diff_url": "https://github.com/huggingface/datasets/pull/4067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4067.patch", "merged_at": "2022-04-13T17:31:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/4066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4066/comments
https://api.github.com/repos/huggingface/datasets/issues/4066/events
https://github.com/huggingface/datasets/pull/4066
1,186,728,104
PR_kwDODunzps41U63x
4,066
Tasks alignment with models
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T16:45:56
2022-04-13T13:12:52
2022-04-08T12:20:00
MEMBER
null
I updated our `tasks.json` file with the new task taxonomy that is aligned with models. The rule that defines a task is the following: **Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granularity/complexity of the code to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition in `transformers` I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability. Main changes: - conditional-text-generation is split between summarization, translation, text-generation and text2text-generation - speech-processing is split into automatic-speech-recognition, audio-classification, etc. - structure-prediction is renamed token-classification - abstractive-qa now belongs to text2text-generation Here is just a simplified YAML dump of `tasks.json`: ```yaml audio-classification: - keyword-spotting - speaker-identification - speaker-intent-classification - emotion-recognition - speaker-language-identification audio-to-audio: [] automatic-speech-recognition: [] conversational: - dialogue-generation feature-extraction: [] fill-mask: - slot-filling - masked-language-modeling image-classification: - multi-label-image-classification - multi-class-image-classification image-segmentation: - instance-segmentation - semantic-segmentation - panoptic-segmentation image-to-text: - image-captioning multiple-choice: - multiple-choice-qa - multiple-choice-coreference-resolution object-detection: - face-detection - vehicle-detection question-answering: - extractive-qa - open-domain-qa - closed-domain-qa sentence-similarity: [] tabular-classification: [] tabular-to-text: - rdf-to-text summarization: - news-articles-summarization - news-articles-headline-generation table-to-text: [] table-question-answering: [] text-classification: - acceptability-classification - entity-linking-classification - fact-checking - intent-classification - multi-class-classification - multi-label-classification - natural-language-inference - semantic-similarity-classification - sentiment-classification - topic-classification - semantic-similarity-scoring - sentiment-scoring - sentiment-analysis - hate-speech-detection - text-scoring text-generation: - dialogue-modeling - language-modeling text-retrieval: - document-retrieval - utterance-retrieval - entity-linking-retrieval - fact-checking-retrieval text-to-image: [] text-to-tabular: - relation-extraction - semantic-role-labeling text-to-speech: [] text2text-generation: - text-simplification - explanation-generation - abstractive-qa - open-domain-abstractive-qa - closed-domain-qa - open-book-qa - closed-book-qa time-series-forecasting: - univariate-time-series-forecasting - multivariate-time-series-forecasting token-classification: - named-entity-recognition - part-of-speech-tagging - parsing - lemmatization - word-sense-disambiguation - coreference-resolution translation: [] visual-question-answering: [] voice-activity-detection: [] zero-shot-classification: [] zero-shot-image-classification: [] reinforcement-learning: [] other: [] ``` Feel free to comment and give suggestions, especially if you think we can also align this list with other projects cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4066/reactions", "total_count": 7, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4066/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4066", "html_url": "https://github.com/huggingface/datasets/pull/4066", "diff_url": "https://github.com/huggingface/datasets/pull/4066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4066.patch", "merged_at": "2022-04-08T12:20:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/4065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4065/comments
https://api.github.com/repos/huggingface/datasets/issues/4065/events
https://github.com/huggingface/datasets/pull/4065
1,186,722,478
PR_kwDODunzps41U5rq
4,065
Create metric card for METEOR
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T16:40:30
2022-03-31T17:12:10
2022-03-31T17:07:50
NONE
null
Proposing a metric card for METEOR
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4065/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4065", "html_url": "https://github.com/huggingface/datasets/pull/4065", "diff_url": "https://github.com/huggingface/datasets/pull/4065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4065.patch", "merged_at": "2022-03-31T17:07:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/4064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4064/comments
https://api.github.com/repos/huggingface/datasets/issues/4064/events
https://github.com/huggingface/datasets/pull/4064
1,186,650,321
PR_kwDODunzps41UqXS
4,064
Contributing MedMCQA dataset
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T15:42:47
2022-05-06T09:40:40
2022-05-06T08:42:56
CONTRIBUTOR
null
Adding the MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa ) **Name**: MedMCQA **Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. MedMCQA contains more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity. The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM), Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics, Pathology, Pediatrics, Pharmacology, Physiology, Psychiatry, Radiology, Skin, Preventive & Social Medicine (PSM), and Surgery. **Code**: https://github.com/medmcqa/medmcqa All files are in place: **a dataset script**: medmcqa.py **a dataset card with tags and information**: README.md **a metadata file**: dataset_infos.json **a dummy-data file**: please help me generate this file; I was facing a `raise JSONDecodeError("Extra data", s, end)` error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4064/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4064", "html_url": "https://github.com/huggingface/datasets/pull/4064", "diff_url": "https://github.com/huggingface/datasets/pull/4064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4064.patch", "merged_at": "2022-05-06T08:42:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/4063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4063/comments
https://api.github.com/repos/huggingface/datasets/issues/4063/events
https://github.com/huggingface/datasets/pull/4063
1,186,611,368
PR_kwDODunzps41UiDm
4,063
Increase max retries for GitHub metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T15:12:48
2022-03-31T14:42:52
2022-03-31T14:37:47
MEMBER
null
As GitHub recurrently suffers connectivity issues, this PR increases the maximum number of retries when requesting GitHub metrics. Related to: - #3134 Also related to: - #4059
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4063/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4063", "html_url": "https://github.com/huggingface/datasets/pull/4063", "diff_url": "https://github.com/huggingface/datasets/pull/4063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4063.patch", "merged_at": "2022-03-31T14:37:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/4062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4062/comments
https://api.github.com/repos/huggingface/datasets/issues/4062/events
https://github.com/huggingface/datasets/issues/4062
1,186,330,732
I_kwDODunzps5Gtfhs
4,062
Loading mozilla-foundation/common_voice_7_0 dataset failed
{ "login": "aapot", "id": 19529125, "node_id": "MDQ6VXNlcjE5NTI5MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aapot", "html_url": "https://github.com/aapot", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "organizations_url": "https://api.github.com/users/aapot/orgs", "repos_url": "https://api.github.com/users/aapot/repos", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "received_events_url": "https://api.github.com/users/aapot/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-30T11:39:41
2024-06-09T12:12:46
2022-03-31T08:18:04
NONE
null
## Describe the bug I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than just the `fi` and `test` split. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN") ``` ## Expected results load `mozilla-foundation/common_voice_7_0` dataset succesfully ## Actual results ``` JSONDecodeError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 909 try: --> 910 return complexjson.loads(self.text, **kwargs) 911 except JSONDecodeError as e: /opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw) 524 and not use_decimal and not kw): --> 525 return _default_decoder.decode(s) 526 if cls is None: /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3) 369 s = str(s, self.encoding) --> 370 obj, end = self.raw_decode(s) 371 end = _w(s, end).end() /opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3) 399 idx += 3 --> 400 return self.scan_once(s, idx=_w(s, idx).end()) JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: JSONDecodeError Traceback (most recent call last) /tmp/ipykernel_358/370980805.py in <module> 1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split ----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 ignore_verifications=ignore_verifications, 1691 try_from_hf_gcs=try_from_hf_gcs, -> 1692 use_auth_token=use_auth_token, 1693 ) 1694 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 if not downloaded_from_gcs: 605 self._download_and_prepare( --> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 607 ) 608 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1102 1103 def _download_and_prepare(self, dl_manager, verify_infos): -> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1105 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 670 split_dict = SplitDict(dataset_name=self.name) 671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 673 674 # Checksums verification 
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager) 151 152 self._log_download(self.config.name, bundle_version, hf_auth_token) --> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) 154 155 if self.config.version < datasets.Version("5.0.0"): ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template) 130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'") 131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024 --> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json() 133 return response["url"] 134 /opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs) 915 raise RequestsJSONDecodeError(e.message) 916 else: --> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos) 918 919 @property JSONDecodeError: [Errno Expecting value] Not Found: 0 ``` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 5.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4062/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4061/comments
https://api.github.com/repos/huggingface/datasets/issues/4061/events
https://github.com/huggingface/datasets/issues/4061
1,186,317,071
I_kwDODunzps5GtcMP
4,061
Loading cnn_dailymail dataset failed
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-30T11:29:02
2022-03-30T13:36:14
2022-03-30T13:36:14
NONE
null
## Describe the bug I wanted to load the cnn_dailymail dataset from Hugging Face Datasets in JupyterLab, but I am getting the error ` NotADirectoryError: [Errno 20] Not a directory ` while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` ## Expected results load the `cnn_dailymail` dataset successfully ## Actual results failed to load with the error > NotADirectoryError: [Errno 20] Not a directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Ubuntu-20.04 - Python version: 3.9.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4061/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4060/comments
https://api.github.com/repos/huggingface/datasets/issues/4060/events
https://github.com/huggingface/datasets/pull/4060
1,186,281,033
PR_kwDODunzps41Tbmg
4,060
Deprecate canonical Multilingual Librispeech
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T10:56:56
2022-04-01T12:54:05
2022-04-01T12:48:51
CONTRIBUTOR
null
Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co./datasets/facebook/multilingual_librispeech), which supports streaming. However, there is a problem regarding the new ASR template schema: since it has changed, I guess all community datasets that use this template do not work with the new version of the library, including MLS. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org. Hm, and the code should be changed after the release, no?
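A hedged sketch of loading the community dataset mentioned above in streaming mode; the "german" config name is an assumption based on the MLS language subsets and may need adjusting.

```python
from datasets import load_dataset

# Streaming avoids downloading the full archives up front.
mls = load_dataset("facebook/multilingual_librispeech", "german", streaming=True)
print(next(iter(mls["train"])))  # first streamed example
```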
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4060/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4060", "html_url": "https://github.com/huggingface/datasets/pull/4060", "diff_url": "https://github.com/huggingface/datasets/pull/4060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4060.patch", "merged_at": "2022-04-01T12:48:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/4059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4059/comments
https://api.github.com/repos/huggingface/datasets/issues/4059/events
https://github.com/huggingface/datasets/pull/4059
1,186,149,949
PR_kwDODunzps41TC-o
4,059
Load GitHub datasets from Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-30T09:21:56
2022-09-16T12:43:26
2022-09-16T12:40:43
MEMBER
null
We have recurrently had connection errors when requesting GitHub because sometimes the site is not available. This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub. Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4059/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4059", "html_url": "https://github.com/huggingface/datasets/pull/4059", "diff_url": "https://github.com/huggingface/datasets/pull/4059.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4059.patch", "merged_at": "2022-09-16T12:40:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/4058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4058/comments
https://api.github.com/repos/huggingface/datasets/issues/4058/events
https://github.com/huggingface/datasets/pull/4058
1,185,611,600
PR_kwDODunzps41RPhl
4,058
Updated annotations for nli_tr dataset
{ "login": "e-budur", "id": 2246791, "node_id": "MDQ6VXNlcjIyNDY3OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/e-budur", "html_url": "https://github.com/e-budur", "followers_url": "https://api.github.com/users/e-budur/followers", "following_url": "https://api.github.com/users/e-budur/following{/other_user}", "gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}", "starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-budur/subscriptions", "organizations_url": "https://api.github.com/users/e-budur/orgs", "repos_url": "https://api.github.com/users/e-budur/repos", "events_url": "https://api.github.com/users/e-budur/events{/privacy}", "received_events_url": "https://api.github.com/users/e-budur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-29T23:46:59
2022-04-12T20:55:12
2022-04-12T10:37:22
CONTRIBUTOR
null
This PR adds annotation tags for the `nli_tr` dataset so that the dataset is searchable with respect to the relevant query parameters. The annotations in this PR are based on the existing annotations of the `snli` and `multi_nli` datasets, as `nli_tr` is a machine-generated extension of those datasets. This PR is intended only for updating the annotation labels; a follow-up PR will focus on updating the missing sections in the `README.md` as well. Thank you for taking the time to review it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4058/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4058", "html_url": "https://github.com/huggingface/datasets/pull/4058", "diff_url": "https://github.com/huggingface/datasets/pull/4058.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4058.patch", "merged_at": "2022-04-12T10:37:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/4057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4057/comments
https://api.github.com/repos/huggingface/datasets/issues/4057/events
https://github.com/huggingface/datasets/issues/4057
1,185,442,001
I_kwDODunzps5GqGjR
4,057
`load_dataset` consumes too much memory for audio + tar archives
{ "login": "JFCeron", "id": 50839826, "node_id": "MDQ6VXNlcjUwODM5ODI2", "avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JFCeron", "html_url": "https://github.com/JFCeron", "followers_url": "https://api.github.com/users/JFCeron/followers", "following_url": "https://api.github.com/users/JFCeron/following{/other_user}", "gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}", "starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions", "organizations_url": "https://api.github.com/users/JFCeron/orgs", "repos_url": "https://api.github.com/users/JFCeron/repos", "events_url": "https://api.github.com/users/JFCeron/events{/privacy}", "received_events_url": "https://api.github.com/users/JFCeron/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-29T21:38:55
2022-08-16T10:22:55
2022-08-16T10:22:55
NONE
null
## Description `load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists. ## Steps to reproduce the bug Here's my implementation of `_generate_examples`: ```python class MyDatasetBuilder(datasets.GeneratorBasedBuilder): DEFAULT_WRITER_BATCH_SIZE = 1 ... def _split_generators(self, dl_manager): archive_path = dl_manager.download(_DL_URLS[self.config.name]) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "audio_tarfile_path": archive_path["audio_tarfile"] }, ), ] def _generate_examples(self, audio_tarfile_path): key = 0 with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile: for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 ``` I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and process is killed (`Killed`). Also tried an untarred version of this using `os.walk` but the same happened. I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times. ```python import tarfile def generate_examples(): audio_tarfile = tarfile.open("audios.tar", mode="r|") key = 0 for audio_tarinfo in audio_tarfile: audio_name = audio_tarinfo.name audio_file_obj = audio_tarfile.extractfile(audio_tarinfo) yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}} key += 1 if __name__ == "__main__": examples = generate_examples() for example in examples: pass ``` ## Expected results Memory consumption should be similar to the non-huggingface script. ## Actual results Process is killed after consuming too much memory. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4057/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4056/comments
https://api.github.com/repos/huggingface/datasets/issues/4056/events
https://github.com/huggingface/datasets/issues/4056
1,185,155,775
I_kwDODunzps5GpAq_
4,056
Unexpected behavior of _TempDirWithCustomCleanup
{ "login": "JonasGeiping", "id": 22680696, "node_id": "MDQ6VXNlcjIyNjgwNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JonasGeiping", "html_url": "https://github.com/JonasGeiping", "followers_url": "https://api.github.com/users/JonasGeiping/followers", "following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}", "gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}", "starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions", "organizations_url": "https://api.github.com/users/JonasGeiping/orgs", "repos_url": "https://api.github.com/users/JonasGeiping/repos", "events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}", "received_events_url": "https://api.github.com/users/JonasGeiping/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
null
2022-03-29T16:58:22
2022-03-30T15:08:04
null
NONE
null
## Describe the bug This is not 100% a bug in `datasets`, but behavior that surprised me, and I think this could be made more robust on the `datasets` side. When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using `os.environ["TMPDIR"] = something`, but depending on other imported modules this can fail to take effect. ## Steps to reproduce the bug `_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime if other code imports `tempfile` first and does something unexpected. For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate a temporary path, leading to the wrong path being cached in `tempfile.tempdir`. ## Suggestion: I could also file this as a bug with `transformers`, but I think fixing this on the `datasets` side would be much more robust: `datasets` could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or by resetting the global variable `tempfile.tempdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
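A minimal sketch of the workaround suggested above, relying only on documented stdlib behavior: `tempfile` caches the chosen directory in `tempfile.tempdir`, so resetting it to `None` forces the path to be recomputed from TMPDIR. The target directory below is hypothetical.

```python
import os
import tempfile

os.environ["TMPDIR"] = "/path/with/enough/space"  # hypothetical target directory
tempfile.tempdir = None                           # drop the cached value
print(tempfile.gettempdir())                      # now re-resolved from the new TMPDIR
```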
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4056/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4055/comments
https://api.github.com/repos/huggingface/datasets/issues/4055/events
https://github.com/huggingface/datasets/pull/4055
1,184,976,292
PR_kwDODunzps41PGF1
4,055
[DO NOT MERGE] Test doc-builder
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-29T14:39:02
2022-03-30T12:31:14
2022-03-30T12:25:52
MEMBER
null
This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4055/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4055", "html_url": "https://github.com/huggingface/datasets/pull/4055", "diff_url": "https://github.com/huggingface/datasets/pull/4055.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4055.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4054/comments
https://api.github.com/repos/huggingface/datasets/issues/4054/events
https://github.com/huggingface/datasets/pull/4054
1,184,575,368
PR_kwDODunzps41Nwjz
4,054
Support float data types in pearsonr/spearmanr metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-29T09:29:10
2022-03-29T14:07:59
2022-03-29T14:02:20
MEMBER
null
Fix #4053.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4054/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4054", "html_url": "https://github.com/huggingface/datasets/pull/4054", "diff_url": "https://github.com/huggingface/datasets/pull/4054.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4054.patch", "merged_at": "2022-03-29T14:02:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/4053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4053/comments
https://api.github.com/repos/huggingface/datasets/issues/4053/events
https://github.com/huggingface/datasets/issues/4053
1,184,500,378
I_kwDODunzps5Gmgqa
4,053
Modify datatype from `int32` to `float` for pearsonr, spearmanr.
{ "login": "woodywarhol9", "id": 86637320, "node_id": "MDQ6VXNlcjg2NjM3MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/woodywarhol9", "html_url": "https://github.com/woodywarhol9", "followers_url": "https://api.github.com/users/woodywarhol9/followers", "following_url": "https://api.github.com/users/woodywarhol9/following{/other_user}", "gists_url": "https://api.github.com/users/woodywarhol9/gists{/gist_id}", "starred_url": "https://api.github.com/users/woodywarhol9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/woodywarhol9/subscriptions", "organizations_url": "https://api.github.com/users/woodywarhol9/orgs", "repos_url": "https://api.github.com/users/woodywarhol9/repos", "events_url": "https://api.github.com/users/woodywarhol9/events{/privacy}", "received_events_url": "https://api.github.com/users/woodywarhol9/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-29T08:27:41
2022-03-29T14:02:20
2022-03-29T14:02:20
NONE
null
**Is your feature request related to a problem? Please describe.** - Currently, [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both take their input data as 'int32'. **Describe the solution you'd like** - Considering that these metrics are widely used for the STS task (where labels are of 'float' data type), it would be better to change the datatype from 'int32' to 'float' so that exact similarity values can be computed.
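A minimal sketch of what the relaxed input specification could look like, assuming the metric scripts declare their inputs via `datasets.Features`; the `predictions`/`references` names follow the usual metric convention and are not taken from the merged fix:

```python
import datasets

# Hypothetical feature spec accepting float scores instead of int32 labels;
# the actual change would live in the pearsonr/spearmanr metric scripts.
features = datasets.Features(
    {
        "predictions": datasets.Value("float32"),
        "references": datasets.Value("float32"),
    }
)
print(features)
```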
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4053/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4052/comments
https://api.github.com/repos/huggingface/datasets/issues/4052/events
https://github.com/huggingface/datasets/issues/4052
1,184,447,977
I_kwDODunzps5GmT3p
4,052
metric = metric_cls( TypeError: 'NoneType' object is not callable
{ "login": "klyuhang9", "id": 39409233, "node_id": "MDQ6VXNlcjM5NDA5MjMz", "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/klyuhang9", "html_url": "https://github.com/klyuhang9", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "repos_url": "https://api.github.com/users/klyuhang9/repos", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-29T07:43:08
2022-03-29T14:06:01
2022-03-29T14:06:01
NONE
null
Hi, I have run into a problem. When I run the code: `metric = load_metric('glue', 'rte')` the following error is raised: `metric = metric_cls( TypeError: 'NoneType' object is not callable ` I don't know why. Thanks for your help!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4052/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4051/comments
https://api.github.com/repos/huggingface/datasets/issues/4051/events
https://github.com/huggingface/datasets/issues/4051
1,184,400,179
I_kwDODunzps5GmIMz
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
{ "login": "klyuhang9", "id": 39409233, "node_id": "MDQ6VXNlcjM5NDA5MjMz", "avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/klyuhang9", "html_url": "https://github.com/klyuhang9", "followers_url": "https://api.github.com/users/klyuhang9/followers", "following_url": "https://api.github.com/users/klyuhang9/following{/other_user}", "gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}", "starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions", "organizations_url": "https://api.github.com/users/klyuhang9/orgs", "repos_url": "https://api.github.com/users/klyuhang9/repos", "events_url": "https://api.github.com/users/klyuhang9/events{/privacy}", "received_events_url": "https://api.github.com/users/klyuhang9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-29T07:00:31
2022-05-08T07:27:32
2022-03-29T08:29:25
NONE
null
Hi, I have run into a problem. When I run the code: `dataset = load_dataset('glue','sst2')` the following error is raised: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; the URL opens fine when I view it in Google Chrome. Thanks for your help!
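A minimal retry sketch for this situation, assuming the failure is an intermittent network issue rather than a blocked host; it uses only plain Python and the standard `load_dataset` call, no extra flags:

```python
from datasets import load_dataset

# raw.githubusercontent.com can be intermittently unreachable from some networks
# even when the URL opens fine in a browser, so retry a few times before giving up.
dataset = None
for attempt in range(3):
    try:
        dataset = load_dataset("glue", "sst2")
        break
    except ConnectionError:
        if attempt == 2:
            raise
```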
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4051/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4050/comments
https://api.github.com/repos/huggingface/datasets/issues/4050/events
https://github.com/huggingface/datasets/pull/4050
1,184,346,501
PR_kwDODunzps41NAMF
4,050
Add RVL-CDIP dataset
{ "login": "dnaveenr", "id": 17746528, "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dnaveenr", "html_url": "https://github.com/dnaveenr", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "repos_url": "https://api.github.com/users/dnaveenr/repos", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-29T06:00:02
2022-04-22T09:55:07
2022-04-21T17:15:41
CONTRIBUTOR
null
Resolves #2762 Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762) This PR adds the RVL-CDIP dataset. The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions. - I have added the dummy_data.zip as well. I need input on how to run the real-data and dummy-data tests for datasets that require a manual download. Inputs and suggestions for improvement are welcome. Thank you.
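For context, a hedged sketch of how a user would load a dataset that ships manual_download_instructions, assuming the standard `data_dir` mechanism; the path below is a placeholder and the dataset name simply mirrors this PR:

```python
from datasets import load_dataset

# Point data_dir at the folder holding the files downloaded by hand
# from the Google Drive link in the manual download instructions.
dataset = load_dataset("rvl_cdip", data_dir="/path/to/manually/downloaded/files")
```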
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4050/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4050", "html_url": "https://github.com/huggingface/datasets/pull/4050", "diff_url": "https://github.com/huggingface/datasets/pull/4050.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4050.patch", "merged_at": "2022-04-21T17:15:41" }
true
https://api.github.com/repos/huggingface/datasets/issues/4049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4049/comments
https://api.github.com/repos/huggingface/datasets/issues/4049/events
https://github.com/huggingface/datasets/pull/4049
1,183,832,893
PR_kwDODunzps41LSjv
4,049
Create metric card for the Code Eval metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T18:34:23
2022-03-29T13:38:12
2022-03-29T13:32:50
NONE
null
Creating initial Code Eval metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4049/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4049", "html_url": "https://github.com/huggingface/datasets/pull/4049", "diff_url": "https://github.com/huggingface/datasets/pull/4049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4049.patch", "merged_at": "2022-03-29T13:32:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/4048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4048/comments
https://api.github.com/repos/huggingface/datasets/issues/4048/events
https://github.com/huggingface/datasets/issues/4048
1,183,804,576
I_kwDODunzps5Gj2yg
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
{ "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false }
[ { "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-28T18:12:04
2022-04-08T12:29:30
2022-04-08T12:29:30
CONTRIBUTOR
null
## Describe the bug When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata. Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first. ## Steps to reproduce the bug ```python load_dataset('amazon_us_reviews', 'PC_v1_00') ``` ## Expected results Dataset is downloaded and extracted successfully. ## Actual results A split size exception is thrown. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
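Until the recorded split sizes are refreshed, a hedged workaround sketch is to skip verification at load time; `ignore_verifications` existed in `datasets` 2.0 and merely bypasses the size/checksum check rather than fixing the metadata:

```python
from datasets import load_dataset

# Bypass the split-size/checksum verification so the ~6M extracted rows load anyway.
dataset = load_dataset("amazon_us_reviews", "PC_v1_00", ignore_verifications=True)
```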
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4048/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4047/comments
https://api.github.com/repos/huggingface/datasets/issues/4047/events
https://github.com/huggingface/datasets/issues/4047
1,183,789,237
I_kwDODunzps5GjzC1
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
{ "login": "orkenstein", "id": 1461936, "node_id": "MDQ6VXNlcjE0NjE5MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orkenstein", "html_url": "https://github.com/orkenstein", "followers_url": "https://api.github.com/users/orkenstein/followers", "following_url": "https://api.github.com/users/orkenstein/following{/other_user}", "gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions", "organizations_url": "https://api.github.com/users/orkenstein/orgs", "repos_url": "https://api.github.com/users/orkenstein/repos", "events_url": "https://api.github.com/users/orkenstein/events{/privacy}", "received_events_url": "https://api.github.com/users/orkenstein/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-28T17:59:32
2022-04-01T18:24:57
2022-04-01T18:24:57
NONE
null
## Describe the bug I'm trying to use the `unique()` function, but it fails. ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].column_names dataset['train'].unique(dataset['train'].column_names[0]) ``` ## Expected results It would be nice to actually see the unique items. ## Actual results Error: ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) <ipython-input-10-5e0de07ed42c> in <module>() 6 7 dataset['train'].column_names ----> 8 dataset['train'].unique(dataset['train'].column_names[0]) 5 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>]) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Google Colab - Python version: 3.7.13 - PyArrow version: 6.0.1
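A hedged workaround sketch: the error comes from pyarrow having no `unique` kernel for list<string> columns (the first column of wikiann is a token list), so the distinct items can be collected by flattening in plain Python instead:

```python
from datasets import load_dataset

dataset = load_dataset("wikiann", "en")
column = dataset["train"].column_names[0]  # a list<string> column, hence the missing kernel

# Flatten the per-example lists and deduplicate in plain Python.
distinct_items = set()
for values in dataset["train"][column]:
    distinct_items.update(values)
print(len(distinct_items))
```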
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4047/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4046/comments
https://api.github.com/repos/huggingface/datasets/issues/4046/events
https://github.com/huggingface/datasets/pull/4046
1,183,723,360
PR_kwDODunzps41K6_H
4,046
Create metric card for XNLI
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T16:57:58
2022-03-29T13:32:59
2022-03-29T13:27:30
NONE
null
Proposing a metric card for XNLI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4046/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4046", "html_url": "https://github.com/huggingface/datasets/pull/4046", "diff_url": "https://github.com/huggingface/datasets/pull/4046.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4046.patch", "merged_at": "2022-03-29T13:27:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/4045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4045/comments
https://api.github.com/repos/huggingface/datasets/issues/4045/events
https://github.com/huggingface/datasets/pull/4045
1,183,661,091
PR_kwDODunzps41KtfV
4,045
Fix CLI dummy data generation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T16:09:15
2022-03-31T15:04:12
2022-03-31T14:59:06
MEMBER
null
PR #3868 broke the CLI dummy data generation. Fix #4044.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4045/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4045", "html_url": "https://github.com/huggingface/datasets/pull/4045", "diff_url": "https://github.com/huggingface/datasets/pull/4045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4045.patch", "merged_at": "2022-03-31T14:59:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/4044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4044/comments
https://api.github.com/repos/huggingface/datasets/issues/4044/events
https://github.com/huggingface/datasets/issues/4044
1,183,658,942
I_kwDODunzps5GjTO-
4,044
CLI dummy data generation is broken
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-28T16:07:37
2022-03-31T14:59:06
2022-03-31T14:59:06
MEMBER
null
## Describe the bug We get a TypeError when running CLI dummy data generation: ```shell datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate ``` gives: ``` File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator) TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4044/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4043/comments
https://api.github.com/repos/huggingface/datasets/issues/4043/events
https://github.com/huggingface/datasets/pull/4043
1,183,624,475
PR_kwDODunzps41Kl0b
4,043
Create metric card for CUAD
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T15:38:58
2022-03-29T15:20:56
2022-03-29T15:15:19
NONE
null
Proposing a CUAD metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4043/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4043", "html_url": "https://github.com/huggingface/datasets/pull/4043", "diff_url": "https://github.com/huggingface/datasets/pull/4043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4043.patch", "merged_at": "2022-03-29T15:15:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/4041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4041/comments
https://api.github.com/repos/huggingface/datasets/issues/4041/events
https://github.com/huggingface/datasets/issues/4041
1,183,599,461
I_kwDODunzps5GjEtl
4,041
Add support for IIIF in datasets
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
null
2022-03-28T15:19:25
2022-04-05T18:20:53
null
MEMBER
null
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense, but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Interoperability Framework) > is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions. The tl;dr is that IIIF provides various specifications for implementing useful functionality for: - Institutions to make available images for various use cases - Users to have a consistent way of interacting/requesting these images - For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF). Some institutions with various levels of IIIF support include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/ ## IIIF APIs IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/). ### IIIF Image API The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL: ```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}``` A concrete example of this: ```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg``` As you can see, the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return: ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg) We can change the size to request a size of 250 by 250; this is done by changing the size from `full` to `250,250`, i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg) We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`, i.e. changing the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg` ![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg) A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size ## Why would/could this be useful for datasets? There are a few reasons why support for the IIIF Image API could be useful.
Broadly, the ability to have more control over how an image is returned from a server is useful for many ML workflows: - images can be requested in the right size, which prevents having to download/stream large images when the actual desired size is much smaller - can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc. for downstream use. - options for quality, rotation, the format can all be encoded in the URL request. These may become particularly useful when pre-training models on large image datasets where the cost of downloading images with 1600 pixel width when you actually want 240 has a larger impact. ## What could this look like in datasets? I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully give a sense of possible approaches that match existing `datasets` methods in their approach. ### Use through datasets scripts Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script: ```python ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg") ``` This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script. ### Support through dataset scripts (with some datasets support) This is similar to the above but `datasets` would offer some way of saying this is an IIIF URL and then expose the options associated with IIIF images automatically. i.e. if you did something like: ```python features = {"label": ClassLabel(names=['dog','cat']), "url": datasets.IIIFURL()} ``` inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset. ### Other possible integrations Some other possible pseudocode ways that a user could interact with IIIF URLs: The ability to cast to an `IIIFImage` feature type: ``` ds.cast_column('url', IIIFImage, download=False) ``` The ability to specify some options associated with IIIF URLs. ``` ds = ds.set_iiif_options(column='url', size="250,250") ``` I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways. ## prerequisite requirements There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support, which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support: ### support for handling failed images loaded via a URL (or a specific IIIFImage feature). Working with images via web requests will inevitably return the odd failed request. If one of these images is then requested and doesn't return, it would be useful to have `None` returned instead of an error.
For example, when using `push_to_hub`, `datasets` will try to include the image but currently fails with bad URLs. ```python from datasets import Dataset import datasets urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3 urls.append("badurl.com/image.jpg") data = {"url":urls} ds = Dataset.from_dict(data) ds = ds.cast_column('url', datasets.Image()) ds[3]['url'] ``` returns a `FileNotFoundError`. For streaming large datasets of images using their URLs, it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples, so it might not be desirable to support this. ### Caching support Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here, particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs. ### Support for Parsing IIIF URLs This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share. ## Why it might not be worthwhile/suitable for datasets There are some reasons that this might not be worth implementing: - currently, IIIF is mainly used by cultural heritage organizations (museums, archives, etc.). The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models. - It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble. - Different approaches to image scaling can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images, this could have a downstream impact on model performance. I think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` library's role to protect users from. Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets. ## Suggested next steps: I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
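Since the issue mentions a rough dataclass-based URL helper that isn't shown, here is a minimal sketch of the idea, assuming nothing about the eventual `IIIFImage` feature: a small dataclass that renders the Image API URL template quoted above (field names and defaults are illustrative only):

```python
from dataclasses import dataclass


@dataclass
class IIIFImageURL:
    # Mirrors {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    server: str
    prefix: str
    identifier: str
    region: str = "full"
    size: str = "full"
    rotation: str = "0"
    quality: str = "default"
    fmt: str = "jpg"
    scheme: str = "https"

    def to_url(self) -> str:
        return (
            f"{self.scheme}://{self.server}/{self.prefix}/{self.identifier}/"
            f"{self.region}/{self.size}/{self.rotation}/{self.quality}.{self.fmt}"
        )


# Example: the Stanford image above at max 250x250 while keeping the aspect ratio.
url = IIIFImageURL(
    server="stacks.stanford.edu",
    prefix="image/iiif",
    identifier="hg676jb4964%2F0380_796-44",
    size="!250,250",
).to_url()
print(url)
```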
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4041/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4039/comments
https://api.github.com/repos/huggingface/datasets/issues/4039/events
https://github.com/huggingface/datasets/pull/4039
1,183,468,927
PR_kwDODunzps41KFIf
4,039
Support streaming xcopa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T13:45:55
2022-03-28T16:26:48
2022-03-28T16:21:46
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4039/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4039", "html_url": "https://github.com/huggingface/datasets/pull/4039", "diff_url": "https://github.com/huggingface/datasets/pull/4039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4039.patch", "merged_at": "2022-03-28T16:21:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/4038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4038/comments
https://api.github.com/repos/huggingface/datasets/issues/4038/events
https://github.com/huggingface/datasets/pull/4038
1,183,189,827
PR_kwDODunzps41JKUG
4,038
[DO NOT MERGE] Test doc-builder with skipped installation feature
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T09:58:31
2023-09-24T10:01:05
2022-03-28T12:29:09
MEMBER
null
This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4038/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4038", "html_url": "https://github.com/huggingface/datasets/pull/4038", "diff_url": "https://github.com/huggingface/datasets/pull/4038.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4038.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4037/comments
https://api.github.com/repos/huggingface/datasets/issues/4037/events
https://github.com/huggingface/datasets/issues/4037
1,183,144,486
I_kwDODunzps5GhVom
4,037
Error while building documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-28T09:22:44
2022-03-28T10:01:52
2022-03-28T10:00:48
MEMBER
null
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4037/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4036/comments
https://api.github.com/repos/huggingface/datasets/issues/4036/events
https://github.com/huggingface/datasets/pull/4036
1,183,126,893
PR_kwDODunzps41I854
4,036
Fix building of documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T09:09:12
2023-09-24T09:55:34
2022-03-28T11:13:22
MEMBER
null
Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct. ``` Fix #4037.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4036/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4036", "html_url": "https://github.com/huggingface/datasets/pull/4036", "diff_url": "https://github.com/huggingface/datasets/pull/4036.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4036.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4035/comments
https://api.github.com/repos/huggingface/datasets/issues/4035/events
https://github.com/huggingface/datasets/pull/4035
1,183,067,456
PR_kwDODunzps41Iwb2
4,035
Add zero_division argument to precision and recall metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T08:19:14
2022-03-28T09:53:07
2022-03-28T09:53:06
MEMBER
null
Fix #4025.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4035/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4035", "html_url": "https://github.com/huggingface/datasets/pull/4035", "diff_url": "https://github.com/huggingface/datasets/pull/4035.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4035.patch", "merged_at": "2022-03-28T09:53:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/4034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4034/comments
https://api.github.com/repos/huggingface/datasets/issues/4034/events
https://github.com/huggingface/datasets/pull/4034
1,183,033,285
PR_kwDODunzps41IpN1
4,034
Fix null checksum in xcopa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T07:48:14
2022-03-28T08:06:14
2022-03-28T08:06:14
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4034/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4034", "html_url": "https://github.com/huggingface/datasets/pull/4034", "diff_url": "https://github.com/huggingface/datasets/pull/4034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4034.patch", "merged_at": "2022-03-28T08:06:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/4033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4033/comments
https://api.github.com/repos/huggingface/datasets/issues/4033/events
https://github.com/huggingface/datasets/pull/4033
1,182,984,445
PR_kwDODunzps41Ie6w
4,033
Fix checksum error in cats_vs_dogs dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-28T07:01:25
2022-03-28T07:49:39
2022-03-28T07:44:24
MEMBER
null
A recent PR updated the metadata JSON file of the cats_vs_dogs dataset: - #3878 However, that new JSON file contains a `None` checksum. This PR fixes it. Fix #4032.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4033/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4033", "html_url": "https://github.com/huggingface/datasets/pull/4033", "diff_url": "https://github.com/huggingface/datasets/pull/4033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4033.patch", "merged_at": "2022-03-28T07:44:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/4032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4032/comments
https://api.github.com/repos/huggingface/datasets/issues/4032/events
https://github.com/huggingface/datasets/issues/4032
1,182,595,697
I_kwDODunzps5GfPpx
4,032
can't download cats_vs_dogs dataset
{ "login": "RRaphaell", "id": 74569835, "node_id": "MDQ6VXNlcjc0NTY5ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74569835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RRaphaell", "html_url": "https://github.com/RRaphaell", "followers_url": "https://api.github.com/users/RRaphaell/followers", "following_url": "https://api.github.com/users/RRaphaell/following{/other_user}", "gists_url": "https://api.github.com/users/RRaphaell/gists{/gist_id}", "starred_url": "https://api.github.com/users/RRaphaell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RRaphaell/subscriptions", "organizations_url": "https://api.github.com/users/RRaphaell/orgs", "repos_url": "https://api.github.com/users/RRaphaell/repos", "events_url": "https://api.github.com/users/RRaphaell/events{/privacy}", "received_events_url": "https://api.github.com/users/RRaphaell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-27T17:05:39
2022-03-28T07:44:24
2022-03-28T07:44:24
NONE
null
## Describe the bug can't download cats_vs_dogs dataset. error: Checksums didn't match for dataset source files ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results loaded successfully. ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip'] ## Environment info fresh google colab notebook
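Until the refreshed checksum metadata ships, a commonly suggested stopgap is to bypass or refresh the verification step. A minimal sketch, assuming a `datasets` release that still exposes `ignore_verifications`:

```python
from datasets import load_dataset

# Stopgap 1: skip checksum/size verification (only if you trust the source URL).
dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)

# Stopgap 2: discard any stale cached archive and download again.
dataset = load_dataset("cats_vs_dogs", download_mode="force_redownload")
```

Neither replaces the real fix (updating the recorded checksum); they only unblock local experimentation.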
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4032/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4031/comments
https://api.github.com/repos/huggingface/datasets/issues/4031/events
https://github.com/huggingface/datasets/issues/4031
1,182,415,124
I_kwDODunzps5GejkU
4,031
Cannot load the dataset conll2012_ontonotesv5
{ "login": "cathyxl", "id": 8326473, "node_id": "MDQ6VXNlcjgzMjY0NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cathyxl", "html_url": "https://github.com/cathyxl", "followers_url": "https://api.github.com/users/cathyxl/followers", "following_url": "https://api.github.com/users/cathyxl/following{/other_user}", "gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}", "starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions", "organizations_url": "https://api.github.com/users/cathyxl/orgs", "repos_url": "https://api.github.com/users/cathyxl/repos", "events_url": "https://api.github.com/users/cathyxl/events{/privacy}", "received_events_url": "https://api.github.com/users/cathyxl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-27T07:38:23
2022-03-28T06:58:31
2022-03-28T06:31:18
NONE
null
## Describe the bug Cannot load the dataset conll2012_ontonotesv5 ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test") print(dataset) ``` ## Expected results The datasets should be downloaded successfully ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4031/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4030/comments
https://api.github.com/repos/huggingface/datasets/issues/4030/events
https://github.com/huggingface/datasets/pull/4030
1,182,157,056
PR_kwDODunzps41FxjE
4,030
Use a constant for the articles regex in SQuAD v2
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-26T23:06:30
2022-04-12T16:30:45
2022-04-12T11:00:24
CONTRIBUTOR
null
The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks. BTW, what could be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe receive a regex as an optional param, with the current value as the default? Similarly for SQuAD v1 (can't they re-use code?).
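A rough sketch of the parameterization floated above, with hypothetical names and independent of the actual SQuAD v2 metric code:

```python
import re
import string

# Default mirrors the English articles the SQuAD normalization strips out.
ARTICLES_REGEX = re.compile(r"\b(a|an|the)\b", re.UNICODE)

def normalize_answer(text, articles_regex=ARTICLES_REGEX):
    """Lower-case, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = articles_regex.sub(" ", text)
    return " ".join(text.split())

# A non-English caller could supply its own article list without forking the metric:
spanish_articles = re.compile(r"\b(el|la|los|las|un|una|unos|unas)\b", re.UNICODE)
print(normalize_answer("La respuesta correcta.", articles_regex=spanish_articles))
```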
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4030/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4030", "html_url": "https://github.com/huggingface/datasets/pull/4030", "diff_url": "https://github.com/huggingface/datasets/pull/4030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4030.patch", "merged_at": "2022-04-12T11:00:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/4029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4029/comments
https://api.github.com/repos/huggingface/datasets/issues/4029/events
https://github.com/huggingface/datasets/issues/4029
1,181,057,011
I_kwDODunzps5GZX_z
4,029
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2022-03-25T17:31:33
2022-05-06T08:35:52
2022-05-06T08:35:52
NONE
null
**Is your feature request related to a problem? Please describe.** I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would like to be able to repeat many different queries on the dataset quickly. **Describe the solution you'd like** dataset objects currently have the .get_nearest_examples() method for text retrieval via FAISS. But this only allows retrieving a specific number of K texts instead of everything above a specified similarity threshold. It would be great if HF Datasets would also support the FAISS method .range_search() for retrieving texts above a certain similarity threshold. see details here: https://github.com/facebookresearch/faiss/issues/1273 **Describe alternatives you've considered** I've considered using native FAISS, but doing this via HF datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets **Additional context** The concrete use-case is: I have a large dataset (wikipedia) and I would like to retrieve all paragraphs which are similar to a query. I will use sentence-transformers for encoding the texts.
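While `.range_search()` is not wired into `datasets`, a minimal sketch of the requested behaviour with native FAISS (assuming normalized sentence-transformer embeddings, so inner product equals cosine similarity):

```python
import faiss
import numpy as np

dim = 384                                       # embedding size (assumption)
embeddings = np.random.rand(10_000, dim).astype("float32")
faiss.normalize_L2(embeddings)                  # inner product == cosine similarity

index = faiss.IndexFlatIP(dim)
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)

# All vectors above the similarity threshold, not just the top-k.
lims, scores, ids = index.range_search(query, 0.8)
matching_rows = ids[lims[0]:lims[1]]            # row indices into the dataset
print(len(matching_rows), "passages above the 0.8 threshold")
```

The returned row indices can then be fed to `dataset.select(matching_rows)` to pull back the corresponding texts.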
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4029/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4028/comments
https://api.github.com/repos/huggingface/datasets/issues/4028/events
https://github.com/huggingface/datasets/pull/4028
1,181,022,675
PR_kwDODunzps41B429
4,028
Fix docs on audio feature installation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T16:55:11
2022-03-31T16:20:47
2022-03-31T16:15:20
MEMBER
null
This PR: - Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`) - Adds a warning for Linux users to manually install the non-Python package `libsndfile` - Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP3 audio files. Related to #4000.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4028/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4028", "html_url": "https://github.com/huggingface/datasets/pull/4028", "diff_url": "https://github.com/huggingface/datasets/pull/4028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4028.patch", "merged_at": "2022-03-31T16:15:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/4027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4027/comments
https://api.github.com/repos/huggingface/datasets/issues/4027/events
https://github.com/huggingface/datasets/issues/4027
1,180,991,344
I_kwDODunzps5GZH9w
4,027
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-25T16:22:28
2022-04-07T10:29:52
2022-03-28T07:58:56
NONE
null
## Describe the bug I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co./docs/datasets/faiss_es#elasticsearch ``` from datasets import load_dataset squad = load_dataset('crime_and_punish', split='train[:1000]') ``` When I run the line: `squad.add_elasticsearch_index("context", host="localhost", port="9200")` I get the error: `TypeError: __init__() missing 1 required positional argument: 'scheme'` ## Expected results No error message ## Actual results ``` TypeError Traceback (most recent call last) [<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>() 1 import elasticsearch ----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200") 6 frames [/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux, Google Colab - Python version: Google Colab (probably 3.7) - PyArrow version: ?
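The traceback comes from newer `elasticsearch` Python clients, which require the scheme in the node configuration. One workaround (a sketch, assuming the `es_client` argument of `add_elasticsearch_index`) is to build the client explicitly, or alternatively to pin `elasticsearch<8`:

```python
from datasets import load_dataset
from elasticsearch import Elasticsearch

ds = load_dataset("crime_and_punish", split="train[:1000]")

# elasticsearch>=8 wants a full URL (scheme included) rather than host/port kwargs.
es_client = Elasticsearch("http://localhost:9200")

ds.add_elasticsearch_index("line", es_client=es_client)   # "line" assumed to be the text column
scores, retrieved = ds.get_nearest_examples("line", "the old pawnbroker", k=5)
```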
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4027/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4026/comments
https://api.github.com/repos/huggingface/datasets/issues/4026/events
https://github.com/huggingface/datasets/pull/4026
1,180,968,774
PR_kwDODunzps41Btcm
4,026
Support streaming xtreme dataset for bucc18 config
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T16:00:40
2022-03-25T16:26:50
2022-03-25T16:21:52
MEMBER
null
Support streaming xtreme dataset for bucc18 config.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4026/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4026", "html_url": "https://github.com/huggingface/datasets/pull/4026", "diff_url": "https://github.com/huggingface/datasets/pull/4026.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4026.patch", "merged_at": "2022-03-25T16:21:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/4025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4025/comments
https://api.github.com/repos/huggingface/datasets/issues/4025/events
https://github.com/huggingface/datasets/issues/4025
1,180,963,105
I_kwDODunzps5GZBEh
4,025
Missing argument in precision/recall
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-25T15:55:52
2022-03-28T09:53:06
2022-03-28T09:53:06
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117) Same issue is present for Recall. **Describe the solution you'd like** Support for **kwargs or adding a new field for `zero_division`. **Describe alternatives you've considered** I could filter the warnings myself, but that is not ideal. **Additional context** I can make the requested changes if this is approved.
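For context, the underlying scikit-learn functions already accept the argument, so the change is essentially forwarding it. A small sketch of the behaviour being requested:

```python
from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 0, 0]   # no positive labels at all
y_pred = [0, 0, 0, 0]   # and no positive predictions

# Without zero_division these calls emit UndefinedMetricWarning.
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0, no warning
print(recall_score(y_true, y_pred, zero_division=1))     # 1.0, no warning
```

With the metric updated, the same keyword would be passed through `metric.compute(predictions=..., references=..., zero_division=0)`.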
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4025/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4024/comments
https://api.github.com/repos/huggingface/datasets/issues/4024/events
https://github.com/huggingface/datasets/pull/4024
1,180,951,817
PR_kwDODunzps41Bp3V
4,024
Doc: image_process small tip
{ "login": "FrancescoSaverioZuppichini", "id": 15908060, "node_id": "MDQ6VXNlcjE1OTA4MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancescoSaverioZuppichini", "html_url": "https://github.com/FrancescoSaverioZuppichini", "followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers", "following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}", "gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions", "organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs", "repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos", "events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T15:44:32
2022-03-31T15:35:35
2022-03-31T15:30:20
NONE
null
I've added a small tip in the `image_process` doc
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4024/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4024", "html_url": "https://github.com/huggingface/datasets/pull/4024", "diff_url": "https://github.com/huggingface/datasets/pull/4024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4024.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4023/comments
https://api.github.com/repos/huggingface/datasets/issues/4023/events
https://github.com/huggingface/datasets/pull/4023
1,180,840,399
PR_kwDODunzps41BSZT
4,023
Replace yahoo_answers_topics data url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T14:08:57
2022-03-28T10:12:56
2022-03-28T10:07:52
MEMBER
null
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4023/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4023", "html_url": "https://github.com/huggingface/datasets/pull/4023", "diff_url": "https://github.com/huggingface/datasets/pull/4023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4023.patch", "merged_at": "2022-03-28T10:07:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/4022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4022/comments
https://api.github.com/repos/huggingface/datasets/issues/4022/events
https://github.com/huggingface/datasets/pull/4022
1,180,816,682
PR_kwDODunzps41BNeA
4,022
Replace dbpedia_14 data url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T13:47:21
2022-03-25T15:03:37
2022-03-25T14:58:49
MEMBER
null
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4022/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4022", "html_url": "https://github.com/huggingface/datasets/pull/4022", "diff_url": "https://github.com/huggingface/datasets/pull/4022.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4022.patch", "merged_at": "2022-03-25T14:58:49" }
true
https://api.github.com/repos/huggingface/datasets/issues/4021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4021/comments
https://api.github.com/repos/huggingface/datasets/issues/4021/events
https://github.com/huggingface/datasets/pull/4021
1,180,805,092
PR_kwDODunzps41BLAf
4,021
Fix `map` remove_columns on empty dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T13:36:29
2022-03-29T13:41:31
2022-03-29T13:35:44
MEMBER
null
On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns: ```python >>> ds = datasets.load_dataset("glue", "rte") >>> ds_filtered = ds.filter(lambda x: x["label"] != -1) >>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"]) >>> print(repr(ds_mapped.column_names)) { 'train': ['sentence1', 'sentence2', 'idx'], 'validation': ['sentence1', 'sentence2', 'idx'], 'test': ['sentence1', 'sentence2', 'label', 'idx'] } ``` I fixed this error and updated the tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4021/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4021", "html_url": "https://github.com/huggingface/datasets/pull/4021", "diff_url": "https://github.com/huggingface/datasets/pull/4021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4021.patch", "merged_at": "2022-03-29T13:35:44" }
true
https://api.github.com/repos/huggingface/datasets/issues/4020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4020/comments
https://api.github.com/repos/huggingface/datasets/issues/4020/events
https://github.com/huggingface/datasets/pull/4020
1,180,636,754
PR_kwDODunzps41Am4R
4,020
Replace amazon_polarity data URL
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T10:50:57
2022-03-25T15:02:36
2022-03-25T14:57:41
MEMBER
null
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4020/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4020", "html_url": "https://github.com/huggingface/datasets/pull/4020", "diff_url": "https://github.com/huggingface/datasets/pull/4020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4020.patch", "merged_at": "2022-03-25T14:57:41" }
true
https://api.github.com/repos/huggingface/datasets/issues/4019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4019/comments
https://api.github.com/repos/huggingface/datasets/issues/4019/events
https://github.com/huggingface/datasets/pull/4019
1,180,628,293
PR_kwDODunzps41AlFk
4,019
Make yelp_polarity streamable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T10:42:51
2022-03-25T15:02:19
2022-03-25T14:57:16
MEMBER
null
It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this with `dl_manager.iter_archive`.
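As a rough illustration of the pattern (not the actual yelp_polarity script), `iter_archive` lazily yields `(member_path, file_object)` pairs, so examples can be generated without extracting the TAR:

```python
import csv
import io

def _generate_examples(files):
    # `files` is expected to be dl_manager.iter_archive(archive_path):
    # it yields (path_inside_archive, binary_file_object) pairs lazily.
    for path, f in files:
        if not path.endswith(".csv"):
            continue
        reader = csv.reader(io.TextIOWrapper(f, encoding="utf-8"))
        for idx, (label, text) in enumerate(reader):
            # Hypothetical column layout: (label, review_text).
            yield f"{path}_{idx}", {"label": int(label) - 1, "text": text}
```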
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4019/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4019", "html_url": "https://github.com/huggingface/datasets/pull/4019", "diff_url": "https://github.com/huggingface/datasets/pull/4019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4019.patch", "merged_at": "2022-03-25T14:57:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/4018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4018/comments
https://api.github.com/repos/huggingface/datasets/issues/4018/events
https://github.com/huggingface/datasets/pull/4018
1,180,622,816
PR_kwDODunzps41Aj7g
4,018
Replace yelp_review_full data url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T10:37:18
2022-03-25T15:01:02
2022-03-25T14:56:10
MEMBER
null
I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive. Close https://github.com/huggingface/datasets/issues/4005
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4018/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4018", "html_url": "https://github.com/huggingface/datasets/pull/4018", "diff_url": "https://github.com/huggingface/datasets/pull/4018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4018.patch", "merged_at": "2022-03-25T14:56:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/4017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4017/comments
https://api.github.com/repos/huggingface/datasets/issues/4017/events
https://github.com/huggingface/datasets/pull/4017
1,180,595,160
PR_kwDODunzps41Ad_L
4,017
Support streaming scan dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T10:11:28
2022-03-25T12:08:55
2022-03-25T12:03:52
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4017/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4017", "html_url": "https://github.com/huggingface/datasets/pull/4017", "diff_url": "https://github.com/huggingface/datasets/pull/4017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4017.patch", "merged_at": "2022-03-25T12:03:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/4016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4016/comments
https://api.github.com/repos/huggingface/datasets/issues/4016/events
https://github.com/huggingface/datasets/pull/4016
1,180,557,828
PR_kwDODunzps41AWBk
4,016
Support streaming blimp dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T09:39:10
2022-03-25T11:19:18
2022-03-25T11:14:13
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4016/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4016", "html_url": "https://github.com/huggingface/datasets/pull/4016", "diff_url": "https://github.com/huggingface/datasets/pull/4016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4016.patch", "merged_at": "2022-03-25T11:14:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/4015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4015/comments
https://api.github.com/repos/huggingface/datasets/issues/4015/events
https://github.com/huggingface/datasets/issues/4015
1,180,510,856
I_kwDODunzps5GXSqI
4,015
Can not correctly parse the classes with imagefolder
{ "login": "YiSyuanChen", "id": 21264909, "node_id": "MDQ6VXNlcjIxMjY0OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YiSyuanChen", "html_url": "https://github.com/YiSyuanChen", "followers_url": "https://api.github.com/users/YiSyuanChen/followers", "following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}", "gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions", "organizations_url": "https://api.github.com/users/YiSyuanChen/orgs", "repos_url": "https://api.github.com/users/YiSyuanChen/repos", "events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}", "received_events_url": "https://api.github.com/users/YiSyuanChen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-25T08:51:17
2022-03-28T01:02:03
2022-03-25T09:27:56
NONE
null
## Describe the bug I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg - ...... - n01695060/ - ...... - val/ - n01440764/ - n01695060/ - ...... ``` At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as: ``` from datasets import load_dataset data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'} ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification") ``` but it resulted in the following error (I mask my personal path as <PERSONAL_PATH>): ``` FileNotFoundError: Unable to find 'https://huggingface.co./datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` Next, I followed a recent issue #3960 to load data as: ``` from datasets import load_dataset data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']} ds = load_dataset("imagefolder", data_files=data_files, task="image-classification") ``` and the data can be loaded without error as: (I copied the val folder to the train folder for illustration) ``` >>> ds DatasetDict({ train: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) val: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) }) ``` However, the parsed classes are wrong (there should be 1000 classes): ``` >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)} ``` ## Expected results I expect the "labels" in ds["train"].features to contain 1000 classes. ## Actual results The "labels" in ds["train"].features contains only 1 wrong class (see the workaround sketch after this entry). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu 18.04 - Python version: Python 3.7.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4015/timeline
null
completed
null
null
false
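A possible workaround for the class-parsing problem in the report above (#4015): point the packaged `imagefolder` loader at each split directory through `data_dir`, so the labels are inferred from the class subfolders (n01440764, ...) rather than from the top-level train/val folders. This is a minimal sketch, assuming a local `imagenet/` tree laid out as in the report and a `datasets` version whose packaged loaders accept `data_dir=` (recent releases do; the 2.0.0 release from the report may require equivalent `data_files` globs instead).

```python
from datasets import DatasetDict, load_dataset

# Load each split from its own directory so the label names come from the class
# subfolders instead of the split folder. When no split structure is detected
# inside data_dir, imagefolder exposes the data under a single "train" split,
# which is why split="train" is requested for both directories.
ds = DatasetDict(
    {
        "train": load_dataset("imagefolder", data_dir="imagenet/train", split="train"),
        "val": load_dataset("imagefolder", data_dir="imagenet/val", split="train"),
    }
)
print(ds["train"].features)  # the class-label feature should now list 1000 names
```

Both directories contain the same 1000 class folders, so the two inferred label mappings should line up; if they ever diverge, one split can be cast to the other split's features with `Dataset.cast`.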
https://api.github.com/repos/huggingface/datasets/issues/4014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4014/comments
https://api.github.com/repos/huggingface/datasets/issues/4014/events
https://github.com/huggingface/datasets/pull/4014
1,180,481,229
PR_kwDODunzps41AGBu
4,014
Support streaming id_clickbait dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T08:18:28
2022-03-25T08:58:31
2022-03-25T08:53:32
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4014/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4014", "html_url": "https://github.com/huggingface/datasets/pull/4014", "diff_url": "https://github.com/huggingface/datasets/pull/4014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4014.patch", "merged_at": "2022-03-25T08:53:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/4013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
https://api.github.com/repos/huggingface/datasets/issues/4013/events
https://github.com/huggingface/datasets/issues/4013
1,180,427,174
I_kwDODunzps5GW-Om
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
{ "login": "hazalturkmen", "id": 42860397, "node_id": "MDQ6VXNlcjQyODYwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hazalturkmen", "html_url": "https://github.com/hazalturkmen", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-25T07:12:02
2022-04-04T08:05:01
2022-03-25T14:16:11
NONE
null
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co./datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co./api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4012/comments
https://api.github.com/repos/huggingface/datasets/issues/4012/events
https://github.com/huggingface/datasets/pull/4012
1,180,350,083
PR_kwDODunzps40_qgo
4,012
Rename wer to cer
{ "login": "pmgautam", "id": 28428143, "node_id": "MDQ6VXNlcjI4NDI4MTQz", "avatar_url": "https://avatars.githubusercontent.com/u/28428143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pmgautam", "html_url": "https://github.com/pmgautam", "followers_url": "https://api.github.com/users/pmgautam/followers", "following_url": "https://api.github.com/users/pmgautam/following{/other_user}", "gists_url": "https://api.github.com/users/pmgautam/gists{/gist_id}", "starred_url": "https://api.github.com/users/pmgautam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pmgautam/subscriptions", "organizations_url": "https://api.github.com/users/pmgautam/orgs", "repos_url": "https://api.github.com/users/pmgautam/repos", "events_url": "https://api.github.com/users/pmgautam/events{/privacy}", "received_events_url": "https://api.github.com/users/pmgautam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-25T05:06:05
2022-03-28T13:57:25
2022-03-28T13:57:25
CONTRIBUTOR
null
The `wer` variable is changed to `cer` in the README file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4012/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4012", "html_url": "https://github.com/huggingface/datasets/pull/4012", "diff_url": "https://github.com/huggingface/datasets/pull/4012.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4012.patch", "merged_at": "2022-03-28T13:57:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/4011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4011/comments
https://api.github.com/repos/huggingface/datasets/issues/4011/events
https://github.com/huggingface/datasets/pull/4011
1,179,885,965
PR_kwDODunzps40-Ho0
4,011
Fix SQuAD v2 metric docs on `references` format
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[ { "id": 4190228726, "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate", "name": "transfer-to-evaluate", "color": "E3165C", "default": false, "description": "" } ]
closed
false
null
[]
null
null
2022-03-24T18:27:10
2023-07-11T09:35:46
2023-07-11T09:35:15
CONTRIBUTOR
null
`references` is not a list of dictionaries but a dictionary that has a list in its values.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4011/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4011", "html_url": "https://github.com/huggingface/datasets/pull/4011", "diff_url": "https://github.com/huggingface/datasets/pull/4011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4011.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4010/comments
https://api.github.com/repos/huggingface/datasets/issues/4010/events
https://github.com/huggingface/datasets/pull/4010
1,179,848,036
PR_kwDODunzps409_QV
4,010
Fix None issue with Sequence of dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T17:58:59
2022-03-28T10:13:53
2022-03-28T10:08:40
MEMBER
null
`Features.encode_example` currently fails if it contains a sequence of dicts like `Sequence({"subcolumn": Value("int32")})` and if `None` is passed instead of the dict. ```python File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example return encode_nested_example(self, example) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in encode_nested_example return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in <dictcomp> return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 998, in encode_nested_example for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj): File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in zip_dict yield key, tuple(d[key] for d in dicts) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in <genexpr> yield key, tuple(d[key] for d in dicts) TypeError: 'NoneType' object is not subscriptable ``` I fixed this issue and updated the tests (this case was missing in the tests). A minimal reproduction sketch follows this entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4010/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4010", "html_url": "https://github.com/huggingface/datasets/pull/4010", "diff_url": "https://github.com/huggingface/datasets/pull/4010.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4010.patch", "merged_at": "2022-03-28T10:08:40" }
true
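A minimal reproduction sketch for the `encode_nested_example` failure described in #4010 above, using the same feature shape as the PR description; the outer column name `col` is just a placeholder. Before the fix this raised the `TypeError` shown in the traceback; with the fix, a `None` value should simply be kept as `None`.

```python
from datasets import Features, Sequence, Value

# Same shape as in the PR description: a sequence of dicts with one sub-column.
features = Features({"col": Sequence({"subcolumn": Value("int32")})})

# Passing None instead of the dict used to end up in zip_dict(schema.feature, None)
# and fail with "TypeError: 'NoneType' object is not subscriptable".
encoded = features.encode_example({"col": None})
print(encoded)  # with the fix applied, this should print {'col': None}
```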
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-24T15:13:38
2022-03-24T15:46:38
2022-03-24T15:17:29
NONE
null
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4008/comments
https://api.github.com/repos/huggingface/datasets/issues/4008/events
https://github.com/huggingface/datasets/pull/4008
1,179,591,068
PR_kwDODunzps409Ixp
4,008
Support streaming daily_dialog dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T14:23:23
2022-03-24T15:29:01
2022-03-24T14:46:58
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4008/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4008", "html_url": "https://github.com/huggingface/datasets/pull/4008", "diff_url": "https://github.com/huggingface/datasets/pull/4008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4008.patch", "merged_at": "2022-03-24T14:46:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/4007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4007/comments
https://api.github.com/repos/huggingface/datasets/issues/4007/events
https://github.com/huggingface/datasets/issues/4007
1,179,381,021
I_kwDODunzps5GS-0d
4,007
set_format does not work with multi-dimensional tensors
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-24T11:27:43
2022-03-30T07:28:57
2022-03-24T14:39:29
NONE
null
## Describe the bug set_format only transforms the last dimension of a multi-dimensional list to a tensor (a workaround sketch follows this entry). ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4007/timeline
null
completed
null
null
false
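One way to work around the behaviour reported in #4007 above, until it is addressed, is to skip `with_format("torch")` for that column and build the tensor yourself in a `set_transform`. A minimal sketch, using a plain nested list in place of the random array from the report.

```python
import torch
from datasets import Dataset

# One example whose value is a 2x2 nested list (stands in for torch.rand((2, 2))).
ds = Dataset.from_dict({"A": [[[0.1, 0.2], [0.3, 0.4]]]})

# Build the full tensor in the on-the-fly transform instead of relying on
# with_format("torch"), which at the time only converted the innermost dimension.
ds.set_transform(lambda batch: {"A": torch.tensor(batch["A"])})

print(ds[0]["A"])        # a single 2x2 tensor rather than a list of 1-D tensors
print(ds[0]["A"].shape)  # expected here: torch.Size([2, 2])
```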
https://api.github.com/repos/huggingface/datasets/issues/4006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4006/comments
https://api.github.com/repos/huggingface/datasets/issues/4006/events
https://github.com/huggingface/datasets/pull/4006
1,179,367,195
PR_kwDODunzps408YnW
4,006
Use audio feature in ASR task template
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T11:15:22
2022-03-24T17:19:29
2022-03-24T16:48:02
MEMBER
null
The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), but it means we can update it. (this makes me think that we should deprecate `multilingual_librispeech` and redirect users to `facebook/multilingual_librispeech`). This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4006/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4006/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4006", "html_url": "https://github.com/huggingface/datasets/pull/4006", "diff_url": "https://github.com/huggingface/datasets/pull/4006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4006.patch", "merged_at": "2022-03-24T16:48:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/4005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4005/comments
https://api.github.com/repos/huggingface/datasets/issues/4005/events
https://github.com/huggingface/datasets/issues/4005
1,179,365,663
I_kwDODunzps5GS7Ef
4,005
Yelp not working
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-24T11:14:00
2022-03-25T14:59:57
2022-03-25T14:56:10
CONTRIBUTOR
null
## Dataset viewer issue for 'yelp_review_full' **Link:** https://huggingface.co./datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset ? No A seemingly identical copy of the dataset, https://huggingface.co./datasets/SetFit/yelp_review_full, works. The original one, https://huggingface.co./datasets/yelp_review_full, has > 20K downloads. (A sketch of the underlying CSV issue follows this entry.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4005/timeline
null
completed
null
null
false
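The "line contains NULL" message in the report above is the kind of error Python's CSV machinery raises when a line contains NUL bytes. The actual fix for the viewer happened server-side, but for a local copy of the underlying CSV the same symptom can be sidestepped by stripping NUL bytes before parsing. A rough sketch, with `train.csv` standing in for a hypothetical local download of the Yelp data.

```python
import csv

# Strip stray NUL bytes before handing the lines to the csv reader; the csv
# module otherwise rejects such lines with "line contains NUL".
with open("train.csv", encoding="utf-8", errors="replace", newline="") as f:
    cleaned = (line.replace("\x00", "") for line in f)
    rows = list(csv.reader(cleaned))

print(f"parsed {len(rows)} rows")
```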
https://api.github.com/repos/huggingface/datasets/issues/4004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4004/comments
https://api.github.com/repos/huggingface/datasets/issues/4004/events
https://github.com/huggingface/datasets/pull/4004
1,179,320,795
PR_kwDODunzps408Onj
4,004
ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
{ "login": "ruanchaves", "id": 14352388, "node_id": "MDQ6VXNlcjE0MzUyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanchaves", "html_url": "https://github.com/ruanchaves", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "repos_url": "https://api.github.com/users/ruanchaves/repos", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T10:37:39
2022-03-28T14:01:46
2022-03-28T13:56:39
CONTRIBUTOR
null
Closes #4003. Fixes the checksum error. Replaces the Google Drive URLs with the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4004/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4004", "html_url": "https://github.com/huggingface/datasets/pull/4004", "diff_url": "https://github.com/huggingface/datasets/pull/4004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4004.patch", "merged_at": "2022-03-28T13:56:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/4003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4003/comments
https://api.github.com/repos/huggingface/datasets/issues/4003/events
https://github.com/huggingface/datasets/issues/4003
1,179,286,877
I_kwDODunzps5GSn1d
4,003
ASSIN2 dataset checksum bug
{ "login": "ruanchaves", "id": 14352388, "node_id": "MDQ6VXNlcjE0MzUyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanchaves", "html_url": "https://github.com/ruanchaves", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "repos_url": "https://api.github.com/users/ruanchaves/repos", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-24T10:08:50
2022-04-27T14:14:45
2022-03-28T13:56:39
CONTRIBUTOR
null
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co./datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) [<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>() ----> 1 load_dataset('assin2') 4 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'] ``` ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("assin2") ``` ## Expected results Load the dataset. ## Actual results The dataset won't load. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Google Colab - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4003/timeline
null
completed
null
null
false
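Until the URL swap from the PR above (#4004) landed, the usual stopgaps for a `NonMatchingChecksumError` were to force a fresh download and/or skip checksum verification. A sketch of that stopgap, with the caveat that if Google Drive is serving an HTML quota page instead of the real archive, skipping verification only moves the failure to a later step; the proper fix is the new URLs.

```python
from datasets import load_dataset

# Stopgap only: redownload and skip the recorded-checksum check.
# The real fix was replacing the broken Google Drive URLs (see #4004).
ds = load_dataset(
    "assin2",
    download_mode="force_redownload",
    ignore_verifications=True,
)
print(ds)
```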
https://api.github.com/repos/huggingface/datasets/issues/4002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4002/comments
https://api.github.com/repos/huggingface/datasets/issues/4002/events
https://github.com/huggingface/datasets/pull/4002
1,179,263,787
PR_kwDODunzps408Cfp
4,002
Support streaming conll2012_ontonotesv5 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T09:49:56
2022-03-24T10:53:41
2022-03-24T10:48:47
MEMBER
null
Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4002/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4002", "html_url": "https://github.com/huggingface/datasets/pull/4002", "diff_url": "https://github.com/huggingface/datasets/pull/4002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4002.patch", "merged_at": "2022-03-24T10:48:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/4001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4001/comments
https://api.github.com/repos/huggingface/datasets/issues/4001/events
https://github.com/huggingface/datasets/issues/4001
1,179,231,418
I_kwDODunzps5GSaS6
4,001
How to generate this multitask dataset for SQuAD? I am getting a value error.
{ "login": "gsk1692", "id": 1963097, "node_id": "MDQ6VXNlcjE5NjMwOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsk1692", "html_url": "https://github.com/gsk1692", "followers_url": "https://api.github.com/users/gsk1692/followers", "following_url": "https://api.github.com/users/gsk1692/following{/other_user}", "gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions", "organizations_url": "https://api.github.com/users/gsk1692/orgs", "repos_url": "https://api.github.com/users/gsk1692/repos", "events_url": "https://api.github.com/users/gsk1692/events{/privacy}", "received_events_url": "https://api.github.com/users/gsk1692/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-24T09:21:51
2022-03-26T09:48:21
2022-03-26T03:35:43
NONE
null
## Dataset viewer issue for 'squad_multitask' **Link:** https://huggingface.co./datasets/vershasaxena91/squad_multitask I am trying to generate the multitask dataset for the squad dataset. However, it gives the error below in the dataset explorer as well as on my local machine. I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format') Error: Status code: 400 Exception: TypeError Message: argument of type 'Value' is not iterable Kindly advise.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4001/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4000/comments
https://api.github.com/repos/huggingface/datasets/issues/4000/events
https://github.com/huggingface/datasets/issues/4000
1,178,844,616
I_kwDODunzps5GQ73I
4,000
load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-24T01:52:32
2022-03-25T17:53:33
2022-03-25T17:53:33
NONE
null
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e... AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. 100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s] 100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s] 100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s] 100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s] 100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s] 100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4000/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3999/comments
https://api.github.com/repos/huggingface/datasets/issues/3999/events
https://github.com/huggingface/datasets/pull/3999
1,178,685,280
PR_kwDODunzps406WN_
3,999
Docs maintenance
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
null
2022-03-23T21:27:33
2022-03-30T17:01:45
2022-03-30T16:56:38
MEMBER
null
This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3999/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3999", "html_url": "https://github.com/huggingface/datasets/pull/3999", "diff_url": "https://github.com/huggingface/datasets/pull/3999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3999.patch", "merged_at": "2022-03-30T16:56:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/3998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3998/comments
https://api.github.com/repos/huggingface/datasets/issues/3998/events
https://github.com/huggingface/datasets/pull/3998
1,178,631,986
PR_kwDODunzps406KyA
3,998
Fix Audio.encode_example() when writing an array
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-23T20:32:13
2022-03-29T14:21:44
2022-03-29T14:16:13
CONTRIBUTOR
null
Closes #3996
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3998/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3998", "html_url": "https://github.com/huggingface/datasets/pull/3998", "diff_url": "https://github.com/huggingface/datasets/pull/3998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3998.patch", "merged_at": "2022-03-29T14:16:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3997/comments
https://api.github.com/repos/huggingface/datasets/issues/3997/events
https://github.com/huggingface/datasets/pull/3997
1,178,566,568
PR_kwDODunzps4058xr
3,997
Sync Features dictionaries
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-23T19:23:51
2022-04-13T15:52:27
2022-04-13T15:46:19
COLLABORATOR
null
This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__`, but this PR doesn't implement it for the following reasons: * it requires replacing all occurrences of `isinstance(obj, dict)` with `isinstance(obj, Mapping)`, which is five times slower than `isinstance(obj, dict)` on my machine, in `features.py` * is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` after it * IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict` because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3997/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3997", "html_url": "https://github.com/huggingface/datasets/pull/3997", "diff_url": "https://github.com/huggingface/datasets/pull/3997.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3997.patch", "merged_at": "2022-04-13T15:46:19" }
true
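A generic sketch of the dict-wrapper pattern discussed in #3997 above, not the actual `datasets` implementation: a `dict` subclass that keeps a secondary bookkeeping mapping in sync on every `__setitem__`/`__delitem__`. As the description notes, plain `dict` subclassing has the pitfall that methods such as `update()` bypass `__setitem__`, which is part of why `UserDict` comes up in the discussion; the class and attribute names below are placeholders.

```python
class SyncedFeatures(dict):
    """Toy stand-in for Features: mirrors which columns require decoding."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._column_requires_decoding = {k: self._requires_decoding(v) for k, v in self.items()}

    @staticmethod
    def _requires_decoding(feature):
        # Stand-in check: decodable features (e.g. Audio, Image) expose a `decode` flag.
        return bool(getattr(feature, "decode", False))

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._column_requires_decoding[key] = self._requires_decoding(value)

    def __delitem__(self, key):
        super().__delitem__(key)
        self._column_requires_decoding.pop(key, None)


feats = SyncedFeatures({"text": "placeholder-string-feature"})
feats["audio"] = type("FakeAudio", (), {"decode": True})()  # hypothetical decodable feature
print(feats._column_requires_decoding)  # {'text': False, 'audio': True}
```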
https://api.github.com/repos/huggingface/datasets/issues/3996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3996/comments
https://api.github.com/repos/huggingface/datasets/issues/3996/events
https://github.com/huggingface/datasets/issues/3996
1,178,415,905
I_kwDODunzps5GPTMh
3,996
Audio.encode_example() throws an error when writing example from array
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-23T17:11:47
2022-03-29T14:16:13
2022-03-29T14:16:13
CONTRIBUTOR
null
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>` ## Steps to reproduce the bug ### Sample code to reproduce the bug ```python # download sample file !wget https://huggingface.co./datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3 arr, sr = librosa.load("common_voice_vi_21824030.mp3") Audio().encode_example({ "path": "common_voice_vi_21824030.mp3", "array": arr, "sampling_rate":sr }) ``` ## Expected results An encoded example (`{"bytes": b'....', "path": 'path'}`) ## Actual results ```python TypeError Traceback (most recent call last) Input In [3], in <module> 1 arr, sr = librosa.load("common_voice_vi_21824030.mp3") ----> 3 Audio().encode_example({ 4 "path": "common_voice_vi_21824030.mp3", 5 "array": arr, 6 "sampling_rate":sr 7 }) File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value) 73 elif isinstance(value, dict) and "array" in value: 74 buffer = BytesIO() ---> 75 sf.write(buffer, value["array"], value["sampling_rate"]) 76 return {"bytes": buffer.getvalue(), "path": value.get("path")} 77 elif value.get("bytes") is not None or value.get("path") is not None: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd) 312 else: 313 channels = data.shape[1] --> 314 with SoundFile(file, 'w', samplerate, channels, 315 subtype, endian, format, closefd) as f: 316 f.write(data) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 625 mode_int = _check_mode(mode) 626 self._mode = mode --> 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian) 1414 original_format = format 1415 if format is None: -> 1416 format = _get_format_from_filename(file, mode) 1417 assert isinstance(format, (_unicode, str)) 1418 else: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode) 1455 pass 1456 if format.upper() not in _formats and 'r' not in mode: -> 1457 raise TypeError("No format specified and unable to get format from " 1458 "file extension: {0!r}".format(file)) 1459 return format TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: datasets master - Platform: Ubuntu 20.04 - Python version: python 3.8.12 - PyArrow version: 6.0.1 ## Solution I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this: ```python sf.write(buffer, value["array"], value["sampling_rate"], format="wav") ``` BTW discovered this when trying to decode audio in mp3 format without torchaudio (would be useful for TensorFlow users), like this: ```python from datasets import load_dataset, Features, Audio ds = load_dataset("common_voice", "vi", split="test") ds = ds.remove_columns("audio") ds.select(range(3)) # 3 samples just for testing def load_mp3_with_librosa(example): arr, sr = librosa.load(example["path"]) example["audio"] = { "path": example["path"], "array": arr, "sampling_rate": sr } return example updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example), features=Features( {"audio": Audio(decode=False)} )) ``` @lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? 🤗
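To make the proposed fix concrete, here is a small standalone sketch (dummy silent array, assumed 16 kHz sampling rate) showing that `soundfile` cannot infer a container format from a `BytesIO` and therefore needs `format` passed explicitly:

```python
from io import BytesIO

import numpy as np
import soundfile as sf

sr = 16_000
arr = np.zeros(sr, dtype=np.float32)  # one second of silence as dummy audio

buffer = BytesIO()
# Without format="wav" this raises: TypeError: No format specified and unable
# to get format from file extension, because a BytesIO has no file extension.
sf.write(buffer, arr, sr, format="wav")
encoded = {"bytes": buffer.getvalue(), "path": None}
print(len(encoded["bytes"]))  # non-zero: a valid in-memory WAV payload
```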
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3996/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3995/comments
https://api.github.com/repos/huggingface/datasets/issues/3995/events
https://github.com/huggingface/datasets/pull/3995
1,178,232,623
PR_kwDODunzps404054
3,995
Close `PIL.Image` file handler in `Image.decode_example`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-23T14:51:48
2022-03-23T18:24:52
2022-03-23T18:19:27
COLLABORATOR
null
Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error. To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926. Fix #3985
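For illustration, a minimal sketch of the pattern this PR relies on (the file path is hypothetical), showing why forcing `load()` and closing the handle avoids the `Too many open files` error:

```python
from PIL import Image

path = "image.png"  # hypothetical local image file

# PIL.Image.open is lazy and keeps the underlying file handle open until the
# pixel data is actually read. Forcing load() inside a context manager decodes
# the image up front so the handle can be released immediately.
with Image.open(path) as image:
    image.load()  # also sets `readonly` to 0, which matters for PIL.Image.__eq__
# the file handle is closed here, so iterating over many images does not
# exhaust the process's open-file limit
```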
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3995/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3995", "html_url": "https://github.com/huggingface/datasets/pull/3995", "diff_url": "https://github.com/huggingface/datasets/pull/3995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3995.patch", "merged_at": "2022-03-23T18:19:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/3994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3994/comments
https://api.github.com/repos/huggingface/datasets/issues/3994/events
https://github.com/huggingface/datasets/pull/3994
1,178,211,138
PR_kwDODunzps404wWu
3,994
Change audio column from string path to Audio feature in ASR task
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-23T14:34:52
2022-03-23T15:43:43
2022-03-23T15:43:43
CONTRIBUTOR
null
Will fix #3990
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3994/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3994", "html_url": "https://github.com/huggingface/datasets/pull/3994", "diff_url": "https://github.com/huggingface/datasets/pull/3994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3994.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3993/comments
https://api.github.com/repos/huggingface/datasets/issues/3993/events
https://github.com/huggingface/datasets/issues/3993
1,178,201,495
I_kwDODunzps5GOe2X
3,993
Streaming dataset + interleave + DataLoader hangs with multiple workers
{ "login": "jpilaul", "id": 614861, "node_id": "MDQ6VXNlcjYxNDg2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jpilaul", "html_url": "https://github.com/jpilaul", "followers_url": "https://api.github.com/users/jpilaul/followers", "following_url": "https://api.github.com/users/jpilaul/following{/other_user}", "gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions", "organizations_url": "https://api.github.com/users/jpilaul/orgs", "repos_url": "https://api.github.com/users/jpilaul/repos", "events_url": "https://api.github.com/users/jpilaul/events{/privacy}", "received_events_url": "https://api.github.com/users/jpilaul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
null
2022-03-23T14:27:29
2023-02-28T14:14:24
null
NONE
null
## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True) it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True) de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True) multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset]) multilingual_dataset = multilingual_dataset.with_format('torch') next(iter(multilingual_dataset)) # works fairly fast dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4) for batch in dataloader: print(len(batch)) # prints nothing after 30 min of waiting dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0) for batch in dataloader: print(len(batch)) # prints right away ``` ## Expected results It should be able to iterate the dataset with multiple workers. ## Actual results Results are printed with `next(iter(multilingual_dataset))` and with `num_workers=0`, but nothing is printed with `num_workers=4` or any number above 0. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - `pytorch` version: 1.10.0+cu113 - Python version: 3.7 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3993/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3992/comments
https://api.github.com/repos/huggingface/datasets/issues/3992/events
https://github.com/huggingface/datasets/issues/3992
1,177,946,153
I_kwDODunzps5GNggp
3,992
Image column is not decoded in map when used with with_transform
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
null
2022-03-23T10:51:13
2022-12-13T16:59:06
2022-12-13T16:59:06
NONE
null
## Describe the bug Image column is not _decoded_ in **map** when used with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ds = ds.with_transform(lambda x: x) # <= This line causes the problem ds = ds.map(add_C, batched=True) print(ds[0]) ``` ## Expected results ``` {'C': <PIL.PngImagePlugin.PngImageFile>, ...} ``` ## Actual results ``` {'C': {'bytes': None, 'path': 'image.png'}, ...} ``` If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3992/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3991/comments
https://api.github.com/repos/huggingface/datasets/issues/3991/events
https://github.com/huggingface/datasets/issues/3991
1,177,362,901
I_kwDODunzps5GLSHV
3,991
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
null
[]
null
null
2022-03-22T22:16:05
2022-03-23T12:57:16
null
NONE
null
## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.* - **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)* - **Motivation:** *Key dataset in the healthcare community* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). FYI @osanseviero @abidlabs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3991/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3990/comments
https://api.github.com/repos/huggingface/datasets/issues/3990/events
https://github.com/huggingface/datasets/issues/3990
1,176,976,247
I_kwDODunzps5GJzt3
3,990
Improve AutomaticSpeechRecognition task template
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
null
2022-03-22T15:41:08
2022-03-23T17:12:40
2022-03-23T17:12:40
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** The [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated, as it uses a path to the audio file as the audio column instead of the Audio feature itself (I guess that's because the Audio feature didn't exist at the time this template was created). **Describe the solution you'd like** Change the audio column from a string path to the Audio feature.
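As an illustration of the requested change, a minimal sketch with a toy in-memory dataset (the file name and 16 kHz sampling rate are assumptions) that casts a string-path column to the `Audio` feature:

```python
from datasets import Audio, Dataset

# Toy dataset whose "audio" column is a plain string path, as the old task
# template assumed.
ds = Dataset.from_dict(
    {"audio": ["common_voice_vi_21824030.mp3"], "text": ["xin chào"]}
)

# Casting turns the column into an Audio feature, which is what the updated
# AutomaticSpeechRecognition template should reference; the cast itself does
# not read the file, so this runs even without the audio file present.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds.features["audio"])
```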
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3990/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3989/comments
https://api.github.com/repos/huggingface/datasets/issues/3989/events
https://github.com/huggingface/datasets/pull/3989
1,176,955,078
PR_kwDODunzps400l1S
3,989
Remove old wikipedia leftovers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-22T15:25:46
2022-03-31T15:35:26
2022-03-31T15:30:16
MEMBER
null
After updating the Wikipedia dataset, remove old wikipedia leftovers from the docs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3989/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3989", "html_url": "https://github.com/huggingface/datasets/pull/3989", "diff_url": "https://github.com/huggingface/datasets/pull/3989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3989.patch", "merged_at": "2022-03-31T15:30:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/3988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3988/comments
https://api.github.com/repos/huggingface/datasets/issues/3988/events
https://github.com/huggingface/datasets/pull/3988
1,176,858,540
PR_kwDODunzps400RGb
3,988
More consistent references in docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-22T14:18:41
2022-03-22T17:06:32
2022-03-22T16:50:44
COLLABORATOR
null
Aligns the internal references with the style discussed in https://github.com/huggingface/datasets/pull/3980. cc @stevhliu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3988/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3988/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3988", "html_url": "https://github.com/huggingface/datasets/pull/3988", "diff_url": "https://github.com/huggingface/datasets/pull/3988.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3988.patch", "merged_at": "2022-03-22T16:50:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/3987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3987/comments
https://api.github.com/repos/huggingface/datasets/issues/3987/events
https://github.com/huggingface/datasets/pull/3987
1,176,481,659
PR_kwDODunzps40zAxF
3,987
Fix Faiss custom_index device
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
null
2022-03-22T09:11:24
2022-03-24T12:18:59
2022-03-24T12:14:12
MEMBER
null
Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored. This PR fixes this by raising a ValueError if both arguments are passed. Alternatively, the `custom_index` could be transferred to the target `device`.
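A minimal sketch of the validation described above (the function and argument names are assumed from this description, not copied from the source):

```python
def check_index_args(custom_index=None, device=None):
    # Reject the ambiguous combination instead of silently ignoring `device`:
    # a custom index should already be placed on the device the caller wants.
    if custom_index is not None and device is not None:
        raise ValueError(
            "Pass either `custom_index` or `device`, not both; "
            "move the custom index to the desired device before passing it."
        )

check_index_args(custom_index=None, device=0)       # fine
# check_index_args(custom_index=object(), device=0) # would raise ValueError
```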
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3987/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3987", "html_url": "https://github.com/huggingface/datasets/pull/3987", "diff_url": "https://github.com/huggingface/datasets/pull/3987.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3987.patch", "merged_at": "2022-03-24T12:14:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
https://api.github.com/repos/huggingface/datasets/issues/3986/events
https://github.com/huggingface/datasets/issues/3986
1,176,429,565
I_kwDODunzps5GHuP9
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
{ "login": "kelvinAI", "id": 10686779, "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kelvinAI", "html_url": "https://github.com/kelvinAI", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "repos_url": "https://api.github.com/users/kelvinAI/repos", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
null
2022-03-22T08:23:21
2023-03-06T16:55:04
null
NONE
null
## Describe the bug The dataset loads indefinitely after modifying the cache path (~/.cache/huggingface). If none of the environment variables are set, this custom dataset loads fine (a JSON-based dataset with a custom dataset loading script). ** Update: Transformers modules face the same issue during loading ## A clear and concise description of what the bug is. Issue: - Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory - No error code, had to terminate the process - There are some files created in the cache directory: ``` custom_cache_dir | -- modules | -- __init__.py | -- datasets_modules | -- __init__.py | -- datasets | -- __init__.py | -- script.py (Dataset loading script) | -- script.lock ``` There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk. ## Steps to reproduce the bug What I've tried: - Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703) - Modifying HF_DATASETS_CACHE (https://huggingface.co./docs/datasets/v1.12.0/cache.html) - Modifying cache_dir param during runtime ```python >>> from datasets import load_dataset >>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache') ``` - Disabling dataset cache ```python >>> from datasets import set_caching_enabled >>> set_caching_enabled(False) ``` ## Expected results Datasets should load / cache as usual, with the only exception that the cache directory is different ## Actual results Any of the actions taken above to change the cache directory result in loading indefinitely without terminating. ## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3985/comments
https://api.github.com/repos/huggingface/datasets/issues/3985/events
https://github.com/huggingface/datasets/issues/3985
1,175,982,937
I_kwDODunzps5GGBNZ
3,985
[image feature] Too many files open error when image feature is returned as a path
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-21T21:54:05
2022-03-23T18:19:27
2022-03-23T18:19:27
CONTRIBUTOR
null
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension on the dataset, I get a `Too many open files` error. This is happening due to the way we load the image feature when a str path is returned from `_generate_examples`. Specifically at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open a file handle to the image but never close it. This, in my understanding, is causing the issue. ## Steps to reproduce the bug Pull the PR locally and run the following code: ```python from datasets import load_dataset dataset = load_dataset("./datasets/textvqa")["train"] data = [item for item in dataset] # Error happens ``` ## Expected results The list comprehension should work smoothly. ## Actual results `Too many open files` error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.10.0 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
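For illustration, a minimal sketch (with a hypothetical helper name) of reading the bytes eagerly so no file handle stays open per example, which is one way to avoid the leak described above:

```python
def encode_image_from_path(path):
    # Read the file eagerly and close it immediately, instead of keeping an
    # open handle around for every example that only provides a string path.
    with open(path, "rb") as f:
        return {"bytes": f.read(), "path": path}
```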
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3985/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3984/comments
https://api.github.com/repos/huggingface/datasets/issues/3984/events
https://github.com/huggingface/datasets/issues/3984
1,175,822,117
I_kwDODunzps5GFZ8l
3,984
Local and automatic tests fail
{ "login": "MarkusSagen", "id": 20767068, "node_id": "MDQ6VXNlcjIwNzY3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkusSagen", "html_url": "https://github.com/MarkusSagen", "followers_url": "https://api.github.com/users/MarkusSagen/followers", "following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}", "gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions", "organizations_url": "https://api.github.com/users/MarkusSagen/orgs", "repos_url": "https://api.github.com/users/MarkusSagen/repos", "events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}", "received_events_url": "https://api.github.com/users/MarkusSagen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
null
2022-03-21T19:07:37
2023-07-25T15:18:40
2023-07-25T15:18:40
NONE
null
## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected results All tests passing ## Actual results ``` tests/test_metric_common.py:91: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run exec(compile(example.source, filename, "single", <doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module> ??? ../datasets/src/datasets/metric.py:430: in compute output = self._compute(**inputs, **compute_kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references) >>> print(results) {'score': 0.0, 'num_edits': 0, 'ref_length': 6.5} """, stored examples: 0) predictions = ['hello there general kenobi', 'foo bar foobar'] references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']] normalized = False, no_punct = False, asian_support = False, case_sensitive = False def _compute( self, predictions, references, normalized: bool = False, no_punct: bool = False, asian_support: bool = False, case_sensitive: bool = False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > sb_ter = TER(normalized, no_punct, asian_support, case_sensitive) E TypeError: __init__() takes 2 positional arguments but 5 were given /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError ------------------------------ Captured stdout call ------------------------------- Trying: predictions = ["hello there general kenobi", "foo bar foobar"] Expecting nothing ok Trying: references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] Expecting nothing ok Trying: ter = datasets.load_metric("ter") Expecting nothing ok Trying: results = ter.compute(predictions=predictions, references=references) Expecting nothing ================================ warnings summary ================================= ../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15 /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import load_source ../datasets/src/datasets/commands/test.py:35 /home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py) class TestCommand(BaseDatasetsCLICommand): tests/commands/test_test.py:33 /home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: 
tests/commands/test_test.py) class TestCommandArgs: tests/test_arrow_dataset.py: 760 warnings tests/test_formatting.py: 60 warnings tests/test_search.py: 31 warnings tests/features/test_array_xd.py: 117 warnings /home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) tests/test_arrow_dataset.py: 154 warnings tests/features/test_array_xd.py: 1 warning /home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) tests/test_arrow_dataset.py: 60 warnings /home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations elif np.issubdtype(values.dtype, np.str): tests/test_arrow_dataset.py: 138 warnings tests/test_formatting.py: 21 warnings /home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations data_struct.dtype == np.object tests/test_arrow_dataset.py: 240 warnings tests/test_formatting.py: 20 warnings /home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects tests/test_arrow_dataset.py: 12 warnings tests/test_search.py: 2 warnings tests/features/test_array_xd.py: 6 warnings tests/features/test_image.py: 4 warnings /home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations [0] + [len(arr) for arr in l_arr], dtype=np.object tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77 /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~ _CITATION = """\ tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \= _CITATION = """\ tests/test_filesystem.py: 105 warnings /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly warn( tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs /home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more. lax._check_user_dtype_supported(dtype, "array") tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html if obj.zone == 'local': tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features _audio /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. 
Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations dtype=np.complex, tests/features/test_array_xd.py::test_array_xd_with_none /home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,) -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================= short test summary info ============================= FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type... ``` ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33 - Python version: 3.8.5 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3984/timeline
null
completed
null
null
false