| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 – 75 |
| comments_url | stringlengths | 67 – 70 |
| events_url | stringlengths | 65 – 68 |
| html_url | stringlengths | 46 – 51 |
| id | int64 | 599M – 1.2B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 1 – 4.12k |
| title | stringlengths | 1 – 276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B – 1,649B |
| updated_at | int64 | 1,587B – 1,649B |
| closed_at | int64 | 1,587B – 1,649B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0 – 228k |
| reactions | dict | |
| timeline_url | stringlengths | 67 – 70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/4122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4122/comments
https://api.github.com/repos/huggingface/datasets/issues/4122/events
https://github.com/huggingface/datasets/issues/4122
1,196,095,072
I_kwDODunzps5HSvZg
4,122
medical_dialog zh has very slow _generate_examples
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,649,340,051,000
1,649,340,051,000
null
NONE
null
## Describe the bug After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours. ## Steps to reproduce the bug The easiest way I've found to download files from Google Drive is to use `gdown` from Google Colab, since both are hosted in Google Cloud and the download speeds are very high. ```python file_ids = [ "1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E", "1tt7weAT1SZknzRFyLXOT2fizceUUVRXX", "1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc", "1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J", "1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu", "1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP", "1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c", "1pA3bCFA5nZDhsQutqsJcH3d712giFb0S", "1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU", "1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD", "1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH", ] for i in file_ids: url = f"https://drive.google.com/uc?id={i}" !gdown $url from datasets import load_dataset ds = load_dataset("medical_dialog", "zh", data_dir="./") ``` ## Expected results Faster load time ## Actual results `Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]` ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 @vrindaprabhu, could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten.
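For reference, here is a minimal sketch of what a lazily streaming `_generate_examples` can look like, assuming the slowdown comes from loading and re-scanning whole files in memory rather than from yielding examples as they are read. The file layout and field names below are hypothetical, not the actual `medical_dialog` script:

```python
import json

def _generate_examples(filepaths):
    # Hypothetical sketch: stream one JSON record per line and yield it
    # immediately, so the per-example cost stays constant instead of
    # growing with the amount of data already processed.
    key = 0
    for path in filepaths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                yield key, {"dialogue": record["dialogue"]}
                key += 1
```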
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4122/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4121/comments
https://api.github.com/repos/huggingface/datasets/issues/4121/events
https://github.com/huggingface/datasets/issues/4121
1,196,000,018
I_kwDODunzps5HSYMS
4,121
datasets.load_metric cannot load a local metric
{ "login": "Gare-Ng", "id": 51749469, "node_id": "MDQ6VXNlcjUxNzQ5NDY5", "avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gare-Ng", "html_url": "https://github.com/Gare-Ng", "followers_url": "https://api.github.com/users/Gare-Ng/followers", "following_url": "https://api.github.com/users/Gare-Ng/following{/other_user}", "gists_url": "https://api.github.com/users/Gare-Ng/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gare-Ng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gare-Ng/subscriptions", "organizations_url": "https://api.github.com/users/Gare-Ng/orgs", "repos_url": "https://api.github.com/users/Gare-Ng/repos", "events_url": "https://api.github.com/users/Gare-Ng/events{/privacy}", "received_events_url": "https://api.github.com/users/Gare-Ng/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,649,335,736,000
1,649,339,607,000
1,649,339,607,000
NONE
null
## Describe the bug No matter how hard I try to tell load_metric that I want to load a local metric file, it still tries to fetch things from the Internet, and unfortunately fails with 'ConnectionError: Couldn't reach'. However, I can download this file myself without any connection error and point load_metric at its local directory, and then it fails in the same way... ## Steps to reproduce the bug ```python metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py metric = load_metric(path='bleu') ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py metric = load_metric(path='./blue/bleu.py') ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py ``` ## Expected results I did read the docs [here](https://huggingface.co./docs/datasets/package_reference/loading_methods#datasets.load_metric). There is no parameter other than path that helps the function distinguish a local file from an online one. With the code above, it should load from the local file. ## Actual results > metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') > ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module> ----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py') D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 817 if data_files is None and data_dir is not None: 818 data_files = os.path.join(data_dir, "**") --> 819 820 self.name = name 821 self.revision = revision D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs) 639 self, 640 path: str, --> 641 download_config: Optional[DownloadConfig] = None, 642 download_mode: Optional[DownloadMode] = None, 643 dynamic_modules_path: Optional[str] = None, D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 297 token = hf_api.HfFolder.get_token() 298 if token: --> 299 headers["authorization"] = f"Bearer {token}" 300 return headers 301 D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 604 def _resumable_file_manager(): 605 with open(incomplete_path, "a+b") as f: --> 606 yield f 607 608 temp_file_manager = _resumable_file_manager ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.7.13 - PyArrow version: 7.0.0 - Pandas version: 1.3.4 Any advice would be appreciated.
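As a stopgap while the loader insists on fetching the script's external dependency, BLEU can be computed fully offline with the `sacrebleu` package. Note this is a different BLEU implementation than the `bleu` metric script, swapped in here only to avoid the network round trip:

```python
import sacrebleu  # pip install sacrebleu

predictions = ["the cat sat on the mat"]
references = [["the cat sat on the mat"]]  # one reference stream, aligned with predictions

# corpus_bleu takes a list of hypotheses and a list of reference streams
score = sacrebleu.corpus_bleu(predictions, references)
print(score.score)  # BLEU in [0, 100]
```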
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4121/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4120/comments
https://api.github.com/repos/huggingface/datasets/issues/4120/events
https://github.com/huggingface/datasets/issues/4120
1,195,887,430
I_kwDODunzps5HR8tG
4,120
Representing dictionaries (json) objects as features
{ "login": "yanaiela", "id": 8031035, "node_id": "MDQ6VXNlcjgwMzEwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanaiela", "html_url": "https://github.com/yanaiela", "followers_url": "https://api.github.com/users/yanaiela/followers", "following_url": "https://api.github.com/users/yanaiela/following{/other_user}", "gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions", "organizations_url": "https://api.github.com/users/yanaiela/orgs", "repos_url": "https://api.github.com/users/yanaiela/repos", "events_url": "https://api.github.com/users/yanaiela/events{/privacy}", "received_events_url": "https://api.github.com/users/yanaiela/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,649,329,661,000
1,649,329,661,000
null
NONE
null
In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and which may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). For instance: ``` sample1 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, }} sample2 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, }} sample3 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, "d": {"id": 3, "text": "text4"}, }} ``` the `nps` field cannot be represented as a Feature while maintaining its original structure. @lhoestq suggested adding JSON as a new feature type, which would solve this problem. An alternative solution would be to change the original data format, which isn't optimal in my case. Moreover, JSON is a common structure that is likely to be useful in future datasets as well.
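Until a JSON feature type exists, one possible workaround (a sketch, not an official recommendation) is to flatten the varying-key mapping into a sequence of structs that carries the key explicitly, which `Features` can already describe:

```python
from datasets import Dataset, Features, Sequence, Value

features = Features({
    "nps": Sequence({
        "key": Value("string"),
        "id": Value("int32"),
        "text": Value("string"),
    })
})

def flatten_nps(sample):
    # {"a": {"id": 0, "text": "text1"}, ...} -> [{"key": "a", "id": 0, "text": "text1"}, ...]
    return {"nps": [{"key": k, **v} for k, v in sample["nps"].items()]}

samples = [
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}}},
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}}},
]
ds = Dataset.from_dict(
    {"nps": [flatten_nps(s)["nps"] for s in samples]}, features=features
)
```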
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4120/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4119/comments
https://api.github.com/repos/huggingface/datasets/issues/4119/events
https://github.com/huggingface/datasets/pull/4119
1,195,641,298
PR_kwDODunzps41yXHF
4,119
Hotfix failing CI tests on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,317,126,000
1,649,324,844,000
1,649,318,233,000
MEMBER
null
This PR applies a hotfix for our Windows CI tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414 Fix #4118 I guess this issue is related to this PR: - huggingface/huggingface_hub#815
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4119/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4119", "html_url": "https://github.com/huggingface/datasets/pull/4119", "diff_url": "https://github.com/huggingface/datasets/pull/4119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4119.patch", "merged_at": 1649318233000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4118/comments
https://api.github.com/repos/huggingface/datasets/issues/4118/events
https://github.com/huggingface/datasets/issues/4118
1,195,638,944
I_kwDODunzps5HRACg
4,118
Failing CI tests on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,649,316,985,000
1,649,318,233,000
1,649,318,233,000
MEMBER
null
## Describe the bug Our Windows CI tests have been failing since yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4118/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4117/comments
https://api.github.com/repos/huggingface/datasets/issues/4117/events
https://github.com/huggingface/datasets/issues/4117
1,195,552,406
I_kwDODunzps5HQq6W
4,117
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
{ "login": "arymbe", "id": 4567991, "node_id": "MDQ6VXNlcjQ1Njc5OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arymbe", "html_url": "https://github.com/arymbe", "followers_url": "https://api.github.com/users/arymbe/followers", "following_url": "https://api.github.com/users/arymbe/following{/other_user}", "gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}", "starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arymbe/subscriptions", "organizations_url": "https://api.github.com/users/arymbe/orgs", "repos_url": "https://api.github.com/users/arymbe/repos", "events_url": "https://api.github.com/users/arymbe/events{/privacy}", "received_events_url": "https://api.github.com/users/arymbe/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.", "Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'", "This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```", "Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)", "I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html" ]
1,649,310,756,000
1,649,324,263,000
null
NONE
null
## Describe the bug Could you help me, please? I got the following error: AttributeError: module 'huggingface_hub' has no attribute 'hf_api' ## Steps to reproduce the bug It happens when I import datasets: # Sample code to reproduce the bug from datasets import list_datasets, load_dataset, list_metrics, load_metric ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.3-x86_64-i386-64bit - Python version: 3.8.9 - PyArrow version: 7.0.0 - Pandas version: 1.3.5 - Huggingface-hub: 0.5.0 - Transformers: 4.18.0 Thank you in advance.
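A quick diagnostic sketch to confirm which `huggingface_hub` actually gets imported in the failing environment and whether it exposes the submodule `datasets` expects:

```python
import huggingface_hub

print(huggingface_hub.__version__)          # version installed in this environment
print(huggingface_hub.__file__)             # path; reveals a shadowing module or a stale install
print(hasattr(huggingface_hub, "hf_api"))   # the attribute datasets 2.0 relies on
```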
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4117/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4116/comments
https://api.github.com/repos/huggingface/datasets/issues/4116/events
https://github.com/huggingface/datasets/pull/4116
1,194,926,459
PR_kwDODunzps41wCEO
4,116
Pretty print dataset info files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "maybe just do it from now on no? (i.e. not for existing `dataset_infos.json` files)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4116). All of your documentation changes will be reflected on that endpoint.", "> maybe just do it from now on no? (i.e. not for existing dataset_infos.json files)\r\n\r\nYes, or do this only for datasets created with `push_to_hub` to (always) keep the GH datasets small? \r\n", "yep sounds good too on my side! ", "I reverted the change to avoid the size increase and added the `pretty_print` flag, which pretty-prints the JSON, and that flag is only True for datasets created with `push_to_hub`. " ]
1,649,266,848,000
1,649,341,071,000
null
CONTRIBUTOR
null
Adds indentation to the `dataset_infos.json` file when saving for nicer diffs. (suggested by @julien-c) This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea. `src/datasets/info.py` is the only relevant file for reviewers.
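For a feel of the size/diff tradeoff under discussion, a tiny sketch:

```python
import json

# toy stand-in for a dataset_infos.json payload
info = {"splits": {"train": {"num_bytes": 123456, "num_examples": 1000}}}

compact = json.dumps(info)
pretty = json.dumps(info, indent=4)
print(len(compact), len(pretty))  # the indented form is larger on disk but diffs line by line
```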
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4116/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4116", "html_url": "https://github.com/huggingface/datasets/pull/4116", "diff_url": "https://github.com/huggingface/datasets/pull/4116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4116.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4115/comments
https://api.github.com/repos/huggingface/datasets/issues/4115/events
https://github.com/huggingface/datasets/issues/4115
1,194,907,555
I_kwDODunzps5HONej
4,115
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko " ]
1,649,266,183,000
1,649,266,300,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. This is an easy thing to miss, especially if the dataset is very large. **Describe the solution you'd like** Maybe an `ignore` option or something `.gitignore`-style: `dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")` **Describe alternatives you've considered** Filtering the files out manually (a sketch of this follows below).
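A minimal sketch of that manual filtering, assuming an explicit `data_files` list is acceptable in place of `data_dir`:

```python
from pathlib import Path
from datasets import load_dataset

# keep only images that are not under a hidden directory such as .ipynb_checkpoints
data_files = [
    str(p)
    for p in Path("./data/original").rglob("*")
    if p.suffix.lower() in {".png", ".jpg", ".jpeg"}
    and not any(part.startswith(".") for part in p.parts)
]
dataset = load_dataset("imagefolder", data_files=data_files)
```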
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4115/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4114/comments
https://api.github.com/repos/huggingface/datasets/issues/4114/events
https://github.com/huggingface/datasets/issues/4114
1,194,855,345
I_kwDODunzps5HOAux
4,114
Allow downloading just some columns of a dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess", "Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought." ]
1,649,263,126,000
1,649,318,186,000
null
MEMBER
null
**Is your feature request related to a problem? Please describe.** Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case. **Describe the solution you'd like** Be able to download just some columns of a dataset, for example: ```python load_dataset("huggan/wikiart", columns=["artist", "genre"]) ``` Although this might make things a bit complicated in terms of local caching of datasets.
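For the CSV case raised in the comments, a sketch of the `usecols` route (the file name here is hypothetical):

```python
import pandas as pd
from datasets import Dataset

# pandas parses only the requested columns, skipping the rest of each row
df = pd.read_csv("wikiart_metadata.csv", usecols=["artist", "genre"])
ds = Dataset.from_pandas(df)
```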
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4114/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4113/comments
https://api.github.com/repos/huggingface/datasets/issues/4113/events
https://github.com/huggingface/datasets/issues/4113
1,194,843,532
I_kwDODunzps5HN92M
4,113
Multiprocessing with FileLock fails in python 3.9
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,649,262,429,000
1,649,262,429,000
null
MEMBER
null
On python 3.9, this code hangs: ```python from multiprocessing import Pool from filelock import FileLock def run(i): print(f"got the lock in multi process [{i}]") with FileLock("tmp.lock"): with Pool(2) as pool: pool.map(run, range(2)) ``` This is because the subprocesses try to acquire the lock from the main process for some reason. This is not the case in older versions of python. This can cause many issues in python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well. Let's see if we can fix this and have a CI that runs on 3.9. cc @mariosasko @julien-c
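One avenue to test (an assumption about the cause, not a confirmed fix) is the `spawn` start method, so that workers start from a fresh interpreter instead of inheriting the parent's acquired lock state through `fork`:

```python
from multiprocessing import get_context
from filelock import FileLock

def run(i):
    print(f"got the lock in multi process [{i}]")

if __name__ == "__main__":
    with FileLock("tmp.lock"):
        # "spawn" children re-import the module and do not inherit the
        # parent's in-process FileLock state
        with get_context("spawn").Pool(2) as pool:
            pool.map(run, range(2))
```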
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4113/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4112/comments
https://api.github.com/repos/huggingface/datasets/issues/4112/events
https://github.com/huggingface/datasets/issues/4112
1,194,752,765
I_kwDODunzps5HNnr9
4,112
ImageFolder with Grayscale images dataset
{ "login": "ChainYo", "id": 50595514, "node_id": "MDQ6VXNlcjUwNTk1NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChainYo", "html_url": "https://github.com/ChainYo", "followers_url": "https://api.github.com/users/ChainYo/followers", "following_url": "https://api.github.com/users/ChainYo/following{/other_user}", "gists_url": "https://api.github.com/users/ChainYo/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChainYo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChainYo/subscriptions", "organizations_url": "https://api.github.com/users/ChainYo/orgs", "repos_url": "https://api.github.com/users/ChainYo/repos", "events_url": "https://api.github.com/users/ChainYo/events{/privacy}", "received_events_url": "https://api.github.com/users/ChainYo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`/`with_format` as a predefined transform func for `set_transform`/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).", "Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. `ImageFolder` from pytorch is faster in my case but force to have the images on local.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ", "You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior." ]
1,649,257,800,000
1,649,332,246,000
null
NONE
null
Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co./datasets/ChainYo/rvl-cdip) (RVL-CDIP) I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback: ```bash AttributeError: Caught AttributeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__ return self._getitem( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem formatted_output = format_table( File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__ return self.format_row(pa_table) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row return self.recursive_tensorize(row) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize return map_nested(self._recursive_tensorize, data_struct, map_list=False) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested mapped = [ File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp> return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested return function(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize return self._tensorize(data_struct) File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize if np.issubdtype(value.dtype, np.integer): AttributeError: 'bytes' object has no attribute 'dtype' ``` I don't really understand why the image is still a bytes object while I used transformations on it. 
Here is the code I used to upload the dataset (and it worked well): ```python train_dataset = load_dataset("imagefolder", data_dir="data/train") train_dataset = train_dataset["train"] test_dataset = load_dataset("imagefolder", data_dir="data/test") test_dataset = test_dataset["train"] val_dataset = load_dataset("imagefolder", data_dir="data/val") val_dataset = val_dataset["train"] dataset = DatasetDict({ "train": train_dataset, "val": val_dataset, "test": test_dataset }) dataset.push_to_hub("ChainYo/rvl-cdip") ``` Now here is the code I am using to get the dataset and prepare it for training: ```python img_size = 512 batch_size = 128 normalize = [(0.5), (0.5)] data_dir = "ChainYo/rvl-cdip" dataset = load_dataset(data_dir, split="train") transforms = transforms.Compose([ transforms.Resize(img_size), transforms.CenterCrop(img_size), transforms.ToTensor(), transforms.Normalize(*normalize) ]) transformed_dataset = dataset.with_transform(transforms) transformed_dataset.set_format(type="torch", device="cuda") train_dataloader = torch.utils.data.DataLoader( transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True ) ``` But this gets me the error above. I don't understand why it behaves this way. Do I need to map something over the dataset first? Something like this: ```python labels = dataset.features["label"].names num_labels = dataset.features["label"].num_classes def preprocess_data(examples): images = [ex.convert("RGB") for ex in examples["image"]] labels = [ex for ex in examples["label"]] return {"images": images, "labels": labels} features = Features({ "images": Image(decode=True, id=None), "labels": ClassLabel(num_classes=num_labels, names=labels) }) decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4112/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4111/comments
https://api.github.com/repos/huggingface/datasets/issues/4111/events
https://github.com/huggingface/datasets/pull/4111
1,194,660,699
PR_kwDODunzps41vJCt
4,111
Update security policy
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,253,591,000
1,649,324,790,000
1,649,324,427,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4111/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4111", "html_url": "https://github.com/huggingface/datasets/pull/4111", "diff_url": "https://github.com/huggingface/datasets/pull/4111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4111.patch", "merged_at": 1649324427000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4110/comments
https://api.github.com/repos/huggingface/datasets/issues/4110/events
https://github.com/huggingface/datasets/pull/4110
1,194,581,375
PR_kwDODunzps41u4Je
4,110
Matthews Correlation Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4110). All of your documentation changes will be reflected on that endpoint." ]
1,649,249,975,000
1,649,276,083,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4110/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4110", "html_url": "https://github.com/huggingface/datasets/pull/4110", "diff_url": "https://github.com/huggingface/datasets/pull/4110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4110.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4109/comments
https://api.github.com/repos/huggingface/datasets/issues/4109/events
https://github.com/huggingface/datasets/pull/4109
1,194,579,257
PR_kwDODunzps41u3sm
4,109
Add Spearmanr Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4109). All of your documentation changes will be reflected on that endpoint." ]
1,649,249,873,000
1,649,250,517,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4109/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4109", "html_url": "https://github.com/huggingface/datasets/pull/4109", "diff_url": "https://github.com/huggingface/datasets/pull/4109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4109.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4108/comments
https://api.github.com/repos/huggingface/datasets/issues/4108/events
https://github.com/huggingface/datasets/pull/4108
1,194,578,584
PR_kwDODunzps41u3j2
4,108
Perplexity Speedup
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "WRT the high values, can you add some unit tests with some [string, model] pairs and their resulting perplexity code, and @TristanThrush can run the same pairs through his version of the code?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4108). All of your documentation changes will be reflected on that endpoint.", "I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does).\r\n@lhoestq , @TristanThrush thoughts?", "> I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does). @lhoestq , @TristanThrush thoughts?\r\n\r\nI support this change from Emi. If we have a perplexity function that loads GPT2 and then returns an average over all of the strings, then it is impossible to get multiple perplexities of a batch of strings efficiently. If we have this new perplexity function that is built for batching, then it is possible to get a batch of perplexities efficiently and you can still compute the average efficiently afterwards.", "Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n\r\nFor consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n```python\r\nreturn {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n```\r\nwe're also doing this for the COMET metric." ]
1,649,249,841,000
1,649,343,709,000
null
CONTRIBUTOR
null
This PR makes necessary changes to perplexity such that: - it runs much faster (via batching) - it throws an error when the input is empty, or when the input is a single word without a <BOS> token - it adds the option to prepend a <BOS> token Issues: - The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values). - If the values are not correct, can you help me find the error? - If the values are correct, it might be worth measuring something like perplexity per word, which would give actual values for the larger perplexities instead of just `inf` Future: - `stride` is not currently implemented here. I have some thoughts on how to make it work with batching, but I think it would be better to get another set of eyes to look for any possible errors causing such large values now rather than later.
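For readers skimming this thread, here is a minimal, hypothetical sketch of the batched per-string perplexity computation being discussed. The model choice (`gpt2`), the `batch_perplexity` helper, the batch size, and the masking details are assumptions for illustration, not this PR's actual diff.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # assumption: any causal LM would do here
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

def batch_perplexity(texts, batch_size=8, add_bos=True):
    """Return one perplexity value per input string (sketch, not the PR's code)."""
    ppls = []
    loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
    bos = tokenizer.bos_token if add_bos else ""
    for i in range(0, len(texts), batch_size):
        enc = tokenizer([bos + t for t in texts[i : i + batch_size]],
                        return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**enc).logits
        # shift so that tokens < n predict token n
        labels = enc["input_ids"][:, 1:]
        mask = enc["attention_mask"][:, 1:]
        loss = loss_fct(logits[:, :-1].transpose(1, 2), labels) * mask
        # exp of the mean negative log-likelihood per string; an empty input
        # leaves mask.sum(1) == 0, which is the error case noted above
        ppls.extend(torch.exp(loss.sum(1) / mask.sum(1)).tolist())
    return ppls
```

Averaging `ppls` afterwards yields the `mean_perplexity` aggregate proposed in the comments.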
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4108/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4108", "html_url": "https://github.com/huggingface/datasets/pull/4108", "diff_url": "https://github.com/huggingface/datasets/pull/4108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4108.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4107/comments
https://api.github.com/repos/huggingface/datasets/issues/4107/events
https://github.com/huggingface/datasets/issues/4107
1,194,484,885
I_kwDODunzps5HMmSV
4,107
Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
{ "login": "Pavithree", "id": 23344465, "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Pavithree", "html_url": "https://github.com/Pavithree", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "repos_url": "https://api.github.com/users/Pavithree/repos", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting. I'm looking at it", " It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n  File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n    dataset = json.load(f)\r\n  File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n    return loads(fp.read(),\r\n  File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n    return _default_decoder.decode(s)\r\n  File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n    raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n  File \"<stdin>\", line 1, in <module>\r\n  File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n    builder_instance.download_and_prepare(\r\n  File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n    self._download_and_prepare(\r\n  File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n    self._prepare_split(split_generator, **prepare_split_kwargs)\r\n  File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n    for key, table in logging.tqdm(\r\n  File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n    for obj in iterable:\r\n  File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n    raise e\r\n  File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n    pa_table = paj.read_json(\r\n  File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n  File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n  File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ", "It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line", "I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it." ]
1,649,245,035,000
1,649,255,995,000
1,649,255,995,000
NONE
null
## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows **Link:** *https://huggingface.co./datasets/Pavithree/explainLikeImFive* *This is a subset of the original ELI5 dataset https://huggingface.co./datasets/vblagoje/lfqa. I just filtered the data samples that belong to one particular subreddit thread. However, the dataset preview for the train split returns the error below: Status code: 400 Exception: ArrowInvalid Message: Exceeded maximum rows When I try to load the same dataset, it returns the ArrowInvalid: Exceeded maximum rows error.* Am I the one who added this dataset? Yes
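As diagnosed in the comments above, the root cause is that train.json packs several JSON objects into a single line instead of being valid JSON Lines. Below is a hedged sketch of rewriting such a file into one-object-per-line form; the file names are placeholders.

```python
import json

def to_json_lines(src="train.json", dst="train_fixed.json"):
    """Split lines holding several concatenated JSON objects into JSON Lines."""
    decoder = json.JSONDecoder()
    with open(src, encoding="utf-8") as f, open(dst, "w", encoding="utf-8") as out:
        for line in f:
            line, pos = line.strip(), 0
            while pos < len(line):
                # raw_decode returns the parsed object and the index where it ends
                obj, pos = decoder.raw_decode(line, pos)
                out.write(json.dumps(obj, ensure_ascii=False) + "\n")
                while pos < len(line) and line[pos].isspace():
                    pos += 1
```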
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4107/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4106/comments
https://api.github.com/repos/huggingface/datasets/issues/4106/events
https://github.com/huggingface/datasets/pull/4106
1,194,393,892
PR_kwDODunzps41uPpa
4,106
Support huggingface_hub 0.5
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Looks like GH actions is not able to resolve `huggingface_hub` 0.5.0, I'm investivating", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4106). All of your documentation changes will be reflected on that endpoint.", "I'm glad to see changes in `huggingface_hub` are simplifying code here.", "seems to supersede #4102, feel free to close mine :)", "maybe just cherry-pick the docstring fix", "I think I've found the issue:\r\n- https://github.com/huggingface/huggingface_hub/pull/790", "Good catch, `huggingface_hub` doesn't support python 3.6 anymore indeed, therefore we should keep support for 0.4.0. I'm reverting the requirement version bump for now.\r\n\r\nWe can update the requirement once we drop support for python 3.6 in `datasets`", "@lhoestq, I've opened this PR on `huggingface_hub`: \r\n- https://github.com/huggingface/huggingface_hub/pull/823\r\n\r\nIs there any strong reason why `huggingface_hub` no longer supports Python 3.6? ", "I think `datasets` can drop support for 3.6 soon. But for now maybe let's keep support for 0.4.0, python 3.6 users are not affected by https://github.com/huggingface/datasets/issues/4105 anyway.\r\n\r\n`huggingface_hub` doesn't not have to support 3.6 again just for the CI IMO", "@lhoestq I commented on the PR, that IMO it is not a good practice to drop support for Python 3.6 without a previous deprecation cycle.", "Re-added support for older versions. I ended up checking `huggingface_hub` version to use the old, deprecated API for <0.5.0", "I find it good practice to have all dependency version related code in a single file so that when you decide to remove support for an old version of a dependency it's easy to find and remove them, hence suggesting `utils/_fixes.py` in https://github.com/huggingface/datasets/issues/4105#issuecomment-1090041204", "good idea, thanks !", "I used your suggestion @adrinjalali , I just replace the try/except with a check on the version of `huggingface_hub`" ]
1,649,240,125,000
1,649,265,261,000
null
MEMBER
null
Following https://github.com/huggingface/datasets/issues/4105 `huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to `HfApi` to remove all the deprecations, <s>and I set the `huggingface_hub` requirement to `>=0.5.0`</s> cc @adrinjalali @LysandreJik
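The comments settle on checking the installed `huggingface_hub` version rather than a try/except. Here is a minimal sketch of that pattern; the surrounding variables are placeholders, not the PR's exact diff.

```python
import huggingface_hub
from huggingface_hub import HfApi
from packaging import version

api = HfApi()
organization, dataset_name = "my-org", "my-dataset"  # placeholders
token, private = None, False

if version.parse(huggingface_hub.__version__) >= version.parse("0.5.0"):
    # >= 0.5.0: a single repo_id replaces the deprecated name/organization pair
    api.create_repo(repo_id=f"{organization}/{dataset_name}",
                    token=token, repo_type="dataset", private=private)
else:
    # < 0.5.0: keep the old keyword arguments for backwards compatibility
    api.create_repo(name=dataset_name, organization=organization,
                    token=token, repo_type="dataset", private=private)
```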
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4106/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4106", "html_url": "https://github.com/huggingface/datasets/pull/4106", "diff_url": "https://github.com/huggingface/datasets/pull/4106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4106.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4105/comments
https://api.github.com/repos/huggingface/datasets/issues/4105/events
https://github.com/huggingface/datasets/issues/4105
1,194,297,119
I_kwDODunzps5HL4cf
4,105
push to hub fails with huggingface-hub 0.5.0
{ "login": "frascuchon", "id": 2518789, "node_id": "MDQ6VXNlcjI1MTg3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frascuchon", "html_url": "https://github.com/frascuchon", "followers_url": "https://api.github.com/users/frascuchon/followers", "following_url": "https://api.github.com/users/frascuchon/following{/other_user}", "gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}", "starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions", "organizations_url": "https://api.github.com/users/frascuchon/orgs", "repos_url": "https://api.github.com/users/frascuchon/repos", "events_url": "https://api.github.com/users/frascuchon/events{/privacy}", "received_events_url": "https://api.github.com/users/frascuchon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0", "I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.", "PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822", "We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)", "`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)" ]
1,649,235,597,000
1,649,247,704,000
null
NONE
null
## Describe the bug `ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id" ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("rubrix/news_test") ds.push_to_hub("<your-user>/news_test", token="<your-token>") ``` ## Expected results The dataset is successfully uploaded ## Actual results A validation error is raised: ```bash if repo_id and (name or organization): > raise ValueError( "Only pass `repo_id` and leave deprecated `name` and " "`organization` to be None." E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - `huggingface-hub`: 0.5 - Platform: macOS - Python version: 3.8.12 - PyArrow version: 6.0.0 cc @adrinjalali
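For anyone hitting this before upgrading: the underlying change is that `create_repo` in `huggingface_hub` >= 0.5 takes a single `repo_id`. A hedged sketch, in which the repo id is a placeholder:

```python
from huggingface_hub import HfApi

api = HfApi()
# huggingface_hub >= 0.5 style: one repo_id instead of name + organization
api.create_repo(repo_id="my-org/news_test", repo_type="dataset", private=False)
```

Pinning `huggingface_hub==0.4.0` (or upgrading to >= 0.5.1, per the last comment) also resolves the error.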
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4105/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4104/comments
https://api.github.com/repos/huggingface/datasets/issues/4104/events
https://github.com/huggingface/datasets/issues/4104
1,194,072,966
I_kwDODunzps5HLBuG
4,104
Add time series data - stock market
{ "login": "INF800", "id": 45640029, "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/INF800", "html_url": "https://github.com/INF800", "followers_url": "https://api.github.com/users/INF800/followers", "following_url": "https://api.github.com/users/INF800/following{/other_user}", "gists_url": "https://api.github.com/users/INF800/gists{/gist_id}", "starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/INF800/subscriptions", "organizations_url": "https://api.github.com/users/INF800/orgs", "repos_url": "https://api.github.com/users/INF800/repos", "events_url": "https://api.github.com/users/INF800/events{/privacy}", "received_events_url": "https://api.github.com/users/INF800/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ", "cc'ing @kashif and @NielsRogge for visibility!", "@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ", "Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw\r\n2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw\r\n3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw\r\n4. https://github.com/INF800/marktech/tree/raw-data/irbt/data/raw\r\n5. https://github.com/INF800/marktech/tree/raw-data/hll/data/raw\r\n6. https://github.com/INF800/marktech/tree/raw-data/infy/data/raw\r\n7. https://github.com/INF800/marktech/tree/raw-data/reli/data/raw\r\n8. https://github.com/INF800/marktech/tree/raw-data/hdbk/data/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, everyday we will see a new file added in the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure it fits HF dataset standards. (P.S I am very much new to HF dataset)\r\n\r\nThe data set above can be converted into univariate regression / multivariate regression / sequence to sequence generation dataset etc. So, do we have some kind of transformation modules that will read the dataset as some type of dataset (`GenericTimeData`) and convert it to other possible dataset relating to a specific ML task. **By having this kind of transformation module, I only have to add data once** and use transformation module whenever necessary\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ", "thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format." ]
1,649,224,018,000
1,649,235,564,000
null
NONE
null
## Adding a Time Series Dataset - **Name:** 2-minute ticker data for the stock market - **Description:** Data for 8 stocks collected over 1 month following the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below - **Data:** Collected by myself from investing.com - **Motivation:** Test the applicability of transformer-based models on stock market / time series problems ![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png)
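If the scraped per-stock files end up as plain CSVs, they could be loaded without a custom script; a hedged sketch in which the paths and split names are assumptions:

```python
from datasets import load_dataset

# hypothetical paths to the scraped ticker CSVs
data_files = {
    "infy": "data/raw/infy.csv",
    "reli": "data/raw/reli.csv",
}
ds = load_dataset("csv", data_files=data_files)
print(ds["infy"][0])  # one row of OHLCV + indicator columns, as a dict
```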
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4104/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4103/comments
https://api.github.com/repos/huggingface/datasets/issues/4103/events
https://github.com/huggingface/datasets/pull/4103
1,193,987,104
PR_kwDODunzps41s3T4
4,103
Add the `GSM8K` dataset
{ "login": "jon-tow", "id": 41410219, "node_id": "MDQ6VXNlcjQxNDEwMjE5", "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jon-tow", "html_url": "https://github.com/jon-tow", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "repos_url": "https://api.github.com/users/jon-tow/repos", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,649,218,072,000
1,649,272,062,000
null
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4103/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4103", "html_url": "https://github.com/huggingface/datasets/pull/4103", "diff_url": "https://github.com/huggingface/datasets/pull/4103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4103.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4102/comments
https://api.github.com/repos/huggingface/datasets/issues/4102/events
https://github.com/huggingface/datasets/pull/4102
1,193,616,722
PR_kwDODunzps41roGx
4,102
[hub] Fix `api.create_repo` call?
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4102). All of your documentation changes will be reflected on that endpoint." ]
1,649,186,512,000
1,649,247,134,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4102/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4102", "html_url": "https://github.com/huggingface/datasets/pull/4102", "diff_url": "https://github.com/huggingface/datasets/pull/4102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4102.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4101/comments
https://api.github.com/repos/huggingface/datasets/issues/4101/events
https://github.com/huggingface/datasets/issues/4101
1,193,399,204
I_kwDODunzps5HIdOk
4,101
How can I download only the train and test split for full numbers using load_dataset()?
{ "login": "Nakkhatra", "id": 64383902, "node_id": "MDQ6VXNlcjY0MzgzOTAy", "avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nakkhatra", "html_url": "https://github.com/Nakkhatra", "followers_url": "https://api.github.com/users/Nakkhatra/followers", "following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}", "gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions", "organizations_url": "https://api.github.com/users/Nakkhatra/orgs", "repos_url": "https://api.github.com/users/Nakkhatra/repos", "events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}", "received_events_url": "https://api.github.com/users/Nakkhatra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https://huggingface.co./datasets/svhn/blob/main/svhn.py`), remove [this code](https://huggingface.co./datasets/svhn/blob/main/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/your/local/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab." ]
1,649,174,415,000
1,649,250,541,000
null
NONE
null
How can I download only the train and test splits for full_numbers using load_dataset()? I do not need the extra split, and it takes 40 minutes just to download in Colab. I have very little time in hand. Please help.
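For reference, `load_dataset` accepts a `split` argument that restricts what is returned, although, as the comment above notes, all `svhn` splits were still generated during preparation at the time. A minimal sketch:

```python
from datasets import load_dataset

# only the requested splits are returned (preparation may still build the rest)
train_ds, test_ds = load_dataset("svhn", "full_numbers", split=["train", "test"])
```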
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4101/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4100/comments
https://api.github.com/repos/huggingface/datasets/issues/4100/events
https://github.com/huggingface/datasets/pull/4100
1,193,393,959
PR_kwDODunzps41q4ce
4,100
Improve RedCaps dataset card
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4100). All of your documentation changes will be reflected on that endpoint." ]
1,649,174,234,000
1,649,257,225,000
null
CONTRIBUTOR
null
This PR modifies the RedCaps card to: * fix the formatting of the Point of Contact fields on the Hub * speed up the image fetching logic (aligning it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (returning None if **any** exception is thrown)
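A minimal sketch of the "return None on any exception" fetching pattern described above; the function name and timeout are illustrative assumptions, not the card's exact code:

```python
import io

import PIL.Image
import requests

def fetch_image(url, timeout=5):
    """Download an image, mapping any failure (network, HTTP, decoding) to None."""
    try:
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return PIL.Image.open(io.BytesIO(response.content))
    except Exception:
        return None
```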
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4100/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4100", "html_url": "https://github.com/huggingface/datasets/pull/4100", "diff_url": "https://github.com/huggingface/datasets/pull/4100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4100.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4099/comments
https://api.github.com/repos/huggingface/datasets/issues/4099/events
https://github.com/huggingface/datasets/issues/4099
1,193,253,768
I_kwDODunzps5HH5uI
4,099
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
{ "login": "andreybond", "id": 20210017, "node_id": "MDQ6VXNlcjIwMjEwMDE3", "avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreybond", "html_url": "https://github.com/andreybond", "followers_url": "https://api.github.com/users/andreybond/followers", "following_url": "https://api.github.com/users/andreybond/following{/other_user}", "gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreybond/subscriptions", "organizations_url": "https://api.github.com/users/andreybond/orgs", "repos_url": "https://api.github.com/users/andreybond/repos", "events_url": "https://api.github.com/users/andreybond/events{/privacy}", "received_events_url": "https://api.github.com/users/andreybond/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```", "I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co./datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd", "import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!" ]
1,649,169,758,000
1,649,227,064,000
1,649,226,954,000
NONE
null
## Describe the bug Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading the dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("nielsr/XFUN", "xfun.ja") ``` ## Expected results Dataset should be downloaded without exceptions ## Actual results Stack trace (for the second-time execution): Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477... Downloading data files: 100% 2/2 [00:00<00:00, 88.48it/s] Extracting data files: 100% 2/2 [00:00<00:00, 79.60it/s] UnicodeDecodeErrorTraceback (most recent call last) <ipython-input-31-79c26bd1109c> in <module> 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja") /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 604 ) 605 --> 606 # By default, return all splits 607 if split is None: 608 split = {s: s for s in self.info.splits} /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 692 Args: 693 split: `datasets.Split` which subset of the data to read. --> 694 695 Returns: 696 `Dataset` /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys) /usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self) 252 if not self.disable: 253 self.display(check_delay=False) --> 254 255 def __iter__(self): 256 try: /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self) 1183 for obj in iterable: 1184 yield obj -> 1185 return 1186 1187 mininterval = self.mininterval ~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths) 140 logger.info("Generating examples from = %s", filepath) 141 with open(filepath[0], "r") as f: --> 142 data = json.load(f) 143 144 for doc in data["documents"]: /usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 294 295 """ --> 296 return loads(fp.read(), 297 cls=cls, object_hook=object_hook, 298 parse_float=parse_float, parse_int=parse_int, /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 (but reproduced with many previous versions) - Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu - Python version: 3.6.9 - PyArrow version: 6.0.1
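The likely shape of the fix committed on the Hub is simply making the file encoding explicit instead of relying on the platform default; a minimal sketch, where `filepath` is a placeholder:

```python
import json

filepath = "xfun.ja.json"  # placeholder for the downloaded annotation file
# explicit encoding avoids falling back to a platform default of ASCII
with open(filepath, "r", encoding="utf-8") as f:
    data = json.load(f)
```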
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4099/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4098/comments
https://api.github.com/repos/huggingface/datasets/issues/4098/events
https://github.com/huggingface/datasets/pull/4098
1,193,245,522
PR_kwDODunzps41qXjo
4,098
Proposing WikiSplit metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)" ]
1,649,169,394,000
1,649,173,717,000
1,649,173,348,000
CONTRIBUTOR
null
Pinging @lhoestq to ensure that my distinction between the dataset and the metric is clear :sweat_smile:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4098/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4098", "html_url": "https://github.com/huggingface/datasets/pull/4098", "diff_url": "https://github.com/huggingface/datasets/pull/4098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4098.patch", "merged_at": 1649173348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4097/comments
https://api.github.com/repos/huggingface/datasets/issues/4097/events
https://github.com/huggingface/datasets/pull/4097
1,193,205,751
PR_kwDODunzps41qPEu
4,097
Updating FrugalScore metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,167,764,000
1,649,171,255,000
1,649,170,906,000
CONTRIBUTOR
null
Removing a duplicate paragraph.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4097/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4097", "html_url": "https://github.com/huggingface/datasets/pull/4097", "diff_url": "https://github.com/huggingface/datasets/pull/4097.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4097.patch", "merged_at": 1649170906000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4096/comments
https://api.github.com/repos/huggingface/datasets/issues/4096/events
https://github.com/huggingface/datasets/issues/4096
1,193,165,229
I_kwDODunzps5HHkGt
4,096
Add support for streaming Zarr stores for hosted datasets
{ "login": "jacobbieker", "id": 7170359, "node_id": "MDQ6VXNlcjcxNzAzNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacobbieker", "html_url": "https://github.com/jacobbieker", "followers_url": "https://api.github.com/users/jacobbieker/followers", "following_url": "https://api.github.com/users/jacobbieker/following{/other_user}", "gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions", "organizations_url": "https://api.github.com/users/jacobbieker/orgs", "repos_url": "https://api.github.com/users/jacobbieker/repos", "events_url": "https://api.github.com/users/jacobbieker/events{/privacy}", "received_events_url": "https://api.github.com/users/jacobbieker/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you." ]
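A minimal sketch of the ZipStore alternative suggested in the comment above; the array name, shape, and chunking are made up for illustration:

```python
import numpy as np
import zarr

# write a chunked array into a single ZIP file (one file, many random-access chunks)
store = zarr.ZipStore("example.zarr.zip", mode="w")
root = zarr.group(store=store)
root.create_dataset("hrv", data=np.random.rand(8, 64, 64), chunks=(1, 64, 64))
store.close()

# read back; individual chunks remain randomly accessible inside the ZIP
store = zarr.ZipStore("example.zarr.zip", mode="r")
root = zarr.open_group(store=store, mode="r")
print(root["hrv"][0].mean())
store.close()
```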
1,649,165,912,000
1,649,258,854,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming data in the Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, on the order of TBs or 10's of TBs for a single dataset, it can be difficult for users to store the dataset locally. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823), as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading it as a single file, which seems to be working (see https://huggingface.co./datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work, though. On the other hand, in https://huggingface.co./datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily. **Describe the solution you'd like** A way to upload Zarr stores for hosted datasets so that we can stream them with xarray and fsspec. **Describe alternatives you've considered** Tarring each Zarr store individually and just extracting them in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for many potential users. Pre-preparing examples in a format like Parquet -> Would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
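For context, a minimal sketch of the streaming pattern eumetsat_uk_hrv already uses; the bucket path is hypothetical, and credentials or `storage_options` may be required for non-public buckets:

```python
import fsspec
import xarray as xr

# hypothetical public bucket path; only the chunks actually touched are fetched
mapper = fsspec.get_mapper("gs://some-public-bucket/dataset.zarr")
ds = xr.open_zarr(mapper, consolidated=True)  # assumes consolidated metadata
subset = ds.sel(time=slice("2021-01-01", "2021-01-02"))  # lazy selection
```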
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4096/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4095/comments
https://api.github.com/repos/huggingface/datasets/issues/4095/events
https://github.com/huggingface/datasets/pull/4095
1,192,573,353
PR_kwDODunzps41oIFI
4,095
fix typo in rename_column error message
{ "login": "hunterlang", "id": 680821, "node_id": "MDQ6VXNlcjY4MDgyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/680821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hunterlang", "html_url": "https://github.com/hunterlang", "followers_url": "https://api.github.com/users/hunterlang/followers", "following_url": "https://api.github.com/users/hunterlang/following{/other_user}", "gists_url": "https://api.github.com/users/hunterlang/gists{/gist_id}", "starred_url": "https://api.github.com/users/hunterlang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hunterlang/subscriptions", "organizations_url": "https://api.github.com/users/hunterlang/orgs", "repos_url": "https://api.github.com/users/hunterlang/repos", "events_url": "https://api.github.com/users/hunterlang/events{/privacy}", "received_events_url": "https://api.github.com/users/hunterlang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4095). All of your documentation changes will be reflected on that endpoint." ]
1,649,130,956,000
1,649,148,886,000
1,649,148,353,000
CONTRIBUTOR
null
I feel bad submitting such a tiny change as a PR but it confused me today 😄
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4095/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4095", "html_url": "https://github.com/huggingface/datasets/pull/4095", "diff_url": "https://github.com/huggingface/datasets/pull/4095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4095.patch", "merged_at": 1649148353000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4094/comments
https://api.github.com/repos/huggingface/datasets/issues/4094/events
https://github.com/huggingface/datasets/issues/4094
1,192,534,414
I_kwDODunzps5HFKGO
4,094
Helo Mayfrends
{ "login": "Budigming", "id": 102933353, "node_id": "U_kgDOBiKjaQ", "avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Budigming", "html_url": "https://github.com/Budigming", "followers_url": "https://api.github.com/users/Budigming/followers", "following_url": "https://api.github.com/users/Budigming/following{/other_user}", "gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}", "starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Budigming/subscriptions", "organizations_url": "https://api.github.com/users/Budigming/orgs", "repos_url": "https://api.github.com/users/Budigming/repos", "events_url": "https://api.github.com/users/Budigming/events{/privacy}", "received_events_url": "https://api.github.com/users/Budigming/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,649,126,577,000
1,649,143,002,000
1,649,143,002,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4094/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4093/comments
https://api.github.com/repos/huggingface/datasets/issues/4093/events
https://github.com/huggingface/datasets/issues/4093
1,192,523,161
I_kwDODunzps5HFHWZ
4,093
elena-soare/crawled-ecommerce: missing dataset
{ "login": "seevaratnam", "id": 17519354, "node_id": "MDQ6VXNlcjE3NTE5MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seevaratnam", "html_url": "https://github.com/seevaratnam", "followers_url": "https://api.github.com/users/seevaratnam/followers", "following_url": "https://api.github.com/users/seevaratnam/following{/other_user}", "gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions", "organizations_url": "https://api.github.com/users/seevaratnam/orgs", "repos_url": "https://api.github.com/users/seevaratnam/repos", "events_url": "https://api.github.com/users/seevaratnam/events{/privacy}", "received_events_url": "https://api.github.com/users/seevaratnam/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "It's a bug! Thanks for reporting, I'm looking at it.", "By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer." ]
1,649,125,519,000
1,649,149,814,000
null
NONE
null
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4093/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4092/comments
https://api.github.com/repos/huggingface/datasets/issues/4092/events
https://github.com/huggingface/datasets/pull/4092
1,192,499,903
PR_kwDODunzps41n40R
4,092
Fix dataset `amazon_us_reviews` metadata - 4/4/2022
{ "login": "trentonstrong", "id": 191985, "node_id": "MDQ6VXNlcjE5MTk4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trentonstrong", "html_url": "https://github.com/trentonstrong", "followers_url": "https://api.github.com/users/trentonstrong/followers", "following_url": "https://api.github.com/users/trentonstrong/following{/other_user}", "gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}", "starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions", "organizations_url": "https://api.github.com/users/trentonstrong/orgs", "repos_url": "https://api.github.com/users/trentonstrong/repos", "events_url": "https://api.github.com/users/trentonstrong/events{/privacy}", "received_events_url": "https://api.github.com/users/trentonstrong/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4092). All of your documentation changes will be reflected on that endpoint." ]
1,649,122,785,000
1,649,149,026,000
null
NONE
null
Fixes #4048 by running `datasets-cli test` to reprocess the data and regenerate the metadata. Additionally, I've updated the README to include up-to-date counts for the subsets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4092/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4092", "html_url": "https://github.com/huggingface/datasets/pull/4092", "diff_url": "https://github.com/huggingface/datasets/pull/4092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4092.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4091/comments
https://api.github.com/repos/huggingface/datasets/issues/4091/events
https://github.com/huggingface/datasets/issues/4091
1,192,023,855
I_kwDODunzps5HDNcv
4,091
Build a Dataset One Example at a Time Without Loading All Data Into Memory
{ "login": "aravind-tonita", "id": 99340348, "node_id": "U_kgDOBevQPA", "avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aravind-tonita", "html_url": "https://github.com/aravind-tonita", "followers_url": "https://api.github.com/users/aravind-tonita/followers", "following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}", "gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}", "starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions", "organizations_url": "https://api.github.com/users/aravind-tonita/orgs", "repos_url": "https://api.github.com/users/aravind-tonita/repos", "events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}", "received_events_url": "https://api.github.com/users/aravind-tonita/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path/to/save/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"/path/to/raw/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(dset)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ", "This is really elegant, thank you @mariosasko! I will try this." ]
1,649,089,164,000
1,649,186,552,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.** **Describe the solution you'd like** I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset beforehand. ``` # Initialize an empty Dataset, possibly from a known schema. dataset = Dataset() # Read in examples one by one using a custom data streamer. for example_dict in custom_example_dict_streamer("/path/to/raw/data"): # Add this example to the dataset but do not store it in memory. dataset.add_item(example_dict) # Save the final dataset to disk as an Arrow-backed dataset. dataset.save_to_disk("/path/to/dataset") ... # I'd like to be able to later `load_from_disk` and use the loaded Dataset # just like any other memory-mapped pyarrow-backed HuggingFace dataset... loaded_dataset = Dataset.load_from_disk("/path/to/dataset") loaded_dataset.set_format(type="torch", columns=["foo", "bar", "baz"]) dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16) ... ``` **Describe alternatives you've considered** I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping. Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance!
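As a sketch of the loading-script option suggested in the comments above: a `GeneratorBasedBuilder` can yield one example at a time from the streamer, so examples are written to Arrow incrementally and never all held in memory. The streamer and the feature names are placeholders taken from the issue, not real APIs:

```python
# Sketch only: `custom_example_dict_streamer` and the feature names are
# placeholders from the issue. Yielded examples are written to Arrow
# incrementally, so the full dataset never lives in memory.
import datasets

def custom_example_dict_streamer(data_dir):
    # Stand-in for the issue author's streamer: yields one example dict at a time.
    yield {"foo": "some text", "bar": 0}

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"foo": datasets.Value("string"), "bar": datasets.Value("int64")}
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": "/path/to/raw/data"},
            )
        ]

    def _generate_examples(self, data_dir):
        for idx, example_dict in enumerate(custom_example_dict_streamer(data_dir)):
            yield idx, example_dict
```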
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4091/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4090/comments
https://api.github.com/repos/huggingface/datasets/issues/4090/events
https://github.com/huggingface/datasets/pull/4090
1,191,956,734
PR_kwDODunzps41mEs5
4,090
Avoid writing empty license files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,085,817,000
1,649,335,605,000
1,649,335,243,000
MEMBER
null
This PR avoids the creation of empty `LICENSE` files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4090/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4090", "html_url": "https://github.com/huggingface/datasets/pull/4090", "diff_url": "https://github.com/huggingface/datasets/pull/4090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4090.patch", "merged_at": 1649335243000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4089/comments
https://api.github.com/repos/huggingface/datasets/issues/4089/events
https://github.com/huggingface/datasets/pull/4089
1,191,915,196
PR_kwDODunzps41l7yd
4,089
Create metric card for Frugal Score
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,084,029,000
1,649,168,086,000
1,649,167,610,000
CONTRIBUTOR
null
Proposing metric card for Frugal Score. @albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4089/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4089", "html_url": "https://github.com/huggingface/datasets/pull/4089", "diff_url": "https://github.com/huggingface/datasets/pull/4089.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4089.patch", "merged_at": 1649167610000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4088/comments
https://api.github.com/repos/huggingface/datasets/issues/4088/events
https://github.com/huggingface/datasets/pull/4088
1,191,901,172
PR_kwDODunzps41l4yE
4,088
Remove unused legacy Beam utils
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,083,431,000
1,649,172,207,000
1,649,171,861,000
MEMBER
null
This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0: - Patch PR: https://github.com/apache/beam/pull/11699 - Issue: https://issues.apache.org/jira/browse/BEAM-10022 In relation with: - #204
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4088/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4088", "html_url": "https://github.com/huggingface/datasets/pull/4088", "diff_url": "https://github.com/huggingface/datasets/pull/4088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4088.patch", "merged_at": 1649171861000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4087/comments
https://api.github.com/repos/huggingface/datasets/issues/4087/events
https://github.com/huggingface/datasets/pull/4087
1,191,819,805
PR_kwDODunzps41lnfO
4,087
Fix BeamWriter output Parquet file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,080,010,000
1,649,170,840,000
1,649,170,488,000
MEMBER
null
Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files. This PR: - writes the Parquet file preserving the original schema and without serialization, thus avoiding serialization overhead and producing a smaller output file - fixes the `parquet_to_arrow` function
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4087/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4087", "html_url": "https://github.com/huggingface/datasets/pull/4087", "diff_url": "https://github.com/huggingface/datasets/pull/4087.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4087.patch", "merged_at": 1649170488000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4086/comments
https://api.github.com/repos/huggingface/datasets/issues/4086/events
https://github.com/huggingface/datasets/issues/4086
1,191,373,374
I_kwDODunzps5HAuo-
4,086
Dataset viewer issue for McGill-NLP/feedbackQA
{ "login": "cslizc", "id": 54827718, "node_id": "MDQ6VXNlcjU0ODI3NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cslizc", "html_url": "https://github.com/cslizc", "followers_url": "https://api.github.com/users/cslizc/followers", "following_url": "https://api.github.com/users/cslizc/following{/other_user}", "gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}", "starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cslizc/subscriptions", "organizations_url": "https://api.github.com/users/cslizc/orgs", "repos_url": "https://api.github.com/users/cslizc/repos", "events_url": "https://api.github.com/users/cslizc/events{/privacy}", "received_events_url": "https://api.github.com/users/cslizc/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.", "thank you so much" ]
1,649,057,240,000
1,649,111,393,000
1,649,059,305,000
NONE
null
## Dataset viewer issue for '*McGill-NLP/feedbackQA*' **Link:** *[link to the dataset viewer page](https://huggingface.co./datasets/McGill-NLP/feedbackQA)* *short description of the issue* The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message: ``` Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4086/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
{ "login": "virilo", "id": 3381112, "node_id": "MDQ6VXNlcjMzODExMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/virilo", "html_url": "https://github.com/virilo", "followers_url": "https://api.github.com/users/virilo/followers", "following_url": "https://api.github.com/users/virilo/following{/other_user}", "gists_url": "https://api.github.com/users/virilo/gists{/gist_id}", "starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/virilo/subscriptions", "organizations_url": "https://api.github.com/users/virilo/orgs", "repos_url": "https://api.github.com/users/virilo/repos", "events_url": "https://api.github.com/users/virilo/events{/privacy}", "received_events_url": "https://api.github.com/users/virilo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted", "Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co./docs/datasets/package_reference/logging_methods)" ]
1,648,903,210,000
1,649,056,499,000
1,649,054,674,000
NONE
null
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled' ## Environment info datasets version 2
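Per the maintainers' comment above, the replacement call in `datasets` >= 2.0.0 is:

```python
import datasets

# Old API (datasets < 2.0.0): datasets.set_progress_bar_enabled(False)
# New API, as pointed out in the comments above:
datasets.logging.disable_progress_bar()
```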
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4084/comments
https://api.github.com/repos/huggingface/datasets/issues/4084/events
https://github.com/huggingface/datasets/issues/4084
1,190,060,415
I_kwDODunzps5G7uF_
4,084
Errors in `Train with Datasets` Tensorflow code section on Huggingface.co
{ "login": "blackhat-coder", "id": 57095771, "node_id": "MDQ6VXNlcjU3MDk1Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blackhat-coder", "html_url": "https://github.com/blackhat-coder", "followers_url": "https://api.github.com/users/blackhat-coder/followers", "following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}", "gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}", "starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions", "organizations_url": "https://api.github.com/users/blackhat-coder/orgs", "repos_url": "https://api.github.com/users/blackhat-coder/repos", "events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}", "received_events_url": "https://api.github.com/users/blackhat-coder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```" ]
1,648,832,567,000
1,649,057,077,000
1,649,056,891,000
NONE
null
## Describe the bug Hi ### Error 1 Running the Tensorflow code on [Huggingface](https://huggingface.co./docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors' ### Error 2 `DataCollatorWithPadding` isn't imported ## Steps to reproduce the bug ```python import tensorflow as tf from datasets import load_dataset from transformers import AutoTokenizer dataset = load_dataset('glue', 'mrpc', split='train') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], shuffle=True, batch_size=16, collate_fn=data_collator, ) ``` This is the same code on Huggingface.co ## Actual results TypeError: __init__() got an unexpected keyword argument 'return_tensors' ## Environment info - `datasets` version: 2.0.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.7 - PyArrow version: 6.0.0 - Pandas version: 1.4.1
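Following the maintainers' reply, a corrected version of the snippet for reference — it assumes `transformers` >= 4.10.0 (where data collators accept `return_tensors`), adds the missing `DataCollatorWithPadding` import, and calls `to_tf_dataset` on the dataset directly since the `train` split was already selected at load time:

```python
# Corrected sketch, assuming transformers >= 4.10.0.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = dataset.map(
    lambda e: tokenizer(e["sentence1"], truncation=True, padding="max_length"),
    batched=True,
)

# The missing import was the cause of Error 2; return_tensors="tf" requires
# the updated data collator API introduced in transformers 4.10.0.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

# split="train" was already selected at load time, so no ["train"] indexing here.
train_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask", "label"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
```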
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4084/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4083/comments
https://api.github.com/repos/huggingface/datasets/issues/4083/events
https://github.com/huggingface/datasets/pull/4083
1,190,025,878
PR_kwDODunzps41gEbu
4,083
Add SacreBLEU Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4083). All of your documentation changes will be reflected on that endpoint." ]
1,648,830,296,000
1,649,067,227,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4083/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4083", "html_url": "https://github.com/huggingface/datasets/pull/4083", "diff_url": "https://github.com/huggingface/datasets/pull/4083.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4083.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4082/comments
https://api.github.com/repos/huggingface/datasets/issues/4082/events
https://github.com/huggingface/datasets/pull/4082
1,189,965,845
PR_kwDODunzps41f3fb
4,082
Add chrF(++) Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4082). All of your documentation changes will be reflected on that endpoint." ]
1,648,827,132,000
1,648,831,894,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4082/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4082", "html_url": "https://github.com/huggingface/datasets/pull/4082", "diff_url": "https://github.com/huggingface/datasets/pull/4082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4082.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4081/comments
https://api.github.com/repos/huggingface/datasets/issues/4081/events
https://github.com/huggingface/datasets/pull/4081
1,189,916,472
PR_kwDODunzps41fsxW
4,081
Close parquet writer properly in `push_to_hub`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,648,825,130,000
1,648,830,094,000
1,648,829,779,000
MEMBER
null
We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer. I fixed this by explicitly closing the parquet writer. Close https://github.com/huggingface/datasets/issues/4077.
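An illustrative sketch of the failure mode and the fix (not the actual `push_to_hub` internals): `pyarrow.parquet.ParquetWriter` only writes the Parquet footer on `close()`, so a file uploaded before the writer is closed is truncated and unreadable.

```python
# Illustrative only — not the library's actual code. The footer is buffered
# until close(), so close() must run before the file is uploaded or read.
import pyarrow as pa
import pyarrow.parquet as pq

def write_shard(table: pa.Table, path: str) -> None:
    writer = pq.ParquetWriter(path, table.schema)
    try:
        writer.write_table(table)
    finally:
        writer.close()  # flushes the footer; without it the file is invalid
```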
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4081/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4081", "html_url": "https://github.com/huggingface/datasets/pull/4081", "diff_url": "https://github.com/huggingface/datasets/pull/4081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4081.patch", "merged_at": 1648829779000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4080/comments
https://api.github.com/repos/huggingface/datasets/issues/4080/events
https://github.com/huggingface/datasets/issues/4080
1,189,667,296
I_kwDODunzps5G6OHg
4,080
NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. \r\n\r\nDuplicate of:\r\n- #4031" ]
1,648,812,868,000
1,648,821,550,000
1,648,821,550,000
CONTRIBUTOR
null
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10.4MB/s] Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ... Traceback (most recent call last): File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module> train() File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train trainer.fit(model, datamodule=dm) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit self._call_and_handle_interrupt( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run self._data_connector.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in prepare_data self.trainer.datamodule.prepare_data() File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn fn(*args, **kwargs) File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data raw_dsets = datasets.load_dataset(**load_dataset_kwargs) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset builder_instance.download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums( File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4080/timeline
null
null
null
false