| Column | Type | Lengths / distinct values |
|---|---|---|
| url | string | lengths 61-61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75-75 |
| comments_url | string | lengths 70-70 |
| events_url | string | lengths 68-68 |
| html_url | string | lengths 49-51 |
| id | int64 | 1.18B-2.35B |
| node_id | string | lengths 18-19 |
| number | int64 | 3.98k-6.97k |
| title | string | lengths 1-290 |
| user | dict | - |
| labels | list | lengths 0-4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | - |
| assignees | list | lengths 0-3 |
| milestone | dict | - |
| comments | sequence | lengths 0-12 |
| created_at | timestamp[s] | - |
| updated_at | timestamp[s] | - |
| closed_at | timestamp[s] | - |
| author_association | string | 4 values |
| active_lock_reason | null | - |
| body | string | lengths 1-33.9k |
| reactions | dict | - |
| timeline_url | string | lengths 70-70 |
| performed_via_github_app | null | - |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | - |
| is_pull_request | bool | 2 classes |
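Each record below follows this schema, one row per GitHub issue or pull request from the huggingface/datasets repository. As a minimal sketch of working with such a dump via the `datasets` library (the repository id `user/github-issues-dump` is a placeholder, not the actual dataset name):

```python
# Minimal sketch: load a GitHub-issues dump like the one below and inspect its columns.
# "user/github-issues-dump" is a placeholder repo id, not the real dataset name.
from datasets import load_dataset

issues = load_dataset("user/github-issues-dump", split="train")
print(issues.features)     # column names and types, matching the table above
print(issues[0]["title"])  # title of the first record

# keep only the rows that are pull requests (see the `is_pull_request` column)
pulls = issues.filter(lambda row: row["is_pull_request"])
print(f"{len(pulls)} of {len(issues)} records are pull requests")
```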
https://api.github.com/repos/huggingface/datasets/issues/6967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6967/comments
https://api.github.com/repos/huggingface/datasets/issues/6967/events
https://github.com/huggingface/datasets/issues/6967
2,349,146,398
I_kwDODunzps6MBSEe
6,967
Method to load Laion400m
{ "login": "humanely", "id": 6862868, "node_id": "MDQ6VXNlcjY4NjI4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6862868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/humanely", "html_url": "https://github.com/humanely", "followers_url": "https://api.github.com/users/humanely/followers", "following_url": "https://api.github.com/users/humanely/following{/other_user}", "gists_url": "https://api.github.com/users/humanely/gists{/gist_id}", "starred_url": "https://api.github.com/users/humanely/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/humanely/subscriptions", "organizations_url": "https://api.github.com/users/humanely/orgs", "repos_url": "https://api.github.com/users/humanely/repos", "events_url": "https://api.github.com/users/humanely/events{/privacy}", "received_events_url": "https://api.github.com/users/humanely/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-06-12T16:04:04
2024-06-12T16:04:04
null
NONE
null
### Feature request
Large datasets like Laion400m are provided as embeddings. The methods provided in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy; XX = 0 to 99.

### Motivation
Trial and experimentation are central to HF. It would be great if HF could load embedding files seamlessly.

### Your contribution
I can write the loader with some help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6967/timeline
null
null
null
null
false
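The feature request above (issue 6967) asks for a straightforward way to load LAION-400M embedding shards such as `img_emb_XX.npy`. Below is a minimal sketch of assembling such shards into a `Dataset`, assuming the `.npy` files are available locally; the file-name pattern, shard count, and the `embedding` column name are illustrative, not part of the issue.

```python
# Sketch: build a Dataset from LAION-style embedding shards (img_emb_00.npy ... img_emb_99.npy).
# File names, shard count, and the "embedding" column name are illustrative assumptions.
import numpy as np
from datasets import Dataset, concatenate_datasets

shards = []
for i in range(100):
    emb = np.load(f"img_emb_{i:02d}.npy")  # one shard, shape (num_rows, embedding_dim)
    shards.append(Dataset.from_dict({"embedding": emb.tolist()}))

embeddings = concatenate_datasets(shards)
print(embeddings)  # Dataset({features: ['embedding'], num_rows: ...})
```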
https://api.github.com/repos/huggingface/datasets/issues/6966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6966/comments
https://api.github.com/repos/huggingface/datasets/issues/6966/events
https://github.com/huggingface/datasets/pull/6966
2,348,934,466
PR_kwDODunzps5yPwL4
6,966
Remove underlines between badges
{ "login": "novialriptide", "id": 35881688, "node_id": "MDQ6VXNlcjM1ODgxNjg4", "avatar_url": "https://avatars.githubusercontent.com/u/35881688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/novialriptide", "html_url": "https://github.com/novialriptide", "followers_url": "https://api.github.com/users/novialriptide/followers", "following_url": "https://api.github.com/users/novialriptide/following{/other_user}", "gists_url": "https://api.github.com/users/novialriptide/gists{/gist_id}", "starred_url": "https://api.github.com/users/novialriptide/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/novialriptide/subscriptions", "organizations_url": "https://api.github.com/users/novialriptide/orgs", "repos_url": "https://api.github.com/users/novialriptide/repos", "events_url": "https://api.github.com/users/novialriptide/events{/privacy}", "received_events_url": "https://api.github.com/users/novialriptide/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-06-12T14:32:11
2024-06-12T14:32:11
null
NONE
null
## Before:
<img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac">

## After:
<img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2">
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6966/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6966", "html_url": "https://github.com/huggingface/datasets/pull/6966", "diff_url": "https://github.com/huggingface/datasets/pull/6966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6966.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6965/comments
https://api.github.com/repos/huggingface/datasets/issues/6965/events
https://github.com/huggingface/datasets/pull/6965
2,348,653,895
PR_kwDODunzps5yOyNG
6,965
Improve skip take shuffling and distributed
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6965). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-06-12T12:30:27
2024-06-12T22:08:57
null
MEMBER
null
Set the right behavior of `skip`/`take` depending on whether it's called before or after `shuffle`/`split_by_node`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6965/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6965", "html_url": "https://github.com/huggingface/datasets/pull/6965", "diff_url": "https://github.com/huggingface/datasets/pull/6965.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6965.patch", "merged_at": null }
true
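The PR above (6965) adjusts how `skip`/`take` behave depending on whether they are called before or after `shuffle`/`split_by_node` on a streaming dataset. A minimal sketch of the two call orders in question, with a placeholder dataset name and an illustrative buffer size:

```python
# Sketch of the two call orders discussed in the PR above.
# "user/some-streaming-dataset" and the buffer size are placeholders.
from datasets import load_dataset

ds = load_dataset("user/some-streaming-dataset", split="train", streaming=True)

# take() before shuffle(): slice the stream first, then shuffle within that slice
head_then_shuffle = ds.take(1000).shuffle(seed=42, buffer_size=100)

# shuffle() before take(): shuffle the stream first, then slice from the shuffled order
shuffle_then_head = ds.shuffle(seed=42, buffer_size=100).take(1000)

# The order of the two calls changes which examples end up in the 1000-example slice.
```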
https://api.github.com/repos/huggingface/datasets/issues/6964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6964/comments
https://api.github.com/repos/huggingface/datasets/issues/6964/events
https://github.com/huggingface/datasets/pull/6964
2,344,973,229
PR_kwDODunzps5yCNGa
6,964
Fix resuming arrow format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-06-10T22:40:33
2024-06-11T11:54:19
null
MEMBER
null
following https://github.com/huggingface/datasets/pull/6658
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6964/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6964", "html_url": "https://github.com/huggingface/datasets/pull/6964", "diff_url": "https://github.com/huggingface/datasets/pull/6964.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6964.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6963/comments
https://api.github.com/repos/huggingface/datasets/issues/6963/events
https://github.com/huggingface/datasets/pull/6963
2,344,269,477
PR_kwDODunzps5x_yu-
6,963
[Streaming] retry on requests errors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-06-10T15:51:56
2024-06-11T07:37:21
null
MEMBER
null
Reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training with a streaming dataloader (cc @Wauplin): it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (which users can configure in `datasets.config`), since I couldn't find an easy way to increase the max_retries for `hfh` users in general.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6963/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6963", "html_url": "https://github.com/huggingface/datasets/pull/6963", "diff_url": "https://github.com/huggingface/datasets/pull/6963.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6963.patch", "merged_at": null }
true
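The PR above (6963) makes `datasets` retry on transient request errors during streaming, with a retry count users can configure. The snippet below is only an illustrative retry-with-exponential-backoff helper in the same spirit, not the code added in the PR; the constants and the function name are made up.

```python
# Illustrative retry-with-backoff helper; NOT the implementation added in the PR.
# MAX_RETRIES and BASE_DELAY_SECONDS are made-up stand-ins for a configurable retry policy.
import time
import requests

MAX_RETRIES = 5
BASE_DELAY_SECONDS = 1.0

def get_with_retries(url: str) -> requests.Response:
    """GET a URL, retrying transient request errors with exponential backoff."""
    for attempt in range(MAX_RETRIES):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == MAX_RETRIES - 1:
                raise  # out of retries: surface the original error
            time.sleep(BASE_DELAY_SECONDS * 2 ** attempt)
```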
https://api.github.com/repos/huggingface/datasets/issues/6962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6962/comments
https://api.github.com/repos/huggingface/datasets/issues/6962/events
https://github.com/huggingface/datasets/pull/6962
2,343,394,378
PR_kwDODunzps5x8yHt
6,962
fix(ci): remove unnecessary permissions
{ "login": "McPatate", "id": 9112841, "node_id": "MDQ6VXNlcjkxMTI4NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/McPatate", "html_url": "https://github.com/McPatate", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "organizations_url": "https://api.github.com/users/McPatate/orgs", "repos_url": "https://api.github.com/users/McPatate/repos", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "received_events_url": "https://api.github.com/users/McPatate/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6962). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005520 / 0.011353 (-0.005833) | 0.003989 / 0.011008 (-0.007019) | 0.064786 / 0.038508 (0.026278) | 0.031075 / 0.023109 (0.007966) | 0.241619 / 0.275898 (-0.034279) | 0.275341 / 0.323480 (-0.048139) | 0.003139 / 0.007986 (-0.004847) | 0.002820 / 0.004328 (-0.001508) | 0.049766 / 0.004250 (0.045515) | 0.045047 / 0.037052 (0.007995) | 0.251906 / 0.258489 (-0.006583) | 0.285889 / 0.293841 (-0.007952) | 0.028297 / 0.128546 (-0.100249) | 0.010683 / 0.075646 (-0.064963) | 0.206467 / 0.419271 (-0.212805) | 0.036267 / 0.043533 (-0.007266) | 0.250720 / 0.255139 (-0.004419) | 0.268565 / 0.283200 (-0.014635) | 0.020394 / 0.141683 (-0.121289) | 1.114283 / 1.452155 (-0.337872) | 1.163884 / 1.492716 (-0.328833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.112698 / 0.018006 (0.094692) | 0.302740 / 0.000490 (0.302251) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019337 / 0.037411 (-0.018075) | 0.062854 / 0.014526 (0.048328) | 0.077088 / 0.176557 (-0.099468) | 0.120926 / 0.737135 (-0.616209) | 0.075594 / 0.296338 (-0.220744) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290787 / 0.215209 (0.075578) | 2.867894 / 2.077655 (0.790239) | 1.490043 / 1.504120 (-0.014076) | 1.356383 / 1.541195 (-0.184812) | 1.400229 / 1.468490 (-0.068261) | 0.582076 / 4.584777 (-4.002701) | 2.398270 / 3.745712 (-1.347442) | 2.856459 / 5.269862 (-2.413403) | 1.815545 / 4.565676 (-2.750131) | 0.063259 / 0.424275 (-0.361016) | 0.005056 / 0.007607 (-0.002551) | 0.347699 / 0.226044 (0.121655) | 3.466511 / 2.268929 (1.197582) | 1.862096 / 55.444624 (-53.582528) | 1.532324 / 6.876477 (-5.344152) | 1.599411 / 2.142072 (-0.542661) | 0.657350 / 4.805227 (-4.147878) | 0.118981 / 6.500664 (-6.381683) | 0.042224 / 0.075469 (-0.033245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965649 / 1.841788 (-0.876139) | 11.896501 / 8.074308 (3.822193) | 9.873923 / 10.191392 (-0.317469) | 0.141165 / 0.680424 (-0.539258) | 0.013885 / 0.534201 (-0.520316) | 0.291464 / 0.579283 (-0.287819) | 0.273153 / 0.434364 (-0.161211) | 0.324395 / 0.540337 (-0.215942) | 0.422040 / 1.386936 (-0.964897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005640 / 0.011353 (-0.005713) | 0.004035 / 0.011008 (-0.006973) | 0.050831 / 0.038508 (0.012323) | 0.032841 / 0.023109 (0.009732) | 0.272226 / 0.275898 (-0.003672) | 0.297880 / 0.323480 (-0.025599) | 0.004397 / 0.007986 (-0.003588) | 0.002762 / 0.004328 (-0.001566) | 0.049887 / 0.004250 (0.045637) | 0.040372 / 0.037052 (0.003320) | 0.286337 / 0.258489 (0.027848) | 0.320015 / 0.293841 (0.026174) | 0.029992 / 0.128546 (-0.098554) | 0.010781 / 0.075646 (-0.064865) | 0.059391 / 0.419271 (-0.359880) | 0.034410 / 0.043533 (-0.009123) | 0.273024 / 0.255139 (0.017885) | 0.288953 / 0.283200 (0.005754) | 0.018072 / 0.141683 (-0.123611) | 1.125742 / 1.452155 (-0.326413) | 1.175233 / 1.492716 (-0.317483) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093470 / 0.018006 (0.075463) | 0.313248 / 0.000490 (0.312758) | 0.000324 / 0.000200 (0.000124) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023529 / 0.037411 (-0.013882) | 0.077305 / 0.014526 (0.062779) | 0.088916 / 0.176557 (-0.087640) | 0.128792 / 0.737135 (-0.608344) | 0.090141 / 0.296338 (-0.206197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291110 / 0.215209 (0.075901) | 2.848118 / 2.077655 (0.770464) | 1.581664 / 1.504120 (0.077544) | 1.446390 / 1.541195 (-0.094804) | 1.452594 / 1.468490 (-0.015896) | 0.571213 / 4.584777 (-4.013564) | 0.976382 / 3.745712 (-2.769330) | 2.756192 / 5.269862 (-2.513670) | 1.770274 / 4.565676 (-2.795403) | 0.064513 / 0.424275 (-0.359763) | 0.005334 / 0.007607 (-0.002273) | 0.347380 / 0.226044 (0.121335) | 3.424800 / 2.268929 (1.155871) | 1.942374 / 55.444624 (-53.502250) | 1.636069 / 6.876477 (-5.240407) | 1.795327 / 2.142072 (-0.346745) | 0.658942 / 4.805227 (-4.146285) | 0.119542 / 6.500664 (-6.381123) | 0.041826 / 0.075469 (-0.033643) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007230 / 1.841788 (-0.834558) | 12.293084 / 8.074308 (4.218776) | 10.618104 / 10.191392 (0.426712) | 0.133691 / 0.680424 (-0.546733) | 0.015725 / 0.534201 (-0.518476) | 0.288860 / 0.579283 (-0.290423) | 0.130546 / 0.434364 (-0.303818) | 0.327279 / 0.540337 (-0.213059) | 0.428768 / 1.386936 (-0.958168) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#af3acfdfcf76bb980dbac871540e30c2cade0cf9 \"CML watermark\")\n" ]
2024-06-10T09:28:02
2024-06-11T08:31:52
2024-06-11T08:25:47
MEMBER
null
### What does this PR do?
Remove unnecessary permissions granted to the actions workflow. Sorry for the mishap.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6962/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6962", "html_url": "https://github.com/huggingface/datasets/pull/6962", "diff_url": "https://github.com/huggingface/datasets/pull/6962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6962.patch", "merged_at": "2024-06-11T08:25:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/6961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6961/comments
https://api.github.com/repos/huggingface/datasets/issues/6961/events
https://github.com/huggingface/datasets/issues/6961
2,342,022,418
I_kwDODunzps6LmG0S
6,961
Manual downloads should count as downloads
{ "login": "umarbutler", "id": 8473183, "node_id": "MDQ6VXNlcjg0NzMxODM=", "avatar_url": "https://avatars.githubusercontent.com/u/8473183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/umarbutler", "html_url": "https://github.com/umarbutler", "followers_url": "https://api.github.com/users/umarbutler/followers", "following_url": "https://api.github.com/users/umarbutler/following{/other_user}", "gists_url": "https://api.github.com/users/umarbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/umarbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/umarbutler/subscriptions", "organizations_url": "https://api.github.com/users/umarbutler/orgs", "repos_url": "https://api.github.com/users/umarbutler/repos", "events_url": "https://api.github.com/users/umarbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/umarbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-06-09T04:52:06
2024-06-09T04:52:06
null
NONE
null
### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co./docs/hub/en/datasets-download-stats

### Motivation
This would ensure that downloads are accurately reported to end users.

### Your contribution
N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6961/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6960/comments
https://api.github.com/repos/huggingface/datasets/issues/6960/events
https://github.com/huggingface/datasets/pull/6960
2,340,791,685
PR_kwDODunzps5x0R3T
6,960
feat(ci): add trufflehog secrets detection
{ "login": "McPatate", "id": 9112841, "node_id": "MDQ6VXNlcjkxMTI4NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/McPatate", "html_url": "https://github.com/McPatate", "followers_url": "https://api.github.com/users/McPatate/followers", "following_url": "https://api.github.com/users/McPatate/following{/other_user}", "gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}", "starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/McPatate/subscriptions", "organizations_url": "https://api.github.com/users/McPatate/orgs", "repos_url": "https://api.github.com/users/McPatate/repos", "events_url": "https://api.github.com/users/McPatate/events{/privacy}", "received_events_url": "https://api.github.com/users/McPatate/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6960). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Yes!", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005007 / 0.011353 (-0.006346) | 0.003603 / 0.011008 (-0.007405) | 0.062719 / 0.038508 (0.024211) | 0.029327 / 0.023109 (0.006217) | 0.250360 / 0.275898 (-0.025538) | 0.265095 / 0.323480 (-0.058385) | 0.004205 / 0.007986 (-0.003781) | 0.002713 / 0.004328 (-0.001616) | 0.049209 / 0.004250 (0.044958) | 0.045162 / 0.037052 (0.008110) | 0.260439 / 0.258489 (0.001950) | 0.287778 / 0.293841 (-0.006063) | 0.027458 / 0.128546 (-0.101088) | 0.010169 / 0.075646 (-0.065477) | 0.199487 / 0.419271 (-0.219784) | 0.036584 / 0.043533 (-0.006949) | 0.254523 / 0.255139 (-0.000616) | 0.269902 / 0.283200 (-0.013298) | 0.017138 / 0.141683 (-0.124545) | 1.099285 / 1.452155 (-0.352869) | 1.150878 / 1.492716 (-0.341839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092868 / 0.018006 (0.074862) | 0.300421 / 0.000490 (0.299932) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018810 / 0.037411 (-0.018601) | 0.062341 / 0.014526 (0.047815) | 0.074779 / 0.176557 (-0.101777) | 0.120641 / 0.737135 (-0.616494) | 0.075020 / 0.296338 (-0.221318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277782 / 0.215209 (0.062573) | 2.716427 / 2.077655 (0.638772) | 1.434204 / 1.504120 (-0.069916) | 1.335990 / 1.541195 (-0.205205) | 1.336636 / 1.468490 (-0.131854) | 0.557562 / 4.584777 (-4.027215) | 2.323517 / 3.745712 (-1.422196) | 2.647937 / 5.269862 (-2.621925) | 1.728735 / 4.565676 (-2.836941) | 0.061888 / 0.424275 (-0.362387) | 0.004981 / 0.007607 (-0.002627) | 0.329429 / 0.226044 (0.103385) | 3.324708 / 2.268929 (1.055779) | 1.832641 / 55.444624 (-53.611983) | 1.514386 / 6.876477 (-5.362091) | 1.656912 / 2.142072 (-0.485160) | 0.630706 / 4.805227 (-4.174521) | 0.116250 / 6.500664 (-6.384414) | 0.042598 / 0.075469 (-0.032871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969217 / 1.841788 (-0.872570) | 11.232580 / 8.074308 (3.158272) | 9.541306 / 10.191392 (-0.650086) | 0.139544 / 0.680424 (-0.540880) | 0.014441 / 0.534201 (-0.519760) | 0.285834 / 0.579283 (-0.293449) | 0.261950 / 0.434364 (-0.172414) | 0.325449 / 0.540337 (-0.214889) | 0.415501 / 1.386936 (-0.971435) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005422 / 0.011353 (-0.005931) | 0.003528 / 0.011008 (-0.007480) | 0.049582 / 0.038508 (0.011074) | 0.032683 / 0.023109 (0.009574) | 0.277309 / 0.275898 (0.001411) | 0.298598 / 0.323480 (-0.024882) | 0.004325 / 0.007986 (-0.003661) | 0.002741 / 0.004328 (-0.001588) | 0.047933 / 0.004250 (0.043683) | 0.040778 / 0.037052 (0.003726) | 0.287492 / 0.258489 (0.029003) | 0.311408 / 0.293841 (0.017567) | 0.029482 / 0.128546 (-0.099064) | 0.010630 / 0.075646 (-0.065016) | 0.057745 / 0.419271 (-0.361526) | 0.033501 / 0.043533 (-0.010031) | 0.279880 / 0.255139 (0.024741) | 0.297421 / 0.283200 (0.014221) | 0.017907 / 0.141683 (-0.123776) | 1.152221 / 1.452155 (-0.299934) | 1.189332 / 1.492716 (-0.303385) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094464 / 0.018006 (0.076457) | 0.300769 / 0.000490 (0.300279) | 0.000196 / 0.000200 (-0.000004) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022232 / 0.037411 (-0.015179) | 0.076626 / 0.014526 (0.062100) | 0.087807 / 0.176557 (-0.088750) | 0.128847 / 0.737135 (-0.608288) | 0.092135 / 0.296338 (-0.204203) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299013 / 0.215209 (0.083804) | 2.929788 / 2.077655 (0.852133) | 1.614185 / 1.504120 (0.110065) | 1.486720 / 1.541195 (-0.054475) | 1.492473 / 1.468490 (0.023983) | 0.563699 / 4.584777 (-4.021078) | 0.928820 / 3.745712 (-2.816892) | 2.597271 / 5.269862 (-2.672590) | 1.716534 / 4.565676 (-2.849142) | 0.062568 / 0.424275 (-0.361707) | 0.005168 / 0.007607 (-0.002439) | 0.353781 / 0.226044 (0.127737) | 3.493732 / 2.268929 (1.224803) | 2.018343 / 55.444624 (-53.426282) | 1.694516 / 6.876477 (-5.181961) | 1.796950 / 2.142072 (-0.345123) | 0.634846 / 4.805227 (-4.170382) | 0.115230 / 6.500664 (-6.385434) | 0.040816 / 0.075469 (-0.034654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986212 / 1.841788 (-0.855575) | 11.954392 / 8.074308 (3.880084) | 10.299670 / 10.191392 (0.108278) | 0.128358 / 0.680424 (-0.552066) | 0.016313 / 0.534201 (-0.517888) | 0.289621 / 0.579283 (-0.289662) | 0.124708 / 0.434364 (-0.309656) | 0.325269 / 0.540337 (-0.215068) | 0.415133 / 1.386936 (-0.971803) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97513be330114a8aa07e5199ec252ac662aeb76d \"CML watermark\")\n" ]
2024-06-07T16:18:23
2024-06-08T14:58:27
2024-06-08T14:52:18
MEMBER
null
### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6960/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6960", "html_url": "https://github.com/huggingface/datasets/pull/6960", "diff_url": "https://github.com/huggingface/datasets/pull/6960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6960.patch", "merged_at": "2024-06-08T14:52:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/6959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6959/comments
https://api.github.com/repos/huggingface/datasets/issues/6959/events
https://github.com/huggingface/datasets/pull/6959
2,340,229,908
PR_kwDODunzps5xyVt6
6,959
Better error handling in `dataset_module_factory`
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6959). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Test should be fixed by https://github.com/huggingface/datasets/pull/6959/commits/ef8f7cee79ffb070d9b5190f21128fc523b3d3ee (tested locally). Let's see what CI says :crossed_fingers: ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005678 / 0.011353 (-0.005675) | 0.004119 / 0.011008 (-0.006889) | 0.063901 / 0.038508 (0.025393) | 0.032071 / 0.023109 (0.008961) | 0.243182 / 0.275898 (-0.032716) | 0.280709 / 0.323480 (-0.042770) | 0.004195 / 0.007986 (-0.003791) | 0.002810 / 0.004328 (-0.001518) | 0.048722 / 0.004250 (0.044472) | 0.049381 / 0.037052 (0.012328) | 0.257816 / 0.258489 (-0.000673) | 0.288460 / 0.293841 (-0.005381) | 0.028518 / 0.128546 (-0.100029) | 0.010775 / 0.075646 (-0.064871) | 0.203149 / 0.419271 (-0.216122) | 0.038792 / 0.043533 (-0.004741) | 0.248502 / 0.255139 (-0.006637) | 0.268251 / 0.283200 (-0.014949) | 0.019536 / 0.141683 (-0.122147) | 1.133935 / 1.452155 (-0.318220) | 1.182855 / 1.492716 (-0.309862) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097531 / 0.018006 (0.079525) | 0.303612 / 0.000490 (0.303122) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019670 / 0.037411 (-0.017741) | 0.063439 / 0.014526 (0.048913) | 0.075119 / 0.176557 (-0.101438) | 0.122419 / 0.737135 (-0.614717) | 0.076965 / 0.296338 (-0.219374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286780 / 0.215209 (0.071571) | 2.811860 / 2.077655 (0.734206) | 1.485165 / 1.504120 (-0.018954) | 1.373296 / 1.541195 (-0.167898) | 1.412700 / 1.468490 (-0.055790) | 0.566442 / 4.584777 (-4.018335) | 2.382616 / 3.745712 (-1.363096) | 2.677214 / 5.269862 (-2.592647) | 1.760073 / 4.565676 (-2.805603) | 0.062673 / 0.424275 (-0.361602) | 0.005050 / 0.007607 (-0.002557) | 0.341701 / 0.226044 (0.115657) | 3.321182 / 2.268929 (1.052253) | 1.811715 / 55.444624 (-53.632909) | 1.554986 / 6.876477 (-5.321491) | 1.727448 / 2.142072 (-0.414624) | 0.642193 / 4.805227 (-4.163034) | 0.117878 / 6.500664 (-6.382786) | 0.042814 / 0.075469 (-0.032655) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985894 / 1.841788 (-0.855894) | 12.195975 / 8.074308 (4.121667) | 9.890180 / 10.191392 (-0.301212) | 0.142638 / 0.680424 (-0.537786) | 0.015207 / 0.534201 (-0.518994) | 0.283140 / 0.579283 (-0.296143) | 0.266016 / 0.434364 (-0.168348) | 0.325518 / 0.540337 (-0.214820) | 0.418994 / 1.386936 (-0.967942) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005978 / 0.011353 (-0.005374) | 0.003915 / 0.011008 (-0.007093) | 0.051592 / 0.038508 (0.013084) | 0.033338 / 0.023109 (0.010229) | 0.267925 / 0.275898 (-0.007973) | 0.296011 / 0.323480 (-0.027469) | 0.004503 / 0.007986 (-0.003483) | 0.002854 / 0.004328 (-0.001475) | 0.049958 / 0.004250 (0.045707) | 0.041708 / 0.037052 (0.004656) | 0.287185 / 0.258489 (0.028696) | 0.322715 / 0.293841 (0.028874) | 0.030088 / 0.128546 (-0.098458) | 0.010709 / 0.075646 (-0.064938) | 0.059736 / 0.419271 (-0.359536) | 0.034294 / 0.043533 (-0.009239) | 0.264316 / 0.255139 (0.009177) | 0.285471 / 0.283200 (0.002272) | 0.019197 / 0.141683 (-0.122486) | 1.135571 / 1.452155 (-0.316583) | 1.190019 / 1.492716 (-0.302698) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099251 / 0.018006 (0.081245) | 0.305357 / 0.000490 (0.304867) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023206 / 0.037411 (-0.014205) | 0.077835 / 0.014526 (0.063310) | 0.090242 / 0.176557 (-0.086315) | 0.131208 / 0.737135 (-0.605928) | 0.091726 / 0.296338 (-0.204612) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292487 / 0.215209 (0.077278) | 2.837044 / 2.077655 (0.759389) | 1.553155 / 1.504120 (0.049035) | 1.433645 / 1.541195 (-0.107550) | 1.476702 / 1.468490 (0.008212) | 0.561926 / 4.584777 (-4.022851) | 0.954630 / 3.745712 (-2.791082) | 2.752286 / 5.269862 (-2.517575) | 1.782746 / 4.565676 (-2.782931) | 0.062984 / 0.424275 (-0.361291) | 0.005056 / 0.007607 (-0.002551) | 0.341700 / 0.226044 (0.115656) | 3.343726 / 2.268929 (1.074798) | 1.953390 / 55.444624 (-53.491234) | 1.616989 / 6.876477 (-5.259488) | 1.785104 / 2.142072 (-0.356969) | 0.643465 / 4.805227 (-4.161763) | 0.115905 / 6.500664 (-6.384759) | 0.041678 / 0.075469 (-0.033791) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000237 / 1.841788 (-0.841550) | 12.633517 / 8.074308 (4.559208) | 10.553485 / 10.191392 (0.362092) | 0.143188 / 0.680424 (-0.537236) | 0.016020 / 0.534201 (-0.518181) | 0.286739 / 0.579283 (-0.292544) | 0.128488 / 0.434364 (-0.305876) | 0.321932 / 0.540337 (-0.218405) | 0.418635 / 1.386936 (-0.968301) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9510252f03fded02b8cc87ca6dfa3195d17594ba \"CML watermark\")\n" ]
2024-06-07T11:24:15
2024-06-10T07:33:53
2024-06-10T07:27:43
CONTRIBUTOR
null
cc @cakiki who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link)

This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed:
1. Use multiple `except ... as e` instead of using `isinstance(e, ...)`
2. Always raise `DatasetNotFoundError` with `from e` so that the initial error is explicitly logged in the stacktrace.
3. Differentiate `RepoNotFoundError` / `GatedRepoError` / `RevisionNotFoundError` cases
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6959/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6959/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6959", "html_url": "https://github.com/huggingface/datasets/pull/6959", "diff_url": "https://github.com/huggingface/datasets/pull/6959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6959.patch", "merged_at": "2024-06-10T07:27:43" }
true
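The PR above (6959) spells out the error-handling pattern: dedicated `except ... as e` clauses per error type instead of `isinstance(e, ...)` checks, and re-raising `DatasetNotFoundError` with `from e` so the original error stays visible in the stack trace. Below is a schematic sketch of that pattern; the exception classes and functions are locally defined placeholders, not the actual `datasets`/`huggingface_hub` objects.

```python
# Schematic of the pattern described in the PR above; all names here are placeholders.
class DatasetNotFoundError(Exception): ...
class RepoNotFoundError(Exception): ...
class GatedRepoError(Exception): ...
class RevisionNotFoundError(Exception): ...

def fetch_dataset_info(repo_id, revision=None):
    # stand-in for the Hub call that may fail
    raise RepoNotFoundError(repo_id)

def resolve_dataset_module(repo_id, revision=None):
    try:
        return fetch_dataset_info(repo_id, revision)
    except RevisionNotFoundError as e:  # one clause per error type, no isinstance()
        raise DatasetNotFoundError(f"Revision '{revision}' not found for '{repo_id}'") from e
    except GatedRepoError as e:
        raise DatasetNotFoundError(f"'{repo_id}' is gated; authentication is required") from e
    except RepoNotFoundError as e:
        raise DatasetNotFoundError(f"'{repo_id}' doesn't exist on the Hub or cannot be accessed") from e
```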
https://api.github.com/repos/huggingface/datasets/issues/6958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6958/comments
https://api.github.com/repos/huggingface/datasets/issues/6958/events
https://github.com/huggingface/datasets/issues/6958
2,337,476,383
I_kwDODunzps6LUw8f
6,958
My Private Dataset doesn't exist on the Hub or cannot be accessed
{ "login": "wangguan1995", "id": 39621324, "node_id": "MDQ6VXNlcjM5NjIxMzI0", "avatar_url": "https://avatars.githubusercontent.com/u/39621324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangguan1995", "html_url": "https://github.com/wangguan1995", "followers_url": "https://api.github.com/users/wangguan1995/followers", "following_url": "https://api.github.com/users/wangguan1995/following{/other_user}", "gists_url": "https://api.github.com/users/wangguan1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangguan1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangguan1995/subscriptions", "organizations_url": "https://api.github.com/users/wangguan1995/orgs", "repos_url": "https://api.github.com/users/wangguan1995/repos", "events_url": "https://api.github.com/users/wangguan1995/events{/privacy}", "received_events_url": "https://api.github.com/users/wangguan1995/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I can load public dataset, but for my private dataset it fails", "https://huggingface.co./docs/datasets/upload_dataset", "I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co./api/datasets/xxx/xxx.\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/4aceef59-0c65-4161-9665-676d25d73225)\r\n\r\nIt just works fine.", "It seems that everything is in a mass huh....\r\n\r\n![image](https://github.com/huggingface/datasets/assets/39621324/fb2fe12c-4f0a-4bf6-9656-63ba50347b10)\r\n", "https://huggingface.co./datasets/rajpurkar/squad/blob/main/squad.py fails again", "https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py#L81 can not use this, too complex. I just need a def to load my file to a dict", "I am facing the same issue. Did you find a fix?" ]
2024-06-06T06:52:19
2024-06-12T16:59:05
null
NONE
null
### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed

>>> dataset = load_dataset("xxxx", token=True)
404 error 404 Client Error. (Request ID: Root=xxxx)
Repository Not Found for url: https://huggingface.co./api/datasets/xxx/xxx.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2593, in load_dataset
    builder_instance = load_dataset_builder(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 2265, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1910, in dataset_module_factory
    raise e1 from None
  File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
    raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on the Hub or cannot be accessed
```

### Steps to reproduce the bug
123

### Expected behavior
123

### Environment info
123
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6958/timeline
null
null
null
null
false
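The issue above (6958) is the common `DatasetNotFoundError` for a private repository. A minimal sketch of the usual remedy, assuming a placeholder repo id and an account that actually has access to it:

```python
# Sketch: authenticate before loading a private dataset ("user/private-dataset" is a placeholder).
# Either run `huggingface-cli login` once, or pass a token explicitly.
from datasets import load_dataset

ds = load_dataset("user/private-dataset", token=True)        # reuse locally saved credentials
# ds = load_dataset("user/private-dataset", token="hf_xxx")  # or pass a token string (placeholder)
```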
https://api.github.com/repos/huggingface/datasets/issues/6957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6957/comments
https://api.github.com/repos/huggingface/datasets/issues/6957/events
https://github.com/huggingface/datasets/pull/6957
2,335,559,400
PR_kwDODunzps5xiTwJ
6,957
Fix typos in docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6957). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005371 / 0.011353 (-0.005982) | 0.003834 / 0.011008 (-0.007174) | 0.063032 / 0.038508 (0.024524) | 0.031623 / 0.023109 (0.008514) | 0.250008 / 0.275898 (-0.025890) | 0.273998 / 0.323480 (-0.049482) | 0.004114 / 0.007986 (-0.003871) | 0.002821 / 0.004328 (-0.001508) | 0.049470 / 0.004250 (0.045220) | 0.046586 / 0.037052 (0.009534) | 0.276807 / 0.258489 (0.018318) | 0.288607 / 0.293841 (-0.005234) | 0.027427 / 0.128546 (-0.101119) | 0.010634 / 0.075646 (-0.065012) | 0.202451 / 0.419271 (-0.216821) | 0.036346 / 0.043533 (-0.007187) | 0.250426 / 0.255139 (-0.004713) | 0.274104 / 0.283200 (-0.009096) | 0.018461 / 0.141683 (-0.123222) | 1.120326 / 1.452155 (-0.331829) | 1.157635 / 1.492716 (-0.335081) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102287 / 0.018006 (0.084281) | 0.313145 / 0.000490 (0.312655) | 0.000255 / 0.000200 (0.000055) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019494 / 0.037411 (-0.017917) | 0.063252 / 0.014526 (0.048727) | 0.075318 / 0.176557 (-0.101239) | 0.122194 / 0.737135 (-0.614942) | 0.076837 / 0.296338 (-0.219501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284098 / 0.215209 (0.068889) | 2.822301 / 2.077655 (0.744647) | 1.490185 / 1.504120 (-0.013935) | 1.366723 / 1.541195 (-0.174472) | 1.398832 / 1.468490 (-0.069658) | 0.563661 / 4.584777 (-4.021116) | 2.385129 / 3.745712 (-1.360583) | 2.689823 / 5.269862 (-2.580039) | 1.731271 / 4.565676 (-2.834405) | 0.063351 / 0.424275 (-0.360924) | 0.004974 / 0.007607 (-0.002633) | 0.332163 / 0.226044 (0.106119) | 3.314906 / 2.268929 (1.045977) | 1.811331 / 55.444624 (-53.633294) | 1.513357 / 6.876477 (-5.363120) | 1.718454 / 2.142072 (-0.423618) | 0.639663 / 4.805227 (-4.165564) | 0.120377 / 6.500664 (-6.380287) | 0.043254 / 0.075469 (-0.032215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978534 / 1.841788 (-0.863253) | 11.622313 / 8.074308 (3.548005) | 9.608732 / 10.191392 (-0.582660) | 0.131339 / 0.680424 (-0.549085) | 0.015226 / 0.534201 (-0.518975) | 0.287317 / 0.579283 (-0.291966) | 0.266647 / 0.434364 (-0.167717) | 0.324243 / 0.540337 (-0.216094) | 0.442025 / 1.386936 (-0.944911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005673 / 0.011353 (-0.005680) | 0.003722 / 0.011008 (-0.007286) | 0.049483 / 0.038508 (0.010975) | 0.033308 / 0.023109 (0.010199) | 0.261912 / 0.275898 (-0.013986) | 0.291151 / 0.323480 (-0.032329) | 0.004389 / 0.007986 (-0.003596) | 0.002762 / 0.004328 (-0.001567) | 0.048970 / 0.004250 (0.044719) | 0.041509 / 0.037052 (0.004457) | 0.273288 / 0.258489 (0.014798) | 0.308351 / 0.293841 (0.014510) | 0.029958 / 0.128546 (-0.098589) | 0.010500 / 0.075646 (-0.065146) | 0.058253 / 0.419271 (-0.361019) | 0.033820 / 0.043533 (-0.009713) | 0.261089 / 0.255139 (0.005950) | 0.282179 / 0.283200 (-0.001021) | 0.018543 / 0.141683 (-0.123140) | 1.121303 / 1.452155 (-0.330852) | 1.166141 / 1.492716 (-0.326575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099209 / 0.018006 (0.081203) | 0.316920 / 0.000490 (0.316430) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023339 / 0.037411 (-0.014072) | 0.077127 / 0.014526 (0.062602) | 0.088160 / 0.176557 (-0.088396) | 0.129449 / 0.737135 (-0.607686) | 0.093159 / 0.296338 (-0.203180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281262 / 0.215209 (0.066053) | 2.797504 / 2.077655 (0.719850) | 1.513354 / 1.504120 (0.009234) | 1.383034 / 1.541195 (-0.158161) | 1.395202 / 1.468490 (-0.073288) | 0.563180 / 4.584777 (-4.021597) | 0.979330 / 3.745712 (-2.766383) | 2.674008 / 5.269862 (-2.595853) | 1.762174 / 4.565676 (-2.803502) | 0.062333 / 0.424275 (-0.361942) | 0.004991 / 0.007607 (-0.002616) | 0.336043 / 0.226044 (0.109999) | 3.313500 / 2.268929 (1.044571) | 1.848083 / 55.444624 (-53.596541) | 1.554723 / 6.876477 (-5.321754) | 1.743485 / 2.142072 (-0.398587) | 0.657117 / 4.805227 (-4.148111) | 0.115736 / 6.500664 (-6.384928) | 0.040527 / 0.075469 (-0.034942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005876 / 1.841788 (-0.835911) | 12.525895 / 8.074308 (4.451587) | 10.492961 / 10.191392 (0.301569) | 0.143443 / 0.680424 (-0.536981) | 0.016652 / 0.534201 (-0.517548) | 0.288236 / 0.579283 (-0.291047) | 0.131401 / 0.434364 (-0.302963) | 0.322885 / 0.540337 (-0.217452) | 0.416048 / 1.386936 (-0.970888) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6548e0e282aeeda7bfb18beafbc65ebecd780c63 \"CML watermark\")\n" ]
2024-06-05T10:46:47
2024-06-05T13:01:07
2024-06-05T12:43:26
MEMBER
null
Fix typos in docs introduced by: - #6956 Typos: - `comparisions` => `comparisons` - two consecutive sentences both ending in colon - split one sentence into two Sorry, I did not have time to review that PR. CC: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6957/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6957", "html_url": "https://github.com/huggingface/datasets/pull/6957", "diff_url": "https://github.com/huggingface/datasets/pull/6957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6957.patch", "merged_at": "2024-06-05T12:43:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/6956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6956/comments
https://api.github.com/repos/huggingface/datasets/issues/6956/events
https://github.com/huggingface/datasets/pull/6956
2,333,940,021
PR_kwDODunzps5xcwXz
6,956
update docs on N-dim arrays
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005348 / 0.011353 (-0.006005) | 0.003785 / 0.011008 (-0.007223) | 0.061674 / 0.038508 (0.023166) | 0.032127 / 0.023109 (0.009017) | 0.247095 / 0.275898 (-0.028803) | 0.276466 / 0.323480 (-0.047014) | 0.004197 / 0.007986 (-0.003789) | 0.002734 / 0.004328 (-0.001594) | 0.049604 / 0.004250 (0.045354) | 0.048553 / 0.037052 (0.011500) | 0.253230 / 0.258489 (-0.005259) | 0.286954 / 0.293841 (-0.006887) | 0.028181 / 0.128546 (-0.100365) | 0.010602 / 0.075646 (-0.065044) | 0.200719 / 0.419271 (-0.218552) | 0.037278 / 0.043533 (-0.006254) | 0.251565 / 0.255139 (-0.003574) | 0.269026 / 0.283200 (-0.014174) | 0.017632 / 0.141683 (-0.124050) | 1.136216 / 1.452155 (-0.315939) | 1.181158 / 1.492716 (-0.311559) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004892 / 0.018006 (-0.013114) | 0.312921 / 0.000490 (0.312431) | 0.000247 / 0.000200 (0.000047) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019303 / 0.037411 (-0.018108) | 0.062699 / 0.014526 (0.048174) | 0.075227 / 0.176557 (-0.101329) | 0.122919 / 0.737135 (-0.614217) | 0.076506 / 0.296338 (-0.219833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277299 / 0.215209 (0.062090) | 2.754771 / 2.077655 (0.677116) | 1.457164 / 1.504120 (-0.046956) | 1.318878 / 1.541195 (-0.222317) | 1.374245 / 1.468490 (-0.094245) | 0.566253 / 4.584777 (-4.018524) | 2.352589 / 3.745712 (-1.393123) | 2.764263 / 5.269862 (-2.505599) | 1.843141 / 4.565676 (-2.722535) | 0.063996 / 0.424275 (-0.360279) | 0.005045 / 0.007607 (-0.002562) | 0.336703 / 0.226044 (0.110658) | 3.342538 / 2.268929 (1.073609) | 1.836664 / 55.444624 (-53.607960) | 1.528901 / 6.876477 (-5.347576) | 1.769562 / 2.142072 (-0.372511) | 0.674192 / 4.805227 (-4.131035) | 0.122421 / 6.500664 (-6.378243) | 0.043714 / 0.075469 (-0.031756) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989432 / 1.841788 (-0.852356) | 12.178341 / 8.074308 (4.104033) | 9.730838 / 10.191392 (-0.460554) | 0.146751 / 0.680424 (-0.533673) | 0.014720 / 0.534201 (-0.519481) | 0.285821 / 0.579283 (-0.293462) | 0.266474 / 0.434364 (-0.167889) | 0.327886 / 0.540337 (-0.212451) | 0.455672 / 1.386936 (-0.931264) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005691 / 0.011353 (-0.005662) | 0.004089 / 0.011008 (-0.006919) | 0.049878 / 0.038508 (0.011370) | 0.033578 / 0.023109 (0.010469) | 0.268295 / 0.275898 (-0.007603) | 0.288918 / 0.323480 (-0.034561) | 0.005092 / 0.007986 (-0.002894) | 0.002916 / 0.004328 (-0.001412) | 0.049489 / 0.004250 (0.045239) | 0.042495 / 0.037052 (0.005442) | 0.276253 / 0.258489 (0.017764) | 0.313321 / 0.293841 (0.019480) | 0.029386 / 0.128546 (-0.099160) | 0.010926 / 0.075646 (-0.064720) | 0.071747 / 0.419271 (-0.347525) | 0.033642 / 0.043533 (-0.009891) | 0.264950 / 0.255139 (0.009811) | 0.282962 / 0.283200 (-0.000238) | 0.018878 / 0.141683 (-0.122805) | 1.170685 / 1.452155 (-0.281470) | 1.198321 / 1.492716 (-0.294396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.100422 / 0.018006 (0.082415) | 0.311750 / 0.000490 (0.311260) | 0.000235 / 0.000200 (0.000035) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023093 / 0.037411 (-0.014318) | 0.076934 / 0.014526 (0.062408) | 0.088959 / 0.176557 (-0.087598) | 0.129511 / 0.737135 (-0.607624) | 0.090151 / 0.296338 (-0.206187) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301646 / 0.215209 (0.086437) | 2.961780 / 2.077655 (0.884126) | 1.656051 / 1.504120 (0.151931) | 1.533154 / 1.541195 (-0.008041) | 1.585152 / 1.468490 (0.116662) | 0.582157 / 4.584777 (-4.002620) | 0.954881 / 3.745712 (-2.790831) | 2.813174 / 5.269862 (-2.456688) | 1.842840 / 4.565676 (-2.722837) | 0.065598 / 0.424275 (-0.358677) | 0.005306 / 0.007607 (-0.002301) | 0.359610 / 0.226044 (0.133565) | 3.575320 / 2.268929 (1.306391) | 2.015327 / 55.444624 (-53.429297) | 1.734086 / 6.876477 (-5.142391) | 1.919081 / 2.142072 (-0.222991) | 0.671178 / 4.805227 (-4.134049) | 0.120109 / 6.500664 (-6.380555) | 0.042353 / 0.075469 (-0.033116) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011726 / 1.841788 (-0.830062) | 13.007806 / 8.074308 (4.933498) | 10.632486 / 10.191392 (0.441094) | 0.148535 / 0.680424 (-0.531889) | 0.015988 / 0.534201 (-0.518213) | 0.290023 / 0.579283 (-0.289260) | 0.130685 / 0.434364 (-0.303679) | 0.322912 / 0.540337 (-0.217425) | 0.420596 / 1.386936 (-0.966340) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#336512dcba4fdb4c349d5ecb632b6ced80e038d5 \"CML watermark\")\n" ]
2024-06-04T16:32:19
2024-06-04T16:46:34
2024-06-04T16:40:27
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6956/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6956", "html_url": "https://github.com/huggingface/datasets/pull/6956", "diff_url": "https://github.com/huggingface/datasets/pull/6956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6956.patch", "merged_at": "2024-06-04T16:40:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/6955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6955/comments
https://api.github.com/repos/huggingface/datasets/issues/6955/events
https://github.com/huggingface/datasets/pull/6955
2,333,802,815
PR_kwDODunzps5xcSYm
6,955
Fix small typo
{ "login": "marcenacp", "id": 17081356, "node_id": "MDQ6VXNlcjE3MDgxMzU2", "avatar_url": "https://avatars.githubusercontent.com/u/17081356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcenacp", "html_url": "https://github.com/marcenacp", "followers_url": "https://api.github.com/users/marcenacp/followers", "following_url": "https://api.github.com/users/marcenacp/following{/other_user}", "gists_url": "https://api.github.com/users/marcenacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcenacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcenacp/subscriptions", "organizations_url": "https://api.github.com/users/marcenacp/orgs", "repos_url": "https://api.github.com/users/marcenacp/repos", "events_url": "https://api.github.com/users/marcenacp/events{/privacy}", "received_events_url": "https://api.github.com/users/marcenacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005507 / 0.011353 (-0.005845) | 0.003757 / 0.011008 (-0.007251) | 0.063274 / 0.038508 (0.024766) | 0.029720 / 0.023109 (0.006610) | 0.247974 / 0.275898 (-0.027924) | 0.272283 / 0.323480 (-0.051197) | 0.004186 / 0.007986 (-0.003799) | 0.002820 / 0.004328 (-0.001508) | 0.049070 / 0.004250 (0.044820) | 0.050026 / 0.037052 (0.012973) | 0.256501 / 0.258489 (-0.001988) | 0.297082 / 0.293841 (0.003241) | 0.028549 / 0.128546 (-0.099997) | 0.010361 / 0.075646 (-0.065285) | 0.213202 / 0.419271 (-0.206070) | 0.038117 / 0.043533 (-0.005416) | 0.258878 / 0.255139 (0.003739) | 0.282980 / 0.283200 (-0.000220) | 0.018911 / 0.141683 (-0.122772) | 1.118857 / 1.452155 (-0.333298) | 1.157763 / 1.492716 (-0.334953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004499 / 0.018006 (-0.013507) | 0.310445 / 0.000490 (0.309956) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019275 / 0.037411 (-0.018137) | 0.063257 / 0.014526 (0.048731) | 0.075833 / 0.176557 (-0.100724) | 0.122323 / 0.737135 (-0.614812) | 0.079046 / 0.296338 (-0.217292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292811 / 0.215209 (0.077602) | 2.903501 / 2.077655 (0.825846) | 1.592434 / 1.504120 (0.088314) | 1.450833 / 1.541195 (-0.090362) | 1.481285 / 
1.468490 (0.012795) | 0.570150 / 4.584777 (-4.014627) | 2.388618 / 3.745712 (-1.357094) | 2.699322 / 5.269862 (-2.570540) | 1.781405 / 4.565676 (-2.784272) | 0.063451 / 0.424275 (-0.360824) | 0.004979 / 0.007607 (-0.002628) | 0.353346 / 0.226044 (0.127302) | 3.541217 / 2.268929 (1.272289) | 1.972335 / 55.444624 (-53.472289) | 1.634780 / 6.876477 (-5.241697) | 1.815944 / 2.142072 (-0.326128) | 0.651559 / 4.805227 (-4.153669) | 0.118398 / 6.500664 (-6.382266) | 0.041962 / 0.075469 (-0.033507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971435 / 1.841788 (-0.870352) | 11.843740 / 8.074308 (3.769431) | 9.716333 / 10.191392 (-0.475059) | 0.145923 / 0.680424 (-0.534501) | 0.015073 / 0.534201 (-0.519128) | 0.293307 / 0.579283 (-0.285976) | 0.265505 / 0.434364 (-0.168859) | 0.327578 / 0.540337 (-0.212760) | 0.436409 / 1.386936 (-0.950527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005647 / 0.011353 (-0.005706) | 0.003669 / 0.011008 (-0.007339) | 0.050234 / 0.038508 (0.011726) | 0.033033 / 0.023109 (0.009924) | 0.269303 / 0.275898 (-0.006595) | 0.282472 / 0.323480 (-0.041008) | 0.004283 / 0.007986 (-0.003703) | 0.002821 / 0.004328 (-0.001507) | 0.050887 / 0.004250 (0.046637) | 0.041618 / 0.037052 (0.004565) | 0.277628 / 0.258489 (0.019139) | 0.310539 / 0.293841 (0.016698) | 0.030036 / 0.128546 (-0.098511) | 0.010401 / 0.075646 (-0.065245) | 0.058845 / 0.419271 (-0.360427) | 0.033676 / 0.043533 (-0.009857) | 0.261148 / 0.255139 (0.006009) | 0.295232 / 0.283200 (0.012032) | 0.018603 / 0.141683 (-0.123080) | 1.132182 / 1.452155 (-0.319972) | 1.173763 / 1.492716 (-0.318953) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100594 / 0.018006 (0.082588) | 0.308101 / 0.000490 (0.307611) | 0.000217 / 0.000200 (0.000017) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023040 / 0.037411 (-0.014371) | 0.080676 / 0.014526 (0.066150) | 0.094687 / 0.176557 (-0.081870) | 0.129780 / 0.737135 (-0.607356) | 0.092241 / 0.296338 (-0.204097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294799 / 0.215209 (0.079590) | 2.957570 / 2.077655 (0.879915) | 1.576795 / 1.504120 (0.072675) | 1.446869 / 1.541195 (-0.094326) | 1.463133 / 1.468490 (-0.005357) | 0.568511 / 4.584777 (-4.016266) | 1.011502 / 3.745712 (-2.734211) | 2.759571 / 5.269862 (-2.510291) | 1.771738 / 4.565676 (-2.793939) | 0.064104 / 0.424275 (-0.360171) | 0.005160 / 0.007607 (-0.002448) | 0.347554 / 0.226044 (0.121510) | 3.463905 / 2.268929 (1.194976) | 1.931843 / 55.444624 (-53.512781) | 1.622765 / 6.876477 (-5.253712) | 1.809146 / 2.142072 (-0.332926) | 0.653388 / 4.805227 (-4.151839) | 0.122703 / 6.500664 (-6.377961) | 0.041680 / 0.075469 (-0.033790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000428 / 1.841788 (-0.841359) | 12.503003 / 8.074308 (4.428695) | 10.434802 / 10.191392 (0.243410) | 0.144684 / 0.680424 (-0.535740) | 0.015988 / 0.534201 (-0.518213) | 0.287179 / 0.579283 (-0.292104) | 0.124811 / 0.434364 (-0.309553) | 0.327855 / 0.540337 (-0.212482) | 0.425144 / 1.386936 (-0.961792) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7170067f819222153fcd45682db61279bdfe673 \"CML watermark\")\n" ]
2024-06-04T15:19:02
2024-06-05T10:18:56
2024-06-04T15:20:55
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6955/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6955/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6955", "html_url": "https://github.com/huggingface/datasets/pull/6955", "diff_url": "https://github.com/huggingface/datasets/pull/6955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6955.patch", "merged_at": "2024-06-04T15:20:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/6954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6954/comments
https://api.github.com/repos/huggingface/datasets/issues/6954/events
https://github.com/huggingface/datasets/pull/6954
2,333,530,558
PR_kwDODunzps5xbWtU
6,954
Remove default `trust_remote_code=True`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "yay! 🎉 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004881 / 0.011353 (-0.006472) | 0.003246 / 0.011008 (-0.007762) | 0.062496 / 0.038508 (0.023988) | 0.030760 / 0.023109 (0.007651) | 0.241500 / 0.275898 (-0.034398) | 0.272073 / 0.323480 (-0.051407) | 0.004123 / 0.007986 (-0.003863) | 0.002796 / 0.004328 (-0.001533) | 0.049015 / 0.004250 (0.044764) | 0.047095 / 0.037052 (0.010043) | 0.257002 / 0.258489 (-0.001487) | 0.287602 / 0.293841 (-0.006239) | 0.027281 / 0.128546 (-0.101265) | 0.010132 / 0.075646 (-0.065514) | 0.203699 / 0.419271 (-0.215572) | 0.036553 / 0.043533 (-0.006980) | 0.246221 / 0.255139 (-0.008918) | 0.268137 / 0.283200 (-0.015062) | 0.017260 / 0.141683 (-0.124423) | 1.100677 / 1.452155 (-0.351478) | 1.148367 / 1.492716 (-0.344349) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102519 / 0.018006 (0.084513) | 0.301929 / 0.000490 (0.301439) | 0.000223 / 0.000200 (0.000023) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018590 / 0.037411 (-0.018821) | 0.061615 / 0.014526 (0.047089) | 0.074579 / 0.176557 (-0.101978) | 0.121415 / 0.737135 (-0.615720) | 0.075696 / 0.296338 (-0.220642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283842 / 0.215209 (0.068633) | 2.788321 / 2.077655 (0.710666) | 1.481376 / 1.504120 (-0.022743) | 1.356064 / 1.541195 (-0.185131) | 1.380592 / 1.468490 (-0.087898) | 0.575577 / 4.584777 (-4.009199) | 2.471858 / 3.745712 (-1.273854) | 2.760769 / 5.269862 (-2.509093) | 1.808638 / 4.565676 (-2.757038) | 0.064930 / 0.424275 (-0.359345) | 0.005056 / 0.007607 (-0.002551) | 0.337794 / 0.226044 (0.111750) | 3.359444 / 2.268929 (1.090515) | 1.829540 / 55.444624 (-53.615084) | 1.518660 / 6.876477 (-5.357817) | 1.671612 / 2.142072 (-0.470460) | 0.664286 / 4.805227 (-4.140941) | 0.119593 / 6.500664 (-6.381071) | 0.042519 / 0.075469 (-0.032950) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993152 / 1.841788 (-0.848636) | 11.733054 / 8.074308 (3.658746) | 9.746734 / 10.191392 (-0.444658) | 0.143026 / 0.680424 (-0.537398) | 0.014900 / 0.534201 (-0.519301) | 0.292243 / 0.579283 (-0.287040) | 0.261301 / 0.434364 (-0.173063) | 0.330838 / 0.540337 (-0.209500) | 0.523719 / 1.386936 (-0.863217) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005707 / 0.011353 (-0.005646) | 0.003523 / 0.011008 (-0.007485) | 0.052265 / 0.038508 (0.013757) | 0.034296 / 0.023109 (0.011187) | 0.266589 / 0.275898 (-0.009309) | 0.288441 / 0.323480 (-0.035039) | 0.004507 / 0.007986 (-0.003478) | 0.002745 / 0.004328 (-0.001583) | 0.049417 / 0.004250 (0.045167) | 0.042679 / 0.037052 (0.005627) | 0.278518 / 0.258489 (0.020029) | 0.328751 / 0.293841 (0.034911) | 0.029530 / 0.128546 (-0.099016) | 0.010373 / 0.075646 (-0.065274) | 0.058207 / 0.419271 (-0.361064) | 0.033434 / 0.043533 (-0.010099) | 0.267902 / 0.255139 (0.012763) | 0.288192 / 0.283200 (0.004993) | 0.018866 / 0.141683 (-0.122817) | 1.132734 / 1.452155 (-0.319421) | 1.172879 / 1.492716 (-0.319837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097787 / 0.018006 (0.079780) | 0.305509 / 0.000490 (0.305019) | 0.000268 / 0.000200 (0.000068) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023230 / 0.037411 (-0.014181) | 0.076637 / 0.014526 (0.062111) | 0.088386 / 0.176557 (-0.088171) | 0.131079 / 0.737135 (-0.606057) | 0.091142 / 0.296338 (-0.205197) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295586 / 0.215209 (0.080377) | 2.872090 / 2.077655 (0.794435) | 1.538152 / 1.504120 (0.034032) | 1.405695 / 1.541195 (-0.135500) | 1.421058 / 1.468490 (-0.047432) | 0.561179 / 4.584777 (-4.023598) | 0.943954 / 3.745712 (-2.801758) | 2.684381 / 5.269862 (-2.585481) | 1.757457 / 4.565676 (-2.808220) | 0.062903 / 0.424275 (-0.361372) | 0.004998 / 0.007607 (-0.002610) | 0.370290 / 0.226044 (0.144245) | 3.374988 / 2.268929 (1.106059) | 1.899282 / 55.444624 (-53.545342) | 1.598787 / 6.876477 (-5.277690) | 1.735371 / 2.142072 (-0.406702) | 0.647367 / 4.805227 (-4.157860) | 0.116975 / 6.500664 (-6.383689) | 0.040811 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996380 / 1.841788 (-0.845408) | 12.225657 / 8.074308 (4.151349) | 10.291221 / 10.191392 (0.099829) | 0.142791 / 0.680424 (-0.537633) | 0.016087 / 0.534201 (-0.518114) | 0.299978 / 0.579283 (-0.279305) | 0.149444 / 0.434364 (-0.284920) | 0.321354 / 0.540337 (-0.218984) | 0.414492 / 1.386936 (-0.972444) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a2dc287cbef5311cf1a32ad4e3685f4052db227c \"CML watermark\")\n" ]
2024-06-04T13:22:56
2024-06-07T12:26:37
2024-06-07T12:20:29
MEMBER
null
TODO: - [x] fix tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6954/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6954", "html_url": "https://github.com/huggingface/datasets/pull/6954", "diff_url": "https://github.com/huggingface/datasets/pull/6954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6954.patch", "merged_at": "2024-06-07T12:20:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/6953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6953/comments
https://api.github.com/repos/huggingface/datasets/issues/6953/events
https://github.com/huggingface/datasets/issues/6953
2,333,366,120
I_kwDODunzps6LFFdo
6,953
Remove canonical datasets from docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[]
2024-06-04T12:09:03
2024-06-04T12:09:03
null
MEMBER
null
Remove canonical datasets from docs, now that we no longer have canonical datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6953/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6952/comments
https://api.github.com/repos/huggingface/datasets/issues/6952/events
https://github.com/huggingface/datasets/pull/6952
2,333,320,411
PR_kwDODunzps5xaosH
6,952
Move info_utils errors to exceptions module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003744 / 0.011008 (-0.007264) | 0.064089 / 0.038508 (0.025581) | 0.032409 / 0.023109 (0.009300) | 0.255886 / 0.275898 (-0.020013) | 0.276033 / 0.323480 (-0.047447) | 0.004165 / 0.007986 (-0.003821) | 0.002741 / 0.004328 (-0.001588) | 0.052145 / 0.004250 (0.047894) | 0.043863 / 0.037052 (0.006811) | 0.258844 / 0.258489 (0.000355) | 0.290108 / 0.293841 (-0.003733) | 0.027390 / 0.128546 (-0.101156) | 0.010543 / 0.075646 (-0.065103) | 0.206936 / 0.419271 (-0.212335) | 0.036778 / 0.043533 (-0.006755) | 0.254331 / 0.255139 (-0.000808) | 0.279037 / 0.283200 (-0.004163) | 0.018564 / 0.141683 (-0.123119) | 1.112765 / 1.452155 (-0.339390) | 1.160099 / 1.492716 (-0.332617) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092148 / 0.018006 (0.074142) | 0.297156 / 0.000490 (0.296667) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018797 / 0.037411 (-0.018615) | 0.062992 / 0.014526 (0.048466) | 0.076361 / 0.176557 (-0.100195) | 0.121168 / 0.737135 (-0.615968) | 0.075845 / 0.296338 (-0.220494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293842 / 0.215209 (0.078633) | 2.880720 / 2.077655 (0.803065) | 1.477779 / 1.504120 (-0.026341) | 1.345136 / 1.541195 (-0.196059) | 1.352153 / 1.468490 (-0.116337) | 0.574722 / 4.584777 (-4.010055) | 2.373925 / 3.745712 (-1.371787) | 2.750704 / 5.269862 (-2.519157) | 1.725979 / 4.565676 (-2.839697) | 0.063006 / 0.424275 (-0.361269) | 0.005019 / 0.007607 (-0.002588) | 0.341228 / 0.226044 (0.115184) | 3.352576 / 2.268929 (1.083647) | 1.821363 / 55.444624 (-53.623261) | 1.529441 / 6.876477 (-5.347036) | 1.543401 / 2.142072 (-0.598671) | 0.634282 / 4.805227 (-4.170945) | 0.115565 / 6.500664 (-6.385099) | 0.042514 / 0.075469 (-0.032956) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987532 / 1.841788 (-0.854255) | 11.483853 / 8.074308 (3.409545) | 9.565657 / 10.191392 (-0.625735) | 0.141247 / 0.680424 (-0.539176) | 0.015026 / 0.534201 (-0.519175) | 0.299905 / 0.579283 (-0.279378) | 0.267667 / 0.434364 (-0.166697) | 0.320661 / 0.540337 (-0.219676) | 0.427368 / 1.386936 (-0.959568) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005905) | 0.003726 / 0.011008 (-0.007283) | 0.049776 / 0.038508 (0.011268) | 0.032733 / 0.023109 (0.009624) | 0.261387 / 0.275898 (-0.014511) | 0.280087 / 0.323480 (-0.043393) | 0.004351 / 0.007986 (-0.003634) | 0.002842 / 0.004328 (-0.001487) | 0.049440 / 0.004250 (0.045190) | 0.039585 / 0.037052 (0.002533) | 0.266331 / 0.258489 (0.007842) | 0.299643 / 0.293841 (0.005802) | 0.029649 / 0.128546 (-0.098897) | 0.010381 / 0.075646 (-0.065265) | 0.058596 / 0.419271 (-0.360676) | 0.033271 / 0.043533 (-0.010262) | 0.251070 / 0.255139 (-0.004069) | 0.272850 / 0.283200 (-0.010349) | 0.016728 / 0.141683 (-0.124955) | 1.146952 / 1.452155 (-0.305202) | 1.182602 / 1.492716 (-0.310114) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.091673 / 0.018006 (0.073667) | 0.297228 / 0.000490 (0.296738) | 0.000197 / 0.000200 (-0.000003) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023174 / 0.037411 (-0.014237) | 0.078866 / 0.014526 (0.064341) | 0.088436 / 0.176557 (-0.088121) | 0.129650 / 0.737135 (-0.607485) | 0.091100 / 0.296338 (-0.205238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293882 / 0.215209 (0.078673) | 2.882667 / 2.077655 (0.805012) | 1.562949 / 1.504120 (0.058829) | 1.435104 / 1.541195 (-0.106090) | 1.450815 / 1.468490 (-0.017675) | 0.584090 / 4.584777 (-4.000687) | 0.984176 / 3.745712 (-2.761536) | 2.668740 / 5.269862 (-2.601121) | 1.766993 / 4.565676 (-2.798683) | 0.064710 / 0.424275 (-0.359565) | 0.005329 / 0.007607 (-0.002278) | 0.346008 / 0.226044 (0.119964) | 3.414576 / 2.268929 (1.145647) | 1.911388 / 55.444624 (-53.533236) | 1.660357 / 6.876477 (-5.216120) | 1.818628 / 2.142072 (-0.323444) | 0.659585 / 4.805227 (-4.145643) | 0.116980 / 6.500664 (-6.383684) | 0.041364 / 0.075469 (-0.034105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005659 / 1.841788 (-0.836129) | 12.023761 / 8.074308 (3.949453) | 10.351086 / 10.191392 (0.159694) | 0.143261 / 0.680424 (-0.537162) | 0.016143 / 0.534201 (-0.518058) | 0.287793 / 0.579283 (-0.291490) | 0.123698 / 0.434364 (-0.310666) | 0.325241 / 0.540337 (-0.215097) | 0.418772 / 1.386936 (-0.968164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#37a603679f451826cfafd8aae00738b01dcb9d58 \"CML watermark\")\n" ]
2024-06-04T11:48:32
2024-06-10T14:09:59
2024-06-10T14:03:55
MEMBER
null
Move `info_utils` errors to the `exceptions` module. Additionally, rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones).
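For illustration, a minimal sketch of the backward-compatibility pattern described in this PR body — the class names are examples, not necessarily the exact ones moved here:

```python
import warnings


class ExpectedMoreSplits(Exception):
    """Former error, kept importable from its old location for backward compatibility."""

    def __init__(self, *args, **kwargs):
        if type(self) is ExpectedMoreSplits:  # warn only when the legacy class itself is used
            warnings.warn(
                "ExpectedMoreSplits is deprecated; use ExpectedMoreSplitsError instead.",
                FutureWarning,
            )
        super().__init__(*args, **kwargs)


class ExpectedMoreSplitsError(ExpectedMoreSplits):
    """New error exposed from the `exceptions` module.

    Because it inherits from the former class, existing `except ExpectedMoreSplits:`
    blocks keep catching it.
    """
```

The key design point is the inheritance direction: the new class subclasses the old one, so old `except` clauses stay valid while new code can migrate to the new name.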
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6952/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6952", "html_url": "https://github.com/huggingface/datasets/pull/6952", "diff_url": "https://github.com/huggingface/datasets/pull/6952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6952.patch", "merged_at": "2024-06-10T14:03:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/6951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6951/comments
https://api.github.com/repos/huggingface/datasets/issues/6951/events
https://github.com/huggingface/datasets/issues/6951
2,333,231,042
I_kwDODunzps6LEkfC
6,951
load_dataset() should load all subsets, if no specific subset is specified
{ "login": "windmaple", "id": 5577741, "node_id": "MDQ6VXNlcjU1Nzc3NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/windmaple", "html_url": "https://github.com/windmaple", "followers_url": "https://api.github.com/users/windmaple/followers", "following_url": "https://api.github.com/users/windmaple/following{/other_user}", "gists_url": "https://api.github.com/users/windmaple/gists{/gist_id}", "starred_url": "https://api.github.com/users/windmaple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/windmaple/subscriptions", "organizations_url": "https://api.github.com/users/windmaple/orgs", "repos_url": "https://api.github.com/users/windmaple/repos", "events_url": "https://api.github.com/users/windmaple/events{/privacy}", "received_events_url": "https://api.github.com/users/windmaple/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@xianbaoqian " ]
2024-06-04T11:02:33
2024-06-04T11:02:49
null
NONE
null
### Feature request Currently, load_dataset() forces users to specify a subset. Example `from datasets import load_dataset dataset = load_dataset("m-a-p/COIG-CQIA")` ```--------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-10-c0cb49385da6>](https://localhost:8080/#) in <cell line: 2>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset("m-a-p/COIG-CQIA") 3 frames [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs) 582 if not config_kwargs: 583 example_of_usage = f"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')" --> 584 raise ValueError( 585 "Config name is missing." 586 f"\nPlease pick one among the available configs: {list(self.builder_configs.keys())}" ValueError: Config name is missing. Please pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu'] Example of usage: `load_dataset('coig-cqia', 'chinese_traditional')` ``` This means a dataset cannot be loaded with all of its subsets at once. One workaround is to manually specify the subset files, as in [this discussion](https://huggingface.co./datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), but that is clumsy. ### Motivation Ideally, if no subset is specified, the API should simply try to load all subsets. This would make it much easier to handle datasets with subsets. ### Your contribution Not sure, since I'm not familiar with the library source.
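As a possible user-side workaround today (not the requested feature), the configs can be enumerated and loaded one by one. The sketch below assumes each subset exposes a `train` split, and concatenation only works if the subsets share identical features:

```python
from datasets import concatenate_datasets, get_dataset_config_names, load_dataset

repo_id = "m-a-p/COIG-CQIA"
configs = get_dataset_config_names(repo_id)  # ['chinese_traditional', 'coig_pc', ...]

# Load every subset into a dict keyed by config name.
subsets = {name: load_dataset(repo_id, name, split="train") for name in configs}

# Optionally merge everything into a single Dataset (requires matching features).
merged = concatenate_datasets(list(subsets.values()))
```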
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6951/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6951/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6950/comments
https://api.github.com/repos/huggingface/datasets/issues/6950/events
https://github.com/huggingface/datasets/issues/6950
2,333,005,974
I_kwDODunzps6LDtiW
6,950
`Dataset.with_format` behaves inconsistently with documentation
{ "login": "iansheng", "id": 42494185, "node_id": "MDQ6VXNlcjQyNDk0MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/42494185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iansheng", "html_url": "https://github.com/iansheng", "followers_url": "https://api.github.com/users/iansheng/followers", "following_url": "https://api.github.com/users/iansheng/following{/other_user}", "gists_url": "https://api.github.com/users/iansheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/iansheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iansheng/subscriptions", "organizations_url": "https://api.github.com/users/iansheng/orgs", "repos_url": "https://api.github.com/users/iansheng/repos", "events_url": "https://api.github.com/users/iansheng/events{/privacy}", "received_events_url": "https://api.github.com/users/iansheng/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956" ]
2024-06-04T09:18:32
2024-06-05T10:19:56
null
NONE
null
### Describe the bug The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation. https://huggingface.co./docs/datasets/use_with_pytorch#n-dimensional-arrays https://huggingface.co./docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays > If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists. > In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor. > A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor. But I get a single tensor by default, which is inconsistent with the description. Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified. ### Steps to reproduce the bug ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': tensor([[1, 2], [3, 4]])} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy= array([[1, 2], [3, 4]])>} ``` ### Expected behavior ```python >>> from datasets import Dataset >>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]] >>> ds = Dataset.from_dict({"data": data}) >>> ds = ds.with_format("torch") >>> ds[0] {'data': [tensor([1, 2]), tensor([3, 4])]} >>> ds = ds.with_format("tf") >>> ds[0] {'data': <tf.RaggedTensor [[1, 2], [3, 4]]>} ``` ### Environment info datasets==2.19.1 torch==2.1.0 tensorflow==2.13.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6950/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6949/comments
https://api.github.com/repos/huggingface/datasets/issues/6949/events
https://github.com/huggingface/datasets/issues/6949
2,332,336,573
I_kwDODunzps6LBKG9
6,949
load_dataset error
{ "login": "lion-ops", "id": 27952522, "node_id": "MDQ6VXNlcjI3OTUyNTIy", "avatar_url": "https://avatars.githubusercontent.com/u/27952522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lion-ops", "html_url": "https://github.com/lion-ops", "followers_url": "https://api.github.com/users/lion-ops/followers", "following_url": "https://api.github.com/users/lion-ops/following{/other_user}", "gists_url": "https://api.github.com/users/lion-ops/gists{/gist_id}", "starred_url": "https://api.github.com/users/lion-ops/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lion-ops/subscriptions", "organizations_url": "https://api.github.com/users/lion-ops/orgs", "repos_url": "https://api.github.com/users/lion-ops/repos", "events_url": "https://api.github.com/users/lion-ops/events{/privacy}", "received_events_url": "https://api.github.com/users/lion-ops/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ", "> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n> \r\n> Could you please share your \"train.json\" file, so that we can try to reproduce the issue you have?\r\n\r\nThank you for your reply. I can load it normally in another server. Is it possible that the disk of my server is a network disk in the LAN, so it will be downloaded from the LAN and get stuck?" ]
2024-06-04T01:24:45
2024-06-04T05:54:54
null
NONE
null
### Describe the bug Why does the program get stuck when I use the load_dataset method, and why is it still stuck after several hours? My JSON file is only 21 MB, and I can load it in one go using open('', 'r'). ### Steps to reproduce the bug 1. pip install datasets==2.19.2 2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset 3. data = load_dataset('json', data_files='train.json') ### Expected behavior The JSON file loads correctly ### Environment info datasets==2.19.2
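To help isolate whether the hang comes from the `datasets` JSON builder/cache or from the network disk mentioned in the comments, a quick cross-check that builds the dataset entirely in memory may be useful — this sketch assumes `train.json` is a JSON array of objects (for JSON Lines, parse each line with `json.loads` instead):

```python
import json

from datasets import Dataset

# Plain-Python read of the ~21 MB file, bypassing the datasets JSON builder.
with open("train.json", "r", encoding="utf-8") as f:
    records = json.load(f)

ds = Dataset.from_list(records)  # in-memory dataset, no Arrow cache files written to disk
print(ds)
```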
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6949/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6948/comments
https://api.github.com/repos/huggingface/datasets/issues/6948/events
https://github.com/huggingface/datasets/issues/6948
2,331,758,300
I_kwDODunzps6K-87c
6,948
to_tf_dataset: Visible devices cannot be modified after being initialized
{ "login": "logasja", "id": 7151661, "node_id": "MDQ6VXNlcjcxNTE2NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/7151661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/logasja", "html_url": "https://github.com/logasja", "followers_url": "https://api.github.com/users/logasja/followers", "following_url": "https://api.github.com/users/logasja/following{/other_user}", "gists_url": "https://api.github.com/users/logasja/gists{/gist_id}", "starred_url": "https://api.github.com/users/logasja/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/logasja/subscriptions", "organizations_url": "https://api.github.com/users/logasja/orgs", "repos_url": "https://api.github.com/users/logasja/repos", "events_url": "https://api.github.com/users/logasja/events{/privacy}", "received_events_url": "https://api.github.com/users/logasja/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-06-03T18:10:57
2024-06-03T18:10:57
null
NONE
null
### Describe the bug When trying to use to_tf_dataset with a custom data_loader collate_fn when I use parallelism I am met with the following error as many times as number of workers there were in ``num_workers``. File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop tf.config.set_visible_devices([], "GPU") # Make sure workers don't try to allocate GPU memory ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices context.context().set_visible_devices(devices, device_type) File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices raise RuntimeError( RuntimeError: Visible devices cannot be modified after being initialized ### Steps to reproduce the bug 1. Download a dataset using HuggingFace load_dataset 2. Define a function that transforms the data in some way to be used in the collate_fn argument 3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function 4. Either retrieve directly or use tfds benchmark to test the dataset ``` python from datasets import load_datasets import tensorflow_datasets as tfds from keras_cv.layers import Resizing def data_loader(examples): x = Resizing(examples[0]['image'], 256, 256, crop_to_aspect_ratio=True) return {X[0]: x} ds = load_datasets("logasja/FDF", split="test") ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2) tfds.benchmark(ds) ``` ### Expected behavior Use multiple processes to apply transformations from the collate_fn to the tf dataset on the CPU. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35 - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
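The repro snippet in this report has a few typos that keep it from running at all (`load_datasets`, the undefined `X`, and `Resizing` being called with the image as its first constructor argument). A cleaned-up version might look like the following — the column name `image` and per-example resizing are assumptions, and this only tidies the setup; it is not a fix for the worker/GPU-initialization error being reported:

```python
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from datasets import load_dataset
from keras_cv.layers import Resizing

resize = Resizing(256, 256, crop_to_aspect_ratio=True)

def data_loader(examples):
    # collate_fn receives a list of examples; images are decoded to PIL objects by default.
    images = [resize(np.asarray(example["image"])) for example in examples]
    return {"image": tf.stack(images)}

ds = load_dataset("logasja/FDF", split="test")
tf_ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2)
tfds.benchmark(tf_ds)
```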
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6948/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6947/comments
https://api.github.com/repos/huggingface/datasets/issues/6947/events
https://github.com/huggingface/datasets/issues/6947
2,331,114,055
I_kwDODunzps6K8fpH
6,947
FileNotFoundError: error when loading C4 dataset
{ "login": "W-215", "id": 62374585, "node_id": "MDQ6VXNlcjYyMzc0NTg1", "avatar_url": "https://avatars.githubusercontent.com/u/62374585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/W-215", "html_url": "https://github.com/W-215", "followers_url": "https://api.github.com/users/W-215/followers", "following_url": "https://api.github.com/users/W-215/following{/other_user}", "gists_url": "https://api.github.com/users/W-215/gists{/gist_id}", "starred_url": "https://api.github.com/users/W-215/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/W-215/subscriptions", "organizations_url": "https://api.github.com/users/W-215/orgs", "repos_url": "https://api.github.com/users/W-215/repos", "events_url": "https://api.github.com/users/W-215/events{/privacy}", "received_events_url": "https://api.github.com/users/W-215/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "same problem here", "Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\nDownloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\nGenerating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDataset({\r\n features: ['text', 'timestamp', 'url'],\r\n num_rows: 45576\r\n})\r\n```", "> Hello,\r\n> \r\n> Are you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n> \r\n> * [Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets #6925](https://github.com/huggingface/datasets/pull/6925)\r\n> \r\n> I can't reproduce the error:\r\n> \r\n> ```python\r\n> In [1]: from datasets import load_dataset\r\n> \r\n> In [2]: ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\n> Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 41.1k/41.1k [00:00<00:00, 596kB/s]\r\n> Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 40.7M/40.7M [00:04<00:00, 8.50MB/s]\r\n> Generating validation split: 45576 examples [00:01, 44956.75 examples/s]\r\n> \r\n> In [3]: ds\r\n> Out[3]: \r\n> Dataset({\r\n> features: ['text', 'timestamp', 'url'],\r\n> num_rows: 45576\r\n> })\r\n> ```\r\nThank you for your reply,ExpectedMoreSplits was encountered in datasets version 2.12.2. 
After I updated the version, that is, datasets version 2.19.2, I encountered the FileNotFoundError problem mentioned above.", "That might be due to a corrupted cache.\r\n\r\nPlease, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n```python\r\nds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n```\r\n\r\nIt the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n", "> That might be due to a corrupted cache.\r\n> \r\n> Please, retry loading the dataset passing: `download_mode=\"force_redownload\"`\r\n> \r\n> ```python\r\n> ds = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation', download_mode=\"force_redownload\")\r\n> ```\r\n> \r\n> It the above command does not fix the issue, then you will need to fix the cache manually, by removing the corresponding directory inside `~/.cache/huggingface/`.\r\n\r\nThe two methods you mentioned above can not solve this problem, but the command line interface shows Downloading readme: 41.1kB [00:00, 281kB/s], and then FileNotFoundError appears. It is worth noting that I have no problem loading other datasets with the initial method, such as wikitext datasets" ]
2024-06-03T13:06:33
2024-06-04T12:48:40
null
NONE
null
### Describe the bug can't load c4 datasets When I replace the datasets package to 2.12.2 I get raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'} How can I fix this? ### Steps to reproduce the bug 1.from datasets import load_dataset 2.dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') 3. raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ### Expected behavior The data was successfully imported ### Environment info python version 3.9 datasets version 2.19.2
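If `download_mode="force_redownload"` does not help, the manual cache cleanup suggested in the comments could look roughly like this — the directory name follows the usual `namespace___name` layout, but verify the exact path on your machine and adjust if `HF_HOME` or `HF_DATASETS_CACHE` is set:

```python
import shutil
from pathlib import Path

# Default datasets cache location; the subdirectory name for allenai/c4 is an assumption.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"
shutil.rmtree(cache_dir / "allenai___c4", ignore_errors=True)  # remove the cached Arrow files
```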
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6947/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6946/comments
https://api.github.com/repos/huggingface/datasets/issues/6946/events
https://github.com/huggingface/datasets/pull/6946
2,330,276,848
PR_kwDODunzps5xQNao
6,946
Re-enable import sorting disabled by flake8:noqa directive when using ruff linter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004847 / 0.011353 (-0.006506) | 0.003199 / 0.011008 (-0.007810) | 0.060677 / 0.038508 (0.022169) | 0.030544 / 0.023109 (0.007435) | 0.240870 / 0.275898 (-0.035028) | 0.261320 / 0.323480 (-0.062160) | 0.002816 / 0.007986 (-0.005170) | 0.002483 / 0.004328 (-0.001845) | 0.048527 / 0.004250 (0.044277) | 0.045496 / 0.037052 (0.008444) | 0.251296 / 0.258489 (-0.007193) | 0.285746 / 0.293841 (-0.008095) | 0.025076 / 0.128546 (-0.103470) | 0.009417 / 0.075646 (-0.066229) | 0.191361 / 0.419271 (-0.227911) | 0.033778 / 0.043533 (-0.009755) | 0.235581 / 0.255139 (-0.019558) | 0.261069 / 0.283200 (-0.022131) | 0.018255 / 0.141683 (-0.123428) | 1.098437 / 1.452155 (-0.353718) | 1.127124 / 1.492716 (-0.365592) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004479 / 0.018006 (-0.013527) | 0.283706 / 0.000490 (0.283216) | 0.000214 / 0.000200 (0.000014) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018364 / 0.037411 (-0.019048) | 0.058398 / 0.014526 (0.043872) | 0.073056 / 0.176557 (-0.103501) | 0.117147 / 0.737135 (-0.619989) | 0.073683 / 0.296338 (-0.222656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.265121 / 0.215209 (0.049912) | 2.636981 / 2.077655 (0.559327) | 1.380192 / 1.504120 (-0.123928) | 1.270779 / 1.541195 (-0.270416) | 1.295729 / 1.468490 (-0.172762) | 0.523768 / 4.584777 (-4.061009) | 2.295720 / 3.745712 (-1.449992) | 2.519211 / 5.269862 (-2.750650) | 1.618712 / 4.565676 (-2.946965) | 0.058321 / 0.424275 (-0.365954) | 0.004492 / 0.007607 (-0.003115) | 0.316101 / 0.226044 (0.090057) | 3.169913 / 2.268929 (0.900984) | 1.793412 / 55.444624 (-53.651213) | 1.473784 / 6.876477 (-5.402693) | 1.565325 / 2.142072 (-0.576748) | 0.592734 / 4.805227 (-4.212493) | 0.109333 / 6.500664 (-6.391331) | 0.039063 / 0.075469 (-0.036406) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935504 / 1.841788 (-0.906284) | 10.865520 / 8.074308 (2.791212) | 9.219337 / 10.191392 (-0.972055) | 0.135284 / 0.680424 (-0.545140) | 0.013664 / 0.534201 (-0.520537) | 0.271601 / 0.579283 (-0.307682) | 0.260456 / 0.434364 (-0.173908) | 0.302931 / 0.540337 (-0.237406) | 0.414643 / 1.386936 (-0.972293) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004801 / 0.011353 (-0.006552) | 0.003092 / 0.011008 (-0.007917) | 0.046471 / 0.038508 (0.007963) | 0.031337 / 0.023109 (0.008228) | 0.258920 / 0.275898 (-0.016978) | 0.269842 / 0.323480 (-0.053638) | 0.003976 / 0.007986 (-0.004009) | 0.002661 / 0.004328 (-0.001668) | 0.045676 / 0.004250 (0.041426) | 0.038199 / 0.037052 (0.001146) | 0.277382 / 0.258489 (0.018893) | 0.289351 / 0.293841 (-0.004490) | 0.028452 / 0.128546 (-0.100094) | 0.009737 / 0.075646 (-0.065910) | 0.055201 / 0.419271 (-0.364071) | 0.032686 / 0.043533 (-0.010847) | 0.259617 / 0.255139 (0.004478) | 0.277163 / 0.283200 (-0.006037) | 0.017825 / 0.141683 (-0.123858) | 1.102797 / 1.452155 (-0.349357) | 1.105018 / 1.492716 (-0.387699) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.094844 / 0.018006 (0.076838) | 0.290519 / 0.000490 (0.290029) | 0.000211 / 0.000200 (0.000012) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021917 / 0.037411 (-0.015494) | 0.075278 / 0.014526 (0.060753) | 0.085971 / 0.176557 (-0.090586) | 0.127072 / 0.737135 (-0.610063) | 0.088244 / 0.296338 (-0.208095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276704 / 0.215209 (0.061495) | 2.736960 / 2.077655 (0.659305) | 1.519634 / 1.504120 (0.015514) | 1.403026 / 1.541195 (-0.138168) | 1.418465 / 1.468490 (-0.050025) | 0.552425 / 4.584777 (-4.032352) | 0.955244 / 3.745712 (-2.790468) | 2.556563 / 5.269862 (-2.713298) | 1.705095 / 4.565676 (-2.860582) | 0.061212 / 0.424275 (-0.363063) | 0.004707 / 0.007607 (-0.002900) | 0.326284 / 0.226044 (0.100239) | 3.253911 / 2.268929 (0.984983) | 1.868649 / 55.444624 (-53.575976) | 1.598697 / 6.876477 (-5.277780) | 1.682617 / 2.142072 (-0.459455) | 0.606379 / 4.805227 (-4.198848) | 0.114126 / 6.500664 (-6.386538) | 0.038869 / 0.075469 (-0.036601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966354 / 1.841788 (-0.875433) | 11.575918 / 8.074308 (3.501609) | 9.816597 / 10.191392 (-0.374795) | 0.141492 / 0.680424 (-0.538932) | 0.015375 / 0.534201 (-0.518826) | 0.276027 / 0.579283 (-0.303256) | 0.118979 / 0.434364 (-0.315385) | 0.313467 / 0.540337 (-0.226870) | 0.403539 / 1.386936 (-0.983397) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b59c75856d765e60b66a5216062102d001c6612 \"CML watermark\")\n" ]
2024-06-03T06:24:47
2024-06-04T10:00:08
2024-06-04T09:54:23
MEMBER
null
Re-enable the import sorting that was wrongly disabled by the `flake8: noqa` directive after switching to the `ruff` linter in the datasets-2.10.0 PR: - #5519 Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in the datasets-2.17.0 PR: - #6619 That replacement was wrong because we kept the `isort: skip` directives even though they had been disabled, first by `flake8: noqa` and then by `ruff: noqa`. See, for example, the `__init__.py` file after the linter switch: - We kept the `flake8: noqa` directive https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L1 - while we also kept the `isort: skip` directives (which were disabled) https://github.com/huggingface/datasets/blob/06ae3f678651bfbb3ca7dd3274ee2f38e0e0237e/src/datasets/__init__.py#L82-L84 Fix #6942.
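For illustration, a small sketch of how the directives interact, under the assumption that Ruff implements import sorting as rule `I001` (the imports are placeholders):

```python
# ruff: noqa
# A file-level blanket directive like the one above silences *all* Ruff rules in this
# module, including import sorting (I001) -- so the per-line "isort: skip" marker below
# no longer has any effect and quietly rots.

import sys  # isort: skip  <- only meaningful while import sorting is actually enforced
import os

print(os.getcwd(), sys.version)

# A narrower file-level alternative keeps import sorting alive while still letting a
# re-export-style module suppress one specific rule, e.g. unused imports:
#     # ruff: noqa: F401
```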
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6946/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6946", "html_url": "https://github.com/huggingface/datasets/pull/6946", "diff_url": "https://github.com/huggingface/datasets/pull/6946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6946.patch", "merged_at": "2024-06-04T09:54:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/6945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6945/comments
https://api.github.com/repos/huggingface/datasets/issues/6945/events
https://github.com/huggingface/datasets/pull/6945
2,330,224,869
PR_kwDODunzps5xQCCx
6,945
Update yanked version of minimum requests requirement
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005725 / 0.011353 (-0.005627) | 0.003788 / 0.011008 (-0.007220) | 0.063059 / 0.038508 (0.024551) | 0.031364 / 0.023109 (0.008255) | 0.259209 / 0.275898 (-0.016689) | 0.278805 / 0.323480 (-0.044675) | 0.003032 / 0.007986 (-0.004953) | 0.002633 / 0.004328 (-0.001696) | 0.049804 / 0.004250 (0.045554) | 0.046717 / 0.037052 (0.009665) | 0.267246 / 0.258489 (0.008757) | 0.299271 / 0.293841 (0.005430) | 0.027687 / 0.128546 (-0.100860) | 0.010524 / 0.075646 (-0.065123) | 0.201736 / 0.419271 (-0.217536) | 0.036192 / 0.043533 (-0.007341) | 0.264492 / 0.255139 (0.009353) | 0.280809 / 0.283200 (-0.002391) | 0.018187 / 0.141683 (-0.123496) | 1.170751 / 1.452155 (-0.281404) | 1.223450 / 1.492716 (-0.269266) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096610 / 0.018006 (0.078604) | 0.297122 / 0.000490 (0.296632) | 0.000211 / 0.000200 (0.000011) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018380 / 0.037411 (-0.019031) | 0.062214 / 0.014526 (0.047688) | 0.075833 / 0.176557 (-0.100723) | 0.121825 / 0.737135 (-0.615310) | 0.075475 / 0.296338 (-0.220864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275601 / 0.215209 (0.060392) | 2.698014 / 2.077655 (0.620359) | 1.434043 / 1.504120 (-0.070077) | 1.313217 / 1.541195 (-0.227978) | 1.339014 / 1.468490 (-0.129476) | 0.566703 / 4.584777 (-4.018074) | 2.367794 / 3.745712 (-1.377918) | 2.660787 / 5.269862 (-2.609074) | 1.738503 / 4.565676 (-2.827174) | 0.061693 / 0.424275 (-0.362582) | 0.004978 / 0.007607 (-0.002629) | 0.334719 / 0.226044 (0.108675) | 3.300889 / 2.268929 (1.031960) | 1.764493 / 55.444624 (-53.680131) | 1.475956 / 6.876477 (-5.400521) | 1.635988 / 2.142072 (-0.506084) | 0.643906 / 4.805227 (-4.161321) | 0.118002 / 6.500664 (-6.382662) | 0.042593 / 0.075469 (-0.032876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953511 / 1.841788 (-0.888276) | 11.489727 / 8.074308 (3.415419) | 9.775017 / 10.191392 (-0.416375) | 0.139864 / 0.680424 (-0.540560) | 0.014219 / 0.534201 (-0.519982) | 0.284389 / 0.579283 (-0.294894) | 0.264250 / 0.434364 (-0.170113) | 0.323471 / 0.540337 (-0.216866) | 0.415189 / 1.386936 (-0.971747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005437 / 0.011353 (-0.005916) | 0.003710 / 0.011008 (-0.007298) | 0.049940 / 0.038508 (0.011432) | 0.032565 / 0.023109 (0.009456) | 0.266374 / 0.275898 (-0.009524) | 0.288069 / 0.323480 (-0.035411) | 0.004140 / 0.007986 (-0.003845) | 0.002669 / 0.004328 (-0.001660) | 0.049646 / 0.004250 (0.045395) | 0.040926 / 0.037052 (0.003874) | 0.278805 / 0.258489 (0.020316) | 0.311396 / 0.293841 (0.017555) | 0.029363 / 0.128546 (-0.099183) | 0.010260 / 0.075646 (-0.065386) | 0.058222 / 0.419271 (-0.361049) | 0.033063 / 0.043533 (-0.010470) | 0.266798 / 0.255139 (0.011659) | 0.283091 / 0.283200 (-0.000109) | 0.017904 / 0.141683 (-0.123779) | 1.139531 / 1.452155 (-0.312624) | 1.163909 / 1.492716 (-0.328808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089063 / 0.018006 (0.071057) | 0.296757 / 0.000490 (0.296268) | 0.000202 / 0.000200 (0.000002) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022843 / 0.037411 (-0.014568) | 0.076032 / 0.014526 (0.061507) | 0.087545 / 0.176557 (-0.089012) | 0.128870 / 0.737135 (-0.608266) | 0.089359 / 0.296338 (-0.206980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285213 / 0.215209 (0.070004) | 2.854950 / 2.077655 (0.777295) | 1.539311 / 1.504120 (0.035191) | 1.413753 / 1.541195 (-0.127442) | 1.440819 / 1.468490 (-0.027671) | 0.564734 / 4.584777 (-4.020043) | 0.944924 / 3.745712 (-2.800788) | 2.703612 / 5.269862 (-2.566249) | 1.749429 / 4.565676 (-2.816247) | 0.063239 / 0.424275 (-0.361036) | 0.005024 / 0.007607 (-0.002583) | 0.340866 / 0.226044 (0.114821) | 3.359511 / 2.268929 (1.090582) | 1.895794 / 55.444624 (-53.548831) | 1.606613 / 6.876477 (-5.269864) | 1.756539 / 2.142072 (-0.385533) | 0.646553 / 4.805227 (-4.158675) | 0.121278 / 6.500664 (-6.379386) | 0.041066 / 0.075469 (-0.034403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005548 / 1.841788 (-0.836240) | 12.080103 / 8.074308 (4.005794) | 10.444822 / 10.191392 (0.253430) | 0.145024 / 0.680424 (-0.535400) | 0.015287 / 0.534201 (-0.518914) | 0.288567 / 0.579283 (-0.290716) | 0.118034 / 0.434364 (-0.316330) | 0.333474 / 0.540337 (-0.206864) | 0.421716 / 1.386936 (-0.965220) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3d95159dbd918009e1ff710dba0cd15d96d4264e \"CML watermark\")\n" ]
2024-06-03T05:45:50
2024-06-03T06:15:48
2024-06-03T06:09:43
MEMBER
null
Update yanked version of minimum requests requirement. Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/
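For context, a minimum-version pin of this kind typically lives in the package's setup metadata; the fix presumably moves the floor off the yanked release. The exact specifier below is an assumption, not a quote from the diff:

```python
# Sketch of a setup.py install_requires entry (specifier assumed, not taken from the PR).
install_requires = [
    # requests 2.32.1 was yanked on PyPI, so require the next published patch release
    "requests>=2.32.2",
]
```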
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6945/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6945", "html_url": "https://github.com/huggingface/datasets/pull/6945", "diff_url": "https://github.com/huggingface/datasets/pull/6945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6945.patch", "merged_at": "2024-06-03T06:09:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/6944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6944/comments
https://api.github.com/repos/huggingface/datasets/issues/6944/events
https://github.com/huggingface/datasets/pull/6944
2,330,207,120
PR_kwDODunzps5xP-KD
6,944
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005150 / 0.011353 (-0.006203) | 0.003663 / 0.011008 (-0.007346) | 0.062832 / 0.038508 (0.024324) | 0.031928 / 0.023109 (0.008819) | 0.246455 / 0.275898 (-0.029443) | 0.272121 / 0.323480 (-0.051359) | 0.004220 / 0.007986 (-0.003765) | 0.002756 / 0.004328 (-0.001573) | 0.050071 / 0.004250 (0.045821) | 0.046074 / 0.037052 (0.009022) | 0.259676 / 0.258489 (0.001187) | 0.290674 / 0.293841 (-0.003167) | 0.027822 / 0.128546 (-0.100724) | 0.010791 / 0.075646 (-0.064855) | 0.202827 / 0.419271 (-0.216445) | 0.037057 / 0.043533 (-0.006476) | 0.256128 / 0.255139 (0.000989) | 0.269422 / 0.283200 (-0.013777) | 0.017395 / 0.141683 (-0.124288) | 1.125919 / 1.452155 (-0.326236) | 1.177708 / 1.492716 (-0.315008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098466 / 0.018006 (0.080460) | 0.305508 / 0.000490 (0.305018) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018866 / 0.037411 (-0.018545) | 0.062079 / 0.014526 (0.047553) | 0.074670 / 0.176557 (-0.101886) | 0.121025 / 0.737135 (-0.616111) | 0.075883 / 0.296338 (-0.220455) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291880 / 0.215209 (0.076671) | 2.874064 / 2.077655 (0.796409) | 1.477040 / 1.504120 (-0.027080) | 1.356198 / 1.541195 (-0.184997) | 1.354676 / 1.468490 (-0.113814) | 0.559731 / 4.584777 (-4.025046) | 2.362746 / 3.745712 (-1.382966) | 2.678838 / 5.269862 (-2.591024) | 1.752633 / 4.565676 (-2.813044) | 0.064023 / 0.424275 (-0.360252) | 0.005035 / 0.007607 (-0.002572) | 0.354807 / 0.226044 (0.128762) | 3.424463 / 2.268929 (1.155534) | 1.810476 / 55.444624 (-53.634149) | 1.519031 / 6.876477 (-5.357446) | 1.693957 / 2.142072 (-0.448116) | 0.647987 / 4.805227 (-4.157240) | 0.118993 / 6.500664 (-6.381671) | 0.042186 / 0.075469 (-0.033283) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982565 / 1.841788 (-0.859223) | 11.645075 / 8.074308 (3.570767) | 9.588360 / 10.191392 (-0.603032) | 0.142369 / 0.680424 (-0.538055) | 0.014025 / 0.534201 (-0.520176) | 0.285668 / 0.579283 (-0.293616) | 0.265825 / 0.434364 (-0.168539) | 0.323371 / 0.540337 (-0.216966) | 0.421227 / 1.386936 (-0.965709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005587 / 0.011353 (-0.005766) | 0.003664 / 0.011008 (-0.007345) | 0.050411 / 0.038508 (0.011903) | 0.033268 / 0.023109 (0.010159) | 0.266631 / 0.275898 (-0.009267) | 0.291135 / 0.323480 (-0.032345) | 0.004275 / 0.007986 (-0.003710) | 0.002822 / 0.004328 (-0.001506) | 0.049349 / 0.004250 (0.045099) | 0.040653 / 0.037052 (0.003601) | 0.282641 / 0.258489 (0.024152) | 0.315460 / 0.293841 (0.021619) | 0.029343 / 0.128546 (-0.099203) | 0.010606 / 0.075646 (-0.065040) | 0.058783 / 0.419271 (-0.360489) | 0.033205 / 0.043533 (-0.010327) | 0.266805 / 0.255139 (0.011666) | 0.288907 / 0.283200 (0.005707) | 0.017817 / 0.141683 (-0.123866) | 1.128132 / 1.452155 (-0.324023) | 1.175120 / 1.492716 (-0.317597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095653 / 0.018006 (0.077647) | 0.304825 / 0.000490 (0.304335) | 0.000212 / 0.000200 (0.000012) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022766 / 0.037411 (-0.014645) | 0.076598 / 0.014526 (0.062072) | 0.088314 / 0.176557 (-0.088242) | 0.127888 / 0.737135 (-0.609247) | 0.090391 / 0.296338 (-0.205947) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293384 / 0.215209 (0.078175) | 2.883742 / 2.077655 (0.806087) | 1.533868 / 1.504120 (0.029748) | 1.391964 / 1.541195 (-0.149231) | 1.423732 / 1.468490 (-0.044759) | 0.575457 / 4.584777 (-4.009320) | 0.970860 / 3.745712 (-2.774852) | 2.711405 / 5.269862 (-2.558457) | 1.774468 / 4.565676 (-2.791208) | 0.064611 / 0.424275 (-0.359664) | 0.005120 / 0.007607 (-0.002487) | 0.343892 / 0.226044 (0.117847) | 3.362579 / 2.268929 (1.093650) | 1.880200 / 55.444624 (-53.564424) | 1.587435 / 6.876477 (-5.289042) | 1.756464 / 2.142072 (-0.385609) | 0.661469 / 4.805227 (-4.143759) | 0.119030 / 6.500664 (-6.381634) | 0.041704 / 0.075469 (-0.033765) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025008 / 1.841788 (-0.816780) | 12.146244 / 8.074308 (4.071936) | 10.397267 / 10.191392 (0.205875) | 0.145917 / 0.680424 (-0.534507) | 0.015779 / 0.534201 (-0.518422) | 0.287122 / 0.579283 (-0.292161) | 0.125464 / 0.434364 (-0.308900) | 0.323315 / 0.540337 (-0.217023) | 0.416761 / 1.386936 (-0.970175) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2d15a6b1871f3998986853298e4338d72891491 \"CML watermark\")\n" ]
2024-06-03T05:29:59
2024-06-03T05:37:51
2024-06-03T05:31:47
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6944/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6944", "html_url": "https://github.com/huggingface/datasets/pull/6944", "diff_url": "https://github.com/huggingface/datasets/pull/6944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6944.patch", "merged_at": "2024-06-03T05:31:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/6943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6943/comments
https://api.github.com/repos/huggingface/datasets/issues/6943/events
https://github.com/huggingface/datasets/pull/6943
2,330,176,890
PR_kwDODunzps5xP3jp
6,943
Release 2.19.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-06-03T05:01:50
2024-06-03T05:17:41
2024-06-03T05:17:40
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6943/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6943", "html_url": "https://github.com/huggingface/datasets/pull/6943", "diff_url": "https://github.com/huggingface/datasets/pull/6943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6943.patch", "merged_at": "2024-06-03T05:17:40" }
true