Dataset schema (one row per GitHub issue/PR; string columns report min/max length or number of distinct values, numeric columns their min/max value):

| Column | Type | Details |
|--------|------|---------|
| url | string | lengths 61 to 61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 75 to 75 |
| comments_url | string | lengths 70 to 70 |
| events_url | string | lengths 68 to 68 |
| html_url | string | lengths 49 to 51 |
| id | int64 | 1.73B to 1.79B |
| node_id | string | lengths 18 to 19 |
| number | int64 | 5.91k to 6.01k |
| title | string | lengths 1 to 191 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| body | string | lengths 9 to 16.9k |
| reactions | dict | |
| timeline_url | string | lengths 70 to 70 |
| performed_via_github_app | null | |
| state_reason | string | 1 distinct value |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
Row 1 (issue #6010):
https://api.github.com/repos/huggingface/datasets/issues/6010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6010/comments
https://api.github.com/repos/huggingface/datasets/issues/6010/events
https://github.com/huggingface/datasets/issues/6010
1,793,838,152
I_kwDODunzps5q68xI
6,010
Improve `Dataset`'s string representation
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-07-07T16:38:03
2023-07-07T17:00:51
null
CONTRIBUTOR
null
Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows. We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6010/timeline
null
null
null
null
false
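A minimal sketch of the kind of rich notebook representation issue #6010 asks for, assuming a hypothetical `_repr_html_`-style helper for `Dataset` (the row limit, markup, and helper name are illustrative assumptions, not the shipped implementation):

```python
from html import escape

import datasets


def dataset_repr_html(ds: datasets.Dataset, max_rows: int = 5) -> str:
    # Hypothetical helper: renders the features plus the first few rows as an
    # HTML table, the way a `Dataset._repr_html_` could for notebooks/Streamlit.
    header = "".join(f"<th>{escape(col)}</th>" for col in ds.column_names)
    body = ""
    for row in ds.select(range(min(max_rows, ds.num_rows))):  # iterating yields dicts
        cells = "".join(f"<td>{escape(str(row[col]))}</td>" for col in ds.column_names)
        body += f"<tr>{cells}</tr>"
    return (
        f"<p><b>Dataset</b>: {ds.num_rows} rows</p>"
        f"<pre>{escape(str(ds.features))}</pre>"
        f"<table><tr>{header}</tr>{body}</table>"
    )
```

IPython and Streamlit pick up such output automatically when it is exposed as a `_repr_html_` method on the class, which is what the issue proposes.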
Row 2 (pull request #6009):
https://api.github.com/repos/huggingface/datasets/issues/6009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6009/comments
https://api.github.com/repos/huggingface/datasets/issues/6009/events
https://github.com/huggingface/datasets/pull/6009
1,792,059,808
PR_kwDODunzps5U1mus
6,009
Fix cast for dictionaries with no keys
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006961 / 0.011353 (-0.004392) | 0.004390 / 0.011008 (-0.006618) | 0.103249 / 0.038508 (0.064741) | 0.048084 / 0.023109 (0.024975) | 0.351213 / 0.275898 (0.075315) | 0.416918 / 0.323480 (0.093439) | 0.005539 / 0.007986 (-0.002446) | 0.003555 / 0.004328 (-0.000774) | 0.079306 / 0.004250 (0.075055) | 0.066937 / 0.037052 (0.029884) | 0.382601 / 0.258489 (0.124112) | 0.406125 / 0.293841 (0.112284) | 0.032269 / 0.128546 (-0.096277) | 0.009133 / 0.075646 (-0.066514) | 0.354449 / 0.419271 (-0.064822) | 0.068978 / 0.043533 (0.025445) | 0.352314 / 0.255139 (0.097175) | 0.390398 / 0.283200 (0.107199) | 0.025640 / 0.141683 (-0.116043) | 1.553865 / 1.452155 (0.101710) | 1.601292 / 1.492716 (0.108576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208310 / 0.018006 (0.190303) | 0.440076 / 0.000490 (0.439586) | 0.000363 / 0.000200 (0.000163) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029173 / 0.037411 (-0.008238) | 0.111323 / 0.014526 (0.096797) | 0.123001 / 0.176557 (-0.053556) | 0.180180 / 0.737135 (-0.556955) | 0.125804 / 0.296338 (-0.170534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419919 / 0.215209 (0.204710) | 4.194515 / 2.077655 (2.116860) | 
1.881234 / 1.504120 (0.377114) | 1.672914 / 1.541195 (0.131720) | 1.723102 / 1.468490 (0.254612) | 0.543584 / 4.584777 (-4.041193) | 3.822477 / 3.745712 (0.076765) | 1.837946 / 5.269862 (-3.431915) | 1.094975 / 4.565676 (-3.470701) | 0.066788 / 0.424275 (-0.357487) | 0.011689 / 0.007607 (0.004082) | 0.520983 / 0.226044 (0.294938) | 5.209245 / 2.268929 (2.940316) | 2.392916 / 55.444624 (-53.051708) | 2.060042 / 6.876477 (-4.816434) | 2.162291 / 2.142072 (0.020219) | 0.668472 / 4.805227 (-4.136755) | 0.144373 / 6.500664 (-6.356291) | 0.066152 / 0.075469 (-0.009318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251256 / 1.841788 (-0.590532) | 15.161338 / 8.074308 (7.087030) | 14.416133 / 10.191392 (4.224741) | 0.166145 / 0.680424 (-0.514279) | 0.018168 / 0.534201 (-0.516033) | 0.433364 / 0.579283 (-0.145919) | 0.417484 / 0.434364 (-0.016880) | 0.502543 / 0.540337 (-0.037794) | 0.602904 / 1.386936 (-0.784032) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006946 / 0.011353 (-0.004407) | 0.004248 / 0.011008 (-0.006761) | 0.079707 / 0.038508 (0.041199) | 0.046226 / 0.023109 (0.023117) | 0.375864 / 0.275898 (0.099966) | 0.430740 / 0.323480 (0.107260) | 0.006222 / 0.007986 (-0.001764) | 0.003474 / 0.004328 (-0.000854) | 0.079622 / 0.004250 (0.075372) | 0.066666 / 0.037052 (0.029613) | 0.379487 / 0.258489 (0.120998) | 0.423002 / 0.293841 (0.129161) | 0.032836 / 0.128546 (-0.095710) | 0.008976 / 0.075646 (-0.066670) | 0.086578 / 0.419271 (-0.332693) | 0.055651 / 0.043533 (0.012118) | 0.360787 / 0.255139 (0.105648) | 0.384265 / 0.283200 (0.101065) | 0.025350 / 0.141683 (-0.116333) | 1.547880 / 1.452155 (0.095725) | 1.605850 / 1.492716 (0.113134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184227 / 0.018006 (0.166220) | 0.442071 / 0.000490 (0.441582) | 0.002887 / 0.000200 (0.002687) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031923 / 0.037411 (-0.005488) | 0.119093 / 0.014526 (0.104568) | 0.128704 / 0.176557 (-0.047853) | 0.187065 / 0.737135 (-0.550070) | 0.134135 / 0.296338 (-0.162204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455731 / 0.215209 (0.240522) | 4.562911 / 2.077655 (2.485256) | 2.247431 / 1.504120 (0.743311) | 2.053346 / 1.541195 (0.512151) | 2.049611 / 1.468490 (0.581121) | 0.546069 / 4.584777 (-4.038708) | 3.821852 / 3.745712 (0.076140) | 3.358497 / 5.269862 (-1.911364) | 1.667697 / 4.565676 (-2.897979) | 0.067968 / 0.424275 (-0.356307) | 0.012344 / 0.007607 (0.004737) | 0.550864 / 0.226044 (0.324820) | 5.496867 / 2.268929 (3.227939) | 2.680031 / 55.444624 (-52.764594) | 2.328673 / 6.876477 (-4.547804) | 2.436754 / 2.142072 (0.294682) | 0.681195 / 4.805227 (-4.124033) | 0.148761 / 6.500664 (-6.351904) | 0.067716 / 0.075469 (-0.007753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353798 / 1.841788 (-0.487990) | 15.992965 / 8.074308 (7.918657) | 14.051539 / 10.191392 (3.860147) | 0.181087 / 0.680424 (-0.499337) | 0.018653 / 0.534201 (-0.515548) | 0.433499 / 0.579283 (-0.145784) | 0.428845 / 0.434364 (-0.005519) | 0.501100 / 0.540337 (-0.039238) | 0.603666 / 1.386936 (-0.783270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#10cfa871a2f387fe9c6360e1873ea74c6d69ff67 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010983 / 0.011353 (-0.000370) | 0.005630 / 0.011008 (-0.005378) | 0.109967 / 0.038508 (0.071458) | 0.101580 / 0.023109 (0.078471) | 0.490205 / 0.275898 (0.214307) | 0.534653 / 0.323480 (0.211173) | 0.008365 / 0.007986 (0.000379) | 0.004317 / 0.004328 (-0.000012) | 0.082429 / 0.004250 (0.078179) | 0.080556 / 0.037052 (0.043504) | 0.494627 / 0.258489 (0.236138) | 0.544189 / 0.293841 (0.250348) | 0.049419 / 0.128546 (-0.079127) | 0.014033 / 0.075646 (-0.061613) | 0.370406 / 0.419271 (-0.048866) | 0.083468 / 0.043533 (0.039935) | 0.463829 / 0.255139 (0.208690) | 0.507516 / 0.283200 (0.224316) | 0.053266 / 0.141683 (-0.088417) | 1.778680 / 1.452155 (0.326525) | 1.916616 / 1.492716 (0.423900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267646 / 0.018006 (0.249640) | 0.617824 / 0.000490 (0.617334) | 0.007720 / 0.000200 (0.007520) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034464 / 0.037411 (-0.002948) | 0.113626 / 0.014526 (0.099100) | 0.118911 / 0.176557 (-0.057646) | 0.194701 / 0.737135 (-0.542434) | 0.123431 / 0.296338 (-0.172907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.606073 / 0.215209 (0.390863) | 6.086393 / 2.077655 (4.008738) | 2.568712 / 1.504120 (1.064593) | 2.260801 / 1.541195 (0.719606) | 2.411798 / 1.468490 (0.943307) | 0.876433 / 4.584777 (-3.708344) | 5.521280 / 3.745712 (1.775568) | 5.969722 / 5.269862 (0.699861) | 3.671028 / 4.565676 (-0.894649) | 0.097082 / 0.424275 (-0.327193) | 0.011354 / 0.007607 (0.003747) | 0.713842 / 0.226044 (0.487798) | 7.291172 / 2.268929 (5.022244) | 3.315272 / 55.444624 (-52.129352) | 2.777487 / 6.876477 (-4.098990) | 3.025449 / 2.142072 (0.883377) | 1.014115 / 4.805227 (-3.791112) | 0.217928 / 6.500664 (-6.282736) | 0.083097 / 0.075469 (0.007627) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640060 / 1.841788 (-0.201728) | 25.342172 / 8.074308 (17.267864) | 22.776510 / 10.191392 (12.585118) | 0.227300 / 0.680424 (-0.453124) | 0.032233 / 0.534201 (-0.501968) | 0.507547 / 0.579283 (-0.071736) | 0.647044 / 0.434364 (0.212680) | 0.607019 / 0.540337 
(0.066682) | 0.823548 / 1.386936 (-0.563388) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009576 / 0.011353 (-0.001777) | 0.009322 / 0.011008 (-0.001687) | 0.087184 / 0.038508 (0.048676) | 0.100795 / 0.023109 (0.077685) | 0.492138 / 0.275898 (0.216240) | 0.528386 / 0.323480 (0.204906) | 0.006689 / 0.007986 (-0.001296) | 0.004735 / 0.004328 (0.000406) | 0.085519 / 0.004250 (0.081269) | 0.072648 / 0.037052 (0.035595) | 0.496068 / 0.258489 (0.237579) | 0.549634 / 0.293841 (0.255793) | 0.049709 / 0.128546 (-0.078837) | 0.015077 / 0.075646 (-0.060569) | 0.099445 / 0.419271 (-0.319826) | 0.068080 / 0.043533 (0.024547) | 0.500426 / 0.255139 (0.245287) | 0.531437 / 0.283200 (0.248238) | 0.053176 / 0.141683 (-0.088507) | 1.827942 / 1.452155 (0.375787) | 1.914286 / 1.492716 (0.421570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247658 / 0.018006 (0.229652) | 0.590805 / 0.000490 (0.590315) | 0.005319 / 0.000200 (0.005119) | 0.000165 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036993 / 0.037411 (-0.000418) | 0.112944 / 0.014526 (0.098419) | 0.118964 / 0.176557 (-0.057593) | 0.194867 / 0.737135 (-0.542269) | 0.120816 / 0.296338 (-0.175523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638062 / 0.215209 (0.422853) | 6.246785 / 2.077655 (4.169130) | 2.957779 / 1.504120 (1.453659) | 2.739118 / 1.541195 (1.197924) | 2.795362 / 
1.468490 (1.326872) | 0.890532 / 4.584777 (-3.694245) | 5.508198 / 3.745712 (1.762486) | 5.222315 / 5.269862 (-0.047547) | 3.152731 / 4.565676 (-1.412946) | 0.098344 / 0.424275 (-0.325931) | 0.008800 / 0.007607 (0.001193) | 0.757889 / 0.226044 (0.531845) | 7.545715 / 2.268929 (5.276787) | 3.694536 / 55.444624 (-51.750088) | 3.112872 / 6.876477 (-3.763605) | 3.182358 / 2.142072 (1.040285) | 1.028171 / 4.805227 (-3.777056) | 0.215223 / 6.500664 (-6.285441) | 0.085856 / 0.075469 (0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853138 / 1.841788 (0.011350) | 25.939672 / 8.074308 (17.865364) | 23.118029 / 10.191392 (12.926637) | 0.250599 / 0.680424 (-0.429825) | 0.029942 / 0.534201 (-0.504259) | 0.508748 / 0.579283 (-0.070535) | 0.593966 / 0.434364 (0.159602) | 0.605499 / 0.540337 (0.065162) | 0.863827 / 1.386936 (-0.523109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d15950d99677e9473cdcd31cfd83aa17e313e28 \"CML watermark\")\n" ]
2023-07-06T18:48:14
2023-07-07T14:13:00
2023-07-07T14:01:13
CONTRIBUTOR
null
Fix #5677
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6009/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6009", "html_url": "https://github.com/huggingface/datasets/pull/6009", "diff_url": "https://github.com/huggingface/datasets/pull/6009.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6009.patch", "merged_at": "2023-07-07T14:01:13" }
true
Row 3 (issue #6008):
https://api.github.com/repos/huggingface/datasets/issues/6008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6008/comments
https://api.github.com/repos/huggingface/datasets/issues/6008/events
https://github.com/huggingface/datasets/issues/6008
1,789,869,344
I_kwDODunzps5qrz0g
6,008
Dataset.from_generator consistently freezes at ~1000 rows
{ "login": "andreemic", "id": 27695722, "node_id": "MDQ6VXNlcjI3Njk1NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreemic", "html_url": "https://github.com/andreemic", "followers_url": "https://api.github.com/users/andreemic/followers", "following_url": "https://api.github.com/users/andreemic/following{/other_user}", "gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreemic/subscriptions", "organizations_url": "https://api.github.com/users/andreemic/orgs", "repos_url": "https://api.github.com/users/andreemic/repos", "events_url": "https://api.github.com/users/andreemic/events{/privacy}", "received_events_url": "https://api.github.com/users/andreemic/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https://github.com/huggingface/datasets/issues/5272.", "> By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?", "It's best to use the `datasets.Image()` feature type for PIL images (to save space) :)" ]
2023-07-05T16:06:48
2023-07-06T16:32:02
null
NONE
null
### Describe the bug

Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset. I've let it run in the frozen state for far longer than it could possibly take to load the actual dataset. Let me know if you have ideas how to resolve it!

### Steps to reproduce the bug

```python
from datasets import Dataset
import numpy as np

def gen():
    for row in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}

Dataset.from_generator(gen)  # -> 90% of the time gets stuck around 1000 rows
```

### Expected behavior

Should continue and go through all the examples yielded by the generator, or at least throw an error or otherwise communicate what's going on.

### Environment info

- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6008/timeline
null
null
null
null
false
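Per the maintainer's comments in this row, a hedged sketch of the suggested workaround: pass `writer_batch_size` so rows are flushed to the on-disk Arrow file more often than the default 1000 (the point where the apparent "freeze" happens), and declare a fixed-shape `Array3D` feature so the arrays are not type-inferred element by element. The batch size of 100 is an illustrative value, not a recommendation:

```python
import datasets
import numpy as np


def gen():
    for _ in range(10_000):
        yield {"i": np.random.rand(512, 512, 3)}


# Fixed-size arrays are handled much faster via the ArrayXD feature types.
features = datasets.Features(
    {"i": datasets.Array3D(shape=(512, 512, 3), dtype="float64")}
)

# writer_batch_size controls how many examples are buffered before being
# written to disk (default 1000).
ds = datasets.Dataset.from_generator(gen, features=features, writer_batch_size=100)
```

For actual PIL images, the comments above recommend the `datasets.Image()` feature type instead, which also saves space.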
Row 4 (issue #6007):
https://api.github.com/repos/huggingface/datasets/issues/6007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6007/comments
https://api.github.com/repos/huggingface/datasets/issues/6007/events
https://github.com/huggingface/datasets/issues/6007
1,789,782,693
I_kwDODunzps5qreql
6,007
Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset
{ "login": "silverriver", "id": 2529049, "node_id": "MDQ6VXNlcjI1MjkwNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silverriver", "html_url": "https://github.com/silverriver", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "organizations_url": "https://api.github.com/users/silverriver/orgs", "repos_url": "https://api.github.com/users/silverriver/repos", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "received_events_url": "https://api.github.com/users/silverriver/received_events", "type": "User", "site_admin": false }
[ { "id": 5705560427, "node_id": "LA_kwDODunzps8AAAABVBPxaw", "url": "https://api.github.com/repos/huggingface/datasets/labels/arrow", "name": "arrow", "color": "c2e0c6", "default": false, "description": "Related to Apache Arrow" } ]
open
false
null
[]
null
[ "This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.", "I am afraid int32 is not the reason for this error.\r\n\r\nI have submitted a commit to use int64 for all ints in the dataset:\r\nhttps://huggingface.co./datasets/liwu/MNBVC/commit/857ac00d9eab96a6708ad6a82bd9001686042a9e\r\n\r\nand I have updated my env to the latest datasets release:\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.13.1\r\n- Platform: macOS-13.2.1-arm64-arm-64bit\r\n- Python version: 3.11.2\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n\r\nBut the error still exist\r\n\r\n```\r\nDownloading and preparing dataset mnbvc/news_peoples_daily to /Users/silver/.cache/huggingface/datasets/liwu___mnbvc/news_peoples_daily/0.0.1/ee380f6309fe9b8b0d1fb14d77118f132444f22c8c4b28bf5c1645312688e051...\r\nDownloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 9070.40it/s]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 2697.16it/s]\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1647, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1646 example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n-> 1647 writer.write(example, key)\r\n 1648 num_examples_progress_update += 1\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:490, in ArrowWriter.write(self, example, key, writer_batch_size)\r\n 488 self.hkey_record = []\r\n--> 490 self.write_examples_on_file()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile 
~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1656, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1655 num_shards = shard_id + 1\r\n-> 1656 num_examples, num_bytes = writer.finalize()\r\n 1657 writer.close()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)\r\n 583 self.hkey_record = []\r\n--> 584 self.write_examples_on_file()\r\n 585 # If schema is known, infer features even if no examples were written\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\nCell In[2], line 1\r\n----> 1 dataset = load_dataset(\"liwu/MNBVC\", 'news_peoples_daily', split='train')\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/load.py:1809, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1806 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1808 # Download and prepare data\r\n-> 1809 
builder_instance.download_and_prepare(\r\n 1810 download_config=download_config,\r\n 1811 download_mode=download_mode,\r\n 1812 verification_mode=verification_mode,\r\n 1813 try_from_hf_gcs=try_from_hf_gcs,\r\n 1814 num_proc=num_proc,\r\n 1815 storage_options=storage_options,\r\n 1816 )\r\n 1818 # Build dataset for splits\r\n 1819 keep_in_memory = (\r\n 1820 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1821 )\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:909, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\r\n 907 if num_proc is not None:\r\n 908 prepare_split_kwargs[\"num_proc\"] = num_proc\r\n--> 909 self._download_and_prepare(\r\n 910 dl_manager=dl_manager,\r\n 911 verification_mode=verification_mode,\r\n 912 **prepare_split_kwargs,\r\n 913 **download_and_prepare_kwargs,\r\n 914 )\r\n 915 # Sync info\r\n 916 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1670, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)\r\n 1669 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):\r\n-> 1670 super()._download_and_prepare(\r\n 1671 dl_manager,\r\n 1672 verification_mode,\r\n 1673 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS\r\n 1674 or verification_mode == VerificationMode.ALL_CHECKS,\r\n 1675 **prepare_splits_kwargs,\r\n 1676 )\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1004, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 1000 split_dict.add(split_generator.split_info)\r\n 1002 try:\r\n 1003 # Prepare split will record examples associated to the split\r\n-> 1004 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 1005 except OSError as e:\r\n 1006 raise OSError(\r\n 1007 \"Cannot find data file. 
\"\r\n 1008 + (self.manual_download_instructions or \"\")\r\n 1009 + \"\\nOriginal error:\\n\"\r\n 1010 + str(e)\r\n 1011 ) from None\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1508, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)\r\n 1506 job_id = 0\r\n 1507 with pbar:\r\n-> 1508 for job_id, done, content in self._prepare_split_single(\r\n 1509 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args\r\n 1510 ):\r\n 1511 if done:\r\n 1512 result = content\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1665, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1663 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:\r\n 1664 e = e.__context__\r\n-> 1665 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1667 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nBesides, it works fine when I am using streamed dataset.", "`simhash` is the problematic column - it has values such as `18329103420363166823` that are out of the int64 range. You can fix this by setting the feature type to `Value(\"string\")` (it's advised to use this type for hash values in general)\r\n\r\n> Besides, it works fine when I am using streamed dataset.\r\n\r\nStreaming yields Python dictionaries from the script without converting them to the Arrow representation, as this conversion step is not that cheap performance-wise.", "i am using uint64 for simhash\r\n\r\nuint64 ranges up to about 3.69E19.\r\n\r\n18329103420363166823 is less than this value.\r\n\r\nmoreover, our simhash algorithm use 64 bits. it should fit in uint64.\r\n\r\n\r\n\r\n", "You are right. I overlooked the feature type.\r\n\r\nThis is a reproducer:\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.arrow_writer import TypedSequence\r\n\r\npa.array(TypedSequence([18329103420363166823], type=Value(\"uint64\")))\r\n```\r\n\r\n`pa.array([18329103420363166823])` also fails with the same error, so it seems PyArrow does not always infer the correct type as NumPy does (`uint64` in this case).\r\n\r\nI'll report this issue in the Arrow repo.\r\n\r\n`pa.array([18329103420363166823], pa.uint64)` works, so maybe we can implement a temporary fix (supporting complex input such as `[{\"image\": pil_image, \"num\": uint64_value}]` would be hard though).\r\n\r\nIn the meantime, you should be able to bypass this error by returning the `simhash` values as NumPy scalars in the script:\r\n```python\r\ndef _generate_examples(self, ...):\r\n ...\r\n yield {..., \"simhash\": np.uint64(simhash), ...}\r\n```", "Thank you for checking this issue in detail.\r\n\r\nHowever, it seems that using `np.uint64(simhash)` does not work. The same issue still exists.\r\n\r\nhttps://huggingface.co./datasets/liwu/MNBVC/commit/1e44f1e400b7e61052647d44c99cdae3bae9c830\r\n\r\nAnyway, we decide to use string type for these simhash values. Hope pyarrow can fix their bug soon." ]
2023-07-05T15:16:50
2023-07-07T10:46:12
null
CONTRIBUTOR
null
### Describe the bug

When loading a large dataset with the following code:

```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```

we encountered the error "OverflowError: Python int too large to convert to C long". The error looks something like:

```
OverflowError: Python int too large to convert to C long

During handling of the above exception, another exception occurred:

OverflowError                             Traceback (most recent call last)
<ipython-input-7-0ed8700e662d> in <module>
----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/')

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1749             ignore_verifications=ignore_verifications,
   1750             try_from_hf_gcs=try_from_hf_gcs,
-> 1751             use_auth_token=use_auth_token,
   1752         )
   1753

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    703         if not downloaded_from_gcs:
    704             self._download_and_prepare(
--> 705                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    706             )
    707         # Sync info

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
   1225
   1226     def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227         super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
   1228
   1229     def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    791         try:
    792             # Prepare split will record examples associated to the split
--> 793             self._prepare_split(split_generator, **prepare_split_kwargs)
    794         except OSError as e:
    795             raise OSError(

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
   1219                     writer.write(example, key)
   1220             finally:
-> 1221                 num_examples, num_bytes = writer.finalize()
   1222
   1223         split_generator.split_info.num_examples = num_examples

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
    536         # Re-intializing to empty list for next batch
    537         self.hkey_record = []
--> 538         self.write_examples_on_file()
    539         if self.pa_writer is None:
    540             if self.schema:

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
    407         # Since current_examples contains (example, key) tuples
    408         batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 409         self.write_batch(batch_examples=batch_examples)
    410         self.current_examples = []
    411

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
    506         col_try_type = try_features[col] if try_features is not None and col in try_features else None
    507         typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 508         arrays.append(pa.array(typed_sequence))
    509         inferred_features[col] = typed_sequence.get_inferred_type()
    510         schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema

/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()

/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()

/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
    180         else:
    181             trying_cast_to_python_objects = True
--> 182             out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
    183         # use smaller integer precisions if possible
    184         if self.trying_int_optimization:

/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()

/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()

/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()

OverflowError: Python int too large to convert to C long
```

However, that dataset can be loaded in a streaming manner:

```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i in dataset:
    pass  # it works well
```

Another issue is reported in our dataset hub: https://huggingface.co./datasets/liwu/MNBVC/discussions/2

### Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```

### Expected behavior

The dataset can be safely loaded.

### Environment info

- `datasets` version: 2.4.0
- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9
- Python version: 3.6.8
- PyArrow version: 6.0.1
- Pandas version: 1.1.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6007/timeline
null
null
null
null
false
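The comments in this row boil the failure down to PyArrow's type inference on large unsigned integers: a value that fits in `uint64` but not in `int64` overflows the inferred signed type. A minimal sketch of the pitfall and the explicit-type workaround described above (the value is the one reported in the discussion):

```python
import numpy as np
import pyarrow as pa

simhash = 18329103420363166823  # fits in uint64, but not in int64

# PyArrow's inference tries a signed 64-bit integer and overflows:
try:
    pa.array([simhash])
except OverflowError as e:
    print("inference fails:", e)

# Passing the type explicitly works, as noted in the comments above:
arr = pa.array([simhash], type=pa.uint64())
print(arr)

# NumPy, by contrast, infers uint64 on its own:
print(np.array([simhash]).dtype)  # uint64
```

As the thread concludes, storing such hashes as `Value("string")` in the dataset features sidesteps the inference problem entirely.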
Row 5 (issue #6006):
https://api.github.com/repos/huggingface/datasets/issues/6006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6006/comments
https://api.github.com/repos/huggingface/datasets/issues/6006/events
https://github.com/huggingface/datasets/issues/6006
1,788,855,582
I_kwDODunzps5qn8Ue
6,006
NotADirectoryError when loading gigaword
{ "login": "xipq", "id": 115634163, "node_id": "U_kgDOBuRv8w", "avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xipq", "html_url": "https://github.com/xipq", "followers_url": "https://api.github.com/users/xipq/followers", "following_url": "https://api.github.com/users/xipq/following{/other_user}", "gists_url": "https://api.github.com/users/xipq/gists{/gist_id}", "starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xipq/subscriptions", "organizations_url": "https://api.github.com/users/xipq/orgs", "repos_url": "https://api.github.com/users/xipq/repos", "events_url": "https://api.github.com/users/xipq/events{/privacy}", "received_events_url": "https://api.github.com/users/xipq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence." ]
2023-07-05T06:23:41
2023-07-05T06:31:02
2023-07-05T06:31:01
NONE
null
### Describe the bug

Got a `NotADirectoryError` when loading the gigaword dataset.

### Steps to reproduce the bug

When running

```
import datasets
datasets.load_dataset('gigaword')
```

got the following exception:

```bash
Traceback (most recent call last):                                  [0/1862]
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
    for key, record in generator:
  File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples
    with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
    return function(*args, use_auth_token=use_auth_token, **kwargs)
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen
    return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "gigaword.py", line 38, in <module>
    main()
  File "gigaword.py", line 35, in main
    train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
  File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
    dataset = self.load_dataset()
  File "gigaword.py", line 29, in load_dataset
    return datasets.load_dataset('gigaword')
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
    self._download_and_prepare(
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```

### Expected behavior

Download and process the dataset successfully.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6006/timeline
null
completed
null
null
false
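The resolution reported in the comment on this row was simply clearing the corrupted download cache. A hedged sketch of forcing a clean re-download programmatically, assuming the standard `download_mode` argument of `load_dataset` (the cache path in the comment is the default location and may differ on your machine):

```python
import datasets

# Re-download and re-prepare, ignoring any corrupted files in the cache
# (roughly equivalent to deleting ~/.cache/huggingface/datasets/downloads first).
ds = datasets.load_dataset(
    "gigaword",
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```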
Row 6 (pull request #6005):
https://api.github.com/repos/huggingface/datasets/issues/6005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6005/comments
https://api.github.com/repos/huggingface/datasets/issues/6005/events
https://github.com/huggingface/datasets/pull/6005
1,788,103,576
PR_kwDODunzps5UoJ91
6,005
Drop Python 3.7 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006152 / 0.011353 (-0.005200) | 0.003916 / 0.011008 (-0.007092) | 0.097355 / 0.038508 (0.058847) | 0.037228 / 0.023109 (0.014119) | 0.315753 / 0.275898 (0.039855) | 0.387949 / 0.323480 (0.064470) | 0.004804 / 0.007986 (-0.003181) | 0.002975 / 0.004328 (-0.001353) | 0.076932 / 0.004250 (0.072682) | 0.053497 / 0.037052 (0.016445) | 0.331143 / 0.258489 (0.072654) | 0.388347 / 0.293841 (0.094506) | 0.027535 / 0.128546 (-0.101011) | 0.008509 / 0.075646 (-0.067137) | 0.312639 / 0.419271 (-0.106632) | 0.047212 / 0.043533 (0.003679) | 0.316875 / 0.255139 (0.061736) | 0.352191 / 0.283200 (0.068992) | 0.021380 / 0.141683 (-0.120303) | 1.541401 / 1.452155 (0.089247) | 1.519420 / 1.492716 (0.026704) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206332 / 0.018006 (0.188326) | 0.412252 / 0.000490 (0.411762) | 0.005119 / 0.000200 (0.004919) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023856 / 0.037411 (-0.013556) | 0.098216 / 0.014526 (0.083691) | 0.106553 / 0.176557 (-0.070003) | 0.168767 / 0.737135 (-0.568369) | 0.109244 / 0.296338 (-0.187094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457580 / 0.215209 (0.242371) | 4.583246 / 2.077655 (2.505591) | 2.296356 / 1.504120 (0.792236) | 2.096216 / 1.541195 (0.555021) | 2.159086 / 1.468490 
(0.690596) | 0.557905 / 4.584777 (-4.026872) | 3.345910 / 3.745712 (-0.399802) | 1.767436 / 5.269862 (-3.502426) | 1.021583 / 4.565676 (-3.544094) | 0.067265 / 0.424275 (-0.357011) | 0.011411 / 0.007607 (0.003804) | 0.559841 / 0.226044 (0.333797) | 5.586892 / 2.268929 (3.317963) | 2.735520 / 55.444624 (-52.709104) | 2.429393 / 6.876477 (-4.447084) | 2.544901 / 2.142072 (0.402829) | 0.667603 / 4.805227 (-4.137625) | 0.136244 / 6.500664 (-6.364421) | 0.066961 / 0.075469 (-0.008508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206529 / 1.841788 (-0.635259) | 13.988306 / 8.074308 (5.913998) | 13.481813 / 10.191392 (3.290421) | 0.161901 / 0.680424 (-0.518523) | 0.016850 / 0.534201 (-0.517351) | 0.367657 / 0.579283 (-0.211626) | 0.393343 / 0.434364 (-0.041021) | 0.465288 / 0.540337 (-0.075050) | 0.559888 / 1.386936 (-0.827048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005956 / 0.011353 (-0.005397) | 0.003734 / 0.011008 (-0.007274) | 0.077841 / 0.038508 (0.039333) | 0.036532 / 0.023109 (0.013422) | 0.438923 / 0.275898 (0.163025) | 0.490133 / 0.323480 (0.166653) | 0.004651 / 0.007986 (-0.003335) | 0.002881 / 0.004328 (-0.001448) | 0.077868 / 0.004250 (0.073618) | 0.051700 / 0.037052 (0.014647) | 0.448018 / 0.258489 (0.189529) | 0.500304 / 0.293841 (0.206464) | 0.029051 / 0.128546 (-0.099496) | 0.008498 / 0.075646 (-0.067148) | 0.082932 / 0.419271 (-0.336339) | 0.043665 / 0.043533 (0.000132) | 0.431613 / 0.255139 (0.176474) | 0.458749 / 0.283200 (0.175549) | 0.021951 / 0.141683 (-0.119731) | 1.556043 / 1.452155 (0.103888) | 1.588391 / 1.492716 (0.095675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220674 / 0.018006 (0.202667) | 0.415408 / 0.000490 (0.414918) | 0.002613 / 0.000200 (0.002413) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025548 / 0.037411 (-0.011863) | 0.103633 / 0.014526 (0.089107) | 0.115193 / 0.176557 (-0.061364) | 0.163971 / 0.737135 (-0.573164) | 0.114754 / 0.296338 (-0.181585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456823 / 0.215209 (0.241614) | 4.569950 / 2.077655 (2.492296) | 2.196339 / 1.504120 (0.692219) | 1.985822 / 1.541195 (0.444628) | 2.044083 / 1.468490 (0.575593) | 0.567919 / 4.584777 (-4.016858) | 3.397515 / 3.745712 (-0.348197) | 1.741087 / 5.269862 (-3.528775) | 1.041237 / 4.565676 (-3.524440) | 0.068963 / 0.424275 (-0.355313) | 0.011677 / 0.007607 (0.004070) | 0.565010 / 0.226044 (0.338966) | 5.625886 / 2.268929 (3.356957) | 2.670658 / 55.444624 (-52.773967) | 2.300279 / 6.876477 (-4.576198) | 2.392178 / 2.142072 (0.250106) | 0.680226 / 4.805227 (-4.125001) | 0.139119 / 6.500664 (-6.361545) | 0.067953 / 0.075469 (-0.007516) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303280 / 1.841788 (-0.538507) | 14.458686 / 8.074308 (6.384378) | 14.409369 / 10.191392 (4.217977) | 0.144581 / 0.680424 (-0.535843) | 0.016634 / 0.534201 (-0.517567) | 0.364607 / 0.579283 (-0.214676) | 0.394521 / 0.434364 (-0.039843) | 0.433417 / 0.540337 (-0.106921) | 0.527127 / 1.386936 (-0.859809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04a36f9546484dceadb84a133c1a460281d018f8 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006245 / 0.011353 (-0.005108) | 0.003871 / 0.011008 (-0.007138) | 0.098823 / 0.038508 (0.060315) | 0.039853 / 0.023109 (0.016744) | 0.314989 / 0.275898 (0.039091) | 0.376733 / 0.323480 (0.053254) | 0.004754 / 0.007986 (-0.003232) | 0.002971 / 0.004328 (-0.001357) | 0.078451 / 0.004250 (0.074201) | 0.053160 / 0.037052 (0.016107) | 0.324443 / 0.258489 (0.065954) | 0.361488 / 0.293841 (0.067647) | 0.027942 / 0.128546 (-0.100604) | 0.008535 / 0.075646 (-0.067111) | 0.315526 / 0.419271 (-0.103745) | 0.045706 / 0.043533 (0.002174) | 0.329614 / 0.255139 (0.074475) | 0.336339 / 0.283200 (0.053139) | 0.021278 / 0.141683 (-0.120405) | 1.529710 / 1.452155 (0.077555) | 1.566833 / 1.492716 (0.074116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215263 / 0.018006 (0.197257) | 0.440320 / 0.000490 (0.439830) | 0.002627 / 0.000200 (0.002427) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023971 / 0.037411 (-0.013441) | 0.100549 / 0.014526 (0.086023) | 0.106995 / 0.176557 (-0.069561) | 0.169630 / 0.737135 (-0.567505) | 0.111614 / 0.296338 (-0.184724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424911 / 0.215209 (0.209702) | 4.246920 / 2.077655 (2.169266) | 1.923321 / 1.504120 (0.419202) | 1.714795 / 1.541195 (0.173600) | 1.772906 / 1.468490 (0.304416) | 0.554676 / 4.584777 (-4.030101) | 3.478896 / 3.745712 (-0.266816) | 2.800494 / 5.269862 (-2.469368) | 1.382630 / 4.565676 (-3.183047) | 0.067271 / 0.424275 (-0.357004) | 0.010967 / 0.007607 (0.003360) | 0.526769 / 0.226044 (0.300725) | 5.288564 / 2.268929 (3.019636) | 2.337459 / 55.444624 (-53.107165) | 1.999975 / 6.876477 (-4.876502) | 2.102680 / 2.142072 (-0.039392) | 0.672181 / 4.805227 (-4.133046) | 0.135097 / 6.500664 (-6.365567) | 0.066950 / 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264365 / 1.841788 (-0.577423) | 14.282440 / 8.074308 (6.208132) | 14.220200 / 10.191392 (4.028808) | 0.139055 / 0.680424 (-0.541369) | 0.016681 / 0.534201 (-0.517520) | 0.367936 / 0.579283 (-0.211348) | 0.393959 / 0.434364 (-0.040404) | 0.424438 / 0.540337 (-0.115900) | 0.508065 / 
1.386936 (-0.878872) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006514 / 0.011353 (-0.004839) | 0.003890 / 0.011008 (-0.007118) | 0.078871 / 0.038508 (0.040363) | 0.038080 / 0.023109 (0.014971) | 0.358282 / 0.275898 (0.082384) | 0.430654 / 0.323480 (0.107174) | 0.005712 / 0.007986 (-0.002273) | 0.003030 / 0.004328 (-0.001299) | 0.078636 / 0.004250 (0.074386) | 0.057771 / 0.037052 (0.020719) | 0.368814 / 0.258489 (0.110325) | 0.437047 / 0.293841 (0.143206) | 0.029470 / 0.128546 (-0.099076) | 0.008523 / 0.075646 (-0.067124) | 0.083334 / 0.419271 (-0.335938) | 0.044505 / 0.043533 (0.000972) | 0.357484 / 0.255139 (0.102345) | 0.393839 / 0.283200 (0.110639) | 0.023340 / 0.141683 (-0.118343) | 1.561033 / 1.452155 (0.108878) | 1.595560 / 1.492716 (0.102844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204149 / 0.018006 (0.186143) | 0.442747 / 0.000490 (0.442257) | 0.003105 / 0.000200 (0.002905) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027002 / 0.037411 (-0.010409) | 0.105595 / 0.014526 (0.091070) | 0.108695 / 0.176557 (-0.067861) | 0.163182 / 0.737135 (-0.573953) | 0.114999 / 0.296338 (-0.181339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483713 / 0.215209 (0.268504) | 4.836063 / 2.077655 (2.758409) | 2.488072 / 1.504120 (0.983952) | 2.289556 / 1.541195 (0.748361) | 2.342912 / 1.468490 (0.874422) | 
0.565937 / 4.584777 (-4.018840) | 3.479085 / 3.745712 (-0.266627) | 1.770922 / 5.269862 (-3.498940) | 1.046084 / 4.565676 (-3.519592) | 0.067857 / 0.424275 (-0.356418) | 0.011283 / 0.007607 (0.003676) | 0.592966 / 0.226044 (0.366921) | 5.932842 / 2.268929 (3.663914) | 2.956252 / 55.444624 (-52.488372) | 2.602704 / 6.876477 (-4.273772) | 2.715625 / 2.142072 (0.573552) | 0.674299 / 4.805227 (-4.130929) | 0.136039 / 6.500664 (-6.364625) | 0.067629 / 0.075469 (-0.007840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333734 / 1.841788 (-0.508054) | 14.561943 / 8.074308 (6.487634) | 14.455385 / 10.191392 (4.263993) | 0.132020 / 0.680424 (-0.548404) | 0.016893 / 0.534201 (-0.517308) | 0.367146 / 0.579283 (-0.212137) | 0.399623 / 0.434364 (-0.034741) | 0.432658 / 0.540337 (-0.107680) | 0.530475 / 1.386936 (-0.856461) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18da5adb22b2b403b8d8ae673192746d2ed7e9f9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006045 / 0.011353 (-0.005308) | 0.003906 / 0.011008 (-0.007103) | 0.097558 / 0.038508 (0.059050) | 0.038827 / 0.023109 (0.015718) | 0.393564 / 0.275898 (0.117666) | 0.442459 / 0.323480 (0.118980) | 0.004792 / 0.007986 (-0.003194) | 0.002984 / 0.004328 (-0.001345) | 0.076419 / 0.004250 (0.072169) | 0.053606 / 0.037052 (0.016554) | 0.409743 / 0.258489 (0.151254) | 0.445753 / 0.293841 (0.151912) | 0.027753 / 0.128546 (-0.100793) | 0.008428 / 0.075646 (-0.067219) | 0.310267 / 0.419271 (-0.109004) | 0.057582 / 0.043533 (0.014049) | 0.396624 / 0.255139 (0.141485) | 0.416288 / 0.283200 (0.133089) | 0.029048 / 0.141683 (-0.112635) | 1.495362 / 1.452155 (0.043207) | 1.546331 / 1.492716 (0.053615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203832 / 0.018006 (0.185826) | 0.423649 / 0.000490 (0.423160) | 0.004533 / 0.000200 (0.004333) | 0.000076 
/ 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023084 / 0.037411 (-0.014328) | 0.100503 / 0.014526 (0.085977) | 0.105058 / 0.176557 (-0.071499) | 0.168506 / 0.737135 (-0.568629) | 0.112019 / 0.296338 (-0.184320) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425877 / 0.215209 (0.210668) | 4.251278 / 2.077655 (2.173624) | 1.931339 / 1.504120 (0.427219) | 1.730578 / 1.541195 (0.189383) | 1.750637 / 1.468490 (0.282147) | 0.559307 / 4.584777 (-4.025470) | 3.461665 / 3.745712 (-0.284047) | 2.826959 / 5.269862 (-2.442903) | 1.418448 / 4.565676 (-3.147229) | 0.067881 / 0.424275 (-0.356394) | 0.011394 / 0.007607 (0.003787) | 0.533226 / 0.226044 (0.307181) | 5.341849 / 2.268929 (3.072921) | 2.367832 / 55.444624 (-53.076792) | 2.027240 / 6.876477 (-4.849236) | 2.095852 / 2.142072 (-0.046220) | 0.673790 / 4.805227 (-4.131437) | 0.136044 / 6.500664 (-6.364620) | 0.066350 / 0.075469 (-0.009119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203740 / 1.841788 (-0.638048) | 13.720879 / 8.074308 (5.646571) | 13.405939 / 10.191392 (3.214547) | 0.146792 / 0.680424 (-0.533632) | 0.016844 / 0.534201 (-0.517357) | 0.373455 / 0.579283 (-0.205828) | 0.394596 / 0.434364 (-0.039768) | 0.464715 / 0.540337 (-0.075623) | 0.558931 / 1.386936 (-0.828005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003817 / 0.011008 (-0.007191) | 0.077494 / 0.038508 (0.038985) | 0.037507 / 0.023109 (0.014398) | 0.387030 / 0.275898 (0.111132) | 0.437352 / 0.323480 (0.113872) | 0.004810 / 0.007986 (-0.003176) | 0.002935 / 0.004328 (-0.001394) | 0.077143 / 0.004250 (0.072892) | 0.053986 / 0.037052 (0.016933) | 0.393164 / 0.258489 (0.134675) | 0.449603 / 0.293841 (0.155762) | 0.029303 / 0.128546 (-0.099244) | 0.008481 / 0.075646 (-0.067165) | 0.083363 / 0.419271 (-0.335908) | 0.043877 / 0.043533 (0.000344) | 0.378175 / 0.255139 (0.123036) | 0.403996 / 0.283200 (0.120797) | 0.021688 / 0.141683 (-0.119995) | 1.541606 / 1.452155 (0.089452) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236759 / 0.018006 (0.218752) | 0.416221 / 0.000490 (0.415732) | 0.000862 / 0.000200 (0.000662) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025543 / 0.037411 (-0.011868) | 0.101731 / 0.014526 (0.087206) | 0.108482 / 0.176557 (-0.068075) | 0.160290 / 0.737135 (-0.576845) | 0.111392 / 0.296338 (-0.184946) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457767 / 0.215209 (0.242558) | 4.565976 / 2.077655 (2.488321) | 2.245413 / 1.504120 (0.741294) | 2.031458 / 1.541195 (0.490264) | 2.073193 / 1.468490 (0.604702) | 0.560461 / 4.584777 (-4.024316) | 3.422536 / 3.745712 (-0.323176) | 2.977017 / 5.269862 (-2.292845) | 1.377021 / 4.565676 (-3.188655) | 0.068444 / 0.424275 (-0.355831) | 0.011036 / 0.007607 (0.003429) | 0.571501 / 0.226044 (0.345456) | 5.702652 / 2.268929 (3.433723) | 2.727132 / 55.444624 (-52.717492) | 2.399269 / 6.876477 (-4.477208) | 2.574281 / 2.142072 (0.432208) | 0.682600 / 4.805227 (-4.122627) | 0.136943 / 6.500664 (-6.363722) | 0.067126 / 0.075469 (-0.008343) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322196 / 1.841788 (-0.519592) | 14.239509 / 8.074308 (6.165201) | 14.235779 / 10.191392 (4.044387) | 0.148262 / 0.680424 (-0.532162) | 0.016566 / 0.534201 (-0.517635) | 0.364034 / 0.579283 (-0.215249) | 0.399157 / 0.434364 (-0.035207) | 0.426348 / 0.540337 (-0.113990) | 0.520804 / 1.386936 (-0.866132) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f57aae06bd325d76cb70cb774450f3a66f169cf \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007808 / 0.011353 (-0.003545) | 0.004706 / 0.011008 (-0.006303) | 0.100530 / 0.038508 (0.062022) | 0.052052 / 0.023109 (0.028943) | 0.419300 / 0.275898 (0.143402) | 0.488451 / 0.323480 (0.164971) | 0.006350 / 0.007986 (-0.001636) | 0.003875 / 0.004328 (-0.000453) | 0.076489 / 0.004250 (0.072238) | 0.077554 / 0.037052 (0.040502) | 0.435863 / 0.258489 (0.177373) | 0.483241 / 0.293841 (0.189400) | 0.037518 / 0.128546 (-0.091028) | 0.009857 / 0.075646 (-0.065789) | 0.340933 / 0.419271 (-0.078339) | 0.087046 / 0.043533 (0.043514) | 0.410721 / 0.255139 (0.155582) | 0.428995 / 0.283200 (0.145795) | 0.041701 / 0.141683 (-0.099982) | 1.821017 / 1.452155 (0.368862) | 1.837021 / 1.492716 (0.344305) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228444 / 0.018006 (0.210438) | 0.480446 / 0.000490 (0.479956) | 0.004963 / 0.000200 (0.004763) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032485 / 0.037411 (-0.004926) | 0.096500 / 0.014526 (0.081974) | 0.111547 / 0.176557 (-0.065010) | 0.178842 / 0.737135 (-0.558294) | 0.111099 / 0.296338 (-0.185240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467159 / 0.215209 
(0.251950) | 4.701676 / 2.077655 (2.624021) | 2.390560 / 1.504120 (0.886440) | 2.197722 / 1.541195 (0.656528) | 2.264705 / 1.468490 (0.796215) | 0.568667 / 4.584777 (-4.016110) | 4.200724 / 3.745712 (0.455012) | 3.777625 / 5.269862 (-1.492236) | 2.372451 / 4.565676 (-2.193225) | 0.067562 / 0.424275 (-0.356714) | 0.008947 / 0.007607 (0.001340) | 0.556910 / 0.226044 (0.330865) | 5.528927 / 2.268929 (3.259998) | 2.902780 / 55.444624 (-52.541844) | 2.507933 / 6.876477 (-4.368544) | 2.734627 / 2.142072 (0.592554) | 0.683305 / 4.805227 (-4.121922) | 0.158288 / 6.500664 (-6.342376) | 0.071252 / 0.075469 (-0.004217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.487502 / 1.841788 (-0.354286) | 22.193341 / 8.074308 (14.119033) | 15.922607 / 10.191392 (5.731215) | 0.172189 / 0.680424 (-0.508235) | 0.021502 / 0.534201 (-0.512699) | 0.471198 / 0.579283 (-0.108085) | 0.475979 / 0.434364 (0.041615) | 0.544675 / 0.540337 (0.004338) | 0.756102 / 1.386936 (-0.630834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003717) | 0.004614 / 0.011008 (-0.006394) | 0.075852 / 0.038508 (0.037344) | 0.049700 / 0.023109 (0.026591) | 0.425957 / 0.275898 (0.150059) | 0.512590 / 0.323480 (0.189110) | 0.006921 / 0.007986 (-0.001065) | 0.003714 / 0.004328 (-0.000615) | 0.075536 / 0.004250 (0.071286) | 0.070206 / 0.037052 (0.033153) | 0.455706 / 0.258489 (0.197217) | 0.512231 / 0.293841 (0.218390) | 0.036685 / 0.128546 (-0.091861) | 0.009793 / 0.075646 (-0.065853) | 0.084208 / 0.419271 (-0.335064) | 0.065262 / 0.043533 (0.021729) | 0.423761 / 0.255139 (0.168622) | 0.456791 / 0.283200 (0.173591) | 0.044539 / 0.141683 (-0.097144) | 1.797029 / 1.452155 (0.344874) | 1.864124 / 1.492716 (0.371408) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.366840 / 0.018006 (0.348834) | 0.479254 / 0.000490 (0.478765) | 0.070383 / 0.000200 (0.070183) | 0.000762 / 0.000054 
(0.000707) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.103140 / 0.014526 (0.088614) | 0.117099 / 0.176557 (-0.059457) | 0.178532 / 0.737135 (-0.558603) | 0.120092 / 0.296338 (-0.176247) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492993 / 0.215209 (0.277784) | 4.878776 / 2.077655 (2.801121) | 2.566666 / 1.504120 (1.062547) | 2.356383 / 1.541195 (0.815188) | 2.454723 / 1.468490 (0.986233) | 0.571432 / 4.584777 (-4.013345) | 4.240554 / 3.745712 (0.494842) | 7.509259 / 5.269862 (2.239398) | 4.040294 / 4.565676 (-0.525382) | 0.067409 / 0.424275 (-0.356866) | 0.008657 / 0.007607 (0.001050) | 0.585751 / 0.226044 (0.359707) | 5.967668 / 2.268929 (3.698739) | 3.195573 / 55.444624 (-52.249052) | 2.839772 / 6.876477 (-4.036704) | 2.806319 / 2.142072 (0.664246) | 0.681502 / 4.805227 (-4.123725) | 0.158673 / 6.500664 (-6.341991) | 0.073224 / 0.075469 (-0.002245) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623335 / 1.841788 (-0.218453) | 22.490806 / 8.074308 (14.416498) | 16.762435 / 10.191392 (6.571043) | 0.180961 / 0.680424 (-0.499463) | 0.022716 / 0.534201 (-0.511485) | 0.472910 / 0.579283 (-0.106373) | 0.471616 / 0.434364 (0.037252) | 0.548192 / 0.540337 (0.007854) | 0.734357 / 1.386936 (-0.652579) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0498b47a00153d4730352b6595fc51ab054fb95 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005858 / 0.011353 (-0.005495) | 0.003512 / 0.011008 (-0.007497) | 0.079739 / 0.038508 (0.041231) | 0.057736 / 0.023109 (0.034627) | 0.317640 / 0.275898 (0.041742) | 0.354157 / 0.323480 (0.030677) | 0.004772 / 0.007986 (-0.003214) | 0.002824 / 0.004328 (-0.001504) | 0.063288 / 0.004250 (0.059037) | 0.049542 / 0.037052 (0.012489) | 0.323974 / 0.258489 (0.065485) | 0.372149 / 0.293841 (0.078308) | 0.026841 / 0.128546 (-0.101705) | 0.007846 / 0.075646 (-0.067800) | 0.262546 / 0.419271 (-0.156725) | 0.051952 / 0.043533 (0.008420) | 0.319439 / 0.255139 (0.064300) | 0.343862 / 0.283200 (0.060663) | 0.027021 / 0.141683 (-0.114662) | 1.445211 / 1.452155 (-0.006944) | 1.485006 / 1.492716 (-0.007711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183174 / 0.018006 (0.165167) | 0.422794 / 0.000490 (0.422304) | 0.004148 / 0.000200 (0.003948) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023037 / 0.037411 (-0.014374) | 0.071300 / 0.014526 (0.056775) | 0.083022 / 0.176557 (-0.093535) | 0.146215 / 0.737135 (-0.590920) | 0.082549 / 0.296338 (-0.213789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422846 / 0.215209 (0.207637) | 4.215280 / 2.077655 (2.137626) | 2.256802 / 1.504120 (0.752682) | 2.056867 / 1.541195 (0.515673) | 2.102478 / 1.468490 (0.633988) | 0.497552 / 4.584777 (-4.087225) | 3.049716 / 3.745712 (-0.695996) | 4.209227 / 5.269862 (-1.060635) | 2.599947 / 4.565676 (-1.965730) | 0.059131 / 0.424275 (-0.365144) | 0.006459 / 0.007607 (-0.001148) | 0.495047 / 0.226044 (0.269003) | 4.952332 / 2.268929 (2.683404) | 2.675260 / 55.444624 (-52.769365) | 2.333223 / 6.876477 (-4.543254) | 2.449573 / 2.142072 (0.307500) | 0.583420 / 4.805227 (-4.221807) | 0.125140 / 6.500664 (-6.375524) | 0.060209 / 0.075469 (-0.015260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215033 / 1.841788 (-0.626755) | 18.101107 / 8.074308 (10.026799) | 13.489222 / 10.191392 (3.297830) | 0.147122 / 0.680424 (-0.533302) | 0.016567 / 0.534201 (-0.517634) | 0.329909 / 0.579283 (-0.249374) | 0.340952 / 0.434364 (-0.093412) | 
0.379166 / 0.540337 (-0.161172) | 0.510767 / 1.386936 (-0.876169) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005942 / 0.011353 (-0.005411) | 0.003628 / 0.011008 (-0.007380) | 0.061975 / 0.038508 (0.023467) | 0.058331 / 0.023109 (0.035221) | 0.393277 / 0.275898 (0.117379) | 0.410740 / 0.323480 (0.087261) | 0.004546 / 0.007986 (-0.003440) | 0.002826 / 0.004328 (-0.001503) | 0.062216 / 0.004250 (0.057966) | 0.049801 / 0.037052 (0.012748) | 0.394070 / 0.258489 (0.135581) | 0.414407 / 0.293841 (0.120566) | 0.027161 / 0.128546 (-0.101385) | 0.007901 / 0.075646 (-0.067746) | 0.066778 / 0.419271 (-0.352493) | 0.041354 / 0.043533 (-0.002179) | 0.379432 / 0.255139 (0.124293) | 0.402966 / 0.283200 (0.119766) | 0.020279 / 0.141683 (-0.121404) | 1.416986 / 1.452155 (-0.035169) | 1.474335 / 1.492716 (-0.018382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226147 / 0.018006 (0.208140) | 0.404361 / 0.000490 (0.403871) | 0.000358 / 0.000200 (0.000158) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025105 / 0.037411 (-0.012306) | 0.075849 / 0.014526 (0.061323) | 0.084781 / 0.176557 (-0.091775) | 0.137415 / 0.737135 (-0.599720) | 0.086288 / 0.296338 (-0.210051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445925 / 0.215209 (0.230716) | 4.453478 / 2.077655 (2.375823) | 2.419048 / 1.504120 (0.914928) | 2.246363 / 
1.541195 (0.705168) | 2.304022 / 1.468490 (0.835532) | 0.499132 / 4.584777 (-4.085645) | 3.001336 / 3.745712 (-0.744376) | 2.902593 / 5.269862 (-2.367269) | 1.819843 / 4.565676 (-2.745834) | 0.057210 / 0.424275 (-0.367065) | 0.006338 / 0.007607 (-0.001269) | 0.523280 / 0.226044 (0.297236) | 5.235969 / 2.268929 (2.967040) | 2.897585 / 55.444624 (-52.547039) | 2.541586 / 6.876477 (-4.334891) | 2.564233 / 2.142072 (0.422160) | 0.584714 / 4.805227 (-4.220513) | 0.124611 / 6.500664 (-6.376053) | 0.061774 / 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.349799 / 1.841788 (-0.491988) | 18.225076 / 8.074308 (10.150768) | 13.781518 / 10.191392 (3.590126) | 0.130562 / 0.680424 (-0.549862) | 0.016434 / 0.534201 (-0.517767) | 0.331607 / 0.579283 (-0.247676) | 0.343456 / 0.434364 (-0.090908) | 0.380437 / 0.540337 (-0.159900) | 0.522793 / 1.386936 (-0.864143) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f0a3dbbd2e7ace162346d95ec27db674e80c1e23 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013721 / 0.011353 (0.002368) | 0.005715 / 0.011008 (-0.005293) | 0.090116 / 0.038508 (0.051608) | 0.087185 / 0.023109 (0.064075) | 0.427813 / 0.275898 (0.151915) | 0.390614 / 0.323480 (0.067135) | 0.006976 / 0.007986 (-0.001009) | 0.004231 / 0.004328 (-0.000098) | 0.078320 / 0.004250 (0.074070) | 0.066235 / 0.037052 (0.029183) | 0.439904 / 0.258489 (0.181415) | 0.424119 / 0.293841 (0.130278) | 0.050362 / 0.128546 (-0.078184) | 0.014992 / 0.075646 (-0.060654) | 0.293519 / 0.419271 (-0.125753) | 0.066906 / 0.043533 (0.023373) | 0.449657 / 0.255139 (0.194518) | 0.393800 / 0.283200 (0.110600) | 0.032258 / 0.141683 (-0.109425) | 1.539534 / 1.452155 (0.087379) | 1.675292 / 1.492716 (0.182576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210515 / 0.018006 (0.192508) | 0.506817 / 
0.000490 (0.506327) | 0.001938 / 0.000200 (0.001738) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026019 / 0.037411 (-0.011393) | 0.080635 / 0.014526 (0.066109) | 0.103050 / 0.176557 (-0.073507) | 0.160597 / 0.737135 (-0.576538) | 0.095844 / 0.296338 (-0.200495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506359 / 0.215209 (0.291150) | 5.041586 / 2.077655 (2.963931) | 2.198288 / 1.504120 (0.694168) | 1.987544 / 1.541195 (0.446349) | 1.866790 / 1.468490 (0.398300) | 0.681642 / 4.584777 (-3.903135) | 4.719306 / 3.745712 (0.973593) | 7.669869 / 5.269862 (2.400008) | 4.466082 / 4.565676 (-0.099595) | 0.092974 / 0.424275 (-0.331301) | 0.008196 / 0.007607 (0.000589) | 0.707656 / 0.226044 (0.481612) | 6.974507 / 2.268929 (4.705579) | 3.254206 / 55.444624 (-52.190418) | 2.499019 / 6.876477 (-4.377457) | 2.509089 / 2.142072 (0.367017) | 0.915952 / 4.805227 (-3.889276) | 0.192119 / 6.500664 (-6.308545) | 0.065473 / 0.075469 (-0.009996) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.309078 / 1.841788 (-0.532710) | 19.660348 / 8.074308 (11.586040) | 16.659582 / 10.191392 (6.468190) | 0.194315 / 0.680424 (-0.486109) | 0.027773 / 0.534201 (-0.506428) | 0.401241 / 0.579283 (-0.178042) | 0.515799 / 0.434364 (0.081435) | 0.488772 / 0.540337 (-0.051566) | 0.604790 / 1.386936 (-0.782146) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006823 / 0.011353 (-0.004530) | 0.003940 / 0.011008 (-0.007068) | 0.061533 / 0.038508 (0.023025) | 0.065241 / 0.023109 (0.042132) | 0.411790 / 0.275898 (0.135892) | 0.475720 / 0.323480 (0.152241) | 0.005376 / 0.007986 (-0.002609) | 0.003433 / 0.004328 (-0.000895) | 0.065703 / 0.004250 (0.061452) | 0.050736 / 0.037052 (0.013683) | 0.435890 / 0.258489 (0.177401) | 0.436698 / 0.293841 (0.142857) | 0.040357 / 0.128546 (-0.088189) | 0.011578 / 0.075646 (-0.064069) | 0.072831 / 0.419271 (-0.346440) | 0.055698 / 0.043533 (0.012165) | 0.408225 / 0.255139 (0.153086) | 0.439551 / 0.283200 (0.156352) | 0.030469 / 0.141683 (-0.111214) | 1.443866 / 1.452155 (-0.008289) | 1.502022 / 1.492716 (0.009306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290338 / 0.018006 (0.272332) | 0.540726 / 0.000490 (0.540236) | 0.003244 / 0.000200 (0.003044) | 0.000170 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030865 / 0.037411 (-0.006547) | 0.090866 / 0.014526 (0.076340) | 0.106224 / 0.176557 (-0.070332) | 0.166583 / 0.737135 (-0.570553) | 0.104448 / 0.296338 (-0.191891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.518025 / 0.215209 (0.302816) | 6.027065 / 2.077655 (3.949410) | 2.671840 / 1.504120 (1.167720) | 2.273949 / 1.541195 (0.732754) | 2.414892 / 1.468490 (0.946402) | 0.774318 / 4.584777 (-3.810459) | 5.020364 / 3.745712 (1.274652) | 4.146927 / 5.269862 (-1.122934) | 2.584598 / 4.565676 (-1.981078) | 0.089519 / 0.424275 (-0.334756) | 0.009181 / 0.007607 (0.001574) | 0.654467 / 0.226044 (0.428423) | 6.421595 / 2.268929 (4.152666) | 3.091589 / 55.444624 (-52.353036) | 2.554798 / 6.876477 (-4.321679) | 2.441354 / 2.142072 (0.299282) | 0.943386 / 4.805227 (-3.861841) | 0.173641 / 6.500664 (-6.327023) | 0.072209 / 0.075469 (-0.003260) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.557147 / 1.841788 (-0.284641) | 19.980747 / 8.074308 (11.906439) | 17.816813 / 10.191392 (7.625421) | 0.212078 / 0.680424 (-0.468346) | 0.025435 / 0.534201 (-0.508766) | 0.396200 / 0.579283 (-0.183084) | 0.546249 / 0.434364 (0.111885) | 0.459632 / 0.540337 (-0.080705) | 0.616548 / 1.386936 (-0.770388) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#535e972a70a3d4f8490a7e1a77ac43d5a4ab2655 \"CML watermark\")\n" ]
2023-07-04T15:02:37
2023-07-06T15:32:41
2023-07-06T15:22:43
CONTRIBUTOR
null
`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :). (Based on the stats, it seems that fewer than 10% of users run `datasets` with Python 3.7.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6005/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6005", "html_url": "https://github.com/huggingface/datasets/pull/6005", "diff_url": "https://github.com/huggingface/datasets/pull/6005.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6005.patch", "merged_at": "2023-07-06T15:22:43" }
true
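The body of PR #6005 above gives the motivation for dropping Python 3.7 but not the mechanics, so here is a minimal sketch of what such a change typically looks like at the packaging level. This is not the PR's actual diff: the package name, version, and classifier list below are hypothetical placeholders. `python_requires` is the standard setuptools field that makes pip refuse to install the distribution on older interpreters.

```python
# Minimal sketch of raising the minimum supported Python version.
# Assumes a setuptools-based project; names and versions are placeholders,
# NOT taken from the real PR #6005 diff.
from setuptools import find_packages, setup

setup(
    name="my_datasets_like_package",  # hypothetical name for illustration
    version="0.0.1",                  # placeholder version
    packages=find_packages(),
    # Raising the floor from 3.7 to 3.8: pip will now refuse to install
    # this distribution on a Python 3.7 interpreter.
    python_requires=">=3.8.0",
    # The 3.7 classifier would be removed at the same time, so only
    # supported versions are advertised:
    classifiers=[
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
    ],
)
```

In practice a change like this would also touch the CI test matrix and any version checks in the code, but those are repository-specific and not shown here.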
https://api.github.com/repos/huggingface/datasets/issues/6004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6004/comments
https://api.github.com/repos/huggingface/datasets/issues/6004/events
https://github.com/huggingface/datasets/pull/6004
1,786,636,368
PR_kwDODunzps5UjN2h
6,004
Misc improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006897 / 0.011353 (-0.004456) | 0.004207 / 0.011008 (-0.006802) | 0.104828 / 0.038508 (0.066320) | 0.048054 / 0.023109 (0.024945) | 0.373991 / 0.275898 (0.098093) | 0.426740 / 0.323480 (0.103260) | 0.005540 / 0.007986 (-0.002446) | 0.003531 / 0.004328 (-0.000797) | 0.079304 / 0.004250 (0.075053) | 0.066996 / 0.037052 (0.029944) | 0.370675 / 0.258489 (0.112186) | 0.414154 / 0.293841 (0.120313) | 0.031567 / 0.128546 (-0.096979) | 0.008843 / 0.075646 (-0.066803) | 0.357426 / 0.419271 (-0.061845) | 0.067040 / 0.043533 (0.023508) | 0.362384 / 0.255139 (0.107245) | 0.376056 / 0.283200 (0.092856) | 0.032985 / 0.141683 (-0.108697) | 1.560603 / 1.452155 (0.108448) | 1.619024 / 1.492716 (0.126308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229059 / 0.018006 (0.211053) | 0.440513 / 0.000490 (0.440023) | 0.004647 / 0.000200 (0.004447) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029517 / 0.037411 (-0.007894) | 0.120974 / 0.014526 (0.106448) | 0.125070 / 0.176557 (-0.051486) | 0.184695 / 0.737135 (-0.552441) | 0.130244 / 0.296338 (-0.166095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436930 / 0.215209 (0.221721) | 4.356118 / 2.077655 (2.278463) | 2.049169 / 1.504120 (0.545049) | 1.842898 / 1.541195 (0.301703) | 1.918948 / 1.468490 
(0.450458) | 0.553573 / 4.584777 (-4.031204) | 3.883195 / 3.745712 (0.137483) | 3.209780 / 5.269862 (-2.060081) | 1.551707 / 4.565676 (-3.013970) | 0.068181 / 0.424275 (-0.356094) | 0.012370 / 0.007607 (0.004762) | 0.539899 / 0.226044 (0.313854) | 5.380008 / 2.268929 (3.111079) | 2.518178 / 55.444624 (-52.926446) | 2.174190 / 6.876477 (-4.702286) | 2.317812 / 2.142072 (0.175740) | 0.674154 / 4.805227 (-4.131073) | 0.149313 / 6.500664 (-6.351351) | 0.068297 / 0.075469 (-0.007172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261426 / 1.841788 (-0.580362) | 15.316378 / 8.074308 (7.242070) | 13.573512 / 10.191392 (3.382120) | 0.190022 / 0.680424 (-0.490401) | 0.018697 / 0.534201 (-0.515504) | 0.448122 / 0.579283 (-0.131161) | 0.435044 / 0.434364 (0.000681) | 0.550065 / 0.540337 (0.009728) | 0.653547 / 1.386936 (-0.733389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007116 / 0.011353 (-0.004237) | 0.004375 / 0.011008 (-0.006633) | 0.081793 / 0.038508 (0.043285) | 0.047980 / 0.023109 (0.024871) | 0.392185 / 0.275898 (0.116287) | 0.462263 / 0.323480 (0.138783) | 0.005574 / 0.007986 (-0.002412) | 0.003552 / 0.004328 (-0.000776) | 0.080413 / 0.004250 (0.076162) | 0.065539 / 0.037052 (0.028487) | 0.413137 / 0.258489 (0.154648) | 0.467377 / 0.293841 (0.173536) | 0.034386 / 0.128546 (-0.094160) | 0.009183 / 0.075646 (-0.066464) | 0.087542 / 0.419271 (-0.331730) | 0.053954 / 0.043533 (0.010421) | 0.385096 / 0.255139 (0.129957) | 0.404900 / 0.283200 (0.121701) | 0.025908 / 0.141683 (-0.115775) | 1.550159 / 1.452155 (0.098005) | 1.598794 / 1.492716 (0.106078) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246222 / 0.018006 (0.228216) | 0.441095 / 0.000490 (0.440605) | 0.006863 / 0.000200 (0.006663) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032179 / 0.037411 (-0.005233) | 0.120112 / 0.014526 (0.105586) | 0.129326 / 0.176557 (-0.047230) | 0.184542 / 0.737135 (-0.552593) | 0.135038 / 0.296338 (-0.161300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459002 / 0.215209 (0.243793) | 4.580258 / 2.077655 (2.502604) | 2.296689 / 1.504120 (0.792569) | 2.104338 / 1.541195 (0.563143) | 2.182896 / 1.468490 (0.714406) | 0.546447 / 4.584777 (-4.038330) | 3.854047 / 3.745712 (0.108335) | 1.873829 / 5.269862 (-3.396032) | 1.116484 / 4.565676 (-3.449193) | 0.067158 / 0.424275 (-0.357117) | 0.012035 / 0.007607 (0.004428) | 0.556642 / 0.226044 (0.330597) | 5.574436 / 2.268929 (3.305508) | 2.828223 / 55.444624 (-52.616402) | 2.519851 / 6.876477 (-4.356626) | 2.668594 / 2.142072 (0.526521) | 0.675989 / 4.805227 (-4.129238) | 0.146075 / 6.500664 (-6.354589) | 0.067788 / 0.075469 (-0.007681) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345958 / 1.841788 (-0.495830) | 15.672748 / 8.074308 (7.598440) | 14.937583 / 10.191392 (4.746191) | 0.163479 / 0.680424 (-0.516945) | 0.018364 / 0.534201 (-0.515837) | 0.433296 / 0.579283 (-0.145987) | 0.432463 / 0.434364 (-0.001901) | 0.512000 / 0.540337 (-0.028338) | 0.619397 / 1.386936 (-0.767539) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0832d48a07ed00b406271f4b4439e6d54ae38ebf \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005070 / 0.011008 (-0.005939) | 0.118638 / 0.038508 (0.080130) | 0.043651 / 0.023109 (0.020542) | 0.356074 / 0.275898 (0.080176) | 0.414578 / 0.323480 (0.091098) | 0.005939 / 0.007986 (-0.002046) | 0.004927 / 0.004328 (0.000598) | 0.089545 / 0.004250 (0.085294) | 0.067533 / 0.037052 (0.030481) | 0.371550 / 0.258489 (0.113061) | 0.417808 / 0.293841 (0.123967) | 0.045186 / 0.128546 (-0.083361) | 0.015763 / 0.075646 (-0.059883) | 0.393304 / 0.419271 (-0.025967) | 0.065123 / 0.043533 (0.021591) | 0.345057 / 0.255139 (0.089918) | 0.378809 / 0.283200 (0.095610) | 0.033243 / 0.141683 (-0.108440) | 1.679956 / 1.452155 (0.227802) | 1.775456 / 1.492716 (0.282739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229723 / 0.018006 (0.211717) | 0.554630 / 0.000490 (0.554140) | 0.008729 / 0.000200 (0.008529) | 0.000183 / 0.000054 (0.000129) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027284 / 0.037411 (-0.010128) | 0.114741 / 0.014526 (0.100215) | 0.129188 / 0.176557 (-0.047369) | 0.189270 / 0.737135 (-0.547866) | 0.126000 / 0.296338 (-0.170339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580417 / 0.215209 (0.365208) | 5.829337 / 2.077655 (3.751683) | 2.421191 / 1.504120 (0.917071) | 2.063673 / 1.541195 (0.522479) | 2.133427 / 1.468490 (0.664937) | 0.830964 / 4.584777 (-3.753813) | 5.107139 / 3.745712 (1.361427) | 4.599451 / 5.269862 (-0.670410) | 2.406502 / 4.565676 (-2.159175) | 0.100422 / 0.424275 (-0.323853) | 0.011850 / 0.007607 (0.004243) | 0.741881 / 0.226044 (0.515836) | 7.425689 / 2.268929 (5.156760) | 3.068948 / 55.444624 (-52.375676) | 2.496292 / 6.876477 (-4.380184) | 2.566420 / 2.142072 (0.424348) | 1.093084 / 4.805227 (-3.712144) | 0.224106 / 6.500664 (-6.276558) | 0.084549 / 0.075469 (0.009080) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416315 / 1.841788 (-0.425473) | 16.306901 / 8.074308 (8.232593) | 19.792419 / 10.191392 (9.601027) | 0.224223 / 0.680424 (-0.456201) | 0.026385 / 0.534201 (-0.507816) | 0.463460 / 0.579283 (-0.115823) | 0.598385 / 0.434364 (0.164021) | 0.543981 / 0.540337 (0.003644) | 0.647454 / 1.386936 
(-0.739482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009470 / 0.011353 (-0.001883) | 0.004800 / 0.011008 (-0.006208) | 0.094276 / 0.038508 (0.055768) | 0.045157 / 0.023109 (0.022048) | 0.397302 / 0.275898 (0.121404) | 0.474213 / 0.323480 (0.150733) | 0.005826 / 0.007986 (-0.002160) | 0.003724 / 0.004328 (-0.000605) | 0.090060 / 0.004250 (0.085809) | 0.066671 / 0.037052 (0.029618) | 0.439560 / 0.258489 (0.181071) | 0.468598 / 0.293841 (0.174757) | 0.044549 / 0.128546 (-0.083997) | 0.014000 / 0.075646 (-0.061646) | 0.110457 / 0.419271 (-0.308815) | 0.065898 / 0.043533 (0.022365) | 0.408101 / 0.255139 (0.152962) | 0.433473 / 0.283200 (0.150273) | 0.038438 / 0.141683 (-0.103245) | 1.767781 / 1.452155 (0.315626) | 1.791575 / 1.492716 (0.298859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230257 / 0.018006 (0.212251) | 0.492280 / 0.000490 (0.491790) | 0.005110 / 0.000200 (0.004910) | 0.000119 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028854 / 0.037411 (-0.008557) | 0.111702 / 0.014526 (0.097176) | 0.122040 / 0.176557 (-0.054517) | 0.179103 / 0.737135 (-0.558032) | 0.128869 / 0.296338 (-0.167470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634795 / 0.215209 (0.419586) | 6.204760 / 2.077655 (4.127105) | 2.692479 / 1.504120 (1.188359) | 2.324260 / 1.541195 (0.783066) | 2.380640 / 1.468490 (0.912149) | 0.887827 / 
4.584777 (-3.696950) | 5.251648 / 3.745712 (1.505935) | 2.632767 / 5.269862 (-2.637095) | 1.745721 / 4.565676 (-2.819955) | 0.108364 / 0.424275 (-0.315911) | 0.013409 / 0.007607 (0.005802) | 0.783427 / 0.226044 (0.557383) | 7.765144 / 2.268929 (5.496216) | 3.340686 / 55.444624 (-52.103938) | 2.715340 / 6.876477 (-4.161137) | 2.768604 / 2.142072 (0.626531) | 1.119746 / 4.805227 (-3.685481) | 0.210804 / 6.500664 (-6.289860) | 0.072600 / 0.075469 (-0.002869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517334 / 1.841788 (-0.324454) | 17.046837 / 8.074308 (8.972529) | 19.371090 / 10.191392 (9.179698) | 0.194275 / 0.680424 (-0.486148) | 0.026712 / 0.534201 (-0.507488) | 0.462731 / 0.579283 (-0.116552) | 0.568958 / 0.434364 (0.134595) | 0.555707 / 0.540337 (0.015370) | 0.663654 / 1.386936 (-0.723283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d20476b1d4c8e11e0ffafc1570cbf4bd19011cf \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006423 / 0.011353 (-0.004930) | 0.003882 / 0.011008 (-0.007126) | 0.082976 / 0.038508 (0.044468) | 0.071281 / 0.023109 (0.048171) | 0.311367 / 0.275898 (0.035469) | 0.348228 / 0.323480 (0.024748) | 0.005315 / 0.007986 (-0.002671) | 0.003326 / 0.004328 (-0.001003) | 0.064641 / 0.004250 (0.060391) | 0.056134 / 0.037052 (0.019081) | 0.314071 / 0.258489 (0.055582) | 0.360534 / 0.293841 (0.066693) | 0.030642 / 0.128546 (-0.097904) | 0.008301 / 0.075646 (-0.067345) | 0.285820 / 0.419271 (-0.133451) | 0.069241 / 0.043533 (0.025708) | 0.313995 / 0.255139 (0.058856) | 0.336656 / 0.283200 (0.053457) | 0.031686 / 0.141683 (-0.109997) | 1.467627 / 1.452155 (0.015472) | 1.536493 / 1.492716 (0.043777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196518 / 0.018006 (0.178512) | 0.458235 / 0.000490 (0.457745) | 0.005599 / 0.000200 (0.005399) | 0.000088 / 0.000054 
(0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027371 / 0.037411 (-0.010040) | 0.080986 / 0.014526 (0.066460) | 0.093296 / 0.176557 (-0.083260) | 0.150592 / 0.737135 (-0.586543) | 0.094150 / 0.296338 (-0.202188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379412 / 0.215209 (0.164202) | 3.797927 / 2.077655 (1.720272) | 1.830654 / 1.504120 (0.326534) | 1.669569 / 1.541195 (0.128374) | 1.746738 / 1.468490 (0.278248) | 0.479536 / 4.584777 (-4.105241) | 3.592867 / 3.745712 (-0.152845) | 5.468098 / 5.269862 (0.198237) | 3.268013 / 4.565676 (-1.297663) | 0.056635 / 0.424275 (-0.367640) | 0.007224 / 0.007607 (-0.000383) | 0.456681 / 0.226044 (0.230636) | 4.566736 / 2.268929 (2.297807) | 2.362831 / 55.444624 (-53.081793) | 1.965141 / 6.876477 (-4.911336) | 2.156905 / 2.142072 (0.014833) | 0.572543 / 4.805227 (-4.232684) | 0.132203 / 6.500664 (-6.368461) | 0.059254 / 0.075469 (-0.016215) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256134 / 1.841788 (-0.585654) | 19.905438 / 8.074308 (11.831130) | 14.179556 / 10.191392 (3.988164) | 0.168043 / 0.680424 (-0.512381) | 0.018215 / 0.534201 (-0.515986) | 0.392740 / 0.579283 (-0.186543) | 0.398397 / 0.434364 (-0.035967) | 0.463806 / 0.540337 (-0.076531) | 0.616248 / 1.386936 (-0.770688) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.003923 / 0.011008 (-0.007085) | 0.063929 / 0.038508 (0.025421) | 0.073780 / 0.023109 (0.050671) | 0.360242 / 0.275898 (0.084344) | 0.395078 / 0.323480 (0.071598) | 0.005265 / 0.007986 (-0.002720) | 0.003229 / 0.004328 (-0.001100) | 0.064094 / 0.004250 (0.059843) | 0.057468 / 0.037052 (0.020416) | 0.369530 / 0.258489 (0.111041) | 0.411159 / 0.293841 (0.117318) | 0.031278 / 0.128546 (-0.097268) | 0.008424 / 0.075646 (-0.067222) | 0.070411 / 0.419271 (-0.348860) | 0.048714 / 0.043533 (0.005181) | 0.361280 / 0.255139 (0.106141) | 0.382468 / 0.283200 (0.099269) | 0.023059 / 0.141683 (-0.118624) | 1.452369 / 1.452155 (0.000215) | 1.519192 / 1.492716 (0.026475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223745 / 0.018006 (0.205739) | 0.442086 / 0.000490 (0.441596) | 0.000379 / 0.000200 (0.000179) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030919 / 0.037411 (-0.006493) | 0.088483 / 0.014526 (0.073958) | 0.101165 / 0.176557 (-0.075391) | 0.154332 / 0.737135 (-0.582804) | 0.103030 / 0.296338 (-0.193309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414520 / 0.215209 (0.199311) | 4.126754 / 2.077655 (2.049099) | 2.142677 / 1.504120 (0.638557) | 1.995300 / 1.541195 (0.454106) | 2.101678 / 1.468490 (0.633188) | 0.481099 / 4.584777 (-4.103678) | 3.562813 / 3.745712 (-0.182900) | 3.392463 / 5.269862 (-1.877399) | 1.983943 / 4.565676 (-2.581734) | 0.056594 / 0.424275 (-0.367681) | 0.007216 / 0.007607 (-0.000391) | 0.495085 / 0.226044 (0.269041) | 4.955640 / 2.268929 (2.686712) | 2.629434 / 55.444624 (-52.815191) | 2.269577 / 6.876477 (-4.606900) | 2.357708 / 2.142072 (0.215635) | 0.612370 / 4.805227 (-4.192857) | 0.131169 / 6.500664 (-6.369495) | 0.061029 / 0.075469 (-0.014440) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.339438 / 1.841788 (-0.502350) | 19.757611 / 8.074308 (11.683303) | 14.246254 / 10.191392 (4.054862) | 0.170750 / 0.680424 (-0.509674) | 0.018192 / 0.534201 (-0.516009) | 0.395693 / 0.579283 (-0.183590) | 0.411003 / 0.434364 (-0.023361) | 0.478531 / 0.540337 (-0.061806) | 0.650291 / 1.386936 (-0.736645) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3e34d06d746688dd5d26e4c85517b7e1a2f361ca \"CML watermark\")\n" ]
2023-07-03T18:29:14
2023-07-06T17:04:11
2023-07-06T16:55:25
CONTRIBUTOR
null
Contains the following improvements: * fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section * updates `Makefile` to also run the style checks on `utils` and `setup.py` * deletes a test for GH-hosted datasets (no longer supported) * deletes `convert_dataset.sh` (outdated) * aligns `utils/release.py` with `transformers` (the current version is outdated)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6004/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6004", "html_url": "https://github.com/huggingface/datasets/pull/6004", "diff_url": "https://github.com/huggingface/datasets/pull/6004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6004.patch", "merged_at": "2023-07-06T16:55:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/6003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6003/comments
https://api.github.com/repos/huggingface/datasets/issues/6003/events
https://github.com/huggingface/datasets/issues/6003
1,786,554,110
I_kwDODunzps5qfKb-
6,003
interleave_datasets & DataCollatorForLanguageModeling having a conflict?
{ "login": "PonteIneptique", "id": 1929830, "node_id": "MDQ6VXNlcjE5Mjk4MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PonteIneptique", "html_url": "https://github.com/PonteIneptique", "followers_url": "https://api.github.com/users/PonteIneptique/followers", "following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}", "gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}", "starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions", "organizations_url": "https://api.github.com/users/PonteIneptique/orgs", "repos_url": "https://api.github.com/users/PonteIneptique/repos", "events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}", "received_events_url": "https://api.github.com/users/PonteIneptique/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-03T17:15:31
2023-07-03T17:15:31
null
NONE
null
### Describe the bug Hi everyone :) I have two local & custom datasets (1 "sentence" per line) which I split 95/5 (by lines) for pre-training a BERT model. I use a modified version of `run_mlm.py` so that I can make use of `interleave_dataset`: - `tokenize()` runs fine - `group_text()` runs fine Every time, on step 19, I get ```pytb File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens inputs[indices_random] = random_words[indices_random] RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source. ``` I tried: - training without interleave on dataset 1, it runs - training without interleave on dataset 2, it runs - training without `.to_iterable_dataset()`, it hangs then crashes - training without group_text() and padding to max_length, which seemed to fix the issue, though the error may simply have surfaced much later in terms of steps. I might have coded something wrong, but I can't see what. ### Steps to reproduce the bug I have this function: ```py def build_dataset(path: str, percent: str): dataset = load_dataset( "text", data_files={"train": [path]}, split=f"train[{percent}]" ) dataset = dataset.map( lambda examples: tokenize(examples["text"]), batched=True, num_proc=num_proc, ) dataset = dataset.map( group_texts, batched=True, num_proc=num_proc, desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}", remove_columns=["text"] ) print(len(dataset)) return dataset.to_iterable_dataset() ``` I hardcoded group_text: ```py def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict. # We could add padding if the model supported it instead of this drop, you can customize this part to your needs. total_length = (total_length // 512) * 512 # Split by chunks of max_len. result = { k: [t[i: i + 512] for i in range(0, total_length, 512)] for k, t in concatenated_examples.items() } # result = {k: [el for el in elements if el] for k, elements in result.items()} return result ``` And then I build datasets using the following code: ```py train1 = build_dataset("d1.txt", ":95%") train2 = build_dataset("d2.txt", ":95%") dev1 = build_dataset("d1.txt", "95%:") dev2 = build_dataset("d2.txt", "95%:") ``` and finally I run ```py train_dataset = interleave_datasets( [train1, train2], probabilities=[0.8, 0.2], seed=42 ) eval_dataset = interleave_datasets( [dev1, dev2], probabilities=[0.8, 0.2], seed=42 ) ``` Then I run the training part, which remains mostly untouched: > CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16 ### Expected behavior The model should then train normally, but it fails every time at the same step (19). Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor of shape (, 32) [if I remember correctly]. ### Environment info transformers[torch] 4.30.2 Ubuntu A100 0 CUDA 12 Driver Version: 525.116.04
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6003/timeline
null
null
null
null
false
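The `RuntimeError` quoted in the report above is reproducible outside of `transformers`: PyTorch's index-put assignment refuses to cast a tensor source into a destination of a different dtype. A minimal sketch of the failure mode (the shapes and values below are illustrative assumptions, not taken from the issue):

```python
import torch

# Destination is float32 while the source is int64 ("Long").
# Boolean index-put does not cast between dtypes, so the last line raises:
# RuntimeError: Index put requires the source and destination dtypes match,
# got Float for the destination and Long for the source.
inputs = torch.zeros(2, 4)                   # float32 destination
indices_random = torch.rand(2, 4) < 0.5      # boolean mask
random_words = torch.randint(0, 10, (2, 4))  # int64 source
inputs[indices_random] = random_words[indices_random]
```

In `torch_mask_tokens`, `inputs` is normally an integer tensor of token ids, so a float destination suggests that an empty or malformed batch reached the collator.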
https://api.github.com/repos/huggingface/datasets/issues/6002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6002/comments
https://api.github.com/repos/huggingface/datasets/issues/6002/events
https://github.com/huggingface/datasets/pull/6002
1,786,053,060
PR_kwDODunzps5UhP-Z
6,002
Add KLUE-MRC metrics
{ "login": "ingyuseong", "id": 37537248, "node_id": "MDQ6VXNlcjM3NTM3MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/37537248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ingyuseong", "html_url": "https://github.com/ingyuseong", "followers_url": "https://api.github.com/users/ingyuseong/followers", "following_url": "https://api.github.com/users/ingyuseong/following{/other_user}", "gists_url": "https://api.github.com/users/ingyuseong/gists{/gist_id}", "starred_url": "https://api.github.com/users/ingyuseong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ingyuseong/subscriptions", "organizations_url": "https://api.github.com/users/ingyuseong/orgs", "repos_url": "https://api.github.com/users/ingyuseong/repos", "events_url": "https://api.github.com/users/ingyuseong/events{/privacy}", "received_events_url": "https://api.github.com/users/ingyuseong/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co./docs/evaluate/creating_and_sharing)." ]
2023-07-03T12:11:10
2023-07-03T15:34:17
null
NONE
null
## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension) Adding metrics for [KLUE-MRC](https://huggingface.co./datasets/klue). KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format, which is why it needs its own metrics. Specifically, [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) leverages the SQuAD scoring script to evaluate SQuAD 2.0 and KorQuAD, but that script isn't suitable for KLUE-MRC because of the format difference, so I added a dedicated scoring script for KLUE-MRC. - [x] All tests passed - [x] Added a metric card (based on the metric card of SQuAD 2.0) - [x] Compatibility test with [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) passed ### References - [KLUE: Korean Language Understanding Evaluation](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf) - [KLUE on Hugging Face Datasets](https://huggingface.co./datasets/klue) - #2416
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6002/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6002", "html_url": "https://github.com/huggingface/datasets/pull/6002", "diff_url": "https://github.com/huggingface/datasets/pull/6002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6002.patch", "merged_at": null }
true
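Since the metrics API in `datasets` is deprecated, a contribution like the one above would now target the `evaluate` library. A minimal usage sketch with the existing `squad_v2` metric as a stand-in (a published KLUE-MRC metric id is hypothetical at this point):

```python
import evaluate

# "squad_v2" is an existing metric used here only as a close analogue;
# a dedicated KLUE-MRC metric would be loaded the same way once published.
metric = evaluate.load("squad_v2")
predictions = [{"id": "q1", "prediction_text": "1989", "no_answer_probability": 0.0}]
references = [{"id": "q1", "answers": {"text": ["1989"], "answer_start": [42]}}]
print(metric.compute(predictions=predictions, references=references))
```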
https://api.github.com/repos/huggingface/datasets/issues/6001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6001/comments
https://api.github.com/repos/huggingface/datasets/issues/6001/events
https://github.com/huggingface/datasets/pull/6001
1,782,516,627
PR_kwDODunzps5UVMMh
6,001
Align `column_names` type check with type hint in `sort`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006038 / 0.011353 (-0.005315) | 0.003797 / 0.011008 (-0.007211) | 0.097686 / 0.038508 (0.059178) | 0.035235 / 0.023109 (0.012126) | 0.317294 / 0.275898 (0.041396) | 0.377682 / 0.323480 (0.054202) | 0.003485 / 0.007986 (-0.004501) | 0.003603 / 0.004328 (-0.000725) | 0.077268 / 0.004250 (0.073017) | 0.054649 / 0.037052 (0.017597) | 0.322293 / 0.258489 (0.063804) | 0.372277 / 0.293841 (0.078436) | 0.027927 / 0.128546 (-0.100619) | 0.008495 / 0.075646 (-0.067151) | 0.313078 / 0.419271 (-0.106193) | 0.046974 / 0.043533 (0.003441) | 0.313848 / 0.255139 (0.058709) | 0.338454 / 0.283200 (0.055255) | 0.020462 / 0.141683 (-0.121221) | 1.473027 / 1.452155 (0.020873) | 1.539468 / 1.492716 (0.046752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221429 / 0.018006 (0.203423) | 0.412044 / 0.000490 (0.411555) | 0.005866 / 0.000200 (0.005666) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022870 / 0.037411 (-0.014541) | 0.099129 / 0.014526 (0.084603) | 0.103463 / 0.176557 (-0.073094) | 0.164969 / 0.737135 (-0.572166) | 0.110000 / 0.296338 (-0.186339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431311 / 0.215209 (0.216102) | 4.293562 / 2.077655 (2.215907) | 
1.961209 / 1.504120 (0.457089) | 1.733680 / 1.541195 (0.192485) | 1.793171 / 1.468490 (0.324681) | 0.568566 / 4.584777 (-4.016211) | 3.401794 / 3.745712 (-0.343918) | 1.827949 / 5.269862 (-3.441913) | 1.055963 / 4.565676 (-3.509714) | 0.068459 / 0.424275 (-0.355816) | 0.011586 / 0.007607 (0.003979) | 0.533936 / 0.226044 (0.307891) | 5.347637 / 2.268929 (3.078708) | 2.378056 / 55.444624 (-53.066569) | 2.032159 / 6.876477 (-4.844318) | 2.159064 / 2.142072 (0.016991) | 0.674528 / 4.805227 (-4.130699) | 0.136859 / 6.500664 (-6.363805) | 0.066629 / 0.075469 (-0.008840) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218084 / 1.841788 (-0.623704) | 14.141710 / 8.074308 (6.067402) | 13.588415 / 10.191392 (3.397023) | 0.155104 / 0.680424 (-0.525320) | 0.017160 / 0.534201 (-0.517041) | 0.375558 / 0.579283 (-0.203725) | 0.386293 / 0.434364 (-0.048071) | 0.459476 / 0.540337 (-0.080862) | 0.548561 / 1.386936 (-0.838375) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005878 / 0.011353 (-0.005475) | 0.003750 / 0.011008 (-0.007259) | 0.077720 / 0.038508 (0.039212) | 0.034955 / 0.023109 (0.011846) | 0.357480 / 0.275898 (0.081582) | 0.418210 / 0.323480 (0.094730) | 0.004566 / 0.007986 (-0.003419) | 0.002918 / 0.004328 (-0.001410) | 0.076517 / 0.004250 (0.072266) | 0.050202 / 0.037052 (0.013150) | 0.368166 / 0.258489 (0.109677) | 0.415681 / 0.293841 (0.121840) | 0.029496 / 0.128546 (-0.099050) | 0.008547 / 0.075646 (-0.067099) | 0.083037 / 0.419271 (-0.336234) | 0.045001 / 0.043533 (0.001468) | 0.356503 / 0.255139 (0.101364) | 0.383747 / 0.283200 (0.100547) | 0.025071 / 0.141683 (-0.116612) | 1.541985 / 1.452155 (0.089830) | 1.594710 / 1.492716 (0.101994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204491 / 0.018006 (0.186484) | 0.408686 / 0.000490 (0.408196) | 0.002505 / 0.000200 (0.002305) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024446 / 0.037411 (-0.012965) | 0.101432 / 0.014526 (0.086906) | 0.108105 / 0.176557 (-0.068452) | 0.161195 / 0.737135 (-0.575940) | 0.112671 / 0.296338 (-0.183667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459697 / 0.215209 (0.244488) | 4.570071 / 2.077655 (2.492416) | 2.211547 / 1.504120 (0.707427) | 1.996651 / 1.541195 (0.455457) | 2.015621 / 1.468490 (0.547131) | 0.567423 / 4.584777 (-4.017354) | 3.408027 / 3.745712 (-0.337685) | 2.913824 / 5.269862 (-2.356038) | 1.423223 / 4.565676 (-3.142453) | 0.068740 / 0.424275 (-0.355535) | 0.010997 / 0.007607 (0.003390) | 0.567340 / 0.226044 (0.341296) | 5.666280 / 2.268929 (3.397351) | 2.804934 / 55.444624 (-52.639690) | 2.430761 / 6.876477 (-4.445716) | 2.451820 / 2.142072 (0.309748) | 0.681926 / 4.805227 (-4.123301) | 0.137761 / 6.500664 (-6.362903) | 0.067173 / 0.075469 (-0.008296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329853 / 1.841788 (-0.511934) | 14.436232 / 8.074308 (6.361924) | 14.398645 / 10.191392 (4.207253) | 0.147421 / 0.680424 (-0.533002) | 0.016743 / 0.534201 (-0.517458) | 0.364964 / 0.579283 (-0.214319) | 0.387072 / 0.434364 (-0.047292) | 0.423892 / 0.540337 (-0.116445) | 0.521304 / 1.386936 (-0.865632) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a62b6ce65f718e9ff4189da86d160ae4bb197fc2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004889) | 0.003923 / 0.011008 (-0.007086) | 0.102096 / 0.038508 (0.063588) | 0.040230 / 0.023109 (0.017121) | 0.384688 / 0.275898 (0.108789) | 0.445574 / 0.323480 (0.122094) | 0.003590 / 0.007986 (-0.004395) | 0.004023 / 0.004328 (-0.000306) | 0.080125 / 0.004250 (0.075875) | 0.057406 / 0.037052 (0.020354) | 0.395049 / 0.258489 (0.136560) | 0.438065 / 0.293841 (0.144224) | 0.028963 / 0.128546 (-0.099583) | 0.008693 / 0.075646 (-0.066954) | 0.317158 / 0.419271 (-0.102114) | 0.047930 / 0.043533 (0.004397) | 0.382442 / 0.255139 (0.127303) | 0.410665 / 0.283200 (0.127466) | 0.020127 / 0.141683 (-0.121555) | 1.558554 / 1.452155 (0.106400) | 1.590959 / 1.492716 (0.098242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208826 / 0.018006 (0.190820) | 0.432037 / 0.000490 (0.431547) | 0.006509 / 0.000200 (0.006309) | 0.000285 / 0.000054 (0.000230) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023460 / 0.037411 (-0.013951) | 0.099070 / 0.014526 (0.084545) | 0.105771 / 0.176557 (-0.070785) | 0.166683 / 0.737135 (-0.570452) | 0.108755 / 0.296338 (-0.187583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424324 / 0.215209 (0.209115) | 4.225696 / 2.077655 (2.148042) | 1.910955 / 1.504120 (0.406835) | 1.704493 / 1.541195 (0.163298) | 1.782784 / 1.468490 (0.314293) | 0.562927 / 4.584777 (-4.021850) | 3.380163 / 3.745712 (-0.365550) | 1.779641 / 5.269862 (-3.490221) | 1.029134 / 4.565676 (-3.536543) | 0.068325 / 0.424275 (-0.355950) | 0.011528 / 0.007607 (0.003921) | 0.530141 / 0.226044 (0.304097) | 5.323443 / 2.268929 (3.054514) | 2.346956 / 55.444624 (-53.097668) | 2.013335 / 6.876477 (-4.863142) | 2.118531 / 2.142072 (-0.023541) | 0.675206 / 4.805227 (-4.130021) | 0.135473 / 6.500664 (-6.365191) | 0.064804 / 0.075469 (-0.010665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240179 / 1.841788 (-0.601608) | 14.692449 / 8.074308 (6.618141) | 13.672223 / 10.191392 (3.480831) | 0.147748 / 0.680424 (-0.532676) | 0.017119 / 0.534201 (-0.517082) | 0.369481 / 0.579283 (-0.209802) | 0.390133 / 0.434364 (-0.044231) | 0.458768 / 0.540337 
(-0.081569) | 0.548989 / 1.386936 (-0.837947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006319 / 0.011353 (-0.005034) | 0.003975 / 0.011008 (-0.007033) | 0.077886 / 0.038508 (0.039378) | 0.038322 / 0.023109 (0.015213) | 0.379851 / 0.275898 (0.103953) | 0.456749 / 0.323480 (0.133269) | 0.005320 / 0.007986 (-0.002665) | 0.003135 / 0.004328 (-0.001194) | 0.078272 / 0.004250 (0.074022) | 0.059919 / 0.037052 (0.022866) | 0.430062 / 0.258489 (0.171573) | 0.477432 / 0.293841 (0.183591) | 0.029713 / 0.128546 (-0.098833) | 0.008704 / 0.075646 (-0.066942) | 0.082488 / 0.419271 (-0.336784) | 0.044667 / 0.043533 (0.001134) | 0.354910 / 0.255139 (0.099771) | 0.434637 / 0.283200 (0.151438) | 0.026402 / 0.141683 (-0.115281) | 1.528825 / 1.452155 (0.076671) | 1.548209 / 1.492716 (0.055493) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237988 / 0.018006 (0.219982) | 0.420402 / 0.000490 (0.419913) | 0.003098 / 0.000200 (0.002898) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026253 / 0.037411 (-0.011159) | 0.106137 / 0.014526 (0.091611) | 0.110273 / 0.176557 (-0.066284) | 0.165316 / 0.737135 (-0.571819) | 0.115720 / 0.296338 (-0.180619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454244 / 0.215209 (0.239035) | 4.526018 / 2.077655 (2.448364) | 2.395985 / 1.504120 (0.891865) | 2.234822 / 1.541195 (0.693627) | 2.370235 
/ 1.468490 (0.901745) | 0.567607 / 4.584777 (-4.017169) | 3.650156 / 3.745712 (-0.095556) | 3.360094 / 5.269862 (-1.909768) | 1.415252 / 4.565676 (-3.150424) | 0.068012 / 0.424275 (-0.356263) | 0.011135 / 0.007607 (0.003528) | 0.561967 / 0.226044 (0.335923) | 5.621819 / 2.268929 (3.352890) | 2.676912 / 55.444624 (-52.767712) | 2.338306 / 6.876477 (-4.538171) | 2.430888 / 2.142072 (0.288815) | 0.684576 / 4.805227 (-4.120651) | 0.138923 / 6.500664 (-6.361741) | 0.069933 / 0.075469 (-0.005536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313383 / 1.841788 (-0.528405) | 15.125088 / 8.074308 (7.050780) | 14.801501 / 10.191392 (4.610109) | 0.134235 / 0.680424 (-0.546189) | 0.017058 / 0.534201 (-0.517143) | 0.365166 / 0.579283 (-0.214117) | 0.395415 / 0.434364 (-0.038949) | 0.419355 / 0.540337 (-0.120983) | 0.513411 / 1.386936 (-0.873525) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b9649b3cfb49342e44873ce7e29e0c75eaf3efa \"CML watermark\")\n" ]
2023-06-30T13:15:50
2023-06-30T14:18:32
2023-06-30T14:11:24
CONTRIBUTOR
null
Fix #5998
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6001/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6001", "html_url": "https://github.com/huggingface/datasets/pull/6001", "diff_url": "https://github.com/huggingface/datasets/pull/6001.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6001.patch", "merged_at": "2023-06-30T14:11:24" }
true
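The mismatch fixed above is a common one: a signature hints `Union[str, Sequence[str]]` while the runtime check accepts only `list`. A generic sketch of the aligned check (illustrative, not the actual `datasets` source):

```python
from collections.abc import Sequence
from typing import Union

def normalize_column_names(column_names: Union[str, Sequence[str]]) -> list:
    # Accept a single name or any sequence of names (list, tuple, ...),
    # mirroring the type hint instead of checking for `list` only.
    if isinstance(column_names, str):
        return [column_names]
    if not isinstance(column_names, Sequence):
        raise ValueError(f"Expected a string or a sequence of strings, got {type(column_names)}")
    return list(column_names)
```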

Dataset Card for "github-issues-100"

More Information needed
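A minimal loading sketch for this dataset (the `<namespace>` placeholder is hypothetical; replace it with the repository's actual owner on the Hub — only "github-issues-100" comes from the card title):

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the real namespace before running.
ds = load_dataset("<namespace>/github-issues-100", split="train")
print(ds)
```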
