url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-2.04B) | node_id (string, 18-32 chars) | number (int64, 1-6.5k) | title (string, 1-290 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6503/comments | https://api.github.com/repos/huggingface/datasets/issues/6503/events | https://github.com/huggingface/datasets/pull/6503 | 2,043,847,591 | PR_kwDODunzps5iHgZf | 6,503 | Fix streaming xnli | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005003 / 0.011353 (-0.006350) | 0.003020 / 0.011008 (-0.007988) | 0.061370 / 0.038508 (0.022862) | 0.050996 / 0.023109 (0.027887) | 0.243434 / 0.275898 (-0.032464) | 0.266317 / 0.323480 (-0.057163) | 0.003888 / 0.007986 (-0.004098) | 0.002607 / 0.004328 (-0.001721) | 0.047541 / 0.004250 (0.043290) | 0.037933 / 0.037052 (0.000881) | 0.259695 / 0.258489 (0.001206) | 0.279374 / 0.293841 (-0.014467) | 0.027258 / 0.128546 (-0.101288) | 0.010184 / 0.075646 (-0.065462) | 0.207412 / 0.419271 (-0.211860) | 0.034978 / 0.043533 (-0.008554) | 0.247871 / 0.255139 (-0.007267) | 0.265273 / 0.283200 (-0.017927) | 0.017886 / 0.141683 (-0.123796) | 1.090451 / 1.452155 (-0.361704) | 1.152034 / 1.492716 (-0.340682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094383 / 0.018006 (0.076377) | 0.301151 / 0.000490 (0.300661) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018927 / 0.037411 (-0.018484) | 0.062152 / 0.014526 (0.047626) | 0.072177 / 0.176557 (-0.104380) | 0.119792 / 0.737135 (-0.617343) | 0.073333 / 0.296338 (-0.223005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282671 / 0.215209 (0.067462) | 2.721148 / 2.077655 (0.643494) | 1.472689 / 1.504120 (-0.031431) | 1.355226 / 1.541195 (-0.185969) | 1.375935 / 
1.468490 (-0.092556) | 0.562600 / 4.584777 (-4.022177) | 2.364046 / 3.745712 (-1.381666) | 2.714984 / 5.269862 (-2.554878) | 1.738413 / 4.565676 (-2.827263) | 0.062564 / 0.424275 (-0.361711) | 0.004964 / 0.007607 (-0.002643) | 0.341300 / 0.226044 (0.115255) | 3.345187 / 2.268929 (1.076259) | 1.857822 / 55.444624 (-53.586803) | 1.581002 / 6.876477 (-5.295475) | 1.585919 / 2.142072 (-0.556153) | 0.640105 / 4.805227 (-4.165122) | 0.117880 / 6.500664 (-6.382784) | 0.042032 / 0.075469 (-0.033437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962701 / 1.841788 (-0.879086) | 11.309251 / 8.074308 (3.234943) | 10.462520 / 10.191392 (0.271128) | 0.127399 / 0.680424 (-0.553025) | 0.014549 / 0.534201 (-0.519652) | 0.297017 / 0.579283 (-0.282266) | 0.266152 / 0.434364 (-0.168212) | 0.349252 / 0.540337 (-0.191085) | 0.457015 / 1.386936 (-0.929921) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003108 / 0.011008 (-0.007900) | 0.048862 / 0.038508 (0.010353) | 0.053354 / 0.023109 (0.030245) | 0.274499 / 0.275898 (-0.001399) | 0.296698 / 0.323480 (-0.026782) | 0.003974 / 0.007986 (-0.004012) | 0.002631 / 0.004328 (-0.001697) | 0.048013 / 0.004250 (0.043762) | 0.040416 / 0.037052 (0.003363) | 0.276581 / 0.258489 (0.018092) | 0.301296 / 0.293841 (0.007455) | 0.029049 / 0.128546 (-0.099497) | 0.010253 / 0.075646 (-0.065393) | 0.057157 / 0.419271 (-0.362114) | 0.031830 / 0.043533 (-0.011703) | 0.274341 / 0.255139 (0.019202) | 0.292583 / 0.283200 (0.009383) | 0.018449 / 0.141683 (-0.123234) | 1.145099 / 1.452155 (-0.307055) | 1.192958 / 1.492716 (-0.299758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091596 / 0.018006 (0.073590) | 0.300917 / 0.000490 (0.300427) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015754) | 0.068464 / 0.014526 (0.053938) | 0.079869 / 0.176557 (-0.096687) | 0.117523 / 0.737135 (-0.619613) | 0.081257 / 0.296338 (-0.215082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294876 / 0.215209 (0.079667) | 2.879372 / 2.077655 (0.801718) | 1.619887 / 1.504120 (0.115767) | 1.482154 / 1.541195 (-0.059041) | 1.494656 / 1.468490 (0.026166) | 0.558914 / 4.584777 (-4.025862) | 2.420948 / 3.745712 (-1.324765) | 2.728992 / 5.269862 (-2.540869) | 1.722135 / 4.565676 (-2.843542) | 0.062182 / 0.424275 (-0.362093) | 0.004933 / 0.007607 (-0.002674) | 0.342759 / 0.226044 (0.116715) | 3.424083 / 2.268929 (1.155154) | 1.950673 / 55.444624 (-53.493951) | 1.683126 / 6.876477 (-5.193351) | 1.673135 / 2.142072 (-0.468937) | 0.633711 / 4.805227 (-4.171516) | 0.114898 / 6.500664 (-6.385766) | 0.040332 / 0.075469 (-0.035137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975102 / 1.841788 (-0.866685) | 11.975731 / 8.074308 (3.901423) | 10.961103 / 10.191392 (0.769711) | 0.131152 / 0.680424 (-0.549272) | 0.016268 / 0.534201 (-0.517933) | 0.285031 / 0.579283 (-0.294252) | 0.279556 / 0.434364 (-0.154808) | 0.324183 / 0.540337 (-0.216154) | 0.571404 / 1.386936 (-0.815532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f67312956fc15572b6a0ca0dfcc0ceb90fbb794 \"CML watermark\")\n"
] | 2023-12-15T14:40:57 | 2023-12-15T14:51:06 | 2023-12-15T14:44:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6503",
"html_url": "https://github.com/huggingface/datasets/pull/6503",
"diff_url": "https://github.com/huggingface/datasets/pull/6503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6503.patch",
"merged_at": "2023-12-15T14:44:46"
} | This code was failing:
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.values())
```
```
File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict)
102 return translation_dict
103 elif self.languages and set(translation_dict) - lang_set:
--> 104 raise ValueError(
105 f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).'
106 )
108 # Convert dictionary into tuples, splitting out cases where there are
109 # multiple translations for a single language.
110 translation_tuples = []
ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es).
```
because in streaming mode we expect the features' encode methods to be no-ops if the example is already encoded.
I fixed `TranslationVariableLanguages` to account for that (a simplified sketch of the check follows this entry). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6503/timeline | null | null | true |
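For context on the fix described above: in streaming mode, `TranslationVariableLanguages.encode_example` can receive examples that are already in encoded form, so it has to detect that case and return the input unchanged instead of validating its keys against the language set. Below is a minimal, simplified sketch of that idempotence check; the language set, error message, and flattening logic are illustrative and not the exact `datasets` implementation.
```python
XNLI_LANGUAGES = {"ar", "bg", "de", "el", "en", "es", "fr", "hi", "ru", "sw", "th", "tr", "ur", "vi", "zh"}


def encode_example(translation_dict: dict) -> dict:
    # Already-encoded examples (as yielded in streaming mode) look like
    # {"language": [...], "translation": [...]}: return them as-is rather
    # than treating "language"/"translation" as (invalid) language codes.
    if set(translation_dict) == {"language", "translation"}:
        return translation_dict

    # Otherwise validate and encode the raw {lang: text} mapping.
    extra = set(translation_dict) - XNLI_LANGUAGES
    if extra:
        raise ValueError(
            f"Some languages in example ({', '.join(sorted(extra))}) "
            f"are not in valid set ({', '.join(sorted(XNLI_LANGUAGES))})."
        )
    languages, translations = zip(*sorted(translation_dict.items()))
    return {"language": list(languages), "translation": list(translations)}
```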
https://api.github.com/repos/huggingface/datasets/issues/6502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6502/comments | https://api.github.com/repos/huggingface/datasets/issues/6502/events | https://github.com/huggingface/datasets/pull/6502 | 2,043,771,731 | PR_kwDODunzps5iHPt- | 6,502 | Pickle support for `torch.Generator` objects | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003715 / 0.011008 (-0.007293) | 0.063257 / 0.038508 (0.024749) | 0.060683 / 0.023109 (0.037574) | 0.250885 / 0.275898 (-0.025013) | 0.271685 / 0.323480 (-0.051795) | 0.003051 / 0.007986 (-0.004934) | 0.002799 / 0.004328 (-0.001530) | 0.049113 / 0.004250 (0.044863) | 0.038965 / 0.037052 (0.001912) | 0.252688 / 0.258489 (-0.005801) | 0.282536 / 0.293841 (-0.011305) | 0.028722 / 0.128546 (-0.099824) | 0.010586 / 0.075646 (-0.065060) | 0.205145 / 0.419271 (-0.214127) | 0.036996 / 0.043533 (-0.006537) | 0.248874 / 0.255139 (-0.006265) | 0.266148 / 0.283200 (-0.017051) | 0.018540 / 0.141683 (-0.123143) | 1.120216 / 1.452155 (-0.331938) | 1.191072 / 1.492716 (-0.301644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095721 / 0.018006 (0.077714) | 0.313401 / 0.000490 (0.312911) | 0.000234 / 0.000200 (0.000034) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018604 / 0.037411 (-0.018807) | 0.061571 / 0.014526 (0.047045) | 0.075343 / 0.176557 (-0.101213) | 0.121272 / 0.737135 (-0.615864) | 0.076448 / 0.296338 (-0.219890) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286885 / 0.215209 (0.071676) | 2.809100 / 2.077655 (0.731445) | 1.485365 / 1.504120 (-0.018755) | 1.367672 / 1.541195 (-0.173523) | 1.423570 / 
1.468490 (-0.044920) | 0.571063 / 4.584777 (-4.013714) | 2.385248 / 3.745712 (-1.360464) | 2.855251 / 5.269862 (-2.414610) | 1.799371 / 4.565676 (-2.766306) | 0.063491 / 0.424275 (-0.360784) | 0.004942 / 0.007607 (-0.002665) | 0.346181 / 0.226044 (0.120137) | 3.388123 / 2.268929 (1.119195) | 1.819093 / 55.444624 (-53.625532) | 1.552998 / 6.876477 (-5.323479) | 1.627930 / 2.142072 (-0.514143) | 0.653438 / 4.805227 (-4.151789) | 0.123831 / 6.500664 (-6.376833) | 0.043340 / 0.075469 (-0.032129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952167 / 1.841788 (-0.889621) | 12.149515 / 8.074308 (4.075207) | 10.665085 / 10.191392 (0.473693) | 0.127768 / 0.680424 (-0.552656) | 0.014022 / 0.534201 (-0.520179) | 0.285959 / 0.579283 (-0.293324) | 0.269727 / 0.434364 (-0.164637) | 0.336646 / 0.540337 (-0.203692) | 0.442932 / 1.386936 (-0.944005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005351 / 0.011353 (-0.006002) | 0.003561 / 0.011008 (-0.007448) | 0.048890 / 0.038508 (0.010382) | 0.054093 / 0.023109 (0.030984) | 0.274397 / 0.275898 (-0.001501) | 0.296980 / 0.323480 (-0.026500) | 0.004126 / 0.007986 (-0.003860) | 0.002751 / 0.004328 (-0.001578) | 0.049131 / 0.004250 (0.044880) | 0.040769 / 0.037052 (0.003716) | 0.279147 / 0.258489 (0.020658) | 0.302014 / 0.293841 (0.008173) | 0.029847 / 0.128546 (-0.098699) | 0.010710 / 0.075646 (-0.064936) | 0.057626 / 0.419271 (-0.361645) | 0.032801 / 0.043533 (-0.010732) | 0.272698 / 0.255139 (0.017559) | 0.289238 / 0.283200 (0.006039) | 0.017876 / 0.141683 (-0.123807) | 1.152059 / 1.452155 (-0.300096) | 1.212289 / 1.492716 (-0.280427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092914 / 0.018006 (0.074908) | 0.303092 / 0.000490 (0.302603) | 0.000214 / 0.000200 (0.000014) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022074 / 0.037411 (-0.015337) | 0.070109 / 0.014526 (0.055583) | 0.083360 / 0.176557 (-0.093196) | 0.122445 / 0.737135 (-0.614690) | 0.083625 / 0.296338 (-0.212714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282788 / 0.215209 (0.067579) | 2.789229 / 2.077655 (0.711574) | 1.571077 / 1.504120 (0.066957) | 1.452627 / 1.541195 (-0.088567) | 1.493176 / 1.468490 (0.024686) | 0.556892 / 4.584777 (-4.027885) | 2.442771 / 3.745712 (-1.302941) | 2.826316 / 5.269862 (-2.443545) | 1.758276 / 4.565676 (-2.807401) | 0.063039 / 0.424275 (-0.361236) | 0.004928 / 0.007607 (-0.002679) | 0.338247 / 0.226044 (0.112202) | 3.346344 / 2.268929 (1.077416) | 1.952520 / 55.444624 (-53.492104) | 1.664520 / 6.876477 (-5.211956) | 1.701528 / 2.142072 (-0.440544) | 0.634746 / 4.805227 (-4.170481) | 0.116879 / 6.500664 (-6.383786) | 0.040990 / 0.075469 (-0.034479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969521 / 1.841788 (-0.872267) | 12.431395 / 8.074308 (4.357087) | 10.907503 / 10.191392 (0.716111) | 0.131028 / 0.680424 (-0.549396) | 0.015239 / 0.534201 (-0.518962) | 0.290793 / 0.579283 (-0.288490) | 0.275072 / 0.434364 (-0.159292) | 0.331036 / 0.540337 (-0.209301) | 0.567858 / 1.386936 (-0.819078) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#092118fc00f7dd718ab3643739d7b23ff16c9eff \"CML watermark\")\n"
] | 2023-12-15T13:55:12 | 2023-12-15T15:04:33 | 2023-12-15T14:58:22 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6502",
"html_url": "https://github.com/huggingface/datasets/pull/6502",
"diff_url": "https://github.com/huggingface/datasets/pull/6502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6502.patch",
"merged_at": "2023-12-15T14:58:22"
} | Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6502/timeline | null | null | true |
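For context on the PR above: `datasets` needs `torch.Generator` objects to be picklable so that `.map()` transforms closing over a generator can be hashed and cached deterministically (see the linked forum thread). The PR implements this inside the library's own pickling utilities; the snippet below is only a generic illustration of the same idea using the standard-library `copyreg` hook, reducing a generator to its device plus RNG state (CPU generators assumed).
```python
import copyreg
import pickle

import torch


def _rebuild_generator(device: str, state: torch.Tensor) -> torch.Generator:
    # Recreate a generator on the original device and restore its RNG state.
    gen = torch.Generator(device=device)
    gen.set_state(state)
    return gen


def _reduce_generator(gen: torch.Generator):
    # Tell pickle how to turn a generator into (callable, args).
    return _rebuild_generator, (str(gen.device), gen.get_state())


# Register the reducer so pickle.dumps(torch.Generator()) no longer fails.
copyreg.pickle(torch.Generator, _reduce_generator)

g = torch.Generator().manual_seed(42)
g2 = pickle.loads(pickle.dumps(g))
assert torch.equal(g.get_state(), g2.get_state())
```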
https://api.github.com/repos/huggingface/datasets/issues/6501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6501/comments | https://api.github.com/repos/huggingface/datasets/issues/6501/events | https://github.com/huggingface/datasets/issues/6501 | 2,043,377,240 | I_kwDODunzps55y3ZY | 6,501 | OverflowError: value too large to convert to int32_t | {
"login": "zhangfan-algo",
"id": 47747764,
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangfan-algo",
"html_url": "https://github.com/zhangfan-algo",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-15T10:10:21 | 2023-12-15T10:10:21 | null | NONE | null | null | null | ### Describe the bug
![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)
### Steps to reproduce the bug
just loading datasets
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6501/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6500/comments | https://api.github.com/repos/huggingface/datasets/issues/6500/events | https://github.com/huggingface/datasets/pull/6500 | 2,043,258,633 | PR_kwDODunzps5iFc6e | 6,500 | Enable setting config as default when push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6500). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review @huggingface/datasets. ",
"Also what if the config is being overwritten and it was the default config and the user doesn't pass `set_default` ?\r\nI'd expect the config to keep being the default one but lmk what you think",
"How can you unset a config as the default one? In the case you mentioned, I would expect the config not being the default one.",
"Maybe by passing `set_default=False` ? (set_default can be None by default)"
] | 2023-12-15T09:17:41 | 2023-12-15T15:22:27 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6500",
"html_url": "https://github.com/huggingface/datasets/pull/6500",
"diff_url": "https://github.com/huggingface/datasets/pull/6500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6500.patch",
"merged_at": null
} | Fix #6497. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6500/timeline | null | null | true |
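For context on the PR above: the goal (see #6497 and the review comments) is to let users flag a configuration as the default one when pushing it to the Hub. Assuming the `set_default` argument lands with the semantics discussed in the comments, usage might look like the sketch below; the repository name, file names, and exact argument semantics are illustrative, not confirmed API.
```python
from datasets import load_dataset

# Hypothetical local files and Hub repository, used only for illustration.
ds_en = load_dataset("csv", data_files="train_en.csv", split="train")
ds_fr = load_dataset("csv", data_files="train_fr.csv", split="train")

# Push the "en" config and mark it as the default, so that
# load_dataset("username/my_dataset") resolves to it with no config name.
# `set_default` is the flag proposed in this PR.
ds_en.push_to_hub("username/my_dataset", config_name="en", set_default=True)

# Other configs can be pushed without touching the default.
ds_fr.push_to_hub("username/my_dataset", config_name="fr")
```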
https://api.github.com/repos/huggingface/datasets/issues/6499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6499/comments | https://api.github.com/repos/huggingface/datasets/issues/6499/events | https://github.com/huggingface/datasets/pull/6499 | 2,043,166,976 | PR_kwDODunzps5iFIUF | 6,499 | docs: add reference Git over SSH | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003546 / 0.011008 (-0.007463) | 0.063335 / 0.038508 (0.024827) | 0.051987 / 0.023109 (0.028878) | 0.240429 / 0.275898 (-0.035469) | 0.260659 / 0.323480 (-0.062820) | 0.003866 / 0.007986 (-0.004120) | 0.002617 / 0.004328 (-0.001712) | 0.048653 / 0.004250 (0.044403) | 0.038176 / 0.037052 (0.001124) | 0.245496 / 0.258489 (-0.012993) | 0.277141 / 0.293841 (-0.016700) | 0.027886 / 0.128546 (-0.100660) | 0.010738 / 0.075646 (-0.064908) | 0.211255 / 0.419271 (-0.208016) | 0.045205 / 0.043533 (0.001672) | 0.243062 / 0.255139 (-0.012077) | 0.262877 / 0.283200 (-0.020323) | 0.023426 / 0.141683 (-0.118257) | 1.092247 / 1.452155 (-0.359908) | 1.161074 / 1.492716 (-0.331642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090488 / 0.018006 (0.072482) | 0.300993 / 0.000490 (0.300504) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018543 / 0.037411 (-0.018868) | 0.061418 / 0.014526 (0.046892) | 0.073242 / 0.176557 (-0.103314) | 0.120757 / 0.737135 (-0.616378) | 0.073967 / 0.296338 (-0.222372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282341 / 0.215209 (0.067132) | 2.741106 / 2.077655 (0.663451) | 1.416573 / 1.504120 (-0.087547) | 1.287904 / 1.541195 (-0.253291) | 1.309425 / 
1.468490 (-0.159065) | 0.582592 / 4.584777 (-4.002184) | 2.404866 / 3.745712 (-1.340846) | 2.895397 / 5.269862 (-2.374464) | 1.799864 / 4.565676 (-2.765812) | 0.064386 / 0.424275 (-0.359889) | 0.004920 / 0.007607 (-0.002687) | 0.330879 / 0.226044 (0.104835) | 3.287064 / 2.268929 (1.018135) | 1.765169 / 55.444624 (-53.679456) | 1.490442 / 6.876477 (-5.386034) | 1.530960 / 2.142072 (-0.611113) | 0.655939 / 4.805227 (-4.149288) | 0.118529 / 6.500664 (-6.382135) | 0.042350 / 0.075469 (-0.033119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959027 / 1.841788 (-0.882761) | 11.911284 / 8.074308 (3.836976) | 10.576898 / 10.191392 (0.385506) | 0.141038 / 0.680424 (-0.539386) | 0.014184 / 0.534201 (-0.520017) | 0.305335 / 0.579283 (-0.273948) | 0.267531 / 0.434364 (-0.166832) | 0.353362 / 0.540337 (-0.186975) | 0.466258 / 1.386936 (-0.920678) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003561 / 0.011008 (-0.007448) | 0.049181 / 0.038508 (0.010673) | 0.056664 / 0.023109 (0.033555) | 0.267142 / 0.275898 (-0.008756) | 0.291871 / 0.323480 (-0.031609) | 0.003996 / 0.007986 (-0.003990) | 0.003147 / 0.004328 (-0.001181) | 0.048527 / 0.004250 (0.044276) | 0.040239 / 0.037052 (0.003187) | 0.269728 / 0.258489 (0.011239) | 0.295531 / 0.293841 (0.001690) | 0.030316 / 0.128546 (-0.098231) | 0.010666 / 0.075646 (-0.064981) | 0.058176 / 0.419271 (-0.361095) | 0.033218 / 0.043533 (-0.010315) | 0.265383 / 0.255139 (0.010244) | 0.285102 / 0.283200 (0.001902) | 0.018295 / 0.141683 (-0.123388) | 1.117830 / 1.452155 (-0.334325) | 1.196919 / 1.492716 (-0.295798) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088547 / 0.018006 (0.070541) | 0.293220 / 0.000490 (0.292730) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022060 / 0.037411 (-0.015351) | 0.071973 / 0.014526 (0.057448) | 0.081721 / 0.176557 (-0.094836) | 0.119990 / 0.737135 (-0.617145) | 0.081639 / 0.296338 (-0.214700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293712 / 0.215209 (0.078503) | 2.872986 / 2.077655 (0.795331) | 1.568944 / 1.504120 (0.064824) | 1.434555 / 1.541195 (-0.106639) | 1.457747 / 1.468490 (-0.010743) | 0.559296 / 4.584777 (-4.025481) | 2.471845 / 3.745712 (-1.273867) | 2.840916 / 5.269862 (-2.428946) | 1.754909 / 4.565676 (-2.810768) | 0.064585 / 0.424275 (-0.359690) | 0.004992 / 0.007607 (-0.002615) | 0.349149 / 0.226044 (0.123104) | 3.385906 / 2.268929 (1.116977) | 1.940644 / 55.444624 (-53.503980) | 1.638300 / 6.876477 (-5.238177) | 1.649939 / 2.142072 (-0.492133) | 0.645680 / 4.805227 (-4.159547) | 0.118080 / 6.500664 (-6.382584) | 0.040643 / 0.075469 (-0.034826) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969965 / 1.841788 (-0.871822) | 12.099766 / 8.074308 (4.025457) | 10.550650 / 10.191392 (0.359258) | 0.131736 / 0.680424 (-0.548688) | 0.015483 / 0.534201 (-0.518718) | 0.289231 / 0.579283 (-0.290052) | 0.287505 / 0.434364 (-0.146858) | 0.327326 / 0.540337 (-0.213011) | 0.570364 / 1.386936 (-0.816572) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#533c38cef16111e9e8154eeb76c207f1f4936ddf \"CML watermark\")\n"
] | 2023-12-15T08:38:31 | 2023-12-15T11:48:47 | 2023-12-15T11:42:38 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6499",
"html_url": "https://github.com/huggingface/datasets/pull/6499",
"diff_url": "https://github.com/huggingface/datasets/pull/6499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6499.patch",
"merged_at": "2023-12-15T11:42:38"
} | see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6499/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6498/comments | https://api.github.com/repos/huggingface/datasets/issues/6498/events | https://github.com/huggingface/datasets/pull/6498 | 2,042,075,969 | PR_kwDODunzps5iBcFj | 6,498 | Fallback on dataset script if user wants to load default config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> I was just thinking: what if the user does not pass a config name and the dataset has only a config with a name different from \"default\"?\r\n\r\nYou mean if there is a DEFAULT_CONFIG_NAME defined in the script but the dataset only has one configuration ? We can't easily get the number of configs without running the python code so I don't think we can support detect this case\r\n",
"Most datasets with a script don't define DEFAULT_CONFIG_NAME if there is only one configuration anyway.\r\n\r\nSo there is no issue e.g. for `squad`",
"> I was trying to mean the case where DEFAULT_CONFIG_NAME is None but there is only a single config in BUILDER_CONFIGS, with a name different from \"default\".\r\n\r\nIn this case we can detect if \"DEFAULT_CONFIG_NAME\" is not mentioned and use the Parquet export. If it is mentioned (and maybe it is set to None or to the single config) I consider that it may have multiple configs and fall back on using the script",
"... but the user does not pass the config name.",
"In this case we load the single configuration (this is how a DatasetBuilder works)",
"see \r\n\r\nhttps://github.com/huggingface/datasets/blob/2feaa589de86dd85941301fc8c3fa091731a67c0/src/datasets/builder.py#L532-L532",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005122 / 0.011353 (-0.006231) | 0.003565 / 0.011008 (-0.007443) | 0.062706 / 0.038508 (0.024198) | 0.049314 / 0.023109 (0.026205) | 0.247325 / 0.275898 (-0.028573) | 0.269788 / 0.323480 (-0.053692) | 0.003895 / 0.007986 (-0.004090) | 0.002788 / 0.004328 (-0.001540) | 0.048615 / 0.004250 (0.044365) | 0.037591 / 0.037052 (0.000539) | 0.253495 / 0.258489 (-0.004994) | 0.281200 / 0.293841 (-0.012641) | 0.027712 / 0.128546 (-0.100834) | 0.010901 / 0.075646 (-0.064745) | 0.205577 / 0.419271 (-0.213694) | 0.035989 / 0.043533 (-0.007544) | 0.252978 / 0.255139 (-0.002161) | 0.268042 / 0.283200 (-0.015157) | 0.017857 / 0.141683 (-0.123826) | 1.096633 / 1.452155 (-0.355521) | 1.147026 / 1.492716 (-0.345691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095609 / 0.018006 (0.077603) | 0.311941 / 0.000490 (0.311451) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019042 / 0.037411 (-0.018369) | 0.060549 / 0.014526 (0.046023) | 0.074761 / 0.176557 (-0.101796) | 0.121729 / 0.737135 (-0.615406) | 0.075661 / 0.296338 (-0.220677) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284774 / 0.215209 (0.069565) | 2.764576 / 2.077655 (0.686921) | 1.489926 / 1.504120 (-0.014194) | 1.387276 / 1.541195 (-0.153919) | 1.400931 / 
1.468490 (-0.067559) | 0.555623 / 4.584777 (-4.029154) | 2.409488 / 3.745712 (-1.336224) | 2.781053 / 5.269862 (-2.488808) | 1.750472 / 4.565676 (-2.815204) | 0.062232 / 0.424275 (-0.362043) | 0.004974 / 0.007607 (-0.002633) | 0.336324 / 0.226044 (0.110280) | 3.286619 / 2.268929 (1.017691) | 1.825070 / 55.444624 (-53.619554) | 1.537993 / 6.876477 (-5.338484) | 1.586520 / 2.142072 (-0.555553) | 0.640090 / 4.805227 (-4.165138) | 0.117637 / 6.500664 (-6.383027) | 0.042318 / 0.075469 (-0.033151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964051 / 1.841788 (-0.877736) | 11.706259 / 8.074308 (3.631951) | 10.752311 / 10.191392 (0.560919) | 0.128117 / 0.680424 (-0.552307) | 0.014001 / 0.534201 (-0.520200) | 0.286255 / 0.579283 (-0.293028) | 0.263810 / 0.434364 (-0.170554) | 0.329347 / 0.540337 (-0.210991) | 0.437349 / 1.386936 (-0.949587) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003586 / 0.011008 (-0.007422) | 0.049339 / 0.038508 (0.010831) | 0.051287 / 0.023109 (0.028178) | 0.274397 / 0.275898 (-0.001501) | 0.292977 / 0.323480 (-0.030503) | 0.004029 / 0.007986 (-0.003957) | 0.002727 / 0.004328 (-0.001602) | 0.048779 / 0.004250 (0.044528) | 0.040075 / 0.037052 (0.003022) | 0.277676 / 0.258489 (0.019187) | 0.301963 / 0.293841 (0.008122) | 0.029340 / 0.128546 (-0.099206) | 0.010714 / 0.075646 (-0.064932) | 0.057253 / 0.419271 (-0.362018) | 0.033426 / 0.043533 (-0.010107) | 0.276673 / 0.255139 (0.021534) | 0.291053 / 0.283200 (0.007854) | 0.017660 / 0.141683 (-0.124023) | 1.122354 / 1.452155 (-0.329800) | 1.180381 / 1.492716 (-0.312335) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091903 / 0.018006 (0.073897) | 0.300720 / 0.000490 (0.300231) | 0.000288 / 0.000200 (0.000088) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021521 / 0.037411 (-0.015890) | 0.068233 / 0.014526 (0.053707) | 0.081245 / 0.176557 (-0.095312) | 0.119996 / 0.737135 (-0.617139) | 0.082483 / 0.296338 (-0.213856) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302776 / 0.215209 (0.087567) | 2.950776 / 2.077655 (0.873122) | 1.631032 / 1.504120 (0.126912) | 1.502021 / 1.541195 (-0.039174) | 1.514213 / 1.468490 (0.045723) | 0.578246 / 4.584777 (-4.006531) | 2.443768 / 3.745712 (-1.301944) | 2.827811 / 5.269862 (-2.442051) | 1.771529 / 4.565676 (-2.794148) | 0.064479 / 0.424275 (-0.359797) | 0.005061 / 0.007607 (-0.002546) | 0.350966 / 0.226044 (0.124922) | 3.458616 / 2.268929 (1.189687) | 1.967917 / 55.444624 (-53.476707) | 1.704661 / 6.876477 (-5.171815) | 1.698895 / 2.142072 (-0.443178) | 0.663259 / 4.805227 (-4.141968) | 0.122140 / 6.500664 (-6.378525) | 0.041099 / 0.075469 (-0.034371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972080 / 1.841788 (-0.869708) | 12.123286 / 8.074308 (4.048978) | 10.819854 / 10.191392 (0.628462) | 0.131486 / 0.680424 (-0.548938) | 0.015785 / 0.534201 (-0.518416) | 0.290048 / 0.579283 (-0.289235) | 0.277822 / 0.434364 (-0.156542) | 0.325949 / 0.540337 (-0.214388) | 0.577681 / 1.386936 (-0.809255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30f6a2d9af183eba4501f0b8d90e9200bdca6bb1 \"CML watermark\")\n"
] | 2023-12-14T16:46:01 | 2023-12-15T13:16:56 | 2023-12-15T13:10:48 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6498",
"html_url": "https://github.com/huggingface/datasets/pull/6498",
"diff_url": "https://github.com/huggingface/datasets/pull/6498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6498.patch",
"merged_at": "2023-12-15T13:10:48"
} | Right now this code is failing on `main`:
```python
load_dataset("openbookqa")
```
This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one.
I fixed this by simply falling back on using the dataset script (which tells the user to pass `trust_remote_code=True`):
```python
load_dataset("openbookqa", trust_remote_code=True)
```
Note that if the user happens to specify a config name, I don't fall back on the script, since we can use the Parquet export in this case (no need to know which config is the default; a sketch of this decision logic follows this entry):
```python
load_dataset("openbookqa", "main")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6498/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6497/comments | https://api.github.com/repos/huggingface/datasets/issues/6497/events | https://github.com/huggingface/datasets/issues/6497 | 2,041,994,274 | I_kwDODunzps55tlwi | 6,497 | Support setting a default config name in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-14T15:59:03 | 2023-12-15T08:26:20 | null | MEMBER | null | null | null | In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6497/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6496/comments | https://api.github.com/repos/huggingface/datasets/issues/6496/events | https://github.com/huggingface/datasets/issues/6496 | 2,041,589,386 | I_kwDODunzps55sC6K | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | {
"login": "GeorgesLorre",
"id": 35808396,
"node_id": "MDQ6VXNlcjM1ODA4Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeorgesLorre",
"html_url": "https://github.com/GeorgesLorre",
"followers_url": "https://api.github.com/users/GeorgesLorre/followers",
"following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions",
"organizations_url": "https://api.github.com/users/GeorgesLorre/orgs",
"repos_url": "https://api.github.com/users/GeorgesLorre/repos",
"events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeorgesLorre/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`."
] | 2023-12-14T11:24:54 | 2023-12-14T12:22:21 | null | NONE | null | null | null | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co./api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```python
import os
import random

import dask.dataframe as dd
import pandas as pd

import datasets
import huggingface_hub

huggingface_hub.login(token=os.getenv("HF_TOKEN"))

# Build a small dask dataframe split into 3 partitions
data = {"number": [random.randint(0, 10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)

schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema

repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)

# Writing the partitions through the Hub-backed filesystem is what raises the 412 error
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
I would expect the write to the Hub to succeed without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
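A possible workaround, offered here only as a hedged sketch (not part of the original report): write the parquet partitions to a local directory first, then push them to the Hub in a single commit with `HfApi.upload_folder`, so per-partition commits cannot race each other.
```python
# Hedged workaround sketch: one commit for all partitions instead of one commit per partition.
import tempfile

import dask.dataframe as dd
from huggingface_hub import HfApi


def push_dask_dataframe(dataframe: dd.DataFrame, repo_id: str, schema) -> None:
    with tempfile.TemporaryDirectory() as tmp_dir:
        dd.to_parquet(dataframe, path=tmp_dir, schema=schema)  # local write, no Hub commits yet
        HfApi().upload_folder(
            repo_id=repo_id,
            repo_type="dataset",
            folder_path=tmp_dir,
            path_in_repo="data",  # mirrors the layout used in the reproducer
        )  # single commit containing every parquet file
```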
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6496/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6494/comments | https://api.github.com/repos/huggingface/datasets/issues/6494/events | https://github.com/huggingface/datasets/issues/6494 | 2,039,684,839 | I_kwDODunzps55kx7n | 6,494 | Image Data loaded Twice | {
"login": "baowuzhida",
"id": 28867010,
"node_id": "MDQ6VXNlcjI4ODY3MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/28867010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baowuzhida",
"html_url": "https://github.com/baowuzhida",
"followers_url": "https://api.github.com/users/baowuzhida/followers",
"following_url": "https://api.github.com/users/baowuzhida/following{/other_user}",
"gists_url": "https://api.github.com/users/baowuzhida/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baowuzhida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baowuzhida/subscriptions",
"organizations_url": "https://api.github.com/users/baowuzhida/orgs",
"repos_url": "https://api.github.com/users/baowuzhida/repos",
"events_url": "https://api.github.com/users/baowuzhida/events{/privacy}",
"received_events_url": "https://api.github.com/users/baowuzhida/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-13T13:11:42 | 2023-12-13T13:11:42 | null | NONE | null | null | null | ### Describe the bug
![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6)
While following https://huggingface.co./docs/datasets/image_load to load image data from a folder, I noticed that each image was read twice in the returned data. As you can see in the attached screenshot, there are only four images in the train folder, but loading returns eight images.
### Steps to reproduce the bug
```python
from datasets import Dataset, load_dataset

dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# print(dataset["train"][0]["image"] == dataset["train"][1]["image"])
print(dataset)
print(dataset["train"]["image"])
print(len(dataset["train"]["image"]))
```
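A hedged diagnostic sketch (not part of the original report): disabling decoding exposes the file path behind each example, which should show whether the same files are being matched twice by the imagefolder patterns.
```python
# Hedged diagnostic sketch: inspect the resolved file paths behind each example.
from datasets import Image

undecoded = dataset["train"].cast_column("image", Image(decode=False))
print([example["image"]["path"] for example in undecoded])  # duplicated paths => files matched twice
```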
### Expected behavior
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8
})
})
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>]
8
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6494/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6495/comments | https://api.github.com/repos/huggingface/datasets/issues/6495/events | https://github.com/huggingface/datasets/issues/6495 | 2,039,708,529 | I_kwDODunzps55k3tx | 6,495 | Newline characters don't behave as expected when calling dataset.info | {
"login": "gerald-wrona",
"id": 32300890,
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerald-wrona",
"html_url": "https://github.com/gerald-wrona",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-12T23:07:51 | 2023-12-13T13:24:22 | null | NONE | null | null | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co./docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6495/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6493/comments | https://api.github.com/repos/huggingface/datasets/issues/6493/events | https://github.com/huggingface/datasets/pull/6493 | 2,038,221,490 | PR_kwDODunzps5h0XJK | 6,493 | Lazy data files resolution and offline cache reload | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6493). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Naive question: is there any breaking change when loading?\r\n\r\nNo breaking changes except that the cache folders are different\r\n\r\ne.g. for glue sst2 (has parquet export)\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/fd8e86499fa5c264fcaad392a8f49ddf58bf4037\r\nOn main\r\n~/.cache/huggingface/datasets/glue/sst2/0.0.0/74a75637ac4acd3f\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\n```\r\n\r\ne.g. for wikimedia/wikipedia 20231101.ab (has metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/4cb9b0d719291f1a10f96f67d609c5d442980dc9\r\nOn main (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/cfa627e27933df13\r\nOn 2.15.0 (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/e92ee7a91c466564\r\n```\r\n\r\n\r\ne.g. for lhoestq/demo1 (no metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default/0.0.0/87ecf163bedca9d80598b528940a9c4f99e14c11\r\nOn main\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-8a4a0b7a240d3c5e/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-59d4029e0bb36ae0/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\n```",
"There was a last bug I just fixed: if you modify a dataset and reload it from the hub it won't download the new version - I think I need to use another hash to name the cache directory\r\nedit: fixed",
"I switched to using the git commit sha for the cache directory, which is now `config/version/commit_sha` :) much cleaner than before.\r\n\r\nAnd for local file it's a hash that takes into account the resolved files (and their last modified dates)",
"I also ran the `transformers` CI on this branch and it's green",
"FYI `huggingface_hub` will have a release on tuesday/wednesday (will speed up load_dataset data files resolution which is now needed for datasets loaded from parquet export) so we can aim on merging this around the same time and do a release on thursday"
] | 2023-12-12T17:15:17 | 2023-12-15T18:16:18 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6493",
"html_url": "https://github.com/huggingface/datasets/pull/6493",
"diff_url": "https://github.com/huggingface/datasets/pull/6493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6493.patch",
"merged_at": null
} | Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459
This PR should be merged instead of the two individual ones, since they conflict with each other.
## Offline cache reload
It can reload datasets that were pushed to the Hub if they exist in the cache.
Example:
```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
and later, without connection:
```python
>>> load_dataset("lhoestq/tmp")
Using the latest cached version of the dataset since lhoestq/tmp couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/default/0.0.0/da0e902a945afeb9 (last modified on Wed Dec 13 14:55:52 2023).
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
- Updated `CachedDatasetModuleFactory` to look for datasets in the cache at `<namespace>___<dataset_name>/<config_id>`
- Since the metadata configs parameters are not available in offline mode, we don't know which folder to load (config_id and hash change), so I simply load the latest one
- I instantiate a BuilderConfig even if there is no metadata config with the right config_name
- Its config_id is equal to the config_name to be able to retrieve it in the cache (no more suffix for configs from metadata configs)
- We can reload this config if offline mode by specifying the right config_name (same as online !)
- Consequences of this change:
- Only when there are user's parameters it creates a custom builder config with config_id = config_name + user parameters hash
- the hash used to name the cache folder takes into account the metadata config and the dataset info, so that the right cache can be reloaded when there is internet connection without redownloading the data or resolving the data files. For local directories I hash the builder configs and dataset info, and for datasets on the hub I use the commit sha as hash.
- cache directories now look like `config/version/commit_sha` for hub datasets which is clean :)
Fix https://github.com/huggingface/datasets/issues/3547
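For illustration, a small usage sketch (assuming the behaviour described above): the offline code path can also be exercised explicitly with the `HF_DATASETS_OFFLINE` environment variable instead of actually cutting the connection.
```python
# Sketch: simulate the "no connection" case; the variable must be set before importing datasets.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("lhoestq/tmp")  # reloads the latest cached configuration instead of contacting the Hub
```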
## Lazy data files resolution
This makes the following code run in about 2 seconds instead of more than 10 seconds:
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
For some datasets with many configs and files it can be up to 100x faster.
This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.
The data files are only resolved in the builder `__init__`. To do so, I added `DataFilesPatternsList` and `DataFilesPatternsDict`, which have a `.resolve()` method returning resolved `DataFilesList` and `DataFilesDict` objects.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6492/comments | https://api.github.com/repos/huggingface/datasets/issues/6492/events | https://github.com/huggingface/datasets/pull/6492 | 2,037,987,267 | PR_kwDODunzps5hzjhQ | 6,492 | Make push_to_hub return CommitInfo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6492). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This PR is ready to review @huggingface/datasets.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006259) | 0.003695 / 0.011008 (-0.007313) | 0.064648 / 0.038508 (0.026140) | 0.054677 / 0.023109 (0.031568) | 0.242007 / 0.275898 (-0.033891) | 0.265216 / 0.323480 (-0.058264) | 0.003847 / 0.007986 (-0.004138) | 0.003773 / 0.004328 (-0.000556) | 0.048595 / 0.004250 (0.044345) | 0.038122 / 0.037052 (0.001070) | 0.245698 / 0.258489 (-0.012791) | 0.278095 / 0.293841 (-0.015746) | 0.027488 / 0.128546 (-0.101058) | 0.011002 / 0.075646 (-0.064644) | 0.211443 / 0.419271 (-0.207829) | 0.035664 / 0.043533 (-0.007869) | 0.244754 / 0.255139 (-0.010385) | 0.261078 / 0.283200 (-0.022121) | 0.017768 / 0.141683 (-0.123915) | 1.130765 / 1.452155 (-0.321390) | 1.189825 / 1.492716 (-0.302891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093027 / 0.018006 (0.075021) | 0.302193 / 0.000490 (0.301703) | 0.000207 / 0.000200 (0.000007) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018413 / 0.037411 (-0.018999) | 0.062715 / 0.014526 (0.048190) | 0.073287 / 0.176557 (-0.103269) | 0.120394 / 0.737135 (-0.616741) | 0.077573 / 0.296338 (-0.218765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284445 / 0.215209 (0.069236) | 2.780718 / 2.077655 (0.703063) | 1.460988 / 1.504120 (-0.043132) | 1.345799 / 1.541195 (-0.195395) | 1.399892 / 
1.468490 (-0.068598) | 0.576051 / 4.584777 (-4.008726) | 2.418792 / 3.745712 (-1.326921) | 2.901330 / 5.269862 (-2.368532) | 1.765083 / 4.565676 (-2.800593) | 0.063555 / 0.424275 (-0.360720) | 0.004991 / 0.007607 (-0.002616) | 0.339657 / 0.226044 (0.113613) | 3.372963 / 2.268929 (1.104034) | 1.853667 / 55.444624 (-53.590958) | 1.552022 / 6.876477 (-5.324454) | 1.616452 / 2.142072 (-0.525620) | 0.652309 / 4.805227 (-4.152919) | 0.121125 / 6.500664 (-6.379539) | 0.042420 / 0.075469 (-0.033049) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954514 / 1.841788 (-0.887274) | 11.853736 / 8.074308 (3.779428) | 10.624571 / 10.191392 (0.433179) | 0.134118 / 0.680424 (-0.546306) | 0.014200 / 0.534201 (-0.520001) | 0.290106 / 0.579283 (-0.289177) | 0.270637 / 0.434364 (-0.163727) | 0.336155 / 0.540337 (-0.204182) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005282 / 0.011353 (-0.006071) | 0.003526 / 0.011008 (-0.007482) | 0.048994 / 0.038508 (0.010486) | 0.055345 / 0.023109 (0.032236) | 0.271587 / 0.275898 (-0.004311) | 0.294676 / 0.323480 (-0.028804) | 0.003989 / 0.007986 (-0.003996) | 0.002594 / 0.004328 (-0.001735) | 0.048310 / 0.004250 (0.044059) | 0.039945 / 0.037052 (0.002893) | 0.277304 / 0.258489 (0.018815) | 0.312017 / 0.293841 (0.018176) | 0.028364 / 0.128546 (-0.100182) | 0.010683 / 0.075646 (-0.064963) | 0.057990 / 0.419271 (-0.361281) | 0.032418 / 0.043533 (-0.011115) | 0.273835 / 0.255139 (0.018697) | 0.288585 / 0.283200 (0.005385) | 0.018964 / 0.141683 (-0.122719) | 1.148863 / 1.452155 (-0.303292) | 1.195684 / 1.492716 (-0.297032) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091967 / 0.018006 (0.073960) | 0.303236 / 0.000490 (0.302747) | 0.000214 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021960 / 0.037411 (-0.015452) | 0.068744 / 0.014526 (0.054218) | 0.081167 / 0.176557 (-0.095390) | 0.119623 / 0.737135 (-0.617513) | 0.084965 / 0.296338 (-0.211373) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297740 / 0.215209 (0.082531) | 2.924856 / 2.077655 (0.847201) | 1.602080 / 1.504120 (0.097960) | 1.494083 / 1.541195 (-0.047112) | 1.544662 / 1.468490 (0.076172) | 0.581212 / 4.584777 (-4.003565) | 2.451064 / 3.745712 (-1.294648) | 2.875213 / 5.269862 (-2.394649) | 1.780777 / 4.565676 (-2.784900) | 0.063751 / 0.424275 (-0.360524) | 0.004967 / 0.007607 (-0.002641) | 0.350321 / 0.226044 (0.124276) | 3.449585 / 2.268929 (1.180657) | 1.977666 / 55.444624 (-53.466958) | 1.685125 / 6.876477 (-5.191351) | 1.734466 / 2.142072 (-0.407606) | 0.657477 / 4.805227 (-4.147750) | 0.116767 / 6.500664 (-6.383898) | 0.041400 / 0.075469 (-0.034069) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985751 / 1.841788 (-0.856037) | 12.300065 / 8.074308 (4.225756) | 10.608238 / 10.191392 (0.416846) | 0.139907 / 0.680424 (-0.540517) | 0.015379 / 0.534201 (-0.518822) | 0.283528 / 0.579283 (-0.295755) | 0.278751 / 0.434364 (-0.155613) | 0.328811 / 0.540337 (-0.211527) | 0.584041 / 1.386936 (-0.802895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef0f986518bd252c5314a7e3a419dedcbb166630 \"CML watermark\")\n"
] | 2023-12-12T15:18:16 | 2023-12-13T14:29:01 | 2023-12-13T14:22:41 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6492",
"html_url": "https://github.com/huggingface/datasets/pull/6492",
"diff_url": "https://github.com/huggingface/datasets/pull/6492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6492.patch",
"merged_at": "2023-12-13T14:22:41"
} | Make `push_to_hub` return `CommitInfo`.
This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID.
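For illustration, a minimal usage sketch of the new return value (the attributes come from `huggingface_hub.CommitInfo`; the repo id is hypothetical):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
commit_info = ds.push_to_hub("username/my-dataset", create_pr=True)  # hypothetical repo id
print(commit_info.commit_url)
print(commit_info.pr_url)  # populated when create_pr=True
```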
CC: @severo for the use case in https://huggingface.co./datasets/jmhessel/newyorker_caption_contest/discussions/4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6492/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6492/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6491/comments | https://api.github.com/repos/huggingface/datasets/issues/6491/events | https://github.com/huggingface/datasets/pull/6491 | 2,037,690,643 | PR_kwDODunzps5hyiTY | 6,491 | Fix metrics dead link | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-12T12:51:49 | 2023-12-12T12:58:25 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6491",
"html_url": "https://github.com/huggingface/datasets/pull/6491",
"diff_url": "https://github.com/huggingface/datasets/pull/6491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6491.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6491/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6490/comments | https://api.github.com/repos/huggingface/datasets/issues/6490/events | https://github.com/huggingface/datasets/issues/6490 | 2,037,204,892 | I_kwDODunzps55bUec | 6,490 | `load_dataset(...,save_infos=True)` not working without loading script | {
"login": "morganveyret",
"id": 114978051,
"node_id": "U_kgDOBtptAw",
"avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morganveyret",
"html_url": "https://github.com/morganveyret",
"followers_url": "https://api.github.com/users/morganveyret/followers",
"following_url": "https://api.github.com/users/morganveyret/following{/other_user}",
"gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions",
"organizations_url": "https://api.github.com/users/morganveyret/orgs",
"repos_url": "https://api.github.com/users/morganveyret/repos",
"events_url": "https://api.github.com/users/morganveyret/events{/privacy}",
"received_events_url": "https://api.github.com/users/morganveyret/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema "
] | 2023-12-12T08:09:18 | 2023-12-12T08:36:22 | null | NONE | null | null | null | ### Describe the bug
It seems that saving the dataset infos back into the card file is not working for datasets without a loading script.
After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`), this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`).
### Steps to reproduce the bug
1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True`
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`) (see the code sketch below)
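A minimal code rendering of the steps above (the folder name is hypothetical):
```python
# Sketch of the reported steps; "my_dataset/" is a hypothetical local folder containing
# data files (e.g. JSON) and a README.md without a dataset_info section.
from datasets import load_dataset

ds = load_dataset("my_dataset", save_infos=True)
# Expected: my_dataset/README.md gets updated with the computed dataset infos.
# Observed: it stays unchanged, and a README.md appears under the installed package instead,
# e.g. .../site-packages/datasets/packaged_modules/json/README.md
```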
### Expected behavior
The dataset README.md should be updated and no file should be created in the python environment.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6490/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6489/comments | https://api.github.com/repos/huggingface/datasets/issues/6489/events | https://github.com/huggingface/datasets/issues/6489 | 2,036,743,777 | I_kwDODunzps55Zj5h | 6,489 | load_dataset imageflder for aws s3 path | {
"login": "segalinc",
"id": 9353106,
"node_id": "MDQ6VXNlcjkzNTMxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/segalinc",
"html_url": "https://github.com/segalinc",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segalinc/subscriptions",
"organizations_url": "https://api.github.com/users/segalinc/orgs",
"repos_url": "https://api.github.com/users/segalinc/repos",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"received_events_url": "https://api.github.com/users/segalinc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2023-12-12T00:08:43 | 2023-12-12T00:09:27 | null | NONE | null | null | null | ### Feature request
I would like to load a dataset from S3 using the imagefolder option, with something like:
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
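For reference, a hedged sketch of what may already be possible today with explicit `data_files` and fsspec `storage_options` (bucket path and credentials are placeholders, and this is not the requested API):
```python
# Hedged sketch: pass the S3 files explicitly with fsspec storage options (requires s3fs).
from datasets import load_dataset

storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}  # placeholders
ds = load_dataset(
    "imagefolder",
    data_files={"train": "s3://my-bucket/lsun/train/bedroom/**"},  # placeholder bucket/prefix
    storage_options=storage_options,
    streaming=True,
)
```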
### Motivation
No need to pass `data_files`.
### Your contribution
No experience with this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6489/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6488/comments | https://api.github.com/repos/huggingface/datasets/issues/6488/events | https://github.com/huggingface/datasets/issues/6488 | 2,035,899,898 | I_kwDODunzps55WV36 | 6,488 | 429 Client Error | {
"login": "sasaadi",
"id": 7882383,
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sasaadi",
"html_url": "https://github.com/sasaadi",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Transferring repos as this is a datasets issue "
] | 2023-12-11T15:06:01 | 2023-12-11T15:34:23 | null | NONE | null | null | null | Hello, I was downloading the following dataset and, after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it?
Thanks
Dataset:
https://huggingface.co./datasets/cerebras/SlimPajama-627B
Error:
`requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co./datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
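A hedged sketch (not a guaranteed fix): raising the retry count via `DownloadConfig` can help ride out transient 429 rate limits.
```python
# Hedged sketch: retry transient 429 responses instead of failing immediately.
from datasets import DownloadConfig, load_dataset

ds = load_dataset(
    "cerebras/SlimPajama-627B",
    download_config=DownloadConfig(max_retries=5),
)
```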
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6488/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6487/comments | https://api.github.com/repos/huggingface/datasets/issues/6487/events | https://github.com/huggingface/datasets/pull/6487 | 2,035,424,254 | PR_kwDODunzps5hqyfV | 6,487 | Update builder hash with info | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Closing this one in favor of https://github.com/huggingface/datasets/pull/6458/commits/565c294fc12bc547730a023a610ed4f92313d8fb in https://github.com/huggingface/datasets/pull/6458"
] | 2023-12-11T11:09:16 | 2023-12-11T11:41:34 | 2023-12-11T11:41:34 | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6487",
"html_url": "https://github.com/huggingface/datasets/pull/6487",
"diff_url": "https://github.com/huggingface/datasets/pull/6487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6487.patch",
"merged_at": null
} | Currently if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change.
This is problematic because you want to regenerate a dataset if you change the features or the split sizes for example (e.g. after push_to_hub)
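For context, a sketch of how the stale cache currently has to be worked around (repo id hypothetical):
```python
# Hedged workaround sketch: force regeneration when the cache ignores a dataset_info change.
from datasets import load_dataset

ds = load_dataset("username/my-dataset", download_mode="force_redownload")  # hypothetical repo id
```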
Ideally we should take the resolved files into account as well but this will be for another PR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6487/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6486/comments | https://api.github.com/repos/huggingface/datasets/issues/6486/events | https://github.com/huggingface/datasets/pull/6486 | 2,035,206,206 | PR_kwDODunzps5hqCSc | 6,486 | Fix docs phrasing about supported formats when sharing a dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6486). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006311) | 0.003452 / 0.011008 (-0.007557) | 0.061845 / 0.038508 (0.023337) | 0.052042 / 0.023109 (0.028933) | 0.241791 / 0.275898 (-0.034107) | 0.264639 / 0.323480 (-0.058841) | 0.003940 / 0.007986 (-0.004045) | 0.002768 / 0.004328 (-0.001560) | 0.047851 / 0.004250 (0.043600) | 0.037599 / 0.037052 (0.000547) | 0.251462 / 0.258489 (-0.007028) | 0.274737 / 0.293841 (-0.019104) | 0.027723 / 0.128546 (-0.100823) | 0.010510 / 0.075646 (-0.065137) | 0.205581 / 0.419271 (-0.213691) | 0.035504 / 0.043533 (-0.008029) | 0.242380 / 0.255139 (-0.012759) | 0.259791 / 0.283200 (-0.023409) | 0.017752 / 0.141683 (-0.123931) | 1.089289 / 1.452155 (-0.362865) | 1.161958 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094288 / 0.018006 (0.076282) | 0.303253 / 0.000490 (0.302763) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.060411 / 0.014526 (0.045885) | 0.074294 / 0.176557 (-0.102262) | 0.122934 / 0.737135 (-0.614201) | 0.074710 / 0.296338 (-0.221629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286394 / 0.215209 (0.071185) | 2.806145 / 2.077655 (0.728490) | 1.497071 / 1.504120 (-0.007049) | 1.362254 / 1.541195 (-0.178940) | 1.389642 / 
1.468490 (-0.078848) | 0.554503 / 4.584777 (-4.030274) | 2.348029 / 3.745712 (-1.397684) | 2.780862 / 5.269862 (-2.489000) | 1.728058 / 4.565676 (-2.837619) | 0.062617 / 0.424275 (-0.361658) | 0.004901 / 0.007607 (-0.002707) | 0.346267 / 0.226044 (0.120223) | 3.363744 / 2.268929 (1.094815) | 1.826994 / 55.444624 (-53.617630) | 1.560656 / 6.876477 (-5.315820) | 1.561083 / 2.142072 (-0.580990) | 0.643395 / 4.805227 (-4.161832) | 0.116206 / 6.500664 (-6.384458) | 0.042008 / 0.075469 (-0.033461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953416 / 1.841788 (-0.888371) | 11.461665 / 8.074308 (3.387357) | 10.623865 / 10.191392 (0.432473) | 0.128071 / 0.680424 (-0.552353) | 0.014277 / 0.534201 (-0.519924) | 0.288810 / 0.579283 (-0.290474) | 0.267575 / 0.434364 (-0.166788) | 0.327422 / 0.540337 (-0.212916) | 0.435151 / 1.386936 (-0.951785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005242 / 0.011353 (-0.006111) | 0.003515 / 0.011008 (-0.007493) | 0.048483 / 0.038508 (0.009975) | 0.051684 / 0.023109 (0.028575) | 0.276564 / 0.275898 (0.000666) | 0.297582 / 0.323480 (-0.025898) | 0.004117 / 0.007986 (-0.003869) | 0.002610 / 0.004328 (-0.001719) | 0.047811 / 0.004250 (0.043561) | 0.040622 / 0.037052 (0.003569) | 0.280265 / 0.258489 (0.021776) | 0.311719 / 0.293841 (0.017878) | 0.028811 / 0.128546 (-0.099735) | 0.010600 / 0.075646 (-0.065047) | 0.056660 / 0.419271 (-0.362611) | 0.032638 / 0.043533 (-0.010894) | 0.276434 / 0.255139 (0.021295) | 0.299095 / 0.283200 (0.015896) | 0.018483 / 0.141683 (-0.123200) | 1.156382 / 1.452155 (-0.295773) | 1.252205 / 1.492716 (-0.240511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097868 / 0.018006 (0.079862) | 0.309438 / 0.000490 (0.308948) | 0.000229 / 0.000200 (0.000029) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021838 / 0.037411 (-0.015573) | 0.068358 / 0.014526 (0.053832) | 0.080432 / 0.176557 (-0.096125) | 0.119788 / 0.737135 (-0.617348) | 0.081742 / 0.296338 (-0.214597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301239 / 0.215209 (0.086030) | 2.962242 / 2.077655 (0.884587) | 1.693918 / 1.504120 (0.189798) | 1.573663 / 1.541195 (0.032468) | 1.583125 / 1.468490 (0.114635) | 0.557267 / 4.584777 (-4.027510) | 2.440048 / 3.745712 (-1.305664) | 2.727572 / 5.269862 (-2.542290) | 1.713557 / 4.565676 (-2.852120) | 0.062526 / 0.424275 (-0.361749) | 0.004982 / 0.007607 (-0.002625) | 0.353850 / 0.226044 (0.127806) | 3.530887 / 2.268929 (1.261958) | 2.047864 / 55.444624 (-53.396761) | 1.770776 / 6.876477 (-5.105701) | 1.757621 / 2.142072 (-0.384451) | 0.633847 / 4.805227 (-4.171381) | 0.114055 / 6.500664 (-6.386609) | 0.040078 / 0.075469 (-0.035391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983721 / 1.841788 (-0.858066) | 11.896537 / 8.074308 (3.822229) | 10.529883 / 10.191392 (0.338491) | 0.129593 / 0.680424 (-0.550831) | 0.016213 / 0.534201 (-0.517988) | 0.289623 / 0.579283 (-0.289660) | 0.280073 / 0.434364 (-0.154291) | 0.327446 / 0.540337 (-0.212892) | 0.574847 / 1.386936 (-0.812089) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2684a98fe38e0c87bb11e050586004108e32b79d \"CML watermark\")\n"
] | 2023-12-11T09:21:22 | 2023-12-13T14:21:29 | 2023-12-13T14:15:21 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6486",
"html_url": "https://github.com/huggingface/datasets/pull/6486",
"diff_url": "https://github.com/huggingface/datasets/pull/6486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6486.patch",
"merged_at": "2023-12-13T14:15:21"
} | Fix docs phrasing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6486/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6485/comments | https://api.github.com/repos/huggingface/datasets/issues/6485/events | https://github.com/huggingface/datasets/issues/6485 | 2,035,141,884 | I_kwDODunzps55Tcz8 | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | {
"login": "amanyara",
"id": 73683903,
"node_id": "MDQ6VXNlcjczNjgzOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanyara",
"html_url": "https://github.com/amanyara",
"followers_url": "https://api.github.com/users/amanyara/followers",
"following_url": "https://api.github.com/users/amanyara/following{/other_user}",
"gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanyara/subscriptions",
"organizations_url": "https://api.github.com/users/amanyara/orgs",
"repos_url": "https://api.github.com/users/amanyara/repos",
"events_url": "https://api.github.com/users/amanyara/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanyara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! It seems like the problem is your environment. Maybe this issue can help: https://github.com/pytest-dev/pytest/issues/9519. "
] | 2023-12-11T08:52:13 | 2023-12-14T08:09:08 | 2023-12-14T08:09:08 | NONE | null | null | null | ### Describe the bug
it seems that something is wrong in my bug-ridden life again. When I run this code, `import datasets`,
I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'nul'
![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d)
![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04)
### Steps to reproduce the bug
1. `import datasets`
### Expected behavior
I just run a single line of code and get stuck on this bug.
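For what it is worth, `'nul'` is the Windows null device, and the comment above points at an environment problem. A small diagnostic sketch (not part of the original report) to check whether the null device itself is reachable in the affected environment:
```python
import os

print(os.devnull)  # "nul" on Windows, "/dev/null" on POSIX

# If the null device is broken (e.g. a misconfigured environment or sandbox),
# this open() raises the same FileNotFoundError as reported above.
with open(os.devnull, "w") as f:
    f.write("ok")
```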
### Environment info
OS: Windows10
Datasets==2.15.0
python=3.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6485/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6483/comments | https://api.github.com/repos/huggingface/datasets/issues/6483/events | https://github.com/huggingface/datasets/issues/6483 | 2,032,946,981 | I_kwDODunzps55LE8l | 6,483 | Iterable Dataset: rename column clashes with remove column | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n#Β rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Whoops π
Thanks for the swift reply both! Works like a charm!"
] | 2023-12-08T16:11:30 | 2023-12-08T16:27:16 | 2023-12-08T16:27:04 | CONTRIBUTOR | null | null | null | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, renaming and then removing columns in an iterable dataset doesn't work as expected here: as the traceback below shows, the pipeline still looks for the original "text" column, which means we can't combine the datasets.
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 17
14 COLUMNS_TO_KEEP = {"audio", "sentence"}
15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))
File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
1350 yield formatter.format_row(pa_table)
1351 return
-> 1353 for key, example in ex_iterable:
1354 if self.features:
1355 # `IterableDataset` automatically fills missing columns with None.
1356 # This is done with `_apply_feature_types_on_example`.
1357 example = _apply_feature_types_on_example(
1358 example, self.features, token_per_repo_id=self._token_per_repo_id
1359 )
File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
650 yield from ArrowExamplesIterable(self._iter_arrow, {})
651 else:
--> 652 yield from self._iter()
File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
727 if self.remove_columns:
728 for c in self.remove_columns:
--> 729 del transformed_example[c]
730 yield key, transformed_example
731 current_idx += 1
KeyError: 'text'
```
=> we see that `datasets` is looking for the column "text", even though we've renamed this to "sentence" and then removed the unwanted "text" column from our dataset.
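As the maintainers' replies in the comments above point out, the snippet computes the columns to drop from the feature names captured *before* the rename. A minimal sketch of the working order, reusing the same dataset as above:
```python
from datasets import load_dataset

dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# rename first, then derive the columns to drop from the *renamed* features
dataset = dataset.rename_column("text", "sentence")
columns_to_keep = {"audio", "sentence"}
dataset = dataset.remove_columns([col for col in dataset.features if col not in columns_to_keep])

print(next(iter(dataset)))  # should yield only the "audio" and "sentence" columns
```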
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6483/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"login": "kenfus",
"id": 47979198,
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenfus",
"html_url": "https://github.com/kenfus",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"repos_url": "https://api.github.com/users/kenfus/repos",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-08T16:01:35 | 2023-12-11T19:13:46 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the `revision` keyword: it was not re-downloading the data but reading in cached data, even though the revision was different, until I put `download_mode="force_redownload"`.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not silently load a different revision, and all future map, filter, etc. operations are done on this dataset rather than loaded from a cache produced by a different revision.
- if I rerun the run, the caching should be smart enough at every step not to reuse a mapping operation from a different revision (a rough sketch of such a workflow is given below).
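A rough sketch of what such a revision-pinned workflow could look like with the current APIs; the repo id and tag name are taken from the example above, and it assumes revision-aware caching behaves as requested here (see the fix referenced in the comments, https://github.com/huggingface/datasets/pull/6459):
```python
from datasets import load_dataset
from huggingface_hub import HfApi

# Mint a stable revision by tagging the dataset repo (a branch name or commit sha works too).
HfApi().create_tag("kenfus/xy", tag="v1.0.2", repo_type="dataset")

# Pin the exact revision when loading; map/filter results should then be cached
# per revision instead of silently reusing another revision's cache.
ds_v102 = load_dataset("kenfus/xy", revision="v1.0.2")
ds_main = load_dataset("kenfus/xy", revision="main")
```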
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
print("Loading prepared dataset from disk...")
dataset_prepared = load_from_disk(prepared_dataset_path)
else:
print("Loading dataset from HuggingFace Datasets...")
dataset = load_dataset(
PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
)
print("Preparing dataset...")
dataset_prepared = dataset.map(
prepare_dataset,
remove_columns=["audio", "transcription"],
num_proc=os.cpu_count(),
load_from_cache_file=False,
)
dataset_prepared.save_to_disk(prepared_dataset_path)
del dataset
if CHECK_DATASET:
## CHECK DATASET
dataset_prepared = dataset_prepared.map(
check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
)
dataset_filtered = dataset_prepared.filter(
lambda example: not example["incorrect_dimension"],
load_from_cache_file=False,
)
for example in dataset_prepared.filter(
lambda example: example["incorrect_dimension"], load_from_cache_file=False
):
print(example["path"])
print(
f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
)
print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6482/comments | https://api.github.com/repos/huggingface/datasets/issues/6482/events | https://github.com/huggingface/datasets/pull/6482 | 2,032,675,918 | PR_kwDODunzps5hhl23 | 6,482 | Fix max lock length on unix | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'm getting `AttributeError: module 'os' has no attribute 'statvfs'` on windows - reverting",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005294 / 0.011353 (-0.006059) | 0.003562 / 0.011008 (-0.007446) | 0.062030 / 0.038508 (0.023522) | 0.053335 / 0.023109 (0.030226) | 0.233303 / 0.275898 (-0.042595) | 0.252029 / 0.323480 (-0.071451) | 0.002835 / 0.007986 (-0.005151) | 0.002732 / 0.004328 (-0.001597) | 0.047973 / 0.004250 (0.043723) | 0.038380 / 0.037052 (0.001328) | 0.235028 / 0.258489 (-0.023461) | 0.265555 / 0.293841 (-0.028286) | 0.027136 / 0.128546 (-0.101410) | 0.010806 / 0.075646 (-0.064840) | 0.205040 / 0.419271 (-0.214231) | 0.035063 / 0.043533 (-0.008470) | 0.236351 / 0.255139 (-0.018788) | 0.254556 / 0.283200 (-0.028643) | 0.019528 / 0.141683 (-0.122155) | 1.099012 / 1.452155 (-0.353142) | 1.156250 / 1.492716 (-0.336466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093952 / 0.018006 (0.075946) | 0.304181 / 0.000490 (0.303692) | 0.000227 / 0.000200 (0.000027) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018568 / 0.037411 (-0.018844) | 0.060323 / 0.014526 (0.045798) | 0.073010 / 0.176557 (-0.103546) | 0.121723 / 0.737135 (-0.615412) | 0.075668 / 0.296338 (-0.220670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288429 / 0.215209 (0.073220) | 2.797834 / 2.077655 (0.720180) | 1.480957 / 1.504120 (-0.023163) | 1.360872 / 1.541195 (-0.180323) | 1.406828 / 
1.468490 (-0.061663) | 0.587596 / 4.584777 (-3.997181) | 2.533997 / 3.745712 (-1.211715) | 2.906697 / 5.269862 (-2.363164) | 1.801753 / 4.565676 (-2.763923) | 0.064360 / 0.424275 (-0.359915) | 0.005016 / 0.007607 (-0.002591) | 0.347334 / 0.226044 (0.121290) | 3.426344 / 2.268929 (1.157416) | 1.856014 / 55.444624 (-53.588610) | 1.581774 / 6.876477 (-5.294703) | 1.640036 / 2.142072 (-0.502037) | 0.656096 / 4.805227 (-4.149131) | 0.120212 / 6.500664 (-6.380452) | 0.044003 / 0.075469 (-0.031466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943933 / 1.841788 (-0.897855) | 11.846572 / 8.074308 (3.772263) | 10.330705 / 10.191392 (0.139313) | 0.129767 / 0.680424 (-0.550657) | 0.013508 / 0.534201 (-0.520693) | 0.289672 / 0.579283 (-0.289611) | 0.266427 / 0.434364 (-0.167937) | 0.342766 / 0.540337 (-0.197571) | 0.452068 / 1.386936 (-0.934868) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005308 / 0.011353 (-0.006045) | 0.003712 / 0.011008 (-0.007296) | 0.048848 / 0.038508 (0.010340) | 0.055156 / 0.023109 (0.032047) | 0.271942 / 0.275898 (-0.003956) | 0.293166 / 0.323480 (-0.030314) | 0.004056 / 0.007986 (-0.003930) | 0.002722 / 0.004328 (-0.001606) | 0.048418 / 0.004250 (0.044167) | 0.039320 / 0.037052 (0.002268) | 0.277184 / 0.258489 (0.018695) | 0.312398 / 0.293841 (0.018557) | 0.029392 / 0.128546 (-0.099155) | 0.011314 / 0.075646 (-0.064332) | 0.057883 / 0.419271 (-0.361389) | 0.032603 / 0.043533 (-0.010930) | 0.273025 / 0.255139 (0.017886) | 0.289265 / 0.283200 (0.006065) | 0.017553 / 0.141683 (-0.124129) | 1.127725 / 1.452155 (-0.324430) | 1.202293 / 1.492716 (-0.290423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097179 / 0.018006 (0.079173) | 0.309712 / 0.000490 (0.309222) | 0.000269 / 0.000200 (0.000069) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024742 / 0.037411 (-0.012670) | 0.070097 / 0.014526 (0.055571) | 0.082273 / 0.176557 (-0.094283) | 0.121696 / 0.737135 (-0.615439) | 0.082983 / 0.296338 (-0.213355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292688 / 0.215209 (0.077479) | 2.853436 / 2.077655 (0.775781) | 1.588999 / 1.504120 (0.084879) | 1.454547 / 1.541195 (-0.086648) | 1.476342 / 1.468490 (0.007852) | 0.559464 / 4.584777 (-4.025313) | 2.564597 / 3.745712 (-1.181115) | 2.900460 / 5.269862 (-2.369402) | 1.782156 / 4.565676 (-2.783520) | 0.061768 / 0.424275 (-0.362507) | 0.005042 / 0.007607 (-0.002565) | 0.345168 / 0.226044 (0.119124) | 3.412273 / 2.268929 (1.143344) | 1.953154 / 55.444624 (-53.491470) | 1.667347 / 6.876477 (-5.209130) | 1.685138 / 2.142072 (-0.456934) | 0.643270 / 4.805227 (-4.161958) | 0.115955 / 6.500664 (-6.384709) | 0.041090 / 0.075469 (-0.034379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976324 / 1.841788 (-0.865464) | 12.252294 / 8.074308 (4.177986) | 10.598062 / 10.191392 (0.406670) | 0.129779 / 0.680424 (-0.550644) | 0.015697 / 0.534201 (-0.518504) | 0.287241 / 0.579283 (-0.292042) | 0.287331 / 0.434364 (-0.147033) | 0.331710 / 0.540337 (-0.208628) | 0.574571 / 1.386936 (-0.812365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#702344140461b7a111139860c944d3dd0a2689e3 \"CML watermark\")\n"
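The revert mentioned a few comments above comes from `os.statvfs` being POSIX-only. A hedged sketch of a portable way to query a filesystem's maximum file-name length (the Windows fallback value is an assumption, not necessarily what `datasets` ended up using):
```python
import os

def max_filename_length(directory: str) -> int:
    # POSIX: ask the filesystem directly; Windows has no os.statvfs,
    # so fall back to a conservative, commonly supported limit.
    if hasattr(os, "statvfs"):
        return os.statvfs(directory).f_namemax
    return 255
```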
] | 2023-12-08T13:39:30 | 2023-12-12T11:53:32 | 2023-12-12T11:47:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6482",
"html_url": "https://github.com/huggingface/datasets/pull/6482",
"diff_url": "https://github.com/huggingface/datasets/pull/6482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6482.patch",
"merged_at": "2023-12-12T11:47:27"
} | reported in https://github.com/huggingface/datasets/pull/6482 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6482/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6482/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6481/comments | https://api.github.com/repos/huggingface/datasets/issues/6481/events | https://github.com/huggingface/datasets/issues/6481 | 2,032,650,003 | I_kwDODunzps55J8cT | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | {
"login": "Ariya12138",
"id": 85916625,
"node_id": "MDQ6VXNlcjg1OTE2NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ariya12138",
"html_url": "https://github.com/Ariya12138",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions",
"organizations_url": "https://api.github.com/users/Ariya12138/orgs",
"repos_url": "https://api.github.com/users/Ariya12138/repos",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ariya12138/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-08T13:22:03 | 2023-12-08T13:22:03 | null | NONE | null | null | null | ### Describe the bug
When I run my code using the "torchrun" command and it reaches the "save_to_disk" part, I suddenly get the following warning and error messages:
Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
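The two lines above are fragments of the author's script (`map_fn`, `rank` and `ds_shard_filepaths` are defined elsewhere). A self-contained, hedged sketch of the same per-rank pattern with placeholder data and paths:
```python
import os
from datasets import Dataset

# torchrun sets RANK/WORLD_SIZE; fall back to a single process otherwise.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

ds = Dataset.from_dict({"x": list(range(1_000))})        # placeholder data
ds_shard = ds.shard(num_shards=world_size, index=rank, contiguous=True)
ds_shard = ds_shard.map(lambda ex: {"y": ex["x"] * 2})   # placeholder map_fn
ds_shard.save_to_disk(f"output/shard_{rank}")            # one directory per rank
```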
Saving the dataset (14/70 shards): 20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-12-08_20:09:04
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 2224967)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 2224967
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6481/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6480/comments | https://api.github.com/repos/huggingface/datasets/issues/6480/events | https://github.com/huggingface/datasets/pull/6480 | 2,031,116,653 | PR_kwDODunzps5hcS7P | 6,480 | Add IterableDataset `__repr__` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005392 / 0.011353 (-0.005960) | 0.003120 / 0.011008 (-0.007888) | 0.062017 / 0.038508 (0.023509) | 0.048824 / 0.023109 (0.025715) | 0.232300 / 0.275898 (-0.043598) | 0.262045 / 0.323480 (-0.061435) | 0.002909 / 0.007986 (-0.005077) | 0.003916 / 0.004328 (-0.000413) | 0.049469 / 0.004250 (0.045218) | 0.038965 / 0.037052 (0.001913) | 0.247841 / 0.258489 (-0.010648) | 0.268259 / 0.293841 (-0.025582) | 0.027588 / 0.128546 (-0.100958) | 0.010334 / 0.075646 (-0.065312) | 0.205811 / 0.419271 (-0.213460) | 0.035456 / 0.043533 (-0.008077) | 0.242774 / 0.255139 (-0.012365) | 0.260377 / 0.283200 (-0.022823) | 0.017469 / 0.141683 (-0.124214) | 1.199665 / 1.452155 (-0.252489) | 1.259316 / 1.492716 (-0.233400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092357 / 0.018006 (0.074350) | 0.303745 / 0.000490 (0.303255) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018820 / 0.037411 (-0.018592) | 0.061548 / 0.014526 (0.047022) | 0.072527 / 0.176557 (-0.104030) | 0.119696 / 0.737135 (-0.617440) | 0.074153 / 0.296338 (-0.222185) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283952 / 0.215209 (0.068743) | 2.769844 / 2.077655 (0.692189) | 1.526100 / 1.504120 (0.021980) | 1.417584 / 1.541195 (-0.123611) | 1.440523 / 
1.468490 (-0.027967) | 0.556994 / 4.584777 (-4.027783) | 2.400392 / 3.745712 (-1.345320) | 2.727794 / 5.269862 (-2.542068) | 1.724671 / 4.565676 (-2.841006) | 0.062111 / 0.424275 (-0.362164) | 0.004925 / 0.007607 (-0.002682) | 0.342748 / 0.226044 (0.116704) | 3.376790 / 2.268929 (1.107862) | 1.856498 / 55.444624 (-53.588127) | 1.574143 / 6.876477 (-5.302334) | 1.591828 / 2.142072 (-0.550245) | 0.644416 / 4.805227 (-4.160811) | 0.116862 / 6.500664 (-6.383802) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975704 / 1.841788 (-0.866084) | 11.196447 / 8.074308 (3.122139) | 10.567518 / 10.191392 (0.376126) | 0.126786 / 0.680424 (-0.553638) | 0.013768 / 0.534201 (-0.520433) | 0.284531 / 0.579283 (-0.294752) | 0.260855 / 0.434364 (-0.173509) | 0.328888 / 0.540337 (-0.211450) | 0.439911 / 1.386936 (-0.947025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005108 / 0.011353 (-0.006245) | 0.003006 / 0.011008 (-0.008003) | 0.048673 / 0.038508 (0.010165) | 0.051066 / 0.023109 (0.027957) | 0.279578 / 0.275898 (0.003680) | 0.298356 / 0.323480 (-0.025123) | 0.003965 / 0.007986 (-0.004020) | 0.002662 / 0.004328 (-0.001667) | 0.049037 / 0.004250 (0.044786) | 0.039385 / 0.037052 (0.002333) | 0.284545 / 0.258489 (0.026055) | 0.314240 / 0.293841 (0.020399) | 0.028493 / 0.128546 (-0.100053) | 0.010400 / 0.075646 (-0.065247) | 0.057375 / 0.419271 (-0.361896) | 0.032382 / 0.043533 (-0.011151) | 0.283163 / 0.255139 (0.028024) | 0.298967 / 0.283200 (0.015768) | 0.017564 / 0.141683 (-0.124119) | 1.172425 / 1.452155 (-0.279730) | 1.219975 / 1.492716 (-0.272742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.298419 / 0.000490 (0.297929) | 0.000211 / 0.000200 (0.000011) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.068274 / 0.014526 (0.053748) | 0.080820 / 0.176557 (-0.095736) | 0.119809 / 0.737135 (-0.617326) | 0.081612 / 0.296338 (-0.214727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303346 / 0.215209 (0.088137) | 2.971648 / 2.077655 (0.893993) | 1.634828 / 1.504120 (0.130708) | 1.510851 / 1.541195 (-0.030344) | 1.515236 / 1.468490 (0.046745) | 0.558487 / 4.584777 (-4.026289) | 2.436263 / 3.745712 (-1.309449) | 2.718525 / 5.269862 (-2.551336) | 1.727421 / 4.565676 (-2.838255) | 0.061396 / 0.424275 (-0.362879) | 0.004951 / 0.007607 (-0.002656) | 0.352950 / 0.226044 (0.126906) | 3.473766 / 2.268929 (1.204838) | 1.971299 / 55.444624 (-53.473325) | 1.712173 / 6.876477 (-5.164304) | 1.711334 / 2.142072 (-0.430738) | 0.627291 / 4.805227 (-4.177936) | 0.113779 / 6.500664 (-6.386885) | 0.046561 / 0.075469 (-0.028908) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989507 / 1.841788 (-0.852280) | 11.777883 / 8.074308 (3.703575) | 10.525453 / 10.191392 (0.334061) | 0.129118 / 0.680424 (-0.551306) | 0.014989 / 0.534201 (-0.519212) | 0.282324 / 0.579283 (-0.296959) | 0.280688 / 0.434364 (-0.153676) | 0.322579 / 0.540337 (-0.217758) | 0.554327 / 1.386936 (-0.832609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79e94fcdf3d4378ddcdf7e130bb1ae23d99c6fce \"CML watermark\")\n"
] | 2023-12-07T16:31:50 | 2023-12-08T13:33:06 | 2023-12-08T13:26:54 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6480",
"html_url": "https://github.com/huggingface/datasets/pull/6480",
"diff_url": "https://github.com/huggingface/datasets/pull/6480.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6480.patch",
"merged_at": "2023-12-08T13:26:54"
} | Example for glue sst2:
Dataset
```
DatasetDict({
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1821
})
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 67349
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 872
})
})
```
IterableDataset (new)
```
IterableDatasetDict({
test: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
train: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
validation: IterableDataset({
features: ['sentence', 'label', 'idx'],
n_shards: 1
})
})
```
IterableDataset (before)
```
{'test': <datasets.iterable_dataset.IterableDataset object at 0x130d421f0>, 'train': <datasets.iterable_dataset.IterableDataset object at 0x136f3aaf0>, 'validation': <datasets.iterable_dataset.IterableDataset object at 0x136f4b100>}
{'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6480/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6479/comments | https://api.github.com/repos/huggingface/datasets/issues/6479/events | https://github.com/huggingface/datasets/pull/6479 | 2,029,040,121 | PR_kwDODunzps5hVLom | 6,479 | More robust preupload retry mechanism | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6479). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005683) | 0.003684 / 0.011008 (-0.007324) | 0.063477 / 0.038508 (0.024969) | 0.068760 / 0.023109 (0.045651) | 0.252741 / 0.275898 (-0.023157) | 0.286499 / 0.323480 (-0.036981) | 0.003311 / 0.007986 (-0.004674) | 0.003487 / 0.004328 (-0.000842) | 0.049636 / 0.004250 (0.045385) | 0.040983 / 0.037052 (0.003931) | 0.262230 / 0.258489 (0.003740) | 0.292131 / 0.293841 (-0.001710) | 0.028231 / 0.128546 (-0.100315) | 0.010912 / 0.075646 (-0.064734) | 0.211248 / 0.419271 (-0.208023) | 0.036679 / 0.043533 (-0.006854) | 0.258139 / 0.255139 (0.003000) | 0.277568 / 0.283200 (-0.005631) | 0.019576 / 0.141683 (-0.122107) | 1.102588 / 1.452155 (-0.349567) | 1.178587 / 1.492716 (-0.314130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098968 / 0.018006 (0.080962) | 0.298777 / 0.000490 (0.298287) | 0.000220 / 0.000200 (0.000020) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020408 / 0.037411 (-0.017003) | 0.062832 / 0.014526 (0.048306) | 0.076047 / 0.176557 (-0.100509) | 0.125209 / 0.737135 (-0.611926) | 0.079098 / 0.296338 (-0.217240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285603 / 0.215209 (0.070394) | 2.811530 / 2.077655 (0.733875) | 1.481012 / 1.504120 (-0.023108) | 1.362740 / 1.541195 (-0.178455) | 1.448999 / 
1.468490 (-0.019491) | 0.557740 / 4.584777 (-4.027037) | 2.391377 / 3.745712 (-1.354335) | 2.973181 / 5.269862 (-2.296681) | 1.837147 / 4.565676 (-2.728530) | 0.064445 / 0.424275 (-0.359831) | 0.004992 / 0.007607 (-0.002615) | 0.339207 / 0.226044 (0.113162) | 3.378508 / 2.268929 (1.109580) | 1.843969 / 55.444624 (-53.600655) | 1.597794 / 6.876477 (-5.278682) | 1.657665 / 2.142072 (-0.484407) | 0.654267 / 4.805227 (-4.150961) | 0.120408 / 6.500664 (-6.380256) | 0.045298 / 0.075469 (-0.030171) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949030 / 1.841788 (-0.892758) | 12.922161 / 8.074308 (4.847852) | 11.115660 / 10.191392 (0.924268) | 0.130556 / 0.680424 (-0.549868) | 0.016278 / 0.534201 (-0.517923) | 0.288137 / 0.579283 (-0.291146) | 0.265978 / 0.434364 (-0.168386) | 0.331491 / 0.540337 (-0.208847) | 0.437782 / 1.386936 (-0.949154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005342 / 0.011353 (-0.006010) | 0.003636 / 0.011008 (-0.007373) | 0.049527 / 0.038508 (0.011019) | 0.054856 / 0.023109 (0.031746) | 0.271922 / 0.275898 (-0.003976) | 0.295654 / 0.323480 (-0.027826) | 0.004023 / 0.007986 (-0.003963) | 0.002814 / 0.004328 (-0.001515) | 0.048963 / 0.004250 (0.044712) | 0.039936 / 0.037052 (0.002884) | 0.274336 / 0.258489 (0.015847) | 0.310100 / 0.293841 (0.016259) | 0.030006 / 0.128546 (-0.098540) | 0.010750 / 0.075646 (-0.064896) | 0.057989 / 0.419271 (-0.361283) | 0.033692 / 0.043533 (-0.009841) | 0.274084 / 0.255139 (0.018945) | 0.289428 / 0.283200 (0.006229) | 0.018739 / 0.141683 (-0.122944) | 1.126224 / 1.452155 (-0.325931) | 1.171595 / 1.492716 (-0.321121) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093983 / 0.018006 (0.075977) | 0.298516 / 0.000490 (0.298026) | 0.000221 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022498 / 0.037411 (-0.014914) | 0.071909 / 0.014526 (0.057383) | 0.083940 / 0.176557 (-0.092617) | 0.121059 / 0.737135 (-0.616076) | 0.084141 / 0.296338 (-0.212198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301792 / 0.215209 (0.086583) | 2.971971 / 2.077655 (0.894317) | 1.618718 / 1.504120 (0.114598) | 1.495816 / 1.541195 (-0.045379) | 1.546709 / 1.468490 (0.078219) | 0.571448 / 4.584777 (-4.013329) | 2.459182 / 3.745712 (-1.286531) | 2.937584 / 5.269862 (-2.332278) | 1.804670 / 4.565676 (-2.761007) | 0.062264 / 0.424275 (-0.362011) | 0.004915 / 0.007607 (-0.002692) | 0.355054 / 0.226044 (0.129009) | 3.490468 / 2.268929 (1.221539) | 1.978948 / 55.444624 (-53.465677) | 1.701020 / 6.876477 (-5.175457) | 1.744684 / 2.142072 (-0.397388) | 0.635880 / 4.805227 (-4.169347) | 0.115933 / 6.500664 (-6.384732) | 0.042646 / 0.075469 (-0.032823) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999486 / 1.841788 (-0.842302) | 13.373854 / 8.074308 (5.299546) | 10.959784 / 10.191392 (0.768392) | 0.131032 / 0.680424 (-0.549392) | 0.015059 / 0.534201 (-0.519142) | 0.289892 / 0.579283 (-0.289391) | 0.279383 / 0.434364 (-0.154981) | 0.337670 / 0.540337 (-0.202668) | 0.597102 / 1.386936 (-0.789834) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dd9044cdaabc1f9abce02c1b71bdb48fd3525d4e \"CML watermark\")\n"
] | 2023-12-06T17:19:38 | 2023-12-06T19:47:29 | 2023-12-06T19:41:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6479",
"html_url": "https://github.com/huggingface/datasets/pull/6479",
"diff_url": "https://github.com/huggingface/datasets/pull/6479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6479.patch",
"merged_at": "2023-12-06T19:41:06"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6479/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6478/comments | https://api.github.com/repos/huggingface/datasets/issues/6478/events | https://github.com/huggingface/datasets/issues/6478 | 2,028,071,596 | I_kwDODunzps544eqs | 6,478 | How to load data from lakefs | {
"login": "d710055071",
"id": 12895488,
"node_id": "MDQ6VXNlcjEyODk1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d710055071",
"html_url": "https://github.com/d710055071",
"followers_url": "https://api.github.com/users/d710055071/followers",
"following_url": "https://api.github.com/users/d710055071/following{/other_user}",
"gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d710055071/subscriptions",
"organizations_url": "https://api.github.com/users/d710055071/orgs",
"repos_url": "https://api.github.com/users/d710055071/repos",
"events_url": "https://api.github.com/users/d710055071/events{/privacy}",
"received_events_url": "https://api.github.com/users/d710055071/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"You can create a `pandas` DataFrame following [this](https://lakefs.io/data-version-control/dvc-using-python/) tutorial, and then convert this DataFrame to a `Dataset` with `datasets.Dataset.from_pandas`. For larger datasets (to memory map them), you can use `Dataset.from_generator` with a generator function that reads lakeFS files with `s3fs`.",
"@mariosasko hello,\r\nThis can achieve and https://huggingface.co./datasets Does the same effect apply to the dataset? For example, downloading while using"
] | 2023-12-06T09:04:11 | 2023-12-07T02:19:44 | null | NONE | null | null | null | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6478/timeline | null | null | false |
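For reference, the approach suggested in the comments of the issue above could look roughly like the sketch below. This is not an official recipe: it assumes the lakeFS server exposes its S3-compatible gateway, and the endpoint, credentials, repository, branch, and file paths are placeholders to replace with your own.

```python
# Minimal sketch: load Parquet files stored in lakeFS into a `datasets.Dataset`.
# Endpoint, credentials, and paths below are placeholders (assumptions), not real values.
import pandas as pd
import s3fs
from datasets import Dataset

fs = s3fs.S3FileSystem(
    key="<LAKEFS_ACCESS_KEY>",
    secret="<LAKEFS_SECRET_KEY>",
    client_kwargs={"endpoint_url": "https://lakefs.example.com"},  # lakeFS S3 gateway
)

# Small dataset: read into pandas, then convert in memory.
with fs.open("my-repo/main/data/train.parquet", "rb") as f:
    small_ds = Dataset.from_pandas(pd.read_parquet(f))

# Larger dataset: yield rows from a generator so the resulting dataset is written
# to disk and memory-mapped instead of being held fully in memory.
def rows():
    for path in fs.ls("my-repo/main/data/"):
        with fs.open(path, "rb") as f:
            yield from pd.read_parquet(f).to_dict(orient="records")

large_ds = Dataset.from_generator(rows)
```

The generator variant trades some loading speed for a much smaller memory footprint, which matches the suggestion in the comments to use `Dataset.from_generator` with `s3fs` for larger data.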
https://api.github.com/repos/huggingface/datasets/issues/6477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6477/comments | https://api.github.com/repos/huggingface/datasets/issues/6477/events | https://github.com/huggingface/datasets/pull/6477 | 2,028,022,374 | PR_kwDODunzps5hRq_N | 6,477 | Fix PermissionError on Windows CI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005383 / 0.011353 (-0.005969) | 0.003644 / 0.011008 (-0.007364) | 0.063375 / 0.038508 (0.024866) | 0.055567 / 0.023109 (0.032457) | 0.261376 / 0.275898 (-0.014522) | 0.283731 / 0.323480 (-0.039749) | 0.004022 / 0.007986 (-0.003964) | 0.002780 / 0.004328 (-0.001549) | 0.049407 / 0.004250 (0.045156) | 0.038208 / 0.037052 (0.001156) | 0.256275 / 0.258489 (-0.002214) | 0.293203 / 0.293841 (-0.000638) | 0.028411 / 0.128546 (-0.100135) | 0.010753 / 0.075646 (-0.064894) | 0.210420 / 0.419271 (-0.208851) | 0.036062 / 0.043533 (-0.007471) | 0.260455 / 0.255139 (0.005317) | 0.294991 / 0.283200 (0.011791) | 0.019020 / 0.141683 (-0.122662) | 1.118334 / 1.452155 (-0.333821) | 1.227391 / 1.492716 (-0.265325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094700 / 0.018006 (0.076694) | 0.302378 / 0.000490 (0.301888) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018745 / 0.037411 (-0.018667) | 0.061103 / 0.014526 (0.046578) | 0.075369 / 0.176557 (-0.101188) | 0.121573 / 0.737135 (-0.615563) | 0.076898 / 0.296338 (-0.219440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.774298 / 2.077655 (0.696644) | 1.483557 / 1.504120 (-0.020563) | 1.365091 / 1.541195 (-0.176104) | 1.390170 / 
1.468490 (-0.078320) | 0.561179 / 4.584777 (-4.023598) | 2.401654 / 3.745712 (-1.344058) | 2.782628 / 5.269862 (-2.487233) | 1.731497 / 4.565676 (-2.834179) | 0.061798 / 0.424275 (-0.362477) | 0.004998 / 0.007607 (-0.002609) | 0.336920 / 0.226044 (0.110875) | 3.371891 / 2.268929 (1.102963) | 1.832173 / 55.444624 (-53.612452) | 1.573515 / 6.876477 (-5.302962) | 1.595609 / 2.142072 (-0.546463) | 0.647652 / 4.805227 (-4.157575) | 0.118501 / 6.500664 (-6.382164) | 0.042521 / 0.075469 (-0.032948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939310 / 1.841788 (-0.902478) | 11.459855 / 8.074308 (3.385547) | 10.677954 / 10.191392 (0.486562) | 0.141029 / 0.680424 (-0.539395) | 0.014321 / 0.534201 (-0.519880) | 0.306679 / 0.579283 (-0.272604) | 0.262303 / 0.434364 (-0.172061) | 0.327422 / 0.540337 (-0.212915) | 0.436159 / 1.386936 (-0.950777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003646 / 0.011008 (-0.007362) | 0.049272 / 0.038508 (0.010764) | 0.075367 / 0.023109 (0.052257) | 0.275959 / 0.275898 (0.000061) | 0.296317 / 0.323480 (-0.027163) | 0.004129 / 0.007986 (-0.003857) | 0.002731 / 0.004328 (-0.001597) | 0.048475 / 0.004250 (0.044225) | 0.041571 / 0.037052 (0.004518) | 0.277993 / 0.258489 (0.019504) | 0.298709 / 0.293841 (0.004868) | 0.033117 / 0.128546 (-0.095429) | 0.010914 / 0.075646 (-0.064732) | 0.057599 / 0.419271 (-0.361673) | 0.033354 / 0.043533 (-0.010179) | 0.275669 / 0.255139 (0.020530) | 0.288451 / 0.283200 (0.005251) | 0.019953 / 0.141683 (-0.121729) | 1.148608 / 1.452155 (-0.303547) | 1.184818 / 1.492716 (-0.307898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099566 / 0.018006 (0.081560) | 0.344935 / 0.000490 (0.344445) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021925 / 0.037411 (-0.015486) | 0.068623 / 0.014526 (0.054097) | 0.081533 / 0.176557 (-0.095024) | 0.120996 / 0.737135 (-0.616139) | 0.082495 / 0.296338 (-0.213844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294990 / 0.215209 (0.079781) | 2.892344 / 2.077655 (0.814690) | 1.611090 / 1.504120 (0.106970) | 1.496072 / 1.541195 (-0.045123) | 1.486069 / 1.468490 (0.017579) | 0.569769 / 4.584777 (-4.015008) | 2.477623 / 3.745712 (-1.268089) | 2.819576 / 5.269862 (-2.450286) | 1.745717 / 4.565676 (-2.819959) | 0.063763 / 0.424275 (-0.360512) | 0.004970 / 0.007607 (-0.002637) | 0.344879 / 0.226044 (0.118834) | 3.452795 / 2.268929 (1.183867) | 1.964468 / 55.444624 (-53.480156) | 1.674526 / 6.876477 (-5.201951) | 1.679716 / 2.142072 (-0.462356) | 0.650005 / 4.805227 (-4.155222) | 0.117019 / 6.500664 (-6.383646) | 0.048297 / 0.075469 (-0.027172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965422 / 1.841788 (-0.876366) | 11.989414 / 8.074308 (3.915106) | 10.938462 / 10.191392 (0.747070) | 0.140089 / 0.680424 (-0.540334) | 0.015533 / 0.534201 (-0.518668) | 0.292188 / 0.579283 (-0.287095) | 0.277903 / 0.434364 (-0.156461) | 0.326164 / 0.540337 (-0.214173) | 0.565674 / 1.386936 (-0.821262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d78f07091bc42c41bea068bf1b6116e2bde46a6f \"CML watermark\")\n"
] | 2023-12-06T08:34:53 | 2023-12-06T09:24:11 | 2023-12-06T09:17:52 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6477",
"html_url": "https://github.com/huggingface/datasets/pull/6477",
"diff_url": "https://github.com/huggingface/datasets/pull/6477.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6477.patch",
"merged_at": "2023-12-06T09:17:52"
} | Fix #6476. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6477/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6476/comments | https://api.github.com/repos/huggingface/datasets/issues/6476/events | https://github.com/huggingface/datasets/issues/6476 | 2,028,018,596 | I_kwDODunzps544Ruk | 6,476 | CI on windows is broken: PermissionError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-06T08:32:53 | 2023-12-06T09:17:53 | 2023-12-06T09:17:53 | MEMBER | null | null | null | See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394
```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6476/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6475/comments | https://api.github.com/repos/huggingface/datasets/issues/6475/events | https://github.com/huggingface/datasets/issues/6475 | 2,027,373,734 | I_kwDODunzps5410Sm | 6,475 | laion2B-en failed to load on Windows with PrefetchVirtualMemory failed | {
"login": "doctorpangloss",
"id": 2229300,
"node_id": "MDQ6VXNlcjIyMjkzMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doctorpangloss",
"html_url": "https://github.com/doctorpangloss",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
"gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
"organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
"repos_url": "https://api.github.com/users/doctorpangloss/repos",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"~~You will see this error if the cache dir filepath contains relative `..` paths. Use `os.path.realpath(_CACHE_DIR)` before passing it to the `load_dataset` function.~~",
"This is a real issue and not related to paths.",
"Based on the StackOverflow answer, this causes the error to go away:\r\n```diff\r\ndiff --git a/table.py b/table.py\r\n--- a/table.py\t\r\n+++ b/table.py\t(date 1701824849806)\r\n@@ -47,7 +47,7 @@\r\n \r\n \r\n def _memory_mapped_record_batch_reader_from_file(filename: str) -> pa.RecordBatchStreamReader:\r\n- memory_mapped_stream = pa.memory_map(filename)\r\n+ memory_mapped_stream = pa.memory_map(filename, \"r+\")\r\n return pa.ipc.open_stream(memory_mapped_stream)\r\n```\r\nBut now loading the dataset goes very, very slowly, which is unexpected.",
"I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...",
"Hi! \r\n\r\nInstead of generating one (potentially large) Arrow file, we shard the generated data into 500 MB shards because memory-mapping large Arrow files can be problematic on some systems. Maybe deleting the dataset's cache and increasing the shard size (controlled with the `datasets.config.MAX_SHARD_SIZE` variable; e.g. to \"4GB\") can fix the issue for you.\r\n\r\n> I don't really comprehend what it is that `datasets` gave me when it downloaded the laion2B-en dataset, because nothing can seemingly read these 1024 .arrow files it is retrieving. Not `polars`, not `pyarrow`, it's not an `ipc` file, it's not a `parquet` file...\r\n\r\nOur `.arrow` files are in the [Arrow streaming format](https://arrow.apache.org/docs/python/ipc.html#using-streams). To load them as a `polars` DataFrame, do the following:\r\n```python\r\ndf = pl.from_arrow(Dataset.from_from(path_to_arrow_file).data.table)\r\n```\r\n\r\nWe plan to switch to the IPC version eventually.\r\n",
"Hmm, I have a feeling this works fine on Linux, and is a real bug for however `datasets` is doing the sharding on Windows. I will follow up, but I think this is a real bug."
] | 2023-12-06T00:07:34 | 2023-12-06T23:26:23 | null | NONE | null | null | null | ### Describe the bug
I have downloaded laion2B-en, and I'm receiving the following error trying to load it:
```
Resolving data files: 100%|ββββββββββ| 128/128 [00:00<00:00, 1173.79it/s]
Traceback (most recent call last):
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module>
count = compute_frequencies()
^^^^^^^^^^^^^^^^^^^^^
File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies
laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset
datasets = map_nested(
^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested
return function(data_struct)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset
ds = self._as_dataset(
^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table
return table_cls.from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
OSError: [WinError 8] PrefetchVirtualMemory failed. Detail: [Windows error 8] Not enough memory resources are available to process this command.
```
This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available. In other words, the issue is related to asking for a memory mapping of length N > M, where M is the length of the file, on Windows. This gracefully succeeds on Linux.
I have 1024 arrow files in my cache instead of 128 like in the repository for it. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128.
### Steps to reproduce the bug
```
# as a huggingface developer, you may already have laion2B-en somewhere
_CACHE_DIR = "."
from datasets import load_dataset
load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False)
```
### Expected behavior
This should correctly load as a memory mapped Arrow dataset.
### Environment info
- `datasets` version: 2.15.0
- Platform: Windows-10-10.0.20348-SP0 (this is windows 2022)
- Python version: 3.11.4
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.10.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6475/timeline | null | reopened | false |
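To make the points in the comments of the issue above concrete: the cached `.arrow` shards are written in the Arrow streaming format, and the shard size used when the dataset is prepared is controlled by `datasets.config.MAX_SHARD_SIZE`. The sketch below illustrates both; the shard path is a placeholder, and raising the shard size only takes effect when the dataset cache is regenerated.

```python
# Sketch: raise the shard size for the next dataset preparation and read one cached
# shard (Arrow streaming format). The shard filename below is a placeholder.
import datasets
import polars as pl
import pyarrow as pa
from datasets import Dataset

# Produce fewer, larger shards the next time the cache is built (after deleting the old cache).
datasets.config.MAX_SHARD_SIZE = "4GB"

shard_path = "path/to/cache/laion2B-en-train-00000-of-01024.arrow"  # placeholder

# Option 1: through `datasets`, then hand the underlying pyarrow Table to polars.
df = pl.from_arrow(Dataset.from_file(shard_path).data.table)

# Option 2: directly with pyarrow's streaming reader (these shards are not Feather/IPC files).
with pa.memory_map(shard_path) as source:
    table = pa.ipc.open_stream(source).read_all()
```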
https://api.github.com/repos/huggingface/datasets/issues/6474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6474/comments | https://api.github.com/repos/huggingface/datasets/issues/6474/events | https://github.com/huggingface/datasets/pull/6474 | 2,027,006,715 | PR_kwDODunzps5hONZc | 6,474 | Deprecate Beam API and download from HF GCS bucket | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-05T19:51:33 | 2023-12-10T17:55:50 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6474",
"html_url": "https://github.com/huggingface/datasets/pull/6474",
"diff_url": "https://github.com/huggingface/datasets/pull/6474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6474.patch",
"merged_at": null
} | Deprecate the Beam API and download from the HF GCS bucket.
TODO:
- [ ] Deprecate the Beam-based [`wikipedia`](https://huggingface.co./datasets/wikipedia) in favor of [`wikimedia/wikipedia`](https://huggingface.co./datasets/wikimedia/wikipedia) ([Hub PR](https://huggingface.co./datasets/wikipedia/discussions/19))
- [ ] Make [`natural_questions`](https://huggingface.co./datasets/natural_questions) a no-code dataset
- [ ] Make [`wiki40b`](https://huggingface.co./datasets/wiki40b) a no-code dataset
- [ ] Make [`wiki_dpr`](https://huggingface.co./datasets/wiki_dpr) an Arrow-based dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6474/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6473/comments | https://api.github.com/repos/huggingface/datasets/issues/6473/events | https://github.com/huggingface/datasets/pull/6473 | 2,026,495,084 | PR_kwDODunzps5hMbvz | 6,473 | Fix CI quality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6473). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005270 / 0.011353 (-0.006083) | 0.003471 / 0.011008 (-0.007537) | 0.061942 / 0.038508 (0.023434) | 0.052671 / 0.023109 (0.029562) | 0.250541 / 0.275898 (-0.025357) | 0.270677 / 0.323480 (-0.052803) | 0.002933 / 0.007986 (-0.005053) | 0.003264 / 0.004328 (-0.001064) | 0.048055 / 0.004250 (0.043804) | 0.037459 / 0.037052 (0.000407) | 0.254926 / 0.258489 (-0.003563) | 0.292547 / 0.293841 (-0.001294) | 0.027959 / 0.128546 (-0.100587) | 0.010762 / 0.075646 (-0.064884) | 0.204961 / 0.419271 (-0.214310) | 0.035488 / 0.043533 (-0.008045) | 0.254102 / 0.255139 (-0.001037) | 0.273654 / 0.283200 (-0.009546) | 0.018126 / 0.141683 (-0.123556) | 1.082330 / 1.452155 (-0.369825) | 1.147179 / 1.492716 (-0.345538) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093223 / 0.018006 (0.075217) | 0.301912 / 0.000490 (0.301422) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018407 / 0.037411 (-0.019004) | 0.060412 / 0.014526 (0.045886) | 0.074063 / 0.176557 (-0.102494) | 0.118743 / 0.737135 (-0.618392) | 0.076484 / 0.296338 (-0.219854) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289929 / 0.215209 (0.074720) | 2.825096 / 2.077655 (0.747442) | 1.511444 / 1.504120 (0.007324) | 1.394812 / 1.541195 (-0.146383) | 1.419751 / 
1.468490 (-0.048739) | 0.569995 / 4.584777 (-4.014782) | 2.402586 / 3.745712 (-1.343126) | 2.826223 / 5.269862 (-2.443639) | 1.751554 / 4.565676 (-2.814123) | 0.064266 / 0.424275 (-0.360009) | 0.005047 / 0.007607 (-0.002561) | 0.341513 / 0.226044 (0.115469) | 3.372106 / 2.268929 (1.103177) | 1.872693 / 55.444624 (-53.571931) | 1.588200 / 6.876477 (-5.288276) | 1.630800 / 2.142072 (-0.511272) | 0.654266 / 4.805227 (-4.150961) | 0.124292 / 6.500664 (-6.376372) | 0.042876 / 0.075469 (-0.032593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948406 / 1.841788 (-0.893382) | 11.652947 / 8.074308 (3.578639) | 10.218195 / 10.191392 (0.026803) | 0.128447 / 0.680424 (-0.551976) | 0.014092 / 0.534201 (-0.520109) | 0.287631 / 0.579283 (-0.291652) | 0.264843 / 0.434364 (-0.169521) | 0.329997 / 0.540337 (-0.210340) | 0.439597 / 1.386936 (-0.947339) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005418 / 0.011353 (-0.005935) | 0.003589 / 0.011008 (-0.007419) | 0.050074 / 0.038508 (0.011566) | 0.052566 / 0.023109 (0.029456) | 0.293447 / 0.275898 (0.017549) | 0.320518 / 0.323480 (-0.002962) | 0.004094 / 0.007986 (-0.003892) | 0.002690 / 0.004328 (-0.001639) | 0.048200 / 0.004250 (0.043949) | 0.040692 / 0.037052 (0.003640) | 0.297086 / 0.258489 (0.038597) | 0.323827 / 0.293841 (0.029986) | 0.029511 / 0.128546 (-0.099035) | 0.011079 / 0.075646 (-0.064568) | 0.058562 / 0.419271 (-0.360709) | 0.032897 / 0.043533 (-0.010636) | 0.297244 / 0.255139 (0.042105) | 0.316812 / 0.283200 (0.033612) | 0.018468 / 0.141683 (-0.123215) | 1.140948 / 1.452155 (-0.311207) | 1.195453 / 1.492716 (-0.297263) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092677 / 0.018006 (0.074671) | 0.300775 / 0.000490 (0.300285) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021617 / 0.037411 (-0.015794) | 0.077135 / 0.014526 (0.062610) | 0.079848 / 0.176557 (-0.096709) | 0.118475 / 0.737135 (-0.618661) | 0.081174 / 0.296338 (-0.215164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294424 / 0.215209 (0.079215) | 2.863989 / 2.077655 (0.786334) | 1.590604 / 1.504120 (0.086484) | 1.474345 / 1.541195 (-0.066849) | 1.482120 / 1.468490 (0.013630) | 0.567829 / 4.584777 (-4.016948) | 2.493782 / 3.745712 (-1.251930) | 2.823460 / 5.269862 (-2.446402) | 1.732677 / 4.565676 (-2.833000) | 0.065518 / 0.424275 (-0.358757) | 0.004923 / 0.007607 (-0.002684) | 0.349313 / 0.226044 (0.123268) | 3.428618 / 2.268929 (1.159689) | 1.970641 / 55.444624 (-53.473983) | 1.655884 / 6.876477 (-5.220593) | 1.657151 / 2.142072 (-0.484921) | 0.661208 / 4.805227 (-4.144019) | 0.119129 / 6.500664 (-6.381535) | 0.040770 / 0.075469 (-0.034699) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876923) | 12.050218 / 8.074308 (3.975910) | 10.458749 / 10.191392 (0.267357) | 0.141856 / 0.680424 (-0.538568) | 0.015091 / 0.534201 (-0.519109) | 0.288897 / 0.579283 (-0.290387) | 0.275343 / 0.434364 (-0.159021) | 0.328363 / 0.540337 (-0.211975) | 0.579243 / 1.386936 (-0.807693) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7721021e284859ea0952444bae6300a0d00794f \"CML watermark\")\n"
] | 2023-12-05T15:36:23 | 2023-12-05T18:14:50 | 2023-12-05T18:08:41 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6473",
"html_url": "https://github.com/huggingface/datasets/pull/6473",
"diff_url": "https://github.com/huggingface/datasets/pull/6473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6473.patch",
"merged_at": "2023-12-05T18:08:41"
} | Fix #6472. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6473/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6472/comments | https://api.github.com/repos/huggingface/datasets/issues/6472/events | https://github.com/huggingface/datasets/issues/6472 | 2,026,493,439 | I_kwDODunzps54ydX_ | 6,472 | CI quality is broken | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 2023-12-05T15:35:34 | 2023-12-06T08:17:34 | 2023-12-05T18:08:43 | MEMBER | null | null | null | See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359
```
Would reformat: src/datasets/features/image.py
1 file would be reformatted, 253 files left unchanged
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6472/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6471/comments | https://api.github.com/repos/huggingface/datasets/issues/6471/events | https://github.com/huggingface/datasets/pull/6471 | 2,026,100,761 | PR_kwDODunzps5hLEni | 6,471 | Remove delete doc CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6471). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005573 / 0.011353 (-0.005780) | 0.003449 / 0.011008 (-0.007559) | 0.063323 / 0.038508 (0.024815) | 0.049369 / 0.023109 (0.026260) | 0.254280 / 0.275898 (-0.021618) | 0.267721 / 0.323480 (-0.055759) | 0.002894 / 0.007986 (-0.005092) | 0.002646 / 0.004328 (-0.001683) | 0.049284 / 0.004250 (0.045033) | 0.037947 / 0.037052 (0.000895) | 0.251654 / 0.258489 (-0.006836) | 0.279729 / 0.293841 (-0.014112) | 0.028022 / 0.128546 (-0.100525) | 0.010653 / 0.075646 (-0.064993) | 0.208567 / 0.419271 (-0.210704) | 0.035863 / 0.043533 (-0.007670) | 0.248522 / 0.255139 (-0.006617) | 0.270274 / 0.283200 (-0.012925) | 0.019683 / 0.141683 (-0.122000) | 1.136342 / 1.452155 (-0.315812) | 1.206757 / 1.492716 (-0.285960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094682 / 0.018006 (0.076676) | 0.304092 / 0.000490 (0.303602) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018606 / 0.037411 (-0.018805) | 0.060568 / 0.014526 (0.046042) | 0.074067 / 0.176557 (-0.102490) | 0.118979 / 0.737135 (-0.618156) | 0.075676 / 0.296338 (-0.220663) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290452 / 0.215209 (0.075243) | 2.848868 / 2.077655 (0.771213) | 1.534932 / 1.504120 (0.030812) | 1.386717 / 1.541195 (-0.154478) | 1.416645 / 
1.468490 (-0.051845) | 0.569020 / 4.584777 (-4.015757) | 2.421168 / 3.745712 (-1.324545) | 2.781358 / 5.269862 (-2.488503) | 1.758495 / 4.565676 (-2.807182) | 0.063851 / 0.424275 (-0.360424) | 0.004968 / 0.007607 (-0.002639) | 0.339198 / 0.226044 (0.113154) | 3.356392 / 2.268929 (1.087464) | 1.858145 / 55.444624 (-53.586479) | 1.589000 / 6.876477 (-5.287477) | 1.569175 / 2.142072 (-0.572897) | 0.650571 / 4.805227 (-4.154657) | 0.120288 / 6.500664 (-6.380376) | 0.042489 / 0.075469 (-0.032980) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939963 / 1.841788 (-0.901824) | 11.493612 / 8.074308 (3.419304) | 10.353780 / 10.191392 (0.162388) | 0.141945 / 0.680424 (-0.538479) | 0.014397 / 0.534201 (-0.519804) | 0.286971 / 0.579283 (-0.292312) | 0.266787 / 0.434364 (-0.167577) | 0.330385 / 0.540337 (-0.209952) | 0.438542 / 1.386936 (-0.948394) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005360 / 0.011353 (-0.005993) | 0.003720 / 0.011008 (-0.007288) | 0.048790 / 0.038508 (0.010282) | 0.050256 / 0.023109 (0.027147) | 0.275445 / 0.275898 (-0.000453) | 0.297725 / 0.323480 (-0.025755) | 0.004077 / 0.007986 (-0.003909) | 0.002759 / 0.004328 (-0.001569) | 0.047653 / 0.004250 (0.043403) | 0.040205 / 0.037052 (0.003153) | 0.281028 / 0.258489 (0.022539) | 0.304682 / 0.293841 (0.010841) | 0.030158 / 0.128546 (-0.098388) | 0.010957 / 0.075646 (-0.064689) | 0.058193 / 0.419271 (-0.361079) | 0.033277 / 0.043533 (-0.010256) | 0.279501 / 0.255139 (0.024362) | 0.295381 / 0.283200 (0.012181) | 0.017889 / 0.141683 (-0.123794) | 1.121354 / 1.452155 (-0.330801) | 1.225702 / 1.492716 (-0.267014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093385 / 0.018006 (0.075378) | 0.304642 / 0.000490 (0.304152) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021456 / 0.037411 (-0.015955) | 0.068536 / 0.014526 (0.054010) | 0.080867 / 0.176557 (-0.095689) | 0.119093 / 0.737135 (-0.618042) | 0.081875 / 0.296338 (-0.214464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304434 / 0.215209 (0.089225) | 2.990303 / 2.077655 (0.912649) | 1.616959 / 1.504120 (0.112839) | 1.493256 / 1.541195 (-0.047939) | 1.542857 / 1.468490 (0.074367) | 0.575517 / 4.584777 (-4.009260) | 2.455165 / 3.745712 (-1.290547) | 2.810089 / 5.269862 (-2.459773) | 1.756502 / 4.565676 (-2.809175) | 0.064801 / 0.424275 (-0.359475) | 0.004969 / 0.007607 (-0.002638) | 0.360227 / 0.226044 (0.134183) | 3.575029 / 2.268929 (1.306100) | 1.989955 / 55.444624 (-53.454669) | 1.705306 / 6.876477 (-5.171171) | 1.688523 / 2.142072 (-0.453550) | 0.663266 / 4.805227 (-4.141962) | 0.121852 / 6.500664 (-6.378812) | 0.041853 / 0.075469 (-0.033616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983535 / 1.841788 (-0.858252) | 11.827656 / 8.074308 (3.753348) | 10.663265 / 10.191392 (0.471873) | 0.145942 / 0.680424 (-0.534482) | 0.016004 / 0.534201 (-0.518197) | 0.288907 / 0.579283 (-0.290376) | 0.279100 / 0.434364 (-0.155264) | 0.328061 / 0.540337 (-0.212276) | 0.570253 / 1.386936 (-0.816683) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b52cbc18919869460557e15028e7f489eae8afc7 \"CML watermark\")\n"
] | 2023-12-05T12:37:50 | 2023-12-05T12:44:59 | 2023-12-05T12:38:50 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6471",
"html_url": "https://github.com/huggingface/datasets/pull/6471",
"diff_url": "https://github.com/huggingface/datasets/pull/6471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6471.patch",
"merged_at": "2023-12-05T12:38:50"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6471/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6470/comments | https://api.github.com/repos/huggingface/datasets/issues/6470/events | https://github.com/huggingface/datasets/issues/6470 | 2,024,724,319 | I_kwDODunzps54rtdf | 6,470 | If an image in a dataset is corrupted, we get unescapable error | {
"login": "chigozienri",
"id": 14337872,
"node_id": "MDQ6VXNlcjE0MzM3ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14337872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chigozienri",
"html_url": "https://github.com/chigozienri",
"followers_url": "https://api.github.com/users/chigozienri/followers",
"following_url": "https://api.github.com/users/chigozienri/following{/other_user}",
"gists_url": "https://api.github.com/users/chigozienri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chigozienri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chigozienri/subscriptions",
"organizations_url": "https://api.github.com/users/chigozienri/orgs",
"repos_url": "https://api.github.com/users/chigozienri/repos",
"events_url": "https://api.github.com/users/chigozienri/events{/privacy}",
"received_events_url": "https://api.github.com/users/chigozienri/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-12-04T20:58:49 | 2023-12-04T20:58:49 | null | NONE | null | null | null | ### Describe the bug
Example discussed in detail here: https://huggingface.co./datasets/sasha/birdsnap/discussions/1
### Steps to reproduce the bug
```
from datasets import load_dataset, VerificationMode
dataset = load_dataset(
    'sasha/birdsnap',
    split="train",
    verification_mode=VerificationMode.ALL_CHECKS,
    streaming=True  # I recommend using streaming=True when reproducing, as this dataset is large
)
for idx, row in enumerate(dataset):
    # Iterating to 9287 took 7 minutes for me
    # If you already have the data locally cached and set streaming=False, you see the same error just by accessing dataset[9287]
    pass
# error at 9287: OSError: image file is truncated (45 bytes not processed)
# note that we can't avoid the error using a try/except + continue inside the loop
```
### Expected behavior
Being able to skip or catch errors raised when casting to `Image()`, without killing the whole loop.
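A possible interim workaround on the user side (a sketch on my part, assuming the images are decoded with Pillow under the hood) is to tell Pillow to tolerate truncated files before iterating:
```python
# Workaround sketch (not part of the original report): let Pillow accept
# truncated image files instead of raising OSError while decoding.
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True

for idx, row in enumerate(dataset):
    pass  # the truncated image at index 9287 now decodes partially instead of crashing
```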
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6470/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6469/comments | https://api.github.com/repos/huggingface/datasets/issues/6469/events | https://github.com/huggingface/datasets/pull/6469 | 2,023,695,839 | PR_kwDODunzps5hC6xf | 6,469 | Don't expand_info in HF glob | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6469). All of your documentation changes will be reflected on that endpoint.",
"Merging this one for now, but lmk if you had other optimizations in mind for the next version of `huggingface_hub`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004998 / 0.011353 (-0.006355) | 0.003523 / 0.011008 (-0.007486) | 0.064932 / 0.038508 (0.026424) | 0.050107 / 0.023109 (0.026998) | 0.253715 / 0.275898 (-0.022183) | 0.275364 / 0.323480 (-0.048116) | 0.003902 / 0.007986 (-0.004084) | 0.002716 / 0.004328 (-0.001612) | 0.048458 / 0.004250 (0.044208) | 0.037802 / 0.037052 (0.000750) | 0.262328 / 0.258489 (0.003839) | 0.285911 / 0.293841 (-0.007930) | 0.027112 / 0.128546 (-0.101435) | 0.010780 / 0.075646 (-0.064867) | 0.206447 / 0.419271 (-0.212824) | 0.035771 / 0.043533 (-0.007761) | 0.255031 / 0.255139 (-0.000108) | 0.270530 / 0.283200 (-0.012670) | 0.017152 / 0.141683 (-0.124530) | 1.094734 / 1.452155 (-0.357421) | 1.163480 / 1.492716 (-0.329237) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092944 / 0.018006 (0.074938) | 0.301042 / 0.000490 (0.300553) | 0.000238 / 0.000200 (0.000038) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019090 / 0.037411 (-0.018321) | 0.061046 / 0.014526 (0.046520) | 0.073330 / 0.176557 (-0.103227) | 0.121124 / 0.737135 (-0.616012) | 0.080544 / 0.296338 (-0.215795) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.323866 / 0.215209 (0.108657) | 2.797727 / 2.077655 (0.720072) | 1.502994 / 1.504120 (-0.001126) | 1.376177 / 1.541195 (-0.165018) | 1.422741 / 
1.468490 (-0.045749) | 0.562990 / 4.584777 (-4.021786) | 2.431781 / 3.745712 (-1.313931) | 2.783226 / 5.269862 (-2.486635) | 1.788055 / 4.565676 (-2.777621) | 0.064206 / 0.424275 (-0.360069) | 0.004989 / 0.007607 (-0.002618) | 0.338282 / 0.226044 (0.112237) | 3.356226 / 2.268929 (1.087297) | 1.855644 / 55.444624 (-53.588980) | 1.580876 / 6.876477 (-5.295601) | 1.617418 / 2.142072 (-0.524655) | 0.636816 / 4.805227 (-4.168411) | 0.117680 / 6.500664 (-6.382985) | 0.042560 / 0.075469 (-0.032909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956410 / 1.841788 (-0.885377) | 11.764886 / 8.074308 (3.690578) | 10.535801 / 10.191392 (0.344409) | 0.137797 / 0.680424 (-0.542627) | 0.014368 / 0.534201 (-0.519833) | 0.286213 / 0.579283 (-0.293070) | 0.267093 / 0.434364 (-0.167271) | 0.334802 / 0.540337 (-0.205535) | 0.441866 / 1.386936 (-0.945070) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005348 / 0.011353 (-0.006005) | 0.003551 / 0.011008 (-0.007458) | 0.049226 / 0.038508 (0.010718) | 0.052072 / 0.023109 (0.028963) | 0.268025 / 0.275898 (-0.007873) | 0.289968 / 0.323480 (-0.033512) | 0.004034 / 0.007986 (-0.003952) | 0.002675 / 0.004328 (-0.001653) | 0.048099 / 0.004250 (0.043848) | 0.040141 / 0.037052 (0.003089) | 0.272974 / 0.258489 (0.014485) | 0.296097 / 0.293841 (0.002256) | 0.028972 / 0.128546 (-0.099575) | 0.010689 / 0.075646 (-0.064957) | 0.057853 / 0.419271 (-0.361418) | 0.032488 / 0.043533 (-0.011045) | 0.272018 / 0.255139 (0.016879) | 0.287179 / 0.283200 (0.003980) | 0.018446 / 0.141683 (-0.123237) | 1.140346 / 1.452155 (-0.311809) | 1.247743 / 1.492716 (-0.244974) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091987 / 0.018006 (0.073980) | 0.300527 / 0.000490 (0.300037) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021390 / 0.037411 (-0.016021) | 0.068768 / 0.014526 (0.054242) | 0.080798 / 0.176557 (-0.095759) | 0.119081 / 0.737135 (-0.618054) | 0.082461 / 0.296338 (-0.213878) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286631 / 0.215209 (0.071422) | 2.804633 / 2.077655 (0.726978) | 1.574122 / 1.504120 (0.070002) | 1.459994 / 1.541195 (-0.081201) | 1.499739 / 1.468490 (0.031249) | 0.579595 / 4.584777 (-4.005182) | 2.426407 / 3.745712 (-1.319306) | 2.917994 / 5.269862 (-2.351868) | 1.846439 / 4.565676 (-2.719238) | 0.063274 / 0.424275 (-0.361001) | 0.005028 / 0.007607 (-0.002579) | 0.341114 / 0.226044 (0.115070) | 3.402677 / 2.268929 (1.133748) | 1.940980 / 55.444624 (-53.503645) | 1.651902 / 6.876477 (-5.224575) | 1.677037 / 2.142072 (-0.465036) | 0.651576 / 4.805227 (-4.153651) | 0.116398 / 6.500664 (-6.384266) | 0.041060 / 0.075469 (-0.034409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973278 / 1.841788 (-0.868509) | 12.248332 / 8.074308 (4.174024) | 10.830627 / 10.191392 (0.639235) | 0.143146 / 0.680424 (-0.537278) | 0.016249 / 0.534201 (-0.517952) | 0.298563 / 0.579283 (-0.280720) | 0.278643 / 0.434364 (-0.155721) | 0.338206 / 0.540337 (-0.202132) | 0.589485 / 1.386936 (-0.797451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#da29ac32c57e079199c173e4404342cc105ed774 \"CML watermark\")\n"
] | 2023-12-04T12:00:37 | 2023-12-15T13:18:37 | 2023-12-15T13:12:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6469",
"html_url": "https://github.com/huggingface/datasets/pull/6469",
"diff_url": "https://github.com/huggingface/datasets/pull/6469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6469.patch",
"merged_at": "2023-12-15T13:12:30"
} | Finally fix https://github.com/huggingface/datasets/issues/5537 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6469/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6468/comments | https://api.github.com/repos/huggingface/datasets/issues/6468/events | https://github.com/huggingface/datasets/pull/6468 | 2,023,617,877 | PR_kwDODunzps5hCpbN | 6,468 | Use auth to get parquet export | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6468). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003510 / 0.011008 (-0.007499) | 0.062939 / 0.038508 (0.024431) | 0.049191 / 0.023109 (0.026082) | 0.259088 / 0.275898 (-0.016810) | 0.273523 / 0.323480 (-0.049957) | 0.003902 / 0.007986 (-0.004083) | 0.002699 / 0.004328 (-0.001630) | 0.049077 / 0.004250 (0.044827) | 0.037174 / 0.037052 (0.000121) | 0.256467 / 0.258489 (-0.002022) | 0.291235 / 0.293841 (-0.002606) | 0.028119 / 0.128546 (-0.100427) | 0.010404 / 0.075646 (-0.065243) | 0.205825 / 0.419271 (-0.213446) | 0.035741 / 0.043533 (-0.007792) | 0.253219 / 0.255139 (-0.001920) | 0.274986 / 0.283200 (-0.008214) | 0.018379 / 0.141683 (-0.123304) | 1.131139 / 1.452155 (-0.321016) | 1.175875 / 1.492716 (-0.316841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090717 / 0.018006 (0.072710) | 0.299285 / 0.000490 (0.298796) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018678 / 0.037411 (-0.018733) | 0.060558 / 0.014526 (0.046032) | 0.073828 / 0.176557 (-0.102728) | 0.119302 / 0.737135 (-0.617833) | 0.075261 / 0.296338 (-0.221078) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277018 / 0.215209 (0.061809) | 2.713255 / 2.077655 (0.635601) | 1.427512 / 1.504120 (-0.076608) | 1.311374 / 1.541195 (-0.229821) | 1.348756 / 
1.468490 (-0.119734) | 0.561777 / 4.584777 (-4.023000) | 2.393578 / 3.745712 (-1.352134) | 2.798109 / 5.269862 (-2.471753) | 1.754808 / 4.565676 (-2.810869) | 0.062302 / 0.424275 (-0.361973) | 0.004948 / 0.007607 (-0.002659) | 0.328468 / 0.226044 (0.102423) | 3.246558 / 2.268929 (0.977629) | 1.786816 / 55.444624 (-53.657808) | 1.482937 / 6.876477 (-5.393540) | 1.516109 / 2.142072 (-0.625963) | 0.634457 / 4.805227 (-4.170770) | 0.116505 / 6.500664 (-6.384159) | 0.042162 / 0.075469 (-0.033308) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935312 / 1.841788 (-0.906476) | 11.540599 / 8.074308 (3.466291) | 10.512593 / 10.191392 (0.321201) | 0.129638 / 0.680424 (-0.550786) | 0.013994 / 0.534201 (-0.520207) | 0.291490 / 0.579283 (-0.287793) | 0.263641 / 0.434364 (-0.170722) | 0.328718 / 0.540337 (-0.211619) | 0.437598 / 1.386936 (-0.949338) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005192 / 0.011353 (-0.006161) | 0.003454 / 0.011008 (-0.007554) | 0.049448 / 0.038508 (0.010940) | 0.050968 / 0.023109 (0.027859) | 0.273702 / 0.275898 (-0.002196) | 0.296934 / 0.323480 (-0.026545) | 0.004066 / 0.007986 (-0.003920) | 0.002611 / 0.004328 (-0.001718) | 0.048284 / 0.004250 (0.044034) | 0.041399 / 0.037052 (0.004346) | 0.283000 / 0.258489 (0.024511) | 0.302553 / 0.293841 (0.008712) | 0.029086 / 0.128546 (-0.099460) | 0.010510 / 0.075646 (-0.065137) | 0.058097 / 0.419271 (-0.361175) | 0.032992 / 0.043533 (-0.010541) | 0.271752 / 0.255139 (0.016613) | 0.293535 / 0.283200 (0.010335) | 0.016958 / 0.141683 (-0.124725) | 1.130126 / 1.452155 (-0.322028) | 1.187228 / 1.492716 (-0.305488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092321 / 0.018006 (0.074315) | 0.302599 / 0.000490 (0.302109) | 0.000215 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021837 / 0.037411 (-0.015574) | 0.071148 / 0.014526 (0.056622) | 0.082448 / 0.176557 (-0.094108) | 0.128083 / 0.737135 (-0.609053) | 0.090864 / 0.296338 (-0.205474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296248 / 0.215209 (0.081039) | 2.881130 / 2.077655 (0.803476) | 1.580360 / 1.504120 (0.076240) | 1.454642 / 1.541195 (-0.086553) | 1.461453 / 1.468490 (-0.007037) | 0.567500 / 4.584777 (-4.017277) | 2.493708 / 3.745712 (-1.252004) | 2.756623 / 5.269862 (-2.513239) | 1.771319 / 4.565676 (-2.794358) | 0.062287 / 0.424275 (-0.361988) | 0.004917 / 0.007607 (-0.002691) | 0.348034 / 0.226044 (0.121990) | 3.426938 / 2.268929 (1.158010) | 1.954190 / 55.444624 (-53.490435) | 1.660870 / 6.876477 (-5.215607) | 1.675118 / 2.142072 (-0.466955) | 0.636843 / 4.805227 (-4.168384) | 0.115028 / 6.500664 (-6.385636) | 0.040702 / 0.075469 (-0.034767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988076 / 1.841788 (-0.853711) | 11.890867 / 8.074308 (3.816559) | 10.621169 / 10.191392 (0.429777) | 0.131568 / 0.680424 (-0.548856) | 0.014994 / 0.534201 (-0.519207) | 0.288900 / 0.579283 (-0.290384) | 0.272092 / 0.434364 (-0.162272) | 0.329397 / 0.540337 (-0.210940) | 0.569337 / 1.386936 (-0.817599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ae3b4a2268adc2f21568ff63891e9a83530c7e29 \"CML watermark\")\n"
] | 2023-12-04T11:18:27 | 2023-12-04T17:21:22 | 2023-12-04T17:15:11 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6468",
"html_url": "https://github.com/huggingface/datasets/pull/6468",
"diff_url": "https://github.com/huggingface/datasets/pull/6468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6468.patch",
"merged_at": "2023-12-04T17:15:11"
} | added `token` to the `_datasets_server` functions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6468/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6467/comments | https://api.github.com/repos/huggingface/datasets/issues/6467/events | https://github.com/huggingface/datasets/issues/6467 | 2,023,174,233 | I_kwDODunzps54lzBZ | 6,467 | New version release request | {
"login": "LZHgrla",
"id": 36994684,
"node_id": "MDQ6VXNlcjM2OTk0Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZHgrla",
"html_url": "https://github.com/LZHgrla",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions",
"organizations_url": "https://api.github.com/users/LZHgrla/orgs",
"repos_url": "https://api.github.com/users/LZHgrla/repos",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZHgrla/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | [
"We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)",
"Thanks!"
] | 2023-12-04T07:08:26 | 2023-12-04T15:42:22 | 2023-12-04T15:42:22 | CONTRIBUTOR | null | null | null | ### Feature request
Hi!
I am using `datasets` in the `xtuner` library and am highly interested in the features introduced since v2.15.0.
To avoid installing from source in our PyPI wheels, we are eagerly waiting for the new release. Does your team have a release plan for v2.15.1, and could you please share it with us?
Thanks very much!
### Motivation
.
### Your contribution
. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6467/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6466/comments | https://api.github.com/repos/huggingface/datasets/issues/6466/events | https://github.com/huggingface/datasets/issues/6466 | 2,022,601,176 | I_kwDODunzps54jnHY | 6,466 | Can't align optional features of struct | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Friendly bump, I would be happy to work on this issue once I get the go-ahead from the dev team. "
] | 2023-12-03T15:57:07 | 2023-12-11T14:38:34 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello!
I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional.
I have a column named `speaker`, and this holds some information about a speaker.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Speaker:
    name: str
    email: Optional[str]
```
If I have two datasets and one of them happens to have `email` always set to None, then I get `The features can't be aligned because the key email of features`
### Steps to reproduce the bug
You can run the following script:
```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})
concatenate_datasets([ds, ds2])
>>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null").
```
### Expected behavior
I think this should work; if two top-level columns were in the same situation it would properly cast to `string`.
```python
ds = Dataset.from_dict({'email': [None, None]})
ds2 = Dataset.from_dict({'email': ['[email protected]', '[email protected]']})
concatenate_datasets([ds, ds2])
>>> # Works!
```
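In the meantime, a workaround sketch (my own, assuming the nested `email` field should end up typed as a plain string) is to cast both datasets to an explicit nested schema before concatenating:
```python
from datasets import Dataset, Features, Value, concatenate_datasets

# Explicit nested schema so the null-typed `email` field is cast to string.
features = Features({'speaker': {'name': Value('string'), 'email': Value('string')}})

ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}).cast(features)
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]}).cast(features)

concatenate_datasets([ds, ds2])  # no alignment error once both schemas match
```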
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
- `fsspec` version: 2023.6.0
I would be happy to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6466/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6465/comments | https://api.github.com/repos/huggingface/datasets/issues/6465/events | https://github.com/huggingface/datasets/issues/6465 | 2,022,212,468 | I_kwDODunzps54iIN0 | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | {
"login": "mnoukhov",
"id": 3391297,
"node_id": "MDQ6VXNlcjMzOTEyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnoukhov",
"html_url": "https://github.com/mnoukhov",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "https://api.github.com/users/mnoukhov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnoukhov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnoukhov/subscriptions",
"organizations_url": "https://api.github.com/users/mnoukhov/orgs",
"repos_url": "https://api.github.com/users/mnoukhov/repos",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnoukhov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-02T21:35:17 | 2023-12-04T16:13:10 | null | NONE | null | null | null | ### Describe the bug
When a dataset is updated on the Hub, using `load_dataset` loads the locally cached dataset instead of re-downloading the updated dataset.
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. change the dataset and re-upload
4. redownload
```python
import time
from datasets import Dataset, DatasetDict, DownloadMode, load_dataset
username = "YOUR_USERNAME_HERE"
initial = Dataset.from_dict({"foo": [1, 2, 3]})
print(f"Intial {initial['foo']}")
initial_ds = DatasetDict({"train": initial})
initial_ds.push_to_hub("test")
time.sleep(1)
download = load_dataset(f"{username}/test", split="train")
changed = download.map(lambda x: {"foo": x["foo"] + 1})
print(f"Changed {changed['foo']}")
changed.push_to_hub("test")
time.sleep(1)
download_again = load_dataset(f"{username}/test", split="train")
print(f"Download Changed {download_again['foo']}")
# >>> gives the out-dated [1,2,3] when it should be changed [2,3,4]
```
The re-downloaded dataset should be the changed dataset, but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset:
```python
download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD)
print(f"Force Download Changed {download_again_force['foo']}")
# >>> [2,3,4]
```
### Expected behavior
I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match
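Until then, another blunt workaround sketch (mine, not from the original report) is to point `load_dataset` at a fresh cache directory, so nothing stale can be reused:
```python
import tempfile

# Workaround sketch: an empty cache_dir forces a download of the latest data.
download_fresh = load_dataset(f"{username}/test", split="train", cache_dir=tempfile.mkdtemp())
print(f"Fresh Download {download_fresh['foo']}")
# >>> [2,3,4]
```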
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17
- Python version: 3.8.17
- `huggingface_hub` version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6465/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6464/comments | https://api.github.com/repos/huggingface/datasets/issues/6464/events | https://github.com/huggingface/datasets/pull/6464 | 2,020,860,462 | PR_kwDODunzps5g5djo | 6,464 | Add concurrent loading of shards to datasets.load_from_disk | {
"login": "kkoutini",
"id": 51880718,
"node_id": "MDQ6VXNlcjUxODgwNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkoutini",
"html_url": "https://github.com/kkoutini",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions",
"organizations_url": "https://api.github.com/users/kkoutini/orgs",
"repos_url": "https://api.github.com/users/kkoutini/repos",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkoutini/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"If we use multithreading no need to ask for `num_proc`. And maybe we the same numbers of threads as tqdm by default (IIRC it's `max(32, cpu_count() + 4)`) - you can even use `tqdm.contrib.concurrent.thread_map` directly to simplify the code\r\n\r\nAlso you can ignore the `IN_MEMORY_MAX_SIZE` config for this. This parameter is kinda legacy.\r\n\r\nHave you been able to run the benchmark on a fresh node ? The speed up doesn't seem that big in your first report",
"I got some fresh nodes with the 32 threads I'm loading the dataset with around 315 seconds (without any preloading). Sequentially, it used to take around 1865 seconds. \r\nOk I'll roll back the changes and switch to `tqdm.contrib.concurrent.thread_map` without the `num_proc` parameter. ",
"I switched to `tqdm.contrib.concurrent.thread_map` the code looks much simpler now.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2023-12-01T13:13:53 | 2023-12-07T12:47:02 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6464",
"html_url": "https://github.com/huggingface/datasets/pull/6464",
"diff_url": "https://github.com/huggingface/datasets/pull/6464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6464.patch",
"merged_at": null
} | In some file systems (like Lustre), memory-mapping Arrow files takes time. This can be accelerated by performing the mmap in parallel across processes or threads (see the sketch after the list below).
- Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252).
- I'm not sure if using threads would respect the `IN_MEMORY_MAX_SIZE` config.
- I'm not sure if we need to expose `num_proc` from `BaseReader.read` to `DatasetBuilder.as_dataset`, since `DatasetBuilder.as_dataset` is used in many places besides `load_dataset`.
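A minimal sketch of the threaded mmap (my own illustration, not the exact code in this PR; it assumes each shard is an Arrow IPC stream file, as written by `datasets`):
```python
import pyarrow as pa
from tqdm.contrib.concurrent import thread_map

def _mmap_shard(path):
    # Memory-mapping is cheap on local disks but slow on Lustre, so doing it
    # from several threads at once hides the per-file latency.
    memory_mapped_stream = pa.memory_map(path)
    return pa.ipc.open_stream(memory_mapped_stream).read_all()

def load_shards(paths):
    # thread_map shows a progress bar and returns the tables in input order.
    return thread_map(_mmap_shard, paths)
```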
### Tests on a Lustre file system (on a shared partial node):
Loading 1231 shards of ~2 GB each.
The files were pre-loaded in another process before the script runs (couldn't get a fresh node).
```python
import logging
from time import perf_counter
import datasets
logger = datasets.logging.get_logger(__name__)
datasets.logging.set_verbosity_info()
logging.basicConfig(level=logging.DEBUG, format="%(message)s")
class catchtime:
    # context to measure loading time: https://stackoverflow.com/questions/33987060/python-context-manager-that-measures-time
    def __init__(self, debug_print="Time", logger=logger):
        self.debug_print = debug_print
        self.logger = logger

    def __enter__(self):
        self.start = perf_counter()
        return self

    def __exit__(self, type, value, traceback):
        self.time = perf_counter() - self.start
        readout = f"{self.debug_print}: {self.time:.3f} seconds"
        self.logger.info(readout)
dataset_path = ""

# warmup
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=16)

# num_proc=16
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=16)

# num_proc=32
with catchtime("Loading in parallel", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=32)

# num_proc=1
with catchtime("Loading in conseq", logger=logger):
    ds = datasets.load_from_disk(dataset_path, num_proc=1)
```
#### Run 1
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:28<00:00, 13.96shards/s]
Loading in parallel: 88.690 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:48<00:00, 11.31shards/s]
Loading in parallel: 109.339 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:06<00:00, 18.56shards/s]
Loading in parallel: 66.931 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:09<00:00, 3.98shards/s]
Loading in conseq: 309.792 seconds
```
#### Run 2
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:38<00:00, 12.53shards/s]
Loading in parallel: 98.831 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [02:01<00:00, 10.16shards/s]
Loading in parallel: 121.669 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:07<00:00, 18.18shards/s]
Loading in parallel: 68.192 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:19<00:00, 3.86shards/s]
Loading in conseq: 319.759 seconds
```
#### Run 3
```
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [01:36<00:00, 12.74shards/s]
Loading in parallel: 96.936 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 16 threads: 100%|ββββββββββ| 1231/1231 [02:00<00:00, 10.24shards/s]
Loading in parallel: 120.761 seconds
open file: .../dataset_dict.json
Loading the dataset from disk using 32 threads: 100%|ββββββββββ| 1231/1231 [01:08<00:00, 18.04shards/s]
Loading in parallel: 68.666 seconds
open file: .../dataset_dict.json
Loading the dataset from disk: 100%|ββββββββββ| 1231/1231 [05:35<00:00, 3.67shards/s]
Loading in conseq: 335.777 seconds
```
fix #2252
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6464/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6463/comments | https://api.github.com/repos/huggingface/datasets/issues/6463/events | https://github.com/huggingface/datasets/pull/6463 | 2,020,702,967 | PR_kwDODunzps5g46_4 | 6,463 | Disable benchmarks in PRs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's a way to detect regressions in performance sensitive methods like map, and find the commit that lead to the regression",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005996) | 0.003295 / 0.011008 (-0.007713) | 0.062354 / 0.038508 (0.023846) | 0.054207 / 0.023109 (0.031098) | 0.240030 / 0.275898 (-0.035869) | 0.267863 / 0.323480 (-0.055617) | 0.002925 / 0.007986 (-0.005061) | 0.002634 / 0.004328 (-0.001695) | 0.047952 / 0.004250 (0.043702) | 0.038424 / 0.037052 (0.001372) | 0.248059 / 0.258489 (-0.010430) | 0.271923 / 0.293841 (-0.021918) | 0.027513 / 0.128546 (-0.101034) | 0.010344 / 0.075646 (-0.065302) | 0.210864 / 0.419271 (-0.208407) | 0.035911 / 0.043533 (-0.007622) | 0.245166 / 0.255139 (-0.009973) | 0.260914 / 0.283200 (-0.022285) | 0.016709 / 0.141683 (-0.124974) | 1.098324 / 1.452155 (-0.353830) | 1.162638 / 1.492716 (-0.330079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094419 / 0.018006 (0.076413) | 0.303209 / 0.000490 (0.302719) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018350 / 0.037411 (-0.019061) | 0.060625 / 0.014526 (0.046099) | 0.072545 / 0.176557 (-0.104012) | 0.120905 / 0.737135 (-0.616231) | 0.073858 / 0.296338 (-0.222480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282011 / 0.215209 (0.066802) | 2.758741 / 2.077655 (0.681086) | 1.431691 / 1.504120 (-0.072429) | 1.315883 / 1.541195 (-0.225312) | 1.344235 / 
1.468490 (-0.124255) | 0.562117 / 4.584777 (-4.022660) | 2.385641 / 3.745712 (-1.360071) | 2.785402 / 5.269862 (-2.484460) | 1.753912 / 4.565676 (-2.811764) | 0.064054 / 0.424275 (-0.360221) | 0.005050 / 0.007607 (-0.002557) | 0.336452 / 0.226044 (0.110407) | 3.302481 / 2.268929 (1.033553) | 1.794105 / 55.444624 (-53.650519) | 1.519346 / 6.876477 (-5.357131) | 1.514911 / 2.142072 (-0.627161) | 0.655779 / 4.805227 (-4.149449) | 0.117913 / 6.500664 (-6.382751) | 0.042229 / 0.075469 (-0.033240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935196 / 1.841788 (-0.906591) | 11.490113 / 8.074308 (3.415805) | 10.542446 / 10.191392 (0.351054) | 0.129614 / 0.680424 (-0.550810) | 0.014919 / 0.534201 (-0.519282) | 0.288448 / 0.579283 (-0.290835) | 0.266929 / 0.434364 (-0.167435) | 0.328830 / 0.540337 (-0.211507) | 0.475510 / 1.386936 (-0.911426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005469 / 0.011353 (-0.005884) | 0.003798 / 0.011008 (-0.007210) | 0.049129 / 0.038508 (0.010621) | 0.055490 / 0.023109 (0.032380) | 0.265828 / 0.275898 (-0.010070) | 0.286031 / 0.323480 (-0.037448) | 0.004075 / 0.007986 (-0.003910) | 0.002668 / 0.004328 (-0.001660) | 0.047823 / 0.004250 (0.043573) | 0.041946 / 0.037052 (0.004894) | 0.270359 / 0.258489 (0.011869) | 0.294287 / 0.293841 (0.000446) | 0.029643 / 0.128546 (-0.098903) | 0.010523 / 0.075646 (-0.065123) | 0.057370 / 0.419271 (-0.361902) | 0.033149 / 0.043533 (-0.010384) | 0.264408 / 0.255139 (0.009269) | 0.280413 / 0.283200 (-0.002787) | 0.018313 / 0.141683 (-0.123370) | 1.105982 / 1.452155 (-0.346173) | 1.182486 / 1.492716 (-0.310230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092643 / 0.018006 (0.074637) | 0.301320 / 0.000490 (0.300831) | 0.000221 / 0.000200 (0.000021) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021253 / 0.037411 (-0.016158) | 0.068052 / 0.014526 (0.053527) | 0.080821 / 0.176557 (-0.095736) | 0.119320 / 0.737135 (-0.617816) | 0.081952 / 0.296338 (-0.214387) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288536 / 0.215209 (0.073327) | 2.819900 / 2.077655 (0.742245) | 1.545210 / 1.504120 (0.041090) | 1.422047 / 1.541195 (-0.119147) | 1.439158 / 1.468490 (-0.029332) | 0.564910 / 4.584777 (-4.019867) | 2.430474 / 3.745712 (-1.315238) | 2.763979 / 5.269862 (-2.505882) | 1.732203 / 4.565676 (-2.833474) | 0.062692 / 0.424275 (-0.361583) | 0.004936 / 0.007607 (-0.002671) | 0.341626 / 0.226044 (0.115582) | 3.366623 / 2.268929 (1.097694) | 1.917198 / 55.444624 (-53.527426) | 1.637635 / 6.876477 (-5.238842) | 1.625953 / 2.142072 (-0.516119) | 0.634936 / 4.805227 (-4.170291) | 0.115336 / 6.500664 (-6.385328) | 0.040946 / 0.075469 (-0.034524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876922) | 12.077233 / 8.074308 (4.002925) | 10.664120 / 10.191392 (0.472728) | 0.132084 / 0.680424 (-0.548340) | 0.015931 / 0.534201 (-0.518270) | 0.289181 / 0.579283 (-0.290102) | 0.276943 / 0.434364 (-0.157420) | 0.324884 / 0.540337 (-0.215453) | 0.552570 / 1.386936 (-0.834366) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ac3f2b3f6d867673e41a0253f9e1ad48db68a8e \"CML watermark\")\n"
] | 2023-12-01T11:35:30 | 2023-12-01T12:09:09 | 2023-12-01T12:03:04 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6463",
"html_url": "https://github.com/huggingface/datasets/pull/6463",
"diff_url": "https://github.com/huggingface/datasets/pull/6463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6463.patch",
"merged_at": "2023-12-01T12:03:04"
} | This keeps PR pages less spammy / more readable.
Having the benchmarks on commits on `main` is enough, in my opinion. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6463/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6462/comments | https://api.github.com/repos/huggingface/datasets/issues/6462/events | https://github.com/huggingface/datasets/pull/6462 | 2,019,238,388 | PR_kwDODunzps5gz68T | 6,462 | Missing DatasetNotFoundError | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005594 / 0.011353 (-0.005759) | 0.003672 / 0.011008 (-0.007337) | 0.062796 / 0.038508 (0.024288) | 0.059432 / 0.023109 (0.036323) | 0.253976 / 0.275898 (-0.021922) | 0.281155 / 0.323480 (-0.042325) | 0.003023 / 0.007986 (-0.004962) | 0.003320 / 0.004328 (-0.001008) | 0.049059 / 0.004250 (0.044809) | 0.040252 / 0.037052 (0.003200) | 0.259526 / 0.258489 (0.001037) | 0.318798 / 0.293841 (0.024957) | 0.027883 / 0.128546 (-0.100663) | 0.010883 / 0.075646 (-0.064763) | 0.206948 / 0.419271 (-0.212323) | 0.036335 / 0.043533 (-0.007198) | 0.253209 / 0.255139 (-0.001930) | 0.275173 / 0.283200 (-0.008026) | 0.020365 / 0.141683 (-0.121318) | 1.121630 / 1.452155 (-0.330524) | 1.174680 / 1.492716 (-0.318036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098372 / 0.018006 (0.080366) | 0.309949 / 0.000490 (0.309460) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019495 / 0.037411 (-0.017916) | 0.062321 / 0.014526 (0.047795) | 0.074525 / 0.176557 (-0.102031) | 0.121832 / 0.737135 (-0.615303) | 0.077612 / 0.296338 (-0.218727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288156 / 0.215209 (0.072947) | 2.816411 / 2.077655 (0.738756) | 1.497926 / 1.504120 (-0.006193) | 1.378137 / 1.541195 (-0.163058) | 1.446466 / 
1.468490 (-0.022024) | 0.566195 / 4.584777 (-4.018582) | 2.391933 / 3.745712 (-1.353780) | 2.929290 / 5.269862 (-2.340572) | 1.828215 / 4.565676 (-2.737462) | 0.063312 / 0.424275 (-0.360963) | 0.005199 / 0.007607 (-0.002408) | 0.342883 / 0.226044 (0.116838) | 3.378388 / 2.268929 (1.109459) | 1.865710 / 55.444624 (-53.578915) | 1.573442 / 6.876477 (-5.303035) | 1.631228 / 2.142072 (-0.510845) | 0.651614 / 4.805227 (-4.153613) | 0.118177 / 6.500664 (-6.382487) | 0.043303 / 0.075469 (-0.032166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950694 / 1.841788 (-0.891094) | 12.559851 / 8.074308 (4.485543) | 10.751123 / 10.191392 (0.559731) | 0.143107 / 0.680424 (-0.537317) | 0.014469 / 0.534201 (-0.519732) | 0.289531 / 0.579283 (-0.289752) | 0.267316 / 0.434364 (-0.167047) | 0.327748 / 0.540337 (-0.212590) | 0.437758 / 1.386936 (-0.949178) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005684) | 0.003831 / 0.011008 (-0.007177) | 0.049096 / 0.038508 (0.010588) | 0.061408 / 0.023109 (0.038299) | 0.274571 / 0.275898 (-0.001327) | 0.299978 / 0.323480 (-0.023501) | 0.004216 / 0.007986 (-0.003769) | 0.002848 / 0.004328 (-0.001480) | 0.048755 / 0.004250 (0.044504) | 0.042576 / 0.037052 (0.005524) | 0.276781 / 0.258489 (0.018292) | 0.300903 / 0.293841 (0.007062) | 0.030243 / 0.128546 (-0.098303) | 0.010967 / 0.075646 (-0.064679) | 0.057879 / 0.419271 (-0.361392) | 0.033206 / 0.043533 (-0.010327) | 0.277620 / 0.255139 (0.022481) | 0.296263 / 0.283200 (0.013064) | 0.019022 / 0.141683 (-0.122660) | 1.125615 / 1.452155 (-0.326539) | 1.278016 / 1.492716 (-0.214700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096836 / 0.018006 (0.078830) | 0.307491 / 0.000490 (0.307001) | 0.000230 / 0.000200 (0.000030) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021552 / 0.037411 (-0.015859) | 0.071099 / 0.014526 (0.056573) | 0.082432 / 0.176557 (-0.094124) | 0.121826 / 0.737135 (-0.615310) | 0.084902 / 0.296338 (-0.211437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.328113 / 0.215209 (0.112904) | 2.989613 / 2.077655 (0.911959) | 1.604904 / 1.504120 (0.100784) | 1.485459 / 1.541195 (-0.055735) | 1.524829 / 1.468490 (0.056339) | 0.580589 / 4.584777 (-4.004188) | 2.440087 / 3.745712 (-1.305625) | 2.944697 / 5.269862 (-2.325164) | 1.832728 / 4.565676 (-2.732949) | 0.064423 / 0.424275 (-0.359852) | 0.004991 / 0.007607 (-0.002616) | 0.357878 / 0.226044 (0.131834) | 3.515415 / 2.268929 (1.246487) | 1.964492 / 55.444624 (-53.480132) | 1.684058 / 6.876477 (-5.192418) | 1.730294 / 2.142072 (-0.411778) | 0.661228 / 4.805227 (-4.143999) | 0.122894 / 6.500664 (-6.377770) | 0.041776 / 0.075469 (-0.033693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969849 / 1.841788 (-0.871939) | 12.897067 / 8.074308 (4.822758) | 10.908200 / 10.191392 (0.716808) | 0.141139 / 0.680424 (-0.539285) | 0.015377 / 0.534201 (-0.518824) | 0.288625 / 0.579283 (-0.290658) | 0.279020 / 0.434364 (-0.155344) | 0.328386 / 0.540337 (-0.211951) | 0.590833 / 1.386936 (-0.796103) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#39ea60eaabb05d8ee38c072f375816cf87fce1a9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004986 / 0.011353 (-0.006367) | 0.003070 / 0.011008 (-0.007938) | 0.062433 / 0.038508 (0.023925) | 0.050639 / 0.023109 (0.027530) | 0.241807 / 0.275898 (-0.034091) | 0.262517 / 0.323480 (-0.060963) | 0.003826 / 0.007986 (-0.004160) | 0.002602 / 0.004328 (-0.001727) | 0.048508 / 0.004250 (0.044257) | 0.037276 / 0.037052 (0.000224) | 0.245757 / 0.258489 (-0.012732) | 0.272969 / 0.293841 (-0.020871) | 0.027139 / 0.128546 (-0.101407) | 0.010265 / 0.075646 (-0.065381) | 0.207279 / 0.419271 (-0.211992) | 0.035312 / 0.043533 (-0.008221) | 0.247535 / 0.255139 (-0.007604) | 0.260668 / 0.283200 (-0.022532) | 0.016496 / 0.141683 (-0.125187) | 1.137510 / 1.452155 (-0.314645) | 1.167870 / 1.492716 (-0.324847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091743 / 0.018006 (0.073736) | 0.298649 / 0.000490 (0.298159) | 0.000208 / 0.000200 (0.000009) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019053 / 0.037411 (-0.018359) | 0.060300 / 0.014526 (0.045774) | 0.072154 / 0.176557 (-0.104402) | 0.120293 / 0.737135 (-0.616842) | 0.073923 / 0.296338 (-0.222415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283058 / 0.215209 (0.067849) | 2.769503 / 2.077655 (0.691849) | 1.457016 / 1.504120 (-0.047104) | 1.335753 / 1.541195 (-0.205441) | 1.325986 / 
1.468490 (-0.142504) | 0.562553 / 4.584777 (-4.022224) | 2.406144 / 3.745712 (-1.339568) | 2.778063 / 5.269862 (-2.491799) | 1.782199 / 4.565676 (-2.783477) | 0.062490 / 0.424275 (-0.361785) | 0.004912 / 0.007607 (-0.002695) | 0.338500 / 0.226044 (0.112456) | 3.309746 / 2.268929 (1.040818) | 1.819693 / 55.444624 (-53.624931) | 1.510295 / 6.876477 (-5.366182) | 1.578402 / 2.142072 (-0.563671) | 0.637517 / 4.805227 (-4.167710) | 0.117018 / 6.500664 (-6.383647) | 0.048149 / 0.075469 (-0.027320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939424 / 1.841788 (-0.902364) | 11.494891 / 8.074308 (3.420583) | 10.115194 / 10.191392 (-0.076198) | 0.126751 / 0.680424 (-0.553673) | 0.013567 / 0.534201 (-0.520634) | 0.282501 / 0.579283 (-0.296782) | 0.260594 / 0.434364 (-0.173770) | 0.325940 / 0.540337 (-0.214397) | 0.426186 / 1.386936 (-0.960750) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003557 / 0.011008 (-0.007451) | 0.051139 / 0.038508 (0.012631) | 0.053446 / 0.023109 (0.030337) | 0.268051 / 0.275898 (-0.007847) | 0.292343 / 0.323480 (-0.031136) | 0.004716 / 0.007986 (-0.003269) | 0.002677 / 0.004328 (-0.001651) | 0.047634 / 0.004250 (0.043384) | 0.041062 / 0.037052 (0.004009) | 0.269225 / 0.258489 (0.010736) | 0.297462 / 0.293841 (0.003621) | 0.029292 / 0.128546 (-0.099254) | 0.010947 / 0.075646 (-0.064699) | 0.057845 / 0.419271 (-0.361426) | 0.032793 / 0.043533 (-0.010740) | 0.265308 / 0.255139 (0.010169) | 0.288242 / 0.283200 (0.005043) | 0.018311 / 0.141683 (-0.123372) | 1.140957 / 1.452155 (-0.311197) | 1.204883 / 1.492716 (-0.287833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091375 / 0.018006 (0.073368) | 0.285922 / 0.000490 (0.285432) | 0.000238 / 0.000200 (0.000038) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021277 / 0.037411 (-0.016134) | 0.068853 / 0.014526 (0.054328) | 0.081002 / 0.176557 (-0.095555) | 0.120998 / 0.737135 (-0.616138) | 0.082741 / 0.296338 (-0.213598) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299398 / 0.215209 (0.084189) | 2.909622 / 2.077655 (0.831967) | 1.624381 / 1.504120 (0.120261) | 1.501683 / 1.541195 (-0.039512) | 1.523045 / 1.468490 (0.054555) | 0.548960 / 4.584777 (-4.035817) | 2.413297 / 3.745712 (-1.332415) | 2.817852 / 5.269862 (-2.452010) | 1.754407 / 4.565676 (-2.811270) | 0.061912 / 0.424275 (-0.362363) | 0.004880 / 0.007607 (-0.002727) | 0.353989 / 0.226044 (0.127944) | 3.496147 / 2.268929 (1.227219) | 2.003026 / 55.444624 (-53.441598) | 1.702013 / 6.876477 (-5.174463) | 1.680935 / 2.142072 (-0.461137) | 0.630183 / 4.805227 (-4.175044) | 0.113786 / 6.500664 (-6.386878) | 0.040061 / 0.075469 (-0.035408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957218 / 1.841788 (-0.884569) | 11.914469 / 8.074308 (3.840160) | 10.488896 / 10.191392 (0.297504) | 0.129292 / 0.680424 (-0.551132) | 0.016603 / 0.534201 (-0.517598) | 0.287367 / 0.579283 (-0.291916) | 0.271332 / 0.434364 (-0.163032) | 0.325577 / 0.540337 (-0.214761) | 0.560553 / 1.386936 (-0.826383) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d31e434bbeafdf6a70cb80539342d8fe5f5fd27 \"CML watermark\")\n"
] | 2023-11-30T18:09:43 | 2023-11-30T18:36:40 | 2023-11-30T18:30:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6462",
"html_url": "https://github.com/huggingface/datasets/pull/6462",
"diff_url": "https://github.com/huggingface/datasets/pull/6462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6462.patch",
"merged_at": "2023-11-30T18:30:30"
} | Continuation of https://github.com/huggingface/datasets/pull/6431
This should fix the CI in https://github.com/huggingface/datasets/pull/6458 too | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6462/timeline | null | null | true |
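A short, hedged illustration of what the `DatasetNotFoundError` work in PRs #6431/#6462 above enables downstream — a sketch only, assuming a `datasets` release that ships these changes; the repository id and variable names are made up for illustration:

```python
# Sketch only: assumes a `datasets` version that exposes DatasetNotFoundError
# (introduced around PR #6431 and wired up further in #6462).
from datasets import load_dataset
from datasets.exceptions import DatasetNotFoundError

try:
    ds = load_dataset("some-org/a-dataset-that-does-not-exist")  # placeholder repo id
except DatasetNotFoundError as err:
    # Raised when the repository cannot be found; depending on the version this
    # may also cover private/gated repos accessed without a valid token.
    print(f"Could not load the dataset: {err}")
```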
https://api.github.com/repos/huggingface/datasets/issues/6461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6461/comments | https://api.github.com/repos/huggingface/datasets/issues/6461/events | https://github.com/huggingface/datasets/pull/6461 | 2,018,850,731 | PR_kwDODunzps5gykvO | 6,461 | Fix shard retry mechanism in `push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Wauplin Maybe `504` should be added to the `retry_on_status_codes` tuple [here](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300) to guard against https://github.com/huggingface/datasets/issues/3872",
"We could but I'm not sure to have witness a 504 on S3 before. The issue reported in https://github.com/huggingface/datasets/issues/3872 is a 504 on the `/upload` endpoint on the Hub and this is not an endpoint that is retried on [this line](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005110 / 0.011353 (-0.006243) | 0.003307 / 0.011008 (-0.007701) | 0.062601 / 0.038508 (0.024093) | 0.049644 / 0.023109 (0.026534) | 0.243195 / 0.275898 (-0.032703) | 0.273543 / 0.323480 (-0.049936) | 0.003862 / 0.007986 (-0.004123) | 0.002624 / 0.004328 (-0.001705) | 0.048273 / 0.004250 (0.044023) | 0.037820 / 0.037052 (0.000768) | 0.249134 / 0.258489 (-0.009355) | 0.319359 / 0.293841 (0.025518) | 0.027816 / 0.128546 (-0.100730) | 0.010422 / 0.075646 (-0.065225) | 0.206607 / 0.419271 (-0.212665) | 0.035719 / 0.043533 (-0.007814) | 0.250300 / 0.255139 (-0.004839) | 0.290377 / 0.283200 (0.007177) | 0.018459 / 0.141683 (-0.123224) | 1.114664 / 1.452155 (-0.337490) | 1.171429 / 1.492716 (-0.321288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091483 / 0.018006 (0.073477) | 0.302770 / 0.000490 (0.302281) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018870 / 0.037411 (-0.018541) | 0.062692 / 0.014526 (0.048166) | 0.075381 / 0.176557 (-0.101176) | 0.122338 / 0.737135 (-0.614797) | 0.075608 / 0.296338 (-0.220730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288115 / 0.215209 (0.072906) | 2.816183 / 2.077655 (0.738528) | 1.535601 / 1.504120 (0.031481) | 1.409546 / 1.541195 (-0.131648) | 1.438569 / 
1.468490 (-0.029921) | 0.561797 / 4.584777 (-4.022980) | 2.373921 / 3.745712 (-1.371791) | 2.739437 / 5.269862 (-2.530424) | 1.750921 / 4.565676 (-2.814755) | 0.062114 / 0.424275 (-0.362161) | 0.004965 / 0.007607 (-0.002642) | 0.348614 / 0.226044 (0.122569) | 3.519631 / 2.268929 (1.250703) | 1.910797 / 55.444624 (-53.533827) | 1.610541 / 6.876477 (-5.265936) | 1.617972 / 2.142072 (-0.524100) | 0.639421 / 4.805227 (-4.165806) | 0.117371 / 6.500664 (-6.383293) | 0.041851 / 0.075469 (-0.033618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945563 / 1.841788 (-0.896224) | 11.362399 / 8.074308 (3.288090) | 10.468468 / 10.191392 (0.277075) | 0.128925 / 0.680424 (-0.551499) | 0.013892 / 0.534201 (-0.520309) | 0.285487 / 0.579283 (-0.293796) | 0.269295 / 0.434364 (-0.165069) | 0.324843 / 0.540337 (-0.215495) | 0.438452 / 1.386936 (-0.948484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003162 / 0.011008 (-0.007846) | 0.048177 / 0.038508 (0.009669) | 0.048708 / 0.023109 (0.025599) | 0.271663 / 0.275898 (-0.004235) | 0.289948 / 0.323480 (-0.033532) | 0.003955 / 0.007986 (-0.004030) | 0.002616 / 0.004328 (-0.001713) | 0.047510 / 0.004250 (0.043260) | 0.039938 / 0.037052 (0.002886) | 0.277449 / 0.258489 (0.018960) | 0.300315 / 0.293841 (0.006474) | 0.029263 / 0.128546 (-0.099283) | 0.010403 / 0.075646 (-0.065244) | 0.056682 / 0.419271 (-0.362590) | 0.032757 / 0.043533 (-0.010776) | 0.273291 / 0.255139 (0.018152) | 0.289023 / 0.283200 (0.005824) | 0.017843 / 0.141683 (-0.123840) | 1.124762 / 1.452155 (-0.327393) | 1.176646 / 1.492716 (-0.316070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004568 / 0.018006 (-0.013438) | 0.300715 / 0.000490 (0.300225) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021528 / 0.037411 (-0.015883) | 0.068317 / 0.014526 (0.053792) | 0.081358 / 0.176557 (-0.095199) | 0.119297 / 0.737135 (-0.617838) | 0.082445 / 0.296338 (-0.213893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289681 / 0.215209 (0.074472) | 2.843862 / 2.077655 (0.766208) | 1.574257 / 1.504120 (0.070137) | 1.454026 / 1.541195 (-0.087169) | 1.478379 / 1.468490 (0.009889) | 0.558259 / 4.584777 (-4.026518) | 2.513261 / 3.745712 (-1.232451) | 2.759751 / 5.269862 (-2.510111) | 1.730335 / 4.565676 (-2.835341) | 0.063805 / 0.424275 (-0.360470) | 0.004991 / 0.007607 (-0.002616) | 0.346586 / 0.226044 (0.120542) | 3.369163 / 2.268929 (1.100234) | 1.934734 / 55.444624 (-53.509890) | 1.658864 / 6.876477 (-5.217613) | 1.645621 / 2.142072 (-0.496452) | 0.636633 / 4.805227 (-4.168594) | 0.116839 / 6.500664 (-6.383825) | 0.040863 / 0.075469 (-0.034606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960925 / 1.841788 (-0.880863) | 11.769189 / 8.074308 (3.694881) | 10.713662 / 10.191392 (0.522270) | 0.140510 / 0.680424 (-0.539914) | 0.015424 / 0.534201 (-0.518777) | 0.288039 / 0.579283 (-0.291244) | 0.277623 / 0.434364 (-0.156741) | 0.322622 / 0.540337 (-0.217716) | 0.539805 / 1.386936 (-0.847131) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#07ad81c15bd3b954defe779fc37ba5f432f5ff2a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005852) | 0.003754 / 0.011008 (-0.007254) | 0.062628 / 0.038508 (0.024120) | 0.059951 / 0.023109 (0.036842) | 0.254851 / 0.275898 (-0.021047) | 0.272133 / 0.323480 (-0.051347) | 0.003962 / 0.007986 (-0.004024) | 0.002759 / 0.004328 (-0.001569) | 0.048412 / 0.004250 (0.044161) | 0.039349 / 0.037052 (0.002297) | 0.253093 / 0.258489 (-0.005397) | 0.287048 / 0.293841 (-0.006793) | 0.027197 / 0.128546 (-0.101349) | 0.010828 / 0.075646 (-0.064819) | 0.206371 / 0.419271 (-0.212901) | 0.035881 / 0.043533 (-0.007652) | 0.254905 / 0.255139 (-0.000234) | 0.273819 / 0.283200 (-0.009381) | 0.018041 / 0.141683 (-0.123642) | 1.103970 / 1.452155 (-0.348185) | 1.166340 / 1.492716 (-0.326377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093196 / 0.018006 (0.075190) | 0.302690 / 0.000490 (0.302200) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019552 / 0.037411 (-0.017860) | 0.062337 / 0.014526 (0.047811) | 0.074070 / 0.176557 (-0.102486) | 0.120998 / 0.737135 (-0.616137) | 0.076265 / 0.296338 (-0.220074) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272637 / 0.215209 (0.057427) | 2.693350 / 2.077655 (0.615696) | 1.398020 / 1.504120 (-0.106100) | 1.285706 / 1.541195 (-0.255488) | 1.342810 / 
1.468490 (-0.125680) | 0.565378 / 4.584777 (-4.019399) | 2.390131 / 3.745712 (-1.355581) | 2.892137 / 5.269862 (-2.377725) | 1.819840 / 4.565676 (-2.745836) | 0.062789 / 0.424275 (-0.361486) | 0.004920 / 0.007607 (-0.002687) | 0.329281 / 0.226044 (0.103237) | 3.261664 / 2.268929 (0.992735) | 1.775102 / 55.444624 (-53.669523) | 1.514341 / 6.876477 (-5.362136) | 1.530805 / 2.142072 (-0.611267) | 0.641009 / 4.805227 (-4.164218) | 0.118626 / 6.500664 (-6.382038) | 0.042732 / 0.075469 (-0.032737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908609) | 12.085247 / 8.074308 (4.010939) | 10.541596 / 10.191392 (0.350204) | 0.140141 / 0.680424 (-0.540283) | 0.014646 / 0.534201 (-0.519555) | 0.289640 / 0.579283 (-0.289643) | 0.281042 / 0.434364 (-0.153322) | 0.326462 / 0.540337 (-0.213876) | 0.441981 / 1.386936 (-0.944955) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005259 / 0.011353 (-0.006094) | 0.003766 / 0.011008 (-0.007242) | 0.048782 / 0.038508 (0.010273) | 0.064946 / 0.023109 (0.041836) | 0.264529 / 0.275898 (-0.011369) | 0.289675 / 0.323480 (-0.033805) | 0.004057 / 0.007986 (-0.003928) | 0.002805 / 0.004328 (-0.001523) | 0.047709 / 0.004250 (0.043459) | 0.041149 / 0.037052 (0.004096) | 0.271254 / 0.258489 (0.012765) | 0.296685 / 0.293841 (0.002844) | 0.029486 / 0.128546 (-0.099060) | 0.010608 / 0.075646 (-0.065038) | 0.056392 / 0.419271 (-0.362879) | 0.033181 / 0.043533 (-0.010352) | 0.267029 / 0.255139 (0.011890) | 0.284987 / 0.283200 (0.001787) | 0.018045 / 0.141683 (-0.123637) | 1.137358 / 1.452155 (-0.314796) | 1.184007 / 1.492716 (-0.308709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004603 / 0.018006 (-0.013403) | 0.303901 / 0.000490 (0.303411) | 0.000225 / 0.000200 (0.000025) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021957 / 0.037411 (-0.015454) | 0.069427 / 0.014526 (0.054901) | 0.082394 / 0.176557 (-0.094163) | 0.120745 / 0.737135 (-0.616390) | 0.084571 / 0.296338 (-0.211767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292832 / 0.215209 (0.077623) | 2.824295 / 2.077655 (0.746640) | 1.563273 / 1.504120 (0.059153) | 1.440202 / 1.541195 (-0.100992) | 1.489810 / 1.468490 (0.021320) | 0.561120 / 4.584777 (-4.023657) | 2.439045 / 3.745712 (-1.306667) | 2.867139 / 5.269862 (-2.402722) | 1.793812 / 4.565676 (-2.771865) | 0.062797 / 0.424275 (-0.361478) | 0.005033 / 0.007607 (-0.002574) | 0.343648 / 0.226044 (0.117604) | 3.432285 / 2.268929 (1.163357) | 1.918175 / 55.444624 (-53.526449) | 1.637245 / 6.876477 (-5.239232) | 1.709246 / 2.142072 (-0.432826) | 0.634744 / 4.805227 (-4.170483) | 0.115782 / 6.500664 (-6.384882) | 0.041228 / 0.075469 (-0.034241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962369 / 1.841788 (-0.879418) | 12.750819 / 8.074308 (4.676511) | 10.927356 / 10.191392 (0.735964) | 0.143454 / 0.680424 (-0.536970) | 0.015348 / 0.534201 (-0.518853) | 0.291207 / 0.579283 (-0.288076) | 0.276924 / 0.434364 (-0.157440) | 0.327287 / 0.540337 (-0.213050) | 0.577439 / 1.386936 (-0.809497) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#544ad95f6b6da7fee44a2bc838e15a5e0156c946 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003475 / 0.011008 (-0.007533) | 0.061985 / 0.038508 (0.023477) | 0.048539 / 0.023109 (0.025430) | 0.229935 / 0.275898 (-0.045963) | 0.255247 / 0.323480 (-0.068233) | 0.003919 / 0.007986 (-0.004066) | 0.002664 / 0.004328 (-0.001664) | 0.048892 / 0.004250 (0.044642) | 0.037381 / 0.037052 (0.000328) | 0.238517 / 0.258489 (-0.019972) | 0.284069 / 0.293841 (-0.009772) | 0.027513 / 0.128546 (-0.101033) | 0.010778 / 0.075646 (-0.064868) | 0.205004 / 0.419271 (-0.214268) | 0.035553 / 0.043533 (-0.007980) | 0.230117 / 0.255139 (-0.025022) | 0.251150 / 0.283200 (-0.032050) | 0.017951 / 0.141683 (-0.123732) | 1.145548 / 1.452155 (-0.306607) | 1.191659 / 1.492716 (-0.301057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092335 / 0.018006 (0.074329) | 0.300264 / 0.000490 (0.299774) | 0.000206 / 0.000200 (0.000006) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018608 / 0.037411 (-0.018804) | 0.060376 / 0.014526 (0.045850) | 0.073551 / 0.176557 (-0.103006) | 0.118840 / 0.737135 (-0.618295) | 0.074447 / 0.296338 (-0.221892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287033 / 0.215209 (0.071824) | 2.770958 / 2.077655 (0.693303) | 1.443986 / 1.504120 (-0.060134) | 1.314627 / 1.541195 (-0.226567) | 1.342287 / 
1.468490 (-0.126203) | 0.559607 / 4.584777 (-4.025170) | 2.409678 / 3.745712 (-1.336034) | 2.772566 / 5.269862 (-2.497295) | 1.743511 / 4.565676 (-2.822165) | 0.062277 / 0.424275 (-0.361998) | 0.004952 / 0.007607 (-0.002655) | 0.330581 / 0.226044 (0.104537) | 3.280385 / 2.268929 (1.011456) | 1.809599 / 55.444624 (-53.635025) | 1.532186 / 6.876477 (-5.344290) | 1.529689 / 2.142072 (-0.612383) | 0.645213 / 4.805227 (-4.160014) | 0.117564 / 6.500664 (-6.383100) | 0.041657 / 0.075469 (-0.033812) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943912 / 1.841788 (-0.897876) | 11.414317 / 8.074308 (3.340009) | 10.394915 / 10.191392 (0.203523) | 0.129271 / 0.680424 (-0.551153) | 0.013934 / 0.534201 (-0.520267) | 0.288217 / 0.579283 (-0.291066) | 0.267171 / 0.434364 (-0.167193) | 0.327112 / 0.540337 (-0.213225) | 0.446680 / 1.386936 (-0.940256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006152) | 0.003453 / 0.011008 (-0.007555) | 0.048736 / 0.038508 (0.010228) | 0.051073 / 0.023109 (0.027964) | 0.276591 / 0.275898 (0.000693) | 0.294495 / 0.323480 (-0.028985) | 0.004069 / 0.007986 (-0.003917) | 0.002945 / 0.004328 (-0.001383) | 0.047090 / 0.004250 (0.042839) | 0.040445 / 0.037052 (0.003393) | 0.278464 / 0.258489 (0.019975) | 0.304020 / 0.293841 (0.010179) | 0.028811 / 0.128546 (-0.099736) | 0.010388 / 0.075646 (-0.065259) | 0.057214 / 0.419271 (-0.362057) | 0.032588 / 0.043533 (-0.010945) | 0.277694 / 0.255139 (0.022555) | 0.294979 / 0.283200 (0.011779) | 0.018384 / 0.141683 (-0.123299) | 1.162332 / 1.452155 (-0.289822) | 1.188355 / 1.492716 (-0.304361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090501 / 0.018006 (0.072495) | 0.303122 / 0.000490 (0.302632) | 0.000222 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022536 / 0.037411 (-0.014876) | 0.068452 / 0.014526 (0.053926) | 0.080932 / 0.176557 (-0.095625) | 0.119185 / 0.737135 (-0.617950) | 0.081513 / 0.296338 (-0.214825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291522 / 0.215209 (0.076313) | 2.849467 / 2.077655 (0.771812) | 1.597395 / 1.504120 (0.093275) | 1.512872 / 1.541195 (-0.028323) | 1.488144 / 1.468490 (0.019654) | 0.572436 / 4.584777 (-4.012341) | 2.440129 / 3.745712 (-1.305583) | 2.788045 / 5.269862 (-2.481817) | 1.754246 / 4.565676 (-2.811430) | 0.066706 / 0.424275 (-0.357569) | 0.005035 / 0.007607 (-0.002573) | 0.336621 / 0.226044 (0.110576) | 3.322820 / 2.268929 (1.053891) | 1.940494 / 55.444624 (-53.504130) | 1.670022 / 6.876477 (-5.206454) | 1.666353 / 2.142072 (-0.475720) | 0.646180 / 4.805227 (-4.159047) | 0.116676 / 6.500664 (-6.383988) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971396 / 1.841788 (-0.870392) | 11.782426 / 8.074308 (3.708118) | 10.672034 / 10.191392 (0.480642) | 0.137658 / 0.680424 (-0.542766) | 0.016210 / 0.534201 (-0.517991) | 0.288302 / 0.579283 (-0.290981) | 0.280775 / 0.434364 (-0.153589) | 0.326962 / 0.540337 (-0.213375) | 0.558511 / 1.386936 (-0.828425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#76020180407d7ea9a0b535758d8d1b241fd19d8c \"CML watermark\")\n"
] | 2023-11-30T14:57:14 | 2023-12-01T17:57:39 | 2023-12-01T17:51:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6461",
"html_url": "https://github.com/huggingface/datasets/pull/6461",
"diff_url": "https://github.com/huggingface/datasets/pull/6461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6461.patch",
"merged_at": "2023-12-01T17:51:33"
} | When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) that chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that.
Fix https://github.com/huggingface/datasets/issues/6392 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6461/timeline | null | null | true |
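To make the retry fix in PR #6461 above more concrete, here is a hypothetical sketch (not the merged implementation) of a retry loop that unwraps the `RuntimeError` chained by `preupload_lfs_files` to decide whether the underlying HTTP failure — e.g. the 5xx codes discussed in the comments — is worth retrying. The helper names, the status-code set, and the backoff policy are assumptions for illustration:

```python
import time

RETRYABLE_STATUS_CODES = {500, 502, 503, 504}  # assumed set, for illustration only

def _chained_status_code(exc: BaseException):
    # preupload_lfs_files wraps the failure in a RuntimeError and chains the
    # original HTTP error, so the status code has to be read from __cause__.
    cause = exc.__cause__
    response = getattr(cause, "response", None)
    return getattr(response, "status_code", None)

def push_shard_with_retries(push_shard, max_retries=5, base_wait=1.0):
    """Call ``push_shard()`` (a hypothetical no-argument upload callable),
    retrying when the chained HTTP error looks transient."""
    for attempt in range(max_retries):
        try:
            return push_shard()
        except RuntimeError as err:
            status = _chained_status_code(err)
            if status not in RETRYABLE_STATUS_CODES or attempt == max_retries - 1:
                raise
            time.sleep(base_wait * 2**attempt)  # simple exponential backoff
```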
https://api.github.com/repos/huggingface/datasets/issues/6460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6460/comments | https://api.github.com/repos/huggingface/datasets/issues/6460/events | https://github.com/huggingface/datasets/issues/6460 | 2,017,433,899 | I_kwDODunzps54P5kr | 6,460 | jsonlines files don't load with `load_dataset` | {
"login": "serenalotreck",
"id": 41377532,
"node_id": "MDQ6VXNlcjQxMzc3NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serenalotreck",
"html_url": "https://github.com/serenalotreck",
"followers_url": "https://api.github.com/users/serenalotreck/followers",
"following_url": "https://api.github.com/users/serenalotreck/following{/other_user}",
"gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions",
"organizations_url": "https://api.github.com/users/serenalotreck/orgs",
"repos_url": "https://api.github.com/users/serenalotreck/repos",
"events_url": "https://api.github.com/users/serenalotreck/events{/privacy}",
"received_events_url": "https://api.github.com/users/serenalotreck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @serenalotreck,\r\n\r\nWe use Apache Arrow `pyarrow` to read jsonlines and it throws an error when trying to load your data files:\r\n```python\r\nIn [1]: import pyarrow as pa\r\n\r\nIn [2]: data = pa.json.read_json(\"train.jsonl\")\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-14-e9b104832528> in <module>\r\n----> 1 data = pa.json.read_json(\"train.jsonl\")\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n.../huggingface/datasets/venv/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0\r\n```\r\n\r\nI think it has to do with the data structure of the fields \"ner\" (and also \"relations\"):\r\n```json\r\n\"ner\": [\r\n [\r\n [0, 4, \"Biochemical_process\"], \r\n [15, 16, \"Protein\"]\r\n ], \r\n```\r\nArrow interprets this data structure as an array, an arrays contain just a single data type: \r\n- when reading sequentially, it finds first the `0` and infers that the data is of type `number`;\r\n- when it finds the string `\"Biochemical_process\"`, it cannot cast it to number and throws the `ArrowInvalid` error\r\n\r\nOne solution could be to change the data structure of your data files. Any other ideas, @huggingface/datasets ?",
"Hi @albertvillanova, \r\n\r\nThanks for the explanation! To the best of my knowledge, arrays in a json [can contain multiple data types](https://docs.actian.com/ingres/11.2/index.html#page/SQLRef/Data_Types.htm), and I'm able to read these files with the `jsonlines` package. Is the requirement for arrays to only have one data type specific to PyArrow?\r\n\r\nI'd prefer to keep the data structure as is, since it's a specific input requirement for the models this data was generated for. Any thoughts on how to enable the use of `load_dataset` with this dataset would be great!",
"Hi again @serenalotreck,\r\n\r\nYes, it is specific to PyArrow: as far as I know, Arrow does not support arrays with multiple data types.\r\n\r\nAs this is related specifically to your dataset structure (and not the `datasets` library), I have created a dedicated issue in your dataset page: https://huggingface.co./datasets/slotreck/pickle/discussions/1\r\n\r\nLet's continue the discussion there! :hugs: "
] | 2023-11-29T21:20:11 | 2023-12-05T14:02:12 | 2023-12-05T13:30:53 | NONE | null | null | null | ### Describe the bug
While [the docs](https://huggingface.co./docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`.
### Steps to reproduce the bug
Code:
```
from datasets import load_dataset
dset = load_dataset('slotreck/pickle')
```
Traceback:
```
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 925/925 [00:00<00:00, 3.11MB/s]
Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 589k/589k [00:00<00:00, 18.9MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 104k/104k [00:00<00:00, 4.61MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 170k/170k [00:00<00:00, 7.71MB/s]
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.77it/s]
Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 523.92it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables
dataset = json.load(f)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables
raise e
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset
storage_options=storage_options,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare
**download_and_prepare_kwargs,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
For the dataset to be loaded without error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6460/timeline | null | completed | false |
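The `ArrowInvalid` failure in issue 6460 above boils down to mixed-type inner lists such as `[0, 4, "Biochemical_process"]`, which Arrow cannot map to a single column type. Below is a minimal sketch (not taken from the issue thread) of one possible workaround: parse the jsonlines file with the standard `json` module, cast the span offsets to strings so every inner list is homogeneous, and build the dataset with `Dataset.from_list`. The file name `train.jsonl` and the exact layout of the `ner` field are assumptions based on the snippet quoted in the discussion.

```python
# Sketch only: assumes a local "train.jsonl" whose "ner" field holds
# [start, end, label] triples, as shown in issue 6460 above.
# The "relations" field mentioned in the thread would need the same treatment.
import json

from datasets import Dataset

records = []
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Cast [0, 4, "Biochemical_process"] -> ["0", "4", "Biochemical_process"]
        # so every inner list holds a single Arrow type (string).
        record["ner"] = [
            [[str(item) for item in span] for span in sentence]
            for sentence in record.get("ner", [])
        ]
        records.append(record)

ds = Dataset.from_list(records)  # Arrow can now infer a homogeneous nested list type
print(ds)
```

The offsets can be cast back to integers downstream; the only requirement on the Arrow side is that each array carries one value type.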
https://api.github.com/repos/huggingface/datasets/issues/6459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6459/comments | https://api.github.com/repos/huggingface/datasets/issues/6459/events | https://github.com/huggingface/datasets/pull/6459 | 2,017,029,380 | PR_kwDODunzps5gsWlz | 6,459 | Retrieve cached datasets that were pushed to hub when offline | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005292 / 0.011353 (-0.006061) | 0.003811 / 0.011008 (-0.007197) | 0.064912 / 0.038508 (0.026404) | 0.061199 / 0.023109 (0.038090) | 0.242953 / 0.275898 (-0.032945) | 0.271789 / 0.323480 (-0.051691) | 0.003994 / 0.007986 (-0.003991) | 0.002723 / 0.004328 (-0.001606) | 0.049952 / 0.004250 (0.045701) | 0.039489 / 0.037052 (0.002437) | 0.261143 / 0.258489 (0.002654) | 0.288800 / 0.293841 (-0.005041) | 0.028130 / 0.128546 (-0.100416) | 0.010724 / 0.075646 (-0.064922) | 0.208218 / 0.419271 (-0.211054) | 0.036224 / 0.043533 (-0.007309) | 0.247189 / 0.255139 (-0.007950) | 0.274702 / 0.283200 (-0.008498) | 0.019714 / 0.141683 (-0.121969) | 1.134853 / 1.452155 (-0.317301) | 1.192655 / 1.492716 (-0.300062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096391 / 0.018006 (0.078385) | 0.303802 / 0.000490 (0.303312) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019530 / 0.037411 (-0.017881) | 0.061588 / 0.014526 (0.047062) | 0.075122 / 0.176557 (-0.101434) | 0.120980 / 0.737135 (-0.616155) | 0.075807 / 0.296338 (-0.220532) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281672 / 0.215209 (0.066463) | 2.779884 / 2.077655 (0.702229) | 1.502026 / 1.504120 (-0.002094) | 1.369474 / 1.541195 (-0.171721) | 1.402694 / 
1.468490 (-0.065796) | 0.559120 / 4.584777 (-4.025657) | 2.355320 / 3.745712 (-1.390393) | 2.823987 / 5.269862 (-2.445875) | 1.763888 / 4.565676 (-2.801788) | 0.061715 / 0.424275 (-0.362560) | 0.005015 / 0.007607 (-0.002592) | 0.342669 / 0.226044 (0.116625) | 3.360651 / 2.268929 (1.091722) | 1.887277 / 55.444624 (-53.557348) | 1.555613 / 6.876477 (-5.320864) | 1.614126 / 2.142072 (-0.527946) | 0.643797 / 4.805227 (-4.161430) | 0.118365 / 6.500664 (-6.382299) | 0.042596 / 0.075469 (-0.032873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951383 / 1.841788 (-0.890405) | 13.169812 / 8.074308 (5.095504) | 10.772460 / 10.191392 (0.581068) | 0.133248 / 0.680424 (-0.547176) | 0.014597 / 0.534201 (-0.519604) | 0.289758 / 0.579283 (-0.289525) | 0.266324 / 0.434364 (-0.168040) | 0.334811 / 0.540337 (-0.205526) | 0.445566 / 1.386936 (-0.941370) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005668 / 0.011353 (-0.005684) | 0.003583 / 0.011008 (-0.007425) | 0.050681 / 0.038508 (0.012173) | 0.063244 / 0.023109 (0.040135) | 0.279624 / 0.275898 (0.003726) | 0.308030 / 0.323480 (-0.015450) | 0.004160 / 0.007986 (-0.003826) | 0.002633 / 0.004328 (-0.001696) | 0.048475 / 0.004250 (0.044225) | 0.043106 / 0.037052 (0.006054) | 0.283678 / 0.258489 (0.025189) | 0.309730 / 0.293841 (0.015889) | 0.030290 / 0.128546 (-0.098256) | 0.011112 / 0.075646 (-0.064534) | 0.058234 / 0.419271 (-0.361038) | 0.033553 / 0.043533 (-0.009979) | 0.279902 / 0.255139 (0.024763) | 0.298041 / 0.283200 (0.014841) | 0.019367 / 0.141683 (-0.122316) | 1.142438 / 1.452155 (-0.309717) | 1.197305 / 1.492716 (-0.295411) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090875 / 0.018006 (0.072869) | 0.301174 / 0.000490 (0.300685) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021544 / 0.037411 (-0.015867) | 0.071371 / 0.014526 (0.056846) | 0.080821 / 0.176557 (-0.095736) | 0.120054 / 0.737135 (-0.617082) | 0.082611 / 0.296338 (-0.213728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293787 / 0.215209 (0.078578) | 2.862610 / 2.077655 (0.784955) | 1.597282 / 1.504120 (0.093162) | 1.485094 / 1.541195 (-0.056101) | 1.507384 / 1.468490 (0.038893) | 0.558470 / 4.584777 (-4.026307) | 2.414137 / 3.745712 (-1.331575) | 2.863342 / 5.269862 (-2.406520) | 1.776973 / 4.565676 (-2.788704) | 0.062296 / 0.424275 (-0.361979) | 0.004954 / 0.007607 (-0.002653) | 0.346037 / 0.226044 (0.119993) | 3.441864 / 2.268929 (1.172935) | 1.969842 / 55.444624 (-53.474783) | 1.714878 / 6.876477 (-5.161599) | 1.738141 / 2.142072 (-0.403931) | 0.645929 / 4.805227 (-4.159298) | 0.117332 / 6.500664 (-6.383332) | 0.041963 / 0.075469 (-0.033507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983229 / 1.841788 (-0.858559) | 13.186932 / 8.074308 (5.112624) | 11.220549 / 10.191392 (1.029157) | 0.142105 / 0.680424 (-0.538319) | 0.015210 / 0.534201 (-0.518991) | 0.290055 / 0.579283 (-0.289228) | 0.274513 / 0.434364 (-0.159851) | 0.346834 / 0.540337 (-0.193504) | 0.575897 / 1.386936 (-0.811039) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d3c0694d0c47a64a3cab5d468b4d9575ad7b1d96 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6459). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005308 / 0.011353 (-0.006045) | 0.003135 / 0.011008 (-0.007873) | 0.061820 / 0.038508 (0.023312) | 0.052005 / 0.023109 (0.028895) | 0.233507 / 0.275898 (-0.042391) | 0.257790 / 0.323480 (-0.065690) | 0.002848 / 0.007986 (-0.005138) | 0.002645 / 0.004328 (-0.001683) | 0.048379 / 0.004250 (0.044128) | 0.038320 / 0.037052 (0.001268) | 0.245470 / 0.258489 (-0.013019) | 0.274854 / 0.293841 (-0.018987) | 0.027335 / 0.128546 (-0.101211) | 0.010349 / 0.075646 (-0.065297) | 0.205872 / 0.419271 (-0.213400) | 0.035896 / 0.043533 (-0.007637) | 0.241645 / 0.255139 (-0.013494) | 0.260033 / 0.283200 (-0.023167) | 0.020325 / 0.141683 (-0.121358) | 1.116768 / 1.452155 (-0.335387) | 1.188067 / 1.492716 (-0.304649) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092622 / 0.018006 (0.074616) | 0.302663 / 0.000490 (0.302173) | 0.000227 / 0.000200 (0.000027) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018633 / 0.037411 (-0.018778) | 0.060117 / 0.014526 (0.045592) | 0.072713 / 0.176557 (-0.103844) | 0.119955 / 0.737135 (-0.617180) | 0.074698 / 0.296338 (-0.221640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277157 / 0.215209 (0.061948) | 2.699650 / 2.077655 (0.621995) | 1.413625 / 1.504120 (-0.090494) | 1.295900 / 1.541195 (-0.245295) | 1.306280 / 
1.468490 (-0.162210) | 0.555354 / 4.584777 (-4.029423) | 2.386866 / 3.745712 (-1.358847) | 2.794069 / 5.269862 (-2.475793) | 1.736275 / 4.565676 (-2.829401) | 0.061812 / 0.424275 (-0.362464) | 0.004957 / 0.007607 (-0.002650) | 0.334533 / 0.226044 (0.108488) | 3.251096 / 2.268929 (0.982168) | 1.768193 / 55.444624 (-53.676431) | 1.473752 / 6.876477 (-5.402724) | 1.476320 / 2.142072 (-0.665753) | 0.642485 / 4.805227 (-4.162742) | 0.116986 / 6.500664 (-6.383678) | 0.042083 / 0.075469 (-0.033386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941364 / 1.841788 (-0.900424) | 11.587408 / 8.074308 (3.513100) | 10.500198 / 10.191392 (0.308806) | 0.129126 / 0.680424 (-0.551298) | 0.015206 / 0.534201 (-0.518995) | 0.286580 / 0.579283 (-0.292703) | 0.263566 / 0.434364 (-0.170798) | 0.331662 / 0.540337 (-0.208676) | 0.431423 / 1.386936 (-0.955513) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005151 / 0.011353 (-0.006202) | 0.003425 / 0.011008 (-0.007583) | 0.049301 / 0.038508 (0.010793) | 0.052005 / 0.023109 (0.028895) | 0.289594 / 0.275898 (0.013696) | 0.312630 / 0.323480 (-0.010849) | 0.003988 / 0.007986 (-0.003998) | 0.002705 / 0.004328 (-0.001624) | 0.048529 / 0.004250 (0.044279) | 0.039645 / 0.037052 (0.002592) | 0.293430 / 0.258489 (0.034941) | 0.311697 / 0.293841 (0.017856) | 0.029044 / 0.128546 (-0.099502) | 0.010282 / 0.075646 (-0.065364) | 0.057641 / 0.419271 (-0.361630) | 0.032733 / 0.043533 (-0.010800) | 0.293553 / 0.255139 (0.038414) | 0.308850 / 0.283200 (0.025651) | 0.018452 / 0.141683 (-0.123231) | 1.147931 / 1.452155 (-0.304224) | 1.173093 / 1.492716 (-0.319623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100862 / 0.018006 (0.082856) | 0.309286 / 0.000490 (0.308796) | 0.000223 / 0.000200 (0.000023) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021365 / 0.037411 (-0.016046) | 0.068987 / 0.014526 (0.054461) | 0.081092 / 0.176557 (-0.095465) | 0.119852 / 0.737135 (-0.617283) | 0.082850 / 0.296338 (-0.213489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288477 / 0.215209 (0.073268) | 2.833766 / 2.077655 (0.756111) | 1.576670 / 1.504120 (0.072550) | 1.431643 / 1.541195 (-0.109552) | 1.442132 / 1.468490 (-0.026358) | 0.556079 / 4.584777 (-4.028698) | 2.465042 / 3.745712 (-1.280670) | 2.786329 / 5.269862 (-2.483532) | 1.779428 / 4.565676 (-2.786249) | 0.062278 / 0.424275 (-0.361997) | 0.004867 / 0.007607 (-0.002740) | 0.348444 / 0.226044 (0.122399) | 3.389824 / 2.268929 (1.120896) | 1.919141 / 55.444624 (-53.525484) | 1.635411 / 6.876477 (-5.241066) | 1.654869 / 2.142072 (-0.487204) | 0.634467 / 4.805227 (-4.170761) | 0.114330 / 6.500664 (-6.386334) | 0.039900 / 0.075469 (-0.035569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970851 / 1.841788 (-0.870937) | 11.951660 / 8.074308 (3.877352) | 10.571115 / 10.191392 (0.379723) | 0.131040 / 0.680424 (-0.549384) | 0.015299 / 0.534201 (-0.518902) | 0.287851 / 0.579283 (-0.291432) | 0.278366 / 0.434364 (-0.155998) | 0.326468 / 0.540337 (-0.213870) | 0.552288 / 1.386936 (-0.834648) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8214ff2a9f706427669a6c2a01ccabffa5bf0d2b \"CML watermark\")\n"
] | 2023-11-29T16:56:15 | 2023-12-13T13:54:48 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6459",
"html_url": "https://github.com/huggingface/datasets/pull/6459",
"diff_url": "https://github.com/huggingface/datasets/pull/6459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6459.patch",
"merged_at": null
} | I drafted the logic to retrieve a no-script dataset from the cache.
For example, it can reload datasets that were pushed to the Hub if they exist in the cache.
example:
```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
and later, without connection:
```python
>>> load_dataset("lhoestq/tmp")
Using the latest cached version of the dataset from /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/*/*/0b3caccda1725efb (last modified on Wed Nov 29 16:50:27 2023) since it couldn't be found locally at lhoestq/tmp.
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
fix https://github.com/huggingface/datasets/issues/3547
## Implementation details (EDITED)
I continued in https://github.com/huggingface/datasets/pull/6493; see the changes there.
TODO:
- [x] tests
- [ ] compatible with https://github.com/huggingface/datasets/pull/6458 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6459/timeline | null | null | true |
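As a usage note for the behaviour PR 6459 above is after: once a no-script dataset such as `lhoestq/tmp` has been loaded (and therefore cached) while online, the same `load_dataset` call should be resolvable from the cache once the machine goes offline. A minimal sketch, assuming the dataset is already in the local cache from an earlier online run, and using the existing `HF_DATASETS_OFFLINE` environment variable to simulate the offline case:

```python
# Sketch only: "lhoestq/tmp" is the example repo from the PR description and is
# assumed to already be in the local cache from a previous online load.
import os

# The offline flag is read when `datasets` is imported, so set it first.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("lhoestq/tmp")  # reloaded from ~/.cache/huggingface/datasets
print(ds)
```

Without the cache-fallback logic this PR drafts (and continues in #6493), the offline call would not find the no-script Hub dataset even though its Arrow files are cached locally; with it, the cached copy is reused, which is the scenario the PR description demonstrates above.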
https://api.github.com/repos/huggingface/datasets/issues/6458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6458/comments | https://api.github.com/repos/huggingface/datasets/issues/6458/events | https://github.com/huggingface/datasets/pull/6458 | 2,016,577,761 | PR_kwDODunzps5gqy4M | 6,458 | Lazy data files resolution | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005097 / 0.011353 (-0.006256) | 0.003523 / 0.011008 (-0.007485) | 0.062827 / 0.038508 (0.024319) | 0.051677 / 0.023109 (0.028568) | 0.248919 / 0.275898 (-0.026980) | 0.275892 / 0.323480 (-0.047588) | 0.003908 / 0.007986 (-0.004077) | 0.002622 / 0.004328 (-0.001706) | 0.048634 / 0.004250 (0.044383) | 0.037903 / 0.037052 (0.000850) | 0.255754 / 0.258489 (-0.002735) | 0.283343 / 0.293841 (-0.010498) | 0.027886 / 0.128546 (-0.100660) | 0.010849 / 0.075646 (-0.064797) | 0.208255 / 0.419271 (-0.211017) | 0.035664 / 0.043533 (-0.007869) | 0.254661 / 0.255139 (-0.000478) | 0.274366 / 0.283200 (-0.008834) | 0.017240 / 0.141683 (-0.124443) | 1.092952 / 1.452155 (-0.359203) | 1.148373 / 1.492716 (-0.344344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091592 / 0.018006 (0.073586) | 0.301926 / 0.000490 (0.301436) | 0.000207 / 0.000200 (0.000007) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018525 / 0.037411 (-0.018887) | 0.060539 / 0.014526 (0.046014) | 0.073812 / 0.176557 (-0.102745) | 0.120655 / 0.737135 (-0.616480) | 0.076931 / 0.296338 (-0.219407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282797 / 0.215209 (0.067588) | 2.746573 / 2.077655 (0.668918) | 1.477652 / 1.504120 (-0.026468) | 1.349922 / 1.541195 (-0.191273) | 1.374347 / 
1.468490 (-0.094143) | 0.574096 / 4.584777 (-4.010681) | 2.383317 / 3.745712 (-1.362395) | 2.809320 / 5.269862 (-2.460541) | 1.758947 / 4.565676 (-2.806729) | 0.064029 / 0.424275 (-0.360246) | 0.004936 / 0.007607 (-0.002672) | 0.331403 / 0.226044 (0.105358) | 3.260908 / 2.268929 (0.991980) | 1.817670 / 55.444624 (-53.626954) | 1.525863 / 6.876477 (-5.350613) | 1.542017 / 2.142072 (-0.600055) | 0.638900 / 4.805227 (-4.166327) | 0.119485 / 6.500664 (-6.381179) | 0.042588 / 0.075469 (-0.032881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951583 / 1.841788 (-0.890205) | 11.621917 / 8.074308 (3.547609) | 10.511062 / 10.191392 (0.319670) | 0.130137 / 0.680424 (-0.550287) | 0.014048 / 0.534201 (-0.520153) | 0.290621 / 0.579283 (-0.288662) | 0.271665 / 0.434364 (-0.162699) | 0.331260 / 0.540337 (-0.209077) | 0.441621 / 1.386936 (-0.945316) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003656 / 0.011008 (-0.007352) | 0.049245 / 0.038508 (0.010737) | 0.054130 / 0.023109 (0.031021) | 0.274775 / 0.275898 (-0.001123) | 0.296664 / 0.323480 (-0.026816) | 0.004870 / 0.007986 (-0.003115) | 0.002728 / 0.004328 (-0.001601) | 0.048087 / 0.004250 (0.043837) | 0.041448 / 0.037052 (0.004396) | 0.279110 / 0.258489 (0.020621) | 0.303660 / 0.293841 (0.009819) | 0.029767 / 0.128546 (-0.098779) | 0.010799 / 0.075646 (-0.064848) | 0.058650 / 0.419271 (-0.360622) | 0.033088 / 0.043533 (-0.010445) | 0.274456 / 0.255139 (0.019317) | 0.290206 / 0.283200 (0.007007) | 0.017259 / 0.141683 (-0.124424) | 1.176501 / 1.452155 (-0.275654) | 1.197552 / 1.492716 (-0.295165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092865 / 0.018006 (0.074859) | 0.302437 / 0.000490 (0.301947) | 0.000209 / 0.000200 (0.000009) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021211 / 0.037411 (-0.016200) | 0.068858 / 0.014526 (0.054332) | 0.081783 / 0.176557 (-0.094773) | 0.120472 / 0.737135 (-0.616663) | 0.083900 / 0.296338 (-0.212438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295157 / 0.215209 (0.079948) | 2.910979 / 2.077655 (0.833324) | 1.575772 / 1.504120 (0.071652) | 1.456955 / 1.541195 (-0.084239) | 1.468982 / 1.468490 (0.000492) | 0.560309 / 4.584777 (-4.024468) | 2.460171 / 3.745712 (-1.285541) | 2.805713 / 5.269862 (-2.464149) | 1.754074 / 4.565676 (-2.811603) | 0.063333 / 0.424275 (-0.360942) | 0.004940 / 0.007607 (-0.002667) | 0.346141 / 0.226044 (0.120097) | 3.463431 / 2.268929 (1.194502) | 1.929135 / 55.444624 (-53.515490) | 1.660191 / 6.876477 (-5.216286) | 1.668327 / 2.142072 (-0.473746) | 0.644183 / 4.805227 (-4.161044) | 0.115738 / 6.500664 (-6.384926) | 0.041347 / 0.075469 (-0.034122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961565 / 1.841788 (-0.880222) | 12.232589 / 8.074308 (4.158281) | 10.778774 / 10.191392 (0.587382) | 0.132709 / 0.680424 (-0.547715) | 0.015964 / 0.534201 (-0.518237) | 0.286944 / 0.579283 (-0.292340) | 0.279740 / 0.434364 (-0.154624) | 0.333024 / 0.540337 (-0.207314) | 0.438819 / 1.386936 (-0.948117) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51002cb0325772adaf46d6f3ce01d41c01b51079 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6458). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005317 / 0.011353 (-0.006036) | 0.003936 / 0.011008 (-0.007072) | 0.063122 / 0.038508 (0.024614) | 0.061274 / 0.023109 (0.038165) | 0.251764 / 0.275898 (-0.024134) | 0.274849 / 0.323480 (-0.048631) | 0.004059 / 0.007986 (-0.003927) | 0.002874 / 0.004328 (-0.001455) | 0.048716 / 0.004250 (0.044465) | 0.038281 / 0.037052 (0.001228) | 0.265224 / 0.258489 (0.006735) | 0.285962 / 0.293841 (-0.007878) | 0.028522 / 0.128546 (-0.100024) | 0.011150 / 0.075646 (-0.064496) | 0.208362 / 0.419271 (-0.210910) | 0.038900 / 0.043533 (-0.004633) | 0.254113 / 0.255139 (-0.001026) | 0.276721 / 0.283200 (-0.006478) | 0.018372 / 0.141683 (-0.123311) | 1.121336 / 1.452155 (-0.330818) | 1.189548 / 1.492716 (-0.303168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097633 / 0.018006 (0.079627) | 0.304443 / 0.000490 (0.303953) | 0.000218 / 0.000200 (0.000018) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021757 / 0.037411 (-0.015654) | 0.061978 / 0.014526 (0.047453) | 0.076296 / 0.176557 (-0.100260) | 0.122320 / 0.737135 (-0.614816) | 0.076738 / 0.296338 (-0.219601) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284328 / 0.215209 (0.069119) | 2.793071 / 2.077655 (0.715417) | 1.504768 / 1.504120 (0.000648) | 1.386083 / 1.541195 (-0.155111) | 1.457593 / 
1.468490 (-0.010897) | 0.575887 / 4.584777 (-4.008890) | 2.419396 / 3.745712 (-1.326316) | 2.931305 / 5.269862 (-2.338556) | 1.840759 / 4.565676 (-2.724917) | 0.063801 / 0.424275 (-0.360474) | 0.004966 / 0.007607 (-0.002641) | 0.341612 / 0.226044 (0.115568) | 3.402842 / 2.268929 (1.133913) | 1.860521 / 55.444624 (-53.584103) | 1.603156 / 6.876477 (-5.273321) | 1.665835 / 2.142072 (-0.476237) | 0.655299 / 4.805227 (-4.149929) | 0.124527 / 6.500664 (-6.376137) | 0.044021 / 0.075469 (-0.031449) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972068 / 1.841788 (-0.869720) | 12.393202 / 8.074308 (4.318894) | 10.420876 / 10.191392 (0.229484) | 0.140684 / 0.680424 (-0.539740) | 0.014442 / 0.534201 (-0.519759) | 0.288182 / 0.579283 (-0.291101) | 0.265029 / 0.434364 (-0.169334) | 0.327133 / 0.540337 (-0.213204) | 0.443403 / 1.386936 (-0.943533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005559 / 0.011353 (-0.005794) | 0.004046 / 0.011008 (-0.006962) | 0.048991 / 0.038508 (0.010483) | 0.059576 / 0.023109 (0.036467) | 0.273596 / 0.275898 (-0.002302) | 0.296658 / 0.323480 (-0.026822) | 0.004089 / 0.007986 (-0.003897) | 0.002777 / 0.004328 (-0.001551) | 0.048216 / 0.004250 (0.043966) | 0.043200 / 0.037052 (0.006148) | 0.276815 / 0.258489 (0.018326) | 0.300570 / 0.293841 (0.006729) | 0.030250 / 0.128546 (-0.098296) | 0.011322 / 0.075646 (-0.064324) | 0.057843 / 0.419271 (-0.361429) | 0.033366 / 0.043533 (-0.010167) | 0.275636 / 0.255139 (0.020497) | 0.293750 / 0.283200 (0.010550) | 0.018551 / 0.141683 (-0.123132) | 1.160919 / 1.452155 (-0.291236) | 1.214519 / 1.492716 (-0.278197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100074 / 0.018006 (0.082068) | 0.308434 / 0.000490 (0.307944) | 0.000232 / 0.000200 (0.000032) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022600 / 0.037411 (-0.014811) | 0.070506 / 0.014526 (0.055980) | 0.081185 / 0.176557 (-0.095371) | 0.120688 / 0.737135 (-0.616448) | 0.082897 / 0.296338 (-0.213441) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306661 / 0.215209 (0.091452) | 2.989656 / 2.077655 (0.912001) | 1.618868 / 1.504120 (0.114749) | 1.485045 / 1.541195 (-0.056149) | 1.549359 / 1.468490 (0.080869) | 0.593596 / 4.584777 (-3.991181) | 2.466215 / 3.745712 (-1.279497) | 2.956570 / 5.269862 (-2.313292) | 1.823160 / 4.565676 (-2.742516) | 0.063442 / 0.424275 (-0.360833) | 0.004928 / 0.007607 (-0.002679) | 0.358464 / 0.226044 (0.132419) | 3.566345 / 2.268929 (1.297417) | 2.006784 / 55.444624 (-53.437840) | 1.687091 / 6.876477 (-5.189386) | 1.729464 / 2.142072 (-0.412609) | 0.655656 / 4.805227 (-4.149572) | 0.119044 / 6.500664 (-6.381620) | 0.042782 / 0.075469 (-0.032687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974937 / 1.841788 (-0.866850) | 12.992888 / 8.074308 (4.918580) | 10.893713 / 10.191392 (0.702321) | 0.133853 / 0.680424 (-0.546570) | 0.016055 / 0.534201 (-0.518145) | 0.289342 / 0.579283 (-0.289941) | 0.286094 / 0.434364 (-0.148270) | 0.328670 / 0.540337 (-0.211667) | 0.444605 / 1.386936 (-0.942331) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5a5bb38bcc71ea21f2d7304aab374fdb81ded463 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005705 / 0.011353 (-0.005648) | 0.003519 / 0.011008 (-0.007489) | 0.062009 / 0.038508 (0.023501) | 0.053481 / 0.023109 (0.030372) | 0.262669 / 0.275898 (-0.013229) | 0.280290 / 0.323480 (-0.043189) | 0.002957 / 0.007986 (-0.005029) | 0.002587 / 0.004328 (-0.001741) | 0.047876 / 0.004250 (0.043626) | 0.038868 / 0.037052 (0.001815) | 0.267854 / 0.258489 (0.009365) | 0.290430 / 0.293841 (-0.003411) | 0.028120 / 0.128546 (-0.100427) | 0.011042 / 0.075646 (-0.064605) | 0.206113 / 0.419271 (-0.213158) | 0.036039 / 0.043533 (-0.007494) | 0.257715 / 0.255139 (0.002576) | 0.281279 / 0.283200 (-0.001921) | 0.019790 / 0.141683 (-0.121893) | 1.114472 / 1.452155 (-0.337683) | 1.192219 / 1.492716 (-0.300497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091049 / 0.018006 (0.073043) | 0.300846 / 0.000490 (0.300356) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018569 / 0.037411 (-0.018843) | 0.060075 / 0.014526 (0.045549) | 0.073877 / 0.176557 (-0.102680) | 0.120337 / 0.737135 (-0.616799) | 0.075454 / 0.296338 (-0.220884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290084 / 0.215209 (0.074875) | 2.805712 / 2.077655 (0.728057) | 1.459393 / 1.504120 (-0.044727) | 1.327356 / 1.541195 (-0.213838) | 1.384734 / 
1.468490 (-0.083756) | 0.574532 / 4.584777 (-4.010245) | 2.419696 / 3.745712 (-1.326016) | 2.805449 / 5.269862 (-2.464412) | 1.764127 / 4.565676 (-2.801549) | 0.063256 / 0.424275 (-0.361020) | 0.004954 / 0.007607 (-0.002653) | 0.344246 / 0.226044 (0.118202) | 3.396050 / 2.268929 (1.127121) | 1.807621 / 55.444624 (-53.637004) | 1.536627 / 6.876477 (-5.339850) | 1.552450 / 2.142072 (-0.589623) | 0.651156 / 4.805227 (-4.154071) | 0.119358 / 6.500664 (-6.381306) | 0.042810 / 0.075469 (-0.032660) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930646 / 1.841788 (-0.911142) | 11.830454 / 8.074308 (3.756146) | 10.615315 / 10.191392 (0.423923) | 0.130617 / 0.680424 (-0.549807) | 0.014081 / 0.534201 (-0.520120) | 0.285027 / 0.579283 (-0.294256) | 0.267296 / 0.434364 (-0.167068) | 0.331478 / 0.540337 (-0.208859) | 0.442676 / 1.386936 (-0.944260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005340 / 0.011353 (-0.006013) | 0.003745 / 0.011008 (-0.007264) | 0.049011 / 0.038508 (0.010503) | 0.051342 / 0.023109 (0.028233) | 0.272482 / 0.275898 (-0.003416) | 0.292816 / 0.323480 (-0.030663) | 0.003977 / 0.007986 (-0.004008) | 0.002642 / 0.004328 (-0.001687) | 0.048213 / 0.004250 (0.043963) | 0.040341 / 0.037052 (0.003289) | 0.275176 / 0.258489 (0.016687) | 0.301098 / 0.293841 (0.007257) | 0.029052 / 0.128546 (-0.099495) | 0.010796 / 0.075646 (-0.064850) | 0.057654 / 0.419271 (-0.361618) | 0.032914 / 0.043533 (-0.010619) | 0.271235 / 0.255139 (0.016096) | 0.289883 / 0.283200 (0.006684) | 0.018548 / 0.141683 (-0.123135) | 1.134072 / 1.452155 (-0.318083) | 1.208228 / 1.492716 (-0.284488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094524 / 0.018006 (0.076518) | 0.310162 / 0.000490 (0.309672) | 0.000237 / 0.000200 (0.000037) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021090 / 0.037411 (-0.016321) | 0.068351 / 0.014526 (0.053825) | 0.082370 / 0.176557 (-0.094186) | 0.121648 / 0.737135 (-0.615487) | 0.083433 / 0.296338 (-0.212906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294616 / 0.215209 (0.079407) | 2.894194 / 2.077655 (0.816539) | 1.619739 / 1.504120 (0.115619) | 1.492466 / 1.541195 (-0.048729) | 1.511662 / 1.468490 (0.043172) | 0.557179 / 4.584777 (-4.027597) | 2.400669 / 3.745712 (-1.345043) | 2.781363 / 5.269862 (-2.488499) | 1.769144 / 4.565676 (-2.796533) | 0.063996 / 0.424275 (-0.360279) | 0.004922 / 0.007607 (-0.002685) | 0.354483 / 0.226044 (0.128438) | 3.474795 / 2.268929 (1.205867) | 1.985743 / 55.444624 (-53.458881) | 1.693173 / 6.876477 (-5.183303) | 1.695857 / 2.142072 (-0.446216) | 0.654800 / 4.805227 (-4.150427) | 0.117316 / 6.500664 (-6.383348) | 0.040708 / 0.075469 (-0.034761) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977678 / 1.841788 (-0.864109) | 12.214098 / 8.074308 (4.139790) | 10.741857 / 10.191392 (0.550465) | 0.130308 / 0.680424 (-0.550116) | 0.015053 / 0.534201 (-0.519148) | 0.295496 / 0.579283 (-0.283787) | 0.276348 / 0.434364 (-0.158015) | 0.326568 / 0.540337 (-0.213769) | 0.441902 / 1.386936 (-0.945034) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#214a3e6dcb66e9c1a8ff586553e8eee0f1c70710 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005218 / 0.011353 (-0.006135) | 0.003270 / 0.011008 (-0.007738) | 0.062380 / 0.038508 (0.023872) | 0.052896 / 0.023109 (0.029787) | 0.233060 / 0.275898 (-0.042838) | 0.259194 / 0.323480 (-0.064286) | 0.002880 / 0.007986 (-0.005106) | 0.002643 / 0.004328 (-0.001686) | 0.048084 / 0.004250 (0.043833) | 0.038807 / 0.037052 (0.001755) | 0.244925 / 0.258489 (-0.013564) | 0.269619 / 0.293841 (-0.024222) | 0.026901 / 0.128546 (-0.101646) | 0.010150 / 0.075646 (-0.065497) | 0.206854 / 0.419271 (-0.212417) | 0.035618 / 0.043533 (-0.007915) | 0.239577 / 0.255139 (-0.015562) | 0.259684 / 0.283200 (-0.023516) | 0.019823 / 0.141683 (-0.121860) | 1.074472 / 1.452155 (-0.377682) | 1.142911 / 1.492716 (-0.349805) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092616 / 0.018006 (0.074610) | 0.301974 / 0.000490 (0.301485) | 0.000201 / 0.000200 (0.000002) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018864 / 0.037411 (-0.018548) | 0.061007 / 0.014526 (0.046481) | 0.073228 / 0.176557 (-0.103328) | 0.120719 / 0.737135 (-0.616416) | 0.075686 / 0.296338 (-0.220653) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281404 / 0.215209 (0.066195) | 2.777671 / 2.077655 (0.700017) | 1.464689 / 1.504120 (-0.039431) | 1.345357 / 1.541195 (-0.195838) | 1.384273 / 
1.468490 (-0.084217) | 0.560298 / 4.584777 (-4.024479) | 2.389877 / 3.745712 (-1.355835) | 2.755564 / 5.269862 (-2.514297) | 1.737754 / 4.565676 (-2.827922) | 0.063025 / 0.424275 (-0.361251) | 0.004975 / 0.007607 (-0.002632) | 0.346741 / 0.226044 (0.120697) | 3.321918 / 2.268929 (1.052989) | 1.815700 / 55.444624 (-53.628924) | 1.547333 / 6.876477 (-5.329144) | 1.564809 / 2.142072 (-0.577263) | 0.638645 / 4.805227 (-4.166582) | 0.118157 / 6.500664 (-6.382507) | 0.041605 / 0.075469 (-0.033864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942515 / 1.841788 (-0.899273) | 11.400386 / 8.074308 (3.326078) | 10.208763 / 10.191392 (0.017370) | 0.138144 / 0.680424 (-0.542280) | 0.014354 / 0.534201 (-0.519847) | 0.288289 / 0.579283 (-0.290994) | 0.265973 / 0.434364 (-0.168391) | 0.327703 / 0.540337 (-0.212634) | 0.435474 / 1.386936 (-0.951462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005163 / 0.011353 (-0.006190) | 0.003307 / 0.011008 (-0.007701) | 0.048885 / 0.038508 (0.010377) | 0.049044 / 0.023109 (0.025935) | 0.261408 / 0.275898 (-0.014490) | 0.284625 / 0.323480 (-0.038855) | 0.003970 / 0.007986 (-0.004015) | 0.002754 / 0.004328 (-0.001575) | 0.048271 / 0.004250 (0.044021) | 0.039849 / 0.037052 (0.002797) | 0.266898 / 0.258489 (0.008409) | 0.291445 / 0.293841 (-0.002396) | 0.028477 / 0.128546 (-0.100069) | 0.010656 / 0.075646 (-0.064990) | 0.057732 / 0.419271 (-0.361539) | 0.033298 / 0.043533 (-0.010235) | 0.297773 / 0.255139 (0.042634) | 0.281894 / 0.283200 (-0.001305) | 0.018595 / 0.141683 (-0.123088) | 1.168849 / 1.452155 (-0.283306) | 1.183493 / 1.492716 (-0.309224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092683 / 0.018006 (0.074677) | 0.300387 / 0.000490 (0.299897) | 0.000221 / 0.000200 (0.000021) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021356 / 0.037411 (-0.016055) | 0.068095 / 0.014526 (0.053569) | 0.079806 / 0.176557 (-0.096750) | 0.118965 / 0.737135 (-0.618170) | 0.082066 / 0.296338 (-0.214273) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293105 / 0.215209 (0.077896) | 2.842800 / 2.077655 (0.765146) | 1.572052 / 1.504120 (0.067932) | 1.450156 / 1.541195 (-0.091038) | 1.464227 / 1.468490 (-0.004263) | 0.561215 / 4.584777 (-4.023562) | 2.456117 / 3.745712 (-1.289596) | 2.739766 / 5.269862 (-2.530095) | 1.730354 / 4.565676 (-2.835323) | 0.062636 / 0.424275 (-0.361639) | 0.004933 / 0.007607 (-0.002674) | 0.345800 / 0.226044 (0.119756) | 3.415858 / 2.268929 (1.146929) | 1.937288 / 55.444624 (-53.507336) | 1.661975 / 6.876477 (-5.214502) | 1.660347 / 2.142072 (-0.481726) | 0.642780 / 4.805227 (-4.162448) | 0.116643 / 6.500664 (-6.384021) | 0.041282 / 0.075469 (-0.034187) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976629 / 1.841788 (-0.865159) | 11.900319 / 8.074308 (3.826011) | 10.574198 / 10.191392 (0.382806) | 0.129689 / 0.680424 (-0.550735) | 0.015390 / 0.534201 (-0.518811) | 0.286543 / 0.579283 (-0.292741) | 0.277676 / 0.434364 (-0.156688) | 0.325053 / 0.540337 (-0.215284) | 0.439663 / 1.386936 (-0.947274) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7a9674e17156ff10124632ba705125288de7442 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005382 / 0.011353 (-0.005971) | 0.003606 / 0.011008 (-0.007402) | 0.063234 / 0.038508 (0.024726) | 0.053738 / 0.023109 (0.030629) | 0.250405 / 0.275898 (-0.025493) | 0.272244 / 0.323480 (-0.051236) | 0.002896 / 0.007986 (-0.005090) | 0.002684 / 0.004328 (-0.001644) | 0.048394 / 0.004250 (0.044143) | 0.039017 / 0.037052 (0.001964) | 0.259554 / 0.258489 (0.001065) | 0.287215 / 0.293841 (-0.006626) | 0.028290 / 0.128546 (-0.100257) | 0.011482 / 0.075646 (-0.064164) | 0.214264 / 0.419271 (-0.205007) | 0.036257 / 0.043533 (-0.007276) | 0.252873 / 0.255139 (-0.002266) | 0.271269 / 0.283200 (-0.011931) | 0.017173 / 0.141683 (-0.124510) | 1.137474 / 1.452155 (-0.314681) | 1.161499 / 1.492716 (-0.331217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092424 / 0.018006 (0.074418) | 0.283703 / 0.000490 (0.283213) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018307 / 0.037411 (-0.019105) | 0.060780 / 0.014526 (0.046254) | 0.073984 / 0.176557 (-0.102573) | 0.120824 / 0.737135 (-0.616311) | 0.074724 / 0.296338 (-0.221615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297682 / 0.215209 (0.082473) | 2.853267 / 2.077655 (0.775612) | 1.567643 / 1.504120 (0.063523) | 1.437218 / 1.541195 (-0.103976) | 1.467187 / 
1.468490 (-0.001304) | 0.560552 / 4.584777 (-4.024225) | 2.387848 / 3.745712 (-1.357864) | 2.718946 / 5.269862 (-2.550916) | 1.724107 / 4.565676 (-2.841570) | 0.061923 / 0.424275 (-0.362352) | 0.004828 / 0.007607 (-0.002779) | 0.353916 / 0.226044 (0.127871) | 3.404477 / 2.268929 (1.135548) | 1.906078 / 55.444624 (-53.538546) | 1.629686 / 6.876477 (-5.246791) | 1.640839 / 2.142072 (-0.501233) | 0.641082 / 4.805227 (-4.164145) | 0.118078 / 6.500664 (-6.382586) | 0.041881 / 0.075469 (-0.033588) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936062 / 1.841788 (-0.905726) | 11.397678 / 8.074308 (3.323370) | 10.385159 / 10.191392 (0.193766) | 0.127337 / 0.680424 (-0.553087) | 0.013562 / 0.534201 (-0.520639) | 0.290817 / 0.579283 (-0.288466) | 0.259377 / 0.434364 (-0.174987) | 0.324829 / 0.540337 (-0.215508) | 0.434344 / 1.386936 (-0.952592) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005134 / 0.011353 (-0.006219) | 0.003404 / 0.011008 (-0.007604) | 0.048281 / 0.038508 (0.009772) | 0.050952 / 0.023109 (0.027842) | 0.277553 / 0.275898 (0.001655) | 0.298855 / 0.323480 (-0.024625) | 0.003928 / 0.007986 (-0.004058) | 0.002642 / 0.004328 (-0.001687) | 0.047374 / 0.004250 (0.043123) | 0.039883 / 0.037052 (0.002831) | 0.279808 / 0.258489 (0.021318) | 0.301604 / 0.293841 (0.007763) | 0.028708 / 0.128546 (-0.099838) | 0.010949 / 0.075646 (-0.064697) | 0.057090 / 0.419271 (-0.362181) | 0.032438 / 0.043533 (-0.011095) | 0.274690 / 0.255139 (0.019551) | 0.290912 / 0.283200 (0.007712) | 0.017556 / 0.141683 (-0.124127) | 1.111091 / 1.452155 (-0.341064) | 1.166063 / 1.492716 (-0.326653) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090557 / 0.018006 (0.072551) | 0.298661 / 0.000490 (0.298171) | 0.000228 / 0.000200 (0.000028) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021712 / 0.037411 (-0.015699) | 0.068682 / 0.014526 (0.054156) | 0.080108 / 0.176557 (-0.096449) | 0.119480 / 0.737135 (-0.617655) | 0.082703 / 0.296338 (-0.213636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294095 / 0.215209 (0.078886) | 2.884758 / 2.077655 (0.807103) | 1.598312 / 1.504120 (0.094192) | 1.480050 / 1.541195 (-0.061145) | 1.488611 / 1.468490 (0.020121) | 0.556052 / 4.584777 (-4.028724) | 2.435484 / 3.745712 (-1.310228) | 2.741592 / 5.269862 (-2.528270) | 1.706223 / 4.565676 (-2.859454) | 0.062214 / 0.424275 (-0.362061) | 0.004901 / 0.007607 (-0.002706) | 0.346301 / 0.226044 (0.120257) | 3.474516 / 2.268929 (1.205587) | 1.995205 / 55.444624 (-53.449419) | 1.726349 / 6.876477 (-5.150128) | 1.659600 / 2.142072 (-0.482472) | 0.643560 / 4.805227 (-4.161667) | 0.115222 / 6.500664 (-6.385442) | 0.041137 / 0.075469 (-0.034332) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974566 / 1.841788 (-0.867221) | 11.872479 / 8.074308 (3.798171) | 10.496919 / 10.191392 (0.305527) | 0.129087 / 0.680424 (-0.551337) | 0.014627 / 0.534201 (-0.519574) | 0.289070 / 0.579283 (-0.290213) | 0.269609 / 0.434364 (-0.164755) | 0.327785 / 0.540337 (-0.212553) | 0.444634 / 1.386936 (-0.942302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32e0960ea165a9481b1ff6eed31771475120cb38 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005080 / 0.011353 (-0.006273) | 0.003782 / 0.011008 (-0.007226) | 0.062816 / 0.038508 (0.024308) | 0.056338 / 0.023109 (0.033229) | 0.251317 / 0.275898 (-0.024581) | 0.269414 / 0.323480 (-0.054066) | 0.003984 / 0.007986 (-0.004001) | 0.002749 / 0.004328 (-0.001580) | 0.048126 / 0.004250 (0.043876) | 0.038516 / 0.037052 (0.001464) | 0.253809 / 0.258489 (-0.004680) | 0.283309 / 0.293841 (-0.010532) | 0.027015 / 0.128546 (-0.101531) | 0.010610 / 0.075646 (-0.065037) | 0.213024 / 0.419271 (-0.206247) | 0.035734 / 0.043533 (-0.007799) | 0.247909 / 0.255139 (-0.007230) | 0.263539 / 0.283200 (-0.019660) | 0.018408 / 0.141683 (-0.123275) | 1.104366 / 1.452155 (-0.347789) | 1.169668 / 1.492716 (-0.323048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.114366 / 0.018006 (0.096360) | 0.317674 / 0.000490 (0.317184) | 0.000227 / 0.000200 (0.000027) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018955 / 0.037411 (-0.018457) | 0.060716 / 0.014526 (0.046190) | 0.072963 / 0.176557 (-0.103593) | 0.121671 / 0.737135 (-0.615464) | 0.073785 / 0.296338 (-0.222554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292349 / 0.215209 (0.077140) | 2.832049 / 2.077655 (0.754394) | 1.504488 / 1.504120 (0.000368) | 1.403418 / 1.541195 (-0.137777) | 1.449223 / 
1.468490 (-0.019267) | 0.563846 / 4.584777 (-4.020931) | 2.376726 / 3.745712 (-1.368986) | 2.823304 / 5.269862 (-2.446558) | 1.774858 / 4.565676 (-2.790818) | 0.063229 / 0.424275 (-0.361046) | 0.004923 / 0.007607 (-0.002684) | 0.347240 / 0.226044 (0.121195) | 3.486563 / 2.268929 (1.217634) | 1.890516 / 55.444624 (-53.554109) | 1.570620 / 6.876477 (-5.305857) | 1.600842 / 2.142072 (-0.541231) | 0.644287 / 4.805227 (-4.160940) | 0.116931 / 6.500664 (-6.383733) | 0.042068 / 0.075469 (-0.033401) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935662 / 1.841788 (-0.906126) | 11.950247 / 8.074308 (3.875939) | 10.636225 / 10.191392 (0.444833) | 0.139137 / 0.680424 (-0.541287) | 0.014473 / 0.534201 (-0.519728) | 0.294213 / 0.579283 (-0.285070) | 0.273413 / 0.434364 (-0.160951) | 0.325930 / 0.540337 (-0.214407) | 0.444265 / 1.386936 (-0.942671) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005904) | 0.003155 / 0.011008 (-0.007853) | 0.048626 / 0.038508 (0.010117) | 0.057427 / 0.023109 (0.034318) | 0.270412 / 0.275898 (-0.005486) | 0.290816 / 0.323480 (-0.032664) | 0.004744 / 0.007986 (-0.003241) | 0.002776 / 0.004328 (-0.001552) | 0.047953 / 0.004250 (0.043703) | 0.041126 / 0.037052 (0.004073) | 0.276046 / 0.258489 (0.017557) | 0.297548 / 0.293841 (0.003707) | 0.029308 / 0.128546 (-0.099238) | 0.010516 / 0.075646 (-0.065131) | 0.056982 / 0.419271 (-0.362290) | 0.032922 / 0.043533 (-0.010611) | 0.271342 / 0.255139 (0.016203) | 0.288963 / 0.283200 (0.005763) | 0.019048 / 0.141683 (-0.122635) | 1.130453 / 1.452155 (-0.321702) | 1.206462 / 1.492716 (-0.286254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099249 / 0.018006 (0.081242) | 0.312409 / 0.000490 (0.311919) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021992 / 0.037411 (-0.015419) | 0.068377 / 0.014526 (0.053851) | 0.080749 / 0.176557 (-0.095807) | 0.120534 / 0.737135 (-0.616602) | 0.082549 / 0.296338 (-0.213790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299634 / 0.215209 (0.084425) | 2.943496 / 2.077655 (0.865841) | 1.602842 / 1.504120 (0.098722) | 1.462140 / 1.541195 (-0.079055) | 1.511082 / 1.468490 (0.042592) | 0.574148 / 4.584777 (-4.010629) | 2.492158 / 3.745712 (-1.253554) | 2.921695 / 5.269862 (-2.348166) | 1.812416 / 4.565676 (-2.753260) | 0.064145 / 0.424275 (-0.360130) | 0.005133 / 0.007607 (-0.002475) | 0.357935 / 0.226044 (0.131891) | 3.543728 / 2.268929 (1.274800) | 1.948676 / 55.444624 (-53.495948) | 1.664960 / 6.876477 (-5.211517) | 1.678703 / 2.142072 (-0.463370) | 0.645867 / 4.805227 (-4.159360) | 0.117671 / 6.500664 (-6.382993) | 0.040887 / 0.075469 (-0.034582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979127 / 1.841788 (-0.862661) | 12.363904 / 8.074308 (4.289596) | 10.673725 / 10.191392 (0.482333) | 0.143358 / 0.680424 (-0.537066) | 0.015375 / 0.534201 (-0.518825) | 0.287590 / 0.579283 (-0.291694) | 0.284742 / 0.434364 (-0.149622) | 0.326901 / 0.540337 (-0.213437) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#68099ca55294bfc12a34781835dd73c533a764bd \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004994 / 0.011353 (-0.006359) | 0.003368 / 0.011008 (-0.007640) | 0.062803 / 0.038508 (0.024295) | 0.050778 / 0.023109 (0.027669) | 0.255955 / 0.275898 (-0.019943) | 0.278215 / 0.323480 (-0.045265) | 0.003801 / 0.007986 (-0.004184) | 0.002703 / 0.004328 (-0.001626) | 0.048369 / 0.004250 (0.044119) | 0.037795 / 0.037052 (0.000743) | 0.255634 / 0.258489 (-0.002855) | 0.284226 / 0.293841 (-0.009615) | 0.027252 / 0.128546 (-0.101294) | 0.010686 / 0.075646 (-0.064961) | 0.206139 / 0.419271 (-0.213133) | 0.035543 / 0.043533 (-0.007990) | 0.257167 / 0.255139 (0.002028) | 0.277784 / 0.283200 (-0.005416) | 0.016938 / 0.141683 (-0.124745) | 1.108595 / 1.452155 (-0.343560) | 1.188542 / 1.492716 (-0.304175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090938 / 0.018006 (0.072932) | 0.298463 / 0.000490 (0.297973) | 0.000203 / 0.000200 (0.000003) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027762 / 0.037411 (-0.009649) | 0.060539 / 0.014526 (0.046014) | 0.075986 / 0.176557 (-0.100570) | 0.133851 / 0.737135 (-0.603285) | 0.074669 / 0.296338 (-0.221670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285614 / 0.215209 (0.070405) | 2.810529 / 2.077655 (0.732874) | 1.537092 / 1.504120 (0.032973) | 1.412211 / 1.541195 (-0.128983) | 1.446395 / 
1.468490 (-0.022095) | 0.559008 / 4.584777 (-4.025769) | 2.343445 / 3.745712 (-1.402267) | 2.748113 / 5.269862 (-2.521748) | 1.733593 / 4.565676 (-2.832083) | 0.061720 / 0.424275 (-0.362555) | 0.004930 / 0.007607 (-0.002677) | 0.330646 / 0.226044 (0.104602) | 3.314999 / 2.268929 (1.046071) | 1.854527 / 55.444624 (-53.590098) | 1.605819 / 6.876477 (-5.270657) | 1.591406 / 2.142072 (-0.550667) | 0.624239 / 4.805227 (-4.180988) | 0.115352 / 6.500664 (-6.385312) | 0.041600 / 0.075469 (-0.033869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908608) | 11.456372 / 8.074308 (3.382064) | 10.578042 / 10.191392 (0.386650) | 0.128045 / 0.680424 (-0.552379) | 0.014212 / 0.534201 (-0.519989) | 0.284795 / 0.579283 (-0.294488) | 0.266210 / 0.434364 (-0.168153) | 0.344468 / 0.540337 (-0.195869) | 0.434414 / 1.386936 (-0.952522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005142 / 0.011353 (-0.006211) | 0.003607 / 0.011008 (-0.007401) | 0.048770 / 0.038508 (0.010262) | 0.051147 / 0.023109 (0.028038) | 0.277329 / 0.275898 (0.001430) | 0.300863 / 0.323480 (-0.022617) | 0.004005 / 0.007986 (-0.003980) | 0.002624 / 0.004328 (-0.001705) | 0.047740 / 0.004250 (0.043489) | 0.040811 / 0.037052 (0.003759) | 0.280020 / 0.258489 (0.021531) | 0.303758 / 0.293841 (0.009918) | 0.028273 / 0.128546 (-0.100274) | 0.010379 / 0.075646 (-0.065267) | 0.057503 / 0.419271 (-0.361768) | 0.032717 / 0.043533 (-0.010816) | 0.277560 / 0.255139 (0.022421) | 0.300622 / 0.283200 (0.017422) | 0.018142 / 0.141683 (-0.123541) | 1.121890 / 1.452155 (-0.330265) | 1.251481 / 1.492716 (-0.241235) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091523 / 0.018006 (0.073517) | 0.300173 / 0.000490 (0.299683) | 0.000216 / 0.000200 (0.000016) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026386 / 0.037411 (-0.011025) | 0.078710 / 0.014526 (0.064184) | 0.090594 / 0.176557 (-0.085962) | 0.130623 / 0.737135 (-0.606512) | 0.092637 / 0.296338 (-0.203701) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299427 / 0.215209 (0.084218) | 2.929463 / 2.077655 (0.851808) | 1.608905 / 1.504120 (0.104785) | 1.490863 / 1.541195 (-0.050331) | 1.484286 / 1.468490 (0.015796) | 0.568208 / 4.584777 (-4.016569) | 2.447081 / 3.745712 (-1.298632) | 2.801287 / 5.269862 (-2.468574) | 1.744449 / 4.565676 (-2.821227) | 0.064222 / 0.424275 (-0.360053) | 0.004959 / 0.007607 (-0.002648) | 0.350207 / 0.226044 (0.124162) | 3.471944 / 2.268929 (1.203016) | 1.951715 / 55.444624 (-53.492909) | 1.668764 / 6.876477 (-5.207713) | 1.675322 / 2.142072 (-0.466751) | 0.642217 / 4.805227 (-4.163011) | 0.116776 / 6.500664 (-6.383888) | 0.040812 / 0.075469 (-0.034658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996478 / 1.841788 (-0.845310) | 12.090647 / 8.074308 (4.016339) | 10.723688 / 10.191392 (0.532296) | 0.141770 / 0.680424 (-0.538653) | 0.015578 / 0.534201 (-0.518623) | 0.288236 / 0.579283 (-0.291047) | 0.278542 / 0.434364 (-0.155822) | 0.327411 / 0.540337 (-0.212927) | 0.450309 / 1.386936 (-0.936627) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5dd4698f483d37afe243db0ffae774cbd34a4af4 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004967 / 0.011353 (-0.006385) | 0.003382 / 0.011008 (-0.007627) | 0.063436 / 0.038508 (0.024928) | 0.050769 / 0.023109 (0.027659) | 0.254214 / 0.275898 (-0.021684) | 0.272076 / 0.323480 (-0.051404) | 0.003815 / 0.007986 (-0.004170) | 0.002618 / 0.004328 (-0.001711) | 0.049021 / 0.004250 (0.044771) | 0.037329 / 0.037052 (0.000277) | 0.261112 / 0.258489 (0.002623) | 0.284133 / 0.293841 (-0.009708) | 0.026828 / 0.128546 (-0.101719) | 0.010757 / 0.075646 (-0.064889) | 0.208047 / 0.419271 (-0.211225) | 0.035061 / 0.043533 (-0.008472) | 0.250896 / 0.255139 (-0.004243) | 0.273038 / 0.283200 (-0.010162) | 0.016559 / 0.141683 (-0.125124) | 1.128899 / 1.452155 (-0.323255) | 1.188857 / 1.492716 (-0.303860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100121 / 0.018006 (0.082114) | 0.298427 / 0.000490 (0.297937) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018369 / 0.037411 (-0.019042) | 0.060425 / 0.014526 (0.045899) | 0.073501 / 0.176557 (-0.103055) | 0.120254 / 0.737135 (-0.616881) | 0.074889 / 0.296338 (-0.221450) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287153 / 0.215209 (0.071944) | 2.797036 / 2.077655 (0.719382) | 1.446216 / 1.504120 (-0.057904) | 1.336015 / 1.541195 (-0.205179) | 1.369841 / 
1.468490 (-0.098650) | 0.559424 / 4.584777 (-4.025353) | 2.361344 / 3.745712 (-1.384368) | 2.766619 / 5.269862 (-2.503243) | 1.747235 / 4.565676 (-2.818441) | 0.066243 / 0.424275 (-0.358032) | 0.004974 / 0.007607 (-0.002633) | 0.333565 / 0.226044 (0.107520) | 3.319877 / 2.268929 (1.050948) | 1.798024 / 55.444624 (-53.646601) | 1.495896 / 6.876477 (-5.380580) | 1.529243 / 2.142072 (-0.612830) | 0.636609 / 4.805227 (-4.168618) | 0.116151 / 6.500664 (-6.384514) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952176 / 1.841788 (-0.889611) | 11.559160 / 8.074308 (3.484852) | 10.556771 / 10.191392 (0.365379) | 0.127118 / 0.680424 (-0.553306) | 0.014142 / 0.534201 (-0.520059) | 0.286585 / 0.579283 (-0.292698) | 0.260233 / 0.434364 (-0.174131) | 0.324012 / 0.540337 (-0.216326) | 0.435131 / 1.386936 (-0.951805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005171 / 0.011353 (-0.006182) | 0.003402 / 0.011008 (-0.007607) | 0.048826 / 0.038508 (0.010318) | 0.050455 / 0.023109 (0.027346) | 0.272120 / 0.275898 (-0.003778) | 0.290404 / 0.323480 (-0.033076) | 0.003986 / 0.007986 (-0.003999) | 0.002569 / 0.004328 (-0.001760) | 0.047845 / 0.004250 (0.043595) | 0.040203 / 0.037052 (0.003150) | 0.278263 / 0.258489 (0.019774) | 0.299255 / 0.293841 (0.005414) | 0.028643 / 0.128546 (-0.099903) | 0.010584 / 0.075646 (-0.065062) | 0.056921 / 0.419271 (-0.362351) | 0.032362 / 0.043533 (-0.011171) | 0.274010 / 0.255139 (0.018871) | 0.288601 / 0.283200 (0.005401) | 0.017856 / 0.141683 (-0.123827) | 1.154112 / 1.452155 (-0.298043) | 1.216288 / 1.492716 (-0.276428) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091399 / 0.018006 (0.073392) | 0.299966 / 0.000490 (0.299477) | 0.000218 / 0.000200 (0.000018) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021728 / 0.037411 (-0.015683) | 0.068285 / 0.014526 (0.053759) | 0.081767 / 0.176557 (-0.094789) | 0.120000 / 0.737135 (-0.617135) | 0.082149 / 0.296338 (-0.214189) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289625 / 0.215209 (0.074416) | 2.835114 / 2.077655 (0.757460) | 1.583207 / 1.504120 (0.079087) | 1.465251 / 1.541195 (-0.075944) | 1.480691 / 1.468490 (0.012200) | 0.569103 / 4.584777 (-4.015674) | 2.416981 / 3.745712 (-1.328731) | 2.761746 / 5.269862 (-2.508115) | 1.720055 / 4.565676 (-2.845621) | 0.063349 / 0.424275 (-0.360926) | 0.004931 / 0.007607 (-0.002676) | 0.343658 / 0.226044 (0.117614) | 3.362996 / 2.268929 (1.094068) | 1.948088 / 55.444624 (-53.496536) | 1.659504 / 6.876477 (-5.216973) | 1.660359 / 2.142072 (-0.481713) | 0.647871 / 4.805227 (-4.157356) | 0.117395 / 6.500664 (-6.383269) | 0.041049 / 0.075469 (-0.034420) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953971 / 1.841788 (-0.887817) | 12.076998 / 8.074308 (4.002690) | 10.549021 / 10.191392 (0.357629) | 0.130026 / 0.680424 (-0.550398) | 0.015697 / 0.534201 (-0.518504) | 0.287125 / 0.579283 (-0.292158) | 0.298402 / 0.434364 (-0.135962) | 0.326005 / 0.540337 (-0.214332) | 0.444065 / 1.386936 (-0.942871) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf86d48792f585bf802bb2ff70e0d9c3a4de4bcf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005053 / 0.011353 (-0.006300) | 0.003537 / 0.011008 (-0.007472) | 0.062923 / 0.038508 (0.024415) | 0.053796 / 0.023109 (0.030687) | 0.242523 / 0.275898 (-0.033375) | 0.264014 / 0.323480 (-0.059466) | 0.002879 / 0.007986 (-0.005106) | 0.003273 / 0.004328 (-0.001055) | 0.048735 / 0.004250 (0.044484) | 0.037541 / 0.037052 (0.000488) | 0.248587 / 0.258489 (-0.009902) | 0.275531 / 0.293841 (-0.018310) | 0.027215 / 0.128546 (-0.101331) | 0.010466 / 0.075646 (-0.065180) | 0.206508 / 0.419271 (-0.212763) | 0.035606 / 0.043533 (-0.007927) | 0.251044 / 0.255139 (-0.004095) | 0.267183 / 0.283200 (-0.016016) | 0.018357 / 0.141683 (-0.123326) | 1.083513 / 1.452155 (-0.368642) | 1.152988 / 1.492716 (-0.339728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091749 / 0.018006 (0.073742) | 0.299946 / 0.000490 (0.299456) | 0.000212 / 0.000200 (0.000013) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018300 / 0.037411 (-0.019111) | 0.060691 / 0.014526 (0.046166) | 0.072998 / 0.176557 (-0.103559) | 0.120581 / 0.737135 (-0.616554) | 0.073912 / 0.296338 (-0.222427) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277602 / 0.215209 (0.062393) | 2.719181 / 2.077655 (0.641526) | 1.450894 / 1.504120 (-0.053226) | 1.314344 / 1.541195 (-0.226851) | 1.351996 / 
1.468490 (-0.116494) | 0.586231 / 4.584777 (-3.998546) | 2.349746 / 3.745712 (-1.395967) | 2.810060 / 5.269862 (-2.459802) | 1.761362 / 4.565676 (-2.804314) | 0.062535 / 0.424275 (-0.361740) | 0.004918 / 0.007607 (-0.002689) | 0.336091 / 0.226044 (0.110047) | 3.238139 / 2.268929 (0.969211) | 1.769734 / 55.444624 (-53.674890) | 1.505332 / 6.876477 (-5.371145) | 1.527875 / 2.142072 (-0.614198) | 0.640194 / 4.805227 (-4.165033) | 0.116567 / 6.500664 (-6.384097) | 0.042464 / 0.075469 (-0.033005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930919 / 1.841788 (-0.910869) | 11.462498 / 8.074308 (3.388190) | 10.575359 / 10.191392 (0.383967) | 0.130567 / 0.680424 (-0.549857) | 0.014203 / 0.534201 (-0.519998) | 0.286944 / 0.579283 (-0.292339) | 0.264706 / 0.434364 (-0.169658) | 0.324820 / 0.540337 (-0.215517) | 0.434579 / 1.386936 (-0.952357) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005164 / 0.011353 (-0.006189) | 0.003442 / 0.011008 (-0.007567) | 0.050146 / 0.038508 (0.011638) | 0.050800 / 0.023109 (0.027691) | 0.263405 / 0.275898 (-0.012493) | 0.284876 / 0.323480 (-0.038604) | 0.004011 / 0.007986 (-0.003975) | 0.002602 / 0.004328 (-0.001726) | 0.046742 / 0.004250 (0.042491) | 0.040393 / 0.037052 (0.003341) | 0.265052 / 0.258489 (0.006563) | 0.294217 / 0.293841 (0.000377) | 0.028429 / 0.128546 (-0.100118) | 0.010418 / 0.075646 (-0.065228) | 0.057285 / 0.419271 (-0.361987) | 0.032137 / 0.043533 (-0.011396) | 0.265867 / 0.255139 (0.010728) | 0.284764 / 0.283200 (0.001564) | 0.017448 / 0.141683 (-0.124235) | 1.172830 / 1.452155 (-0.279325) | 1.223982 / 1.492716 (-0.268735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091859 / 0.018006 (0.073853) | 0.285421 / 0.000490 (0.284931) | 0.000220 / 0.000200 (0.000020) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021620 / 0.037411 (-0.015792) | 0.069058 / 0.014526 (0.054532) | 0.082560 / 0.176557 (-0.093997) | 0.119511 / 0.737135 (-0.617624) | 0.082318 / 0.296338 (-0.214021) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291499 / 0.215209 (0.076290) | 2.863352 / 2.077655 (0.785698) | 1.557242 / 1.504120 (0.053122) | 1.430170 / 1.541195 (-0.111024) | 1.432850 / 1.468490 (-0.035640) | 0.559716 / 4.584777 (-4.025061) | 2.385405 / 3.745712 (-1.360307) | 2.748938 / 5.269862 (-2.520924) | 1.740802 / 4.565676 (-2.824874) | 0.061811 / 0.424275 (-0.362465) | 0.005174 / 0.007607 (-0.002433) | 0.348687 / 0.226044 (0.122642) | 3.420120 / 2.268929 (1.151191) | 1.918278 / 55.444624 (-53.526346) | 1.631559 / 6.876477 (-5.244918) | 1.635850 / 2.142072 (-0.506222) | 0.644144 / 4.805227 (-4.161083) | 0.115823 / 6.500664 (-6.384841) | 0.041255 / 0.075469 (-0.034214) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960066 / 1.841788 (-0.881722) | 12.011372 / 8.074308 (3.937064) | 10.580532 / 10.191392 (0.389140) | 0.134763 / 0.680424 (-0.545661) | 0.017027 / 0.534201 (-0.517174) | 0.290484 / 0.579283 (-0.288799) | 0.285171 / 0.434364 (-0.149193) | 0.322453 / 0.540337 (-0.217884) | 0.438088 / 1.386936 (-0.948848) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3fc42882a2d84d7482c27063f1e19539e99b9d3 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005212 / 0.011353 (-0.006141) | 0.003440 / 0.011008 (-0.007568) | 0.063612 / 0.038508 (0.025104) | 0.049070 / 0.023109 (0.025961) | 0.269748 / 0.275898 (-0.006150) | 0.283270 / 0.323480 (-0.040210) | 0.002892 / 0.007986 (-0.005094) | 0.002693 / 0.004328 (-0.001635) | 0.049710 / 0.004250 (0.045459) | 0.036707 / 0.037052 (-0.000345) | 0.299035 / 0.258489 (0.040546) | 0.296443 / 0.293841 (0.002602) | 0.028095 / 0.128546 (-0.100451) | 0.010682 / 0.075646 (-0.064964) | 0.213914 / 0.419271 (-0.205358) | 0.036210 / 0.043533 (-0.007323) | 0.235720 / 0.255139 (-0.019419) | 0.252687 / 0.283200 (-0.030512) | 0.016985 / 0.141683 (-0.124698) | 1.099024 / 1.452155 (-0.353130) | 1.162970 / 1.492716 (-0.329746) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093114 / 0.018006 (0.075108) | 0.305168 / 0.000490 (0.304678) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018370 / 0.037411 (-0.019041) | 0.060534 / 0.014526 (0.046008) | 0.073960 / 0.176557 (-0.102596) | 0.120325 / 0.737135 (-0.616810) | 0.073754 / 0.296338 (-0.222585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284244 / 0.215209 (0.069035) | 2.756854 / 2.077655 (0.679199) | 1.477304 / 1.504120 (-0.026816) | 1.374635 / 1.541195 (-0.166560) | 1.383284 / 
1.468490 (-0.085206) | 0.564656 / 4.584777 (-4.020121) | 2.361719 / 3.745712 (-1.383993) | 2.794822 / 5.269862 (-2.475039) | 1.742981 / 4.565676 (-2.822696) | 0.063443 / 0.424275 (-0.360832) | 0.004952 / 0.007607 (-0.002655) | 0.342058 / 0.226044 (0.116014) | 3.351093 / 2.268929 (1.082164) | 1.857375 / 55.444624 (-53.587250) | 1.541680 / 6.876477 (-5.334797) | 1.580147 / 2.142072 (-0.561926) | 0.645216 / 4.805227 (-4.160012) | 0.118768 / 6.500664 (-6.381896) | 0.042115 / 0.075469 (-0.033354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.925845 / 1.841788 (-0.915943) | 11.444147 / 8.074308 (3.369839) | 10.291297 / 10.191392 (0.099905) | 0.128129 / 0.680424 (-0.552295) | 0.013774 / 0.534201 (-0.520427) | 0.289278 / 0.579283 (-0.290005) | 0.262353 / 0.434364 (-0.172011) | 0.328517 / 0.540337 (-0.211820) | 0.436050 / 1.386936 (-0.950886) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003691 / 0.011008 (-0.007318) | 0.049361 / 0.038508 (0.010853) | 0.054245 / 0.023109 (0.031136) | 0.274433 / 0.275898 (-0.001465) | 0.285648 / 0.323480 (-0.037832) | 0.004080 / 0.007986 (-0.003906) | 0.002666 / 0.004328 (-0.001663) | 0.047539 / 0.004250 (0.043288) | 0.041001 / 0.037052 (0.003948) | 0.296018 / 0.258489 (0.037529) | 0.294542 / 0.293841 (0.000701) | 0.030546 / 0.128546 (-0.098001) | 0.010556 / 0.075646 (-0.065090) | 0.058146 / 0.419271 (-0.361126) | 0.033407 / 0.043533 (-0.010126) | 0.263977 / 0.255139 (0.008838) | 0.286228 / 0.283200 (0.003028) | 0.018088 / 0.141683 (-0.123595) | 1.121295 / 1.452155 (-0.330860) | 1.182183 / 1.492716 (-0.310533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.104540 / 0.018006 (0.086534) | 0.303494 / 0.000490 (0.303004) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021274 / 0.037411 (-0.016137) | 0.070146 / 0.014526 (0.055621) | 0.080343 / 0.176557 (-0.096213) | 0.120017 / 0.737135 (-0.617119) | 0.081303 / 0.296338 (-0.215036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294390 / 0.215209 (0.079181) | 2.883366 / 2.077655 (0.805711) | 1.564629 / 1.504120 (0.060509) | 1.432633 / 1.541195 (-0.108562) | 1.438786 / 1.468490 (-0.029704) | 0.569663 / 4.584777 (-4.015114) | 2.448691 / 3.745712 (-1.297021) | 2.817010 / 5.269862 (-2.452851) | 1.757274 / 4.565676 (-2.808402) | 0.064147 / 0.424275 (-0.360129) | 0.004910 / 0.007607 (-0.002697) | 0.344062 / 0.226044 (0.118018) | 3.394223 / 2.268929 (1.125294) | 1.927139 / 55.444624 (-53.517485) | 1.624983 / 6.876477 (-5.251494) | 1.629076 / 2.142072 (-0.512996) | 0.654239 / 4.805227 (-4.150988) | 0.117309 / 6.500664 (-6.383355) | 0.041067 / 0.075469 (-0.034402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993184 / 1.841788 (-0.848604) | 11.969985 / 8.074308 (3.895677) | 10.363356 / 10.191392 (0.171964) | 0.130708 / 0.680424 (-0.549716) | 0.015577 / 0.534201 (-0.518624) | 0.289579 / 0.579283 (-0.289704) | 0.274875 / 0.434364 (-0.159488) | 0.326736 / 0.540337 (-0.213601) | 0.442770 / 1.386936 (-0.944166) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#796a47e388a5c5711a95bd649648608c18219ac5 \"CML watermark\")\n",
"Getting the same windows error as in my other PR. I couldn't reproduce on my windows machine though π§ ",
"`DataFilesList` is a list so we expect to be able to get its length with zero cost, which wouldn't be the case if we make it lazy no ? ",
"But we don't call `len` on it, do we? And I couldn't find an instance of `DataFilesList` being used in GitHub's public repos.",
"`DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns",
"Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)",
"> `DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns\r\n\r\nWould be interesting to know how often these scripts call `len` or do random access on `DataFilesList`.\r\n\r\nStill, I think we should opt for a solution that makes more sense for us. To avoid the breaking change, we can define a `BuilderConfig.data_files` property that resolves this iterable. \r\n\r\n> Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)\r\n\r\nThe `BuilderConfig.data_files` property suggested above should address this, no? \r\n\r\nI think we should be more careful not to make our API needlessly complex because of the YAML README feature. And if this can't be avoided, we should probably refactor the builder API.",
"> The BuilderConfig.data_files property suggested above should address this, no?\r\n\r\nThat works indeed ! let me try something",
"Implementing lazy DataFilesList and .data_files brings more complexity (less readable, more bad side effects) so I think the current solution is the best one",
"I opened https://github.com/huggingface/datasets/pull/6493 to continue this and fix conflicts with https://github.com/huggingface/datasets/pull/6459"
] | 2023-11-29T13:18:44 | 2023-12-12T23:30:33 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6458",
"html_url": "https://github.com/huggingface/datasets/pull/6458",
"diff_url": "https://github.com/huggingface/datasets/pull/6458.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6458.patch",
"merged_at": null
} | Related to discussion at https://github.com/huggingface/datasets/pull/6255
this makes the code below run in about 2 seconds instead of more than 10 seconds:
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
For some datasets with many configs and files it can be up to 100x faster.
This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.
The data files are only resolved in the builder `__init__`. To do so, I added DataFilesPatternsList and DataFilesPatternsDict, which have a `.resolve()` method that returns resolved DataFilesList and DataFilesDict objects. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6458/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6458/timeline | null | null | true |
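The PR body and comments in the row above describe deferring data-file resolution until the builder's `__init__`, by storing glob patterns in classes that expose a `.resolve()` method. Below is a minimal sketch of that idea using a hypothetical `LazyPatternsList` over the local filesystem; the real `DataFilesPatternsList`/`DataFilesPatternsDict` resolve patterns against the Hub, so everything here is illustrative only, not the actual `datasets` internals:

```python
import glob
from typing import List


class LazyPatternsList:
    """Holds glob patterns and defers file listing until explicitly requested."""

    def __init__(self, patterns: List[str]):
        self.patterns = list(patterns)  # cheap to build, pickle, and hash

    def resolve(self) -> List[str]:
        # The expensive listing happens only here, not at construction time.
        resolved: List[str] = []
        for pattern in self.patterns:
            resolved.extend(sorted(glob.glob(pattern)))
        return resolved


# Constructing the object is instant; resolve() is called once, e.g. from a
# builder's __init__, mirroring the behavior described in the PR body above.
lazy_files = LazyPatternsList(["data/train-*.parquet"])
print(len(lazy_files.resolve()))
```

Keeping only the patterns keeps construction, pickling, and hashing cheap, while the expensive listing happens exactly once when `resolve()` is called.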
https://api.github.com/repos/huggingface/datasets/issues/6457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6457/comments | https://api.github.com/repos/huggingface/datasets/issues/6457/events | https://github.com/huggingface/datasets/issues/6457 | 2,015,650,563 | I_kwDODunzps54JGMD | 6,457 | `TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth' | {
"login": "wasertech",
"id": 79070834,
"node_id": "MDQ6VXNlcjc5MDcwODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasertech",
"html_url": "https://github.com/wasertech",
"followers_url": "https://api.github.com/users/wasertech/followers",
"following_url": "https://api.github.com/users/wasertech/following{/other_user}",
"gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasertech/subscriptions",
"organizations_url": "https://api.github.com/users/wasertech/orgs",
"repos_url": "https://api.github.com/users/wasertech/repos",
"events_url": "https://api.github.com/users/wasertech/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasertech/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Updating `fsspec>=2023.10.0` did solve the issue.",
"May be it should be pinned somewhere?",
 Maybe this should go in datasets directly">
"> Maybe this should go in datasets directly... anyways you can easily fix this error by updating datasets>=2.15.1.dev0.\r\n\r\n@lhoestq @mariosasko from what I understand this is a bug fixed in `datasets` already, right? No need to do anything in `huggingface_hub`?",
"I've opened a PR with a fix in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/1875",
"Thanks! PR is merged and will be shipped in next release of `huggingface_hub`."
] | 2023-11-29T01:57:36 | 2023-11-29T15:39:03 | 2023-11-29T02:02:38 | NONE | null | null | null | ### Describe the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Steps to reproduce the bug
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Expected behavior
Please see https://github.com/huggingface/huggingface_hub/issues/1872
### Environment info
Please see https://github.com/huggingface/huggingface_hub/issues/1872 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6457/timeline | null | completed | false |
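The comments in the row above report that the `maxdepth` TypeError disappears after upgrading to `fsspec>=2023.10.0` and ask whether that floor should be pinned. A small, illustrative version check is sketched below; the `2023.10.0` threshold is taken from the comment thread rather than an official pin, and the `packaging` library is assumed to be installed:

```python
from importlib.metadata import version

from packaging.version import Version

fsspec_version = Version(version("fsspec"))
if fsspec_version < Version("2023.10.0"):
    # Versions below this were reported in the thread to conflict with HfFileSystem.find()
    print(
        f"fsspec {fsspec_version} may hit the maxdepth TypeError; "
        "consider `pip install -U 'fsspec>=2023.10.0'`."
    )
else:
    print(f"fsspec {fsspec_version} is at or above the version reported to work.")
```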
https://api.github.com/repos/huggingface/datasets/issues/6456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6456/comments | https://api.github.com/repos/huggingface/datasets/issues/6456/events | https://github.com/huggingface/datasets/pull/6456 | 2,015,186,090 | PR_kwDODunzps5gmDJY | 6,456 | Don't require trust_remote_code in inspect_dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005705 / 0.011353 (-0.005648) | 0.003536 / 0.011008 (-0.007473) | 0.062852 / 0.038508 (0.024343) | 0.053902 / 0.023109 (0.030793) | 0.239465 / 0.275898 (-0.036433) | 0.270829 / 0.323480 (-0.052651) | 0.004052 / 0.007986 (-0.003934) | 0.002775 / 0.004328 (-0.001554) | 0.048475 / 0.004250 (0.044225) | 0.039430 / 0.037052 (0.002377) | 0.244318 / 0.258489 (-0.014171) | 0.277539 / 0.293841 (-0.016302) | 0.027637 / 0.128546 (-0.100909) | 0.010875 / 0.075646 (-0.064771) | 0.208839 / 0.419271 (-0.210432) | 0.036984 / 0.043533 (-0.006549) | 0.246355 / 0.255139 (-0.008784) | 0.271200 / 0.283200 (-0.011999) | 0.020636 / 0.141683 (-0.121047) | 1.078472 / 1.452155 (-0.373683) | 1.155701 / 1.492716 (-0.337015) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100971 / 0.018006 (0.082965) | 0.310996 / 0.000490 (0.310507) | 0.000218 / 0.000200 (0.000018) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019300 / 0.037411 (-0.018111) | 0.060625 / 0.014526 (0.046099) | 0.073778 / 0.176557 (-0.102778) | 0.120280 / 0.737135 (-0.616855) | 0.075288 / 0.296338 (-0.221051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289838 / 0.215209 (0.074629) | 2.859492 / 2.077655 (0.781837) | 1.528478 / 1.504120 (0.024358) | 1.417911 / 1.541195 (-0.123283) | 1.444227 / 
1.468490 (-0.024263) | 0.566799 / 4.584777 (-4.017978) | 2.402526 / 3.745712 (-1.343186) | 2.805241 / 5.269862 (-2.464620) | 1.798572 / 4.565676 (-2.767104) | 0.062920 / 0.424275 (-0.361355) | 0.004995 / 0.007607 (-0.002612) | 0.340688 / 0.226044 (0.114644) | 3.347967 / 2.268929 (1.079039) | 1.898464 / 55.444624 (-53.546160) | 1.604784 / 6.876477 (-5.271693) | 1.648864 / 2.142072 (-0.493209) | 0.642242 / 4.805227 (-4.162985) | 0.117567 / 6.500664 (-6.383097) | 0.041911 / 0.075469 (-0.033558) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949099 / 1.841788 (-0.892689) | 12.367323 / 8.074308 (4.293015) | 10.694238 / 10.191392 (0.502846) | 0.143424 / 0.680424 (-0.537000) | 0.014569 / 0.534201 (-0.519632) | 0.289127 / 0.579283 (-0.290156) | 0.270490 / 0.434364 (-0.163874) | 0.326470 / 0.540337 (-0.213867) | 0.432223 / 1.386936 (-0.954713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005380 / 0.011353 (-0.005973) | 0.003582 / 0.011008 (-0.007426) | 0.049341 / 0.038508 (0.010833) | 0.053274 / 0.023109 (0.030165) | 0.284319 / 0.275898 (0.008421) | 0.334248 / 0.323480 (0.010768) | 0.004032 / 0.007986 (-0.003953) | 0.002682 / 0.004328 (-0.001646) | 0.048317 / 0.004250 (0.044067) | 0.040157 / 0.037052 (0.003105) | 0.284594 / 0.258489 (0.026105) | 0.341567 / 0.293841 (0.047726) | 0.029639 / 0.128546 (-0.098908) | 0.010780 / 0.075646 (-0.064867) | 0.057990 / 0.419271 (-0.361282) | 0.032730 / 0.043533 (-0.010803) | 0.290328 / 0.255139 (0.035189) | 0.298563 / 0.283200 (0.015363) | 0.018546 / 0.141683 (-0.123137) | 1.143157 / 1.452155 (-0.308998) | 1.191391 / 1.492716 (-0.301326) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093802 / 0.018006 (0.075796) | 0.312771 / 0.000490 (0.312282) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021867 / 0.037411 (-0.015544) | 0.069064 / 0.014526 (0.054538) | 0.082270 / 0.176557 (-0.094287) | 0.120222 / 0.737135 (-0.616913) | 0.084628 / 0.296338 (-0.211710) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295505 / 0.215209 (0.080296) | 2.891105 / 2.077655 (0.813450) | 1.619480 / 1.504120 (0.115360) | 1.498290 / 1.541195 (-0.042905) | 1.547896 / 1.468490 (0.079406) | 0.575188 / 4.584777 (-4.009589) | 2.434426 / 3.745712 (-1.311286) | 2.899286 / 5.269862 (-2.370576) | 1.806085 / 4.565676 (-2.759591) | 0.063660 / 0.424275 (-0.360616) | 0.004933 / 0.007607 (-0.002674) | 0.348274 / 0.226044 (0.122229) | 3.447900 / 2.268929 (1.178971) | 1.956237 / 55.444624 (-53.488387) | 1.680416 / 6.876477 (-5.196061) | 1.732307 / 2.142072 (-0.409766) | 0.668428 / 4.805227 (-4.136799) | 0.119161 / 6.500664 (-6.381503) | 0.041694 / 0.075469 (-0.033775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973730 / 1.841788 (-0.868058) | 12.082452 / 8.074308 (4.008144) | 10.624836 / 10.191392 (0.433444) | 0.144027 / 0.680424 (-0.536397) | 0.014830 / 0.534201 (-0.519370) | 0.289946 / 0.579283 (-0.289337) | 0.281939 / 0.434364 (-0.152424) | 0.325639 / 0.540337 (-0.214699) | 0.551690 / 1.386936 (-0.835246) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9e1cf8526c9216b08b5431695d9f8e0eec64cc5f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005279 / 0.011353 (-0.006074) | 0.003506 / 0.011008 (-0.007502) | 0.062579 / 0.038508 (0.024071) | 0.052809 / 0.023109 (0.029700) | 0.274693 / 0.275898 (-0.001205) | 0.283917 / 0.323480 (-0.039563) | 0.003950 / 0.007986 (-0.004036) | 0.002772 / 0.004328 (-0.001557) | 0.048127 / 0.004250 (0.043877) | 0.037771 / 0.037052 (0.000719) | 0.280595 / 0.258489 (0.022106) | 0.292310 / 0.293841 (-0.001531) | 0.027890 / 0.128546 (-0.100656) | 0.010771 / 0.075646 (-0.064875) | 0.207285 / 0.419271 (-0.211987) | 0.036179 / 0.043533 (-0.007354) | 0.253617 / 0.255139 (-0.001522) | 0.276107 / 0.283200 (-0.007093) | 0.018253 / 0.141683 (-0.123430) | 1.112219 / 1.452155 (-0.339936) | 1.166756 / 1.492716 (-0.325960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095159 / 0.018006 (0.077152) | 0.306097 / 0.000490 (0.305608) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019056 / 0.037411 (-0.018355) | 0.060445 / 0.014526 (0.045919) | 0.073553 / 0.176557 (-0.103004) | 0.120306 / 0.737135 (-0.616829) | 0.075613 / 0.296338 (-0.220725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277839 / 0.215209 (0.062630) | 2.761037 / 2.077655 (0.683382) | 1.508524 / 1.504120 (0.004404) | 1.368994 / 1.541195 (-0.172201) | 1.415961 / 
1.468490 (-0.052529) | 0.570490 / 4.584777 (-4.014287) | 2.356355 / 3.745712 (-1.389357) | 2.806626 / 5.269862 (-2.463235) | 1.757849 / 4.565676 (-2.807827) | 0.063504 / 0.424275 (-0.360771) | 0.005021 / 0.007607 (-0.002586) | 0.338880 / 0.226044 (0.112836) | 3.290947 / 2.268929 (1.022018) | 1.818238 / 55.444624 (-53.626386) | 1.529970 / 6.876477 (-5.346507) | 1.557085 / 2.142072 (-0.584987) | 0.645352 / 4.805227 (-4.159876) | 0.123066 / 6.500664 (-6.377598) | 0.043387 / 0.075469 (-0.032082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974512 / 1.841788 (-0.867276) | 11.976411 / 8.074308 (3.902103) | 10.361084 / 10.191392 (0.169692) | 0.127171 / 0.680424 (-0.553253) | 0.014091 / 0.534201 (-0.520110) | 0.288608 / 0.579283 (-0.290675) | 0.261886 / 0.434364 (-0.172478) | 0.331632 / 0.540337 (-0.208705) | 0.437002 / 1.386936 (-0.949934) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005129 / 0.011353 (-0.006224) | 0.003490 / 0.011008 (-0.007518) | 0.049005 / 0.038508 (0.010497) | 0.054077 / 0.023109 (0.030968) | 0.276653 / 0.275898 (0.000755) | 0.298752 / 0.323480 (-0.024728) | 0.003979 / 0.007986 (-0.004007) | 0.002625 / 0.004328 (-0.001703) | 0.047951 / 0.004250 (0.043701) | 0.040969 / 0.037052 (0.003916) | 0.279879 / 0.258489 (0.021390) | 0.306244 / 0.293841 (0.012403) | 0.029025 / 0.128546 (-0.099522) | 0.010450 / 0.075646 (-0.065197) | 0.056846 / 0.419271 (-0.362426) | 0.033476 / 0.043533 (-0.010057) | 0.273340 / 0.255139 (0.018201) | 0.294783 / 0.283200 (0.011584) | 0.019105 / 0.141683 (-0.122578) | 1.126389 / 1.452155 (-0.325766) | 1.183369 / 1.492716 (-0.309348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094995 / 0.018006 (0.076989) | 0.306984 / 0.000490 (0.306495) | 0.000224 / 0.000200 (0.000024) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021880 / 0.037411 (-0.015532) | 0.069674 / 0.014526 (0.055148) | 0.082191 / 0.176557 (-0.094366) | 0.120956 / 0.737135 (-0.616179) | 0.083843 / 0.296338 (-0.212495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295139 / 0.215209 (0.079929) | 2.860520 / 2.077655 (0.782865) | 1.578892 / 1.504120 (0.074772) | 1.451003 / 1.541195 (-0.090192) | 1.483099 / 1.468490 (0.014609) | 0.550491 / 4.584777 (-4.034286) | 2.430352 / 3.745712 (-1.315360) | 2.874468 / 5.269862 (-2.395393) | 1.741474 / 4.565676 (-2.824202) | 0.062563 / 0.424275 (-0.361712) | 0.004962 / 0.007607 (-0.002645) | 0.343747 / 0.226044 (0.117703) | 3.419046 / 2.268929 (1.150118) | 1.943774 / 55.444624 (-53.500851) | 1.650989 / 6.876477 (-5.225488) | 1.704083 / 2.142072 (-0.437990) | 0.645447 / 4.805227 (-4.159780) | 0.125105 / 6.500664 (-6.375559) | 0.041319 / 0.075469 (-0.034150) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959708 / 1.841788 (-0.882079) | 12.235906 / 8.074308 (4.161598) | 10.575402 / 10.191392 (0.384010) | 0.143619 / 0.680424 (-0.536805) | 0.015517 / 0.534201 (-0.518684) | 0.285231 / 0.579283 (-0.294052) | 0.281549 / 0.434364 (-0.152815) | 0.326649 / 0.540337 (-0.213689) | 0.565706 / 1.386936 (-0.821230) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fb6985bc33277a3ece7f28c74ca742ba84655b0c \"CML watermark\")\n"
] | 2023-11-28T19:47:07 | 2023-11-30T10:40:23 | 2023-11-30T10:34:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6456",
"html_url": "https://github.com/huggingface/datasets/pull/6456",
"diff_url": "https://github.com/huggingface/datasets/pull/6456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6456.patch",
"merged_at": "2023-11-30T10:34:12"
} | don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose)
(not super important but we might as well keep it until the next major release)
this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6456/timeline | null | null | true |
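The PR above drops the `trust_remote_code` requirement from the deprecated `inspect_dataset` helper, whose whole purpose is to copy a dataset script locally so it can be reviewed before being trusted. A minimal usage sketch follows; the repository id is a placeholder, and which keyword arguments are accepted depends on the installed `datasets` version:

```python
from datasets import inspect_dataset

# Copy the loading script of a script-based dataset into ./inspected_script so it
# can be read before deciding to run it. "some_user/some_dataset" is a placeholder
# id, not a real repository.
inspect_dataset("some_user/some_dataset", "./inspected_script")
```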
https://api.github.com/repos/huggingface/datasets/issues/6454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6454/comments | https://api.github.com/repos/huggingface/datasets/issues/6454/events | https://github.com/huggingface/datasets/pull/6454 | 2,013,001,584 | PR_kwDODunzps5gej3H | 6,454 | Refactor `dill` logic | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005490 / 0.011353 (-0.005863) | 0.003554 / 0.011008 (-0.007454) | 0.062183 / 0.038508 (0.023675) | 0.053093 / 0.023109 (0.029984) | 0.245370 / 0.275898 (-0.030528) | 0.271637 / 0.323480 (-0.051842) | 0.002997 / 0.007986 (-0.004989) | 0.002811 / 0.004328 (-0.001517) | 0.047874 / 0.004250 (0.043623) | 0.039673 / 0.037052 (0.002620) | 0.253219 / 0.258489 (-0.005271) | 0.280438 / 0.293841 (-0.013403) | 0.028393 / 0.128546 (-0.100153) | 0.010914 / 0.075646 (-0.064732) | 0.207491 / 0.419271 (-0.211781) | 0.037565 / 0.043533 (-0.005968) | 0.252382 / 0.255139 (-0.002757) | 0.272204 / 0.283200 (-0.010995) | 0.019007 / 0.141683 (-0.122676) | 1.099767 / 1.452155 (-0.352388) | 1.173220 / 1.492716 (-0.319496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098777 / 0.018006 (0.080771) | 0.325912 / 0.000490 (0.325422) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018815 / 0.037411 (-0.018596) | 0.070031 / 0.014526 (0.055506) | 0.075395 / 0.176557 (-0.101162) | 0.122633 / 0.737135 (-0.614502) | 0.077621 / 0.296338 (-0.218718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290830 / 0.215209 (0.075621) | 2.869214 / 2.077655 (0.791559) | 1.507337 / 1.504120 (0.003217) | 1.351391 / 1.541195 (-0.189804) | 1.386642 / 
1.468490 (-0.081848) | 0.570318 / 4.584777 (-4.014459) | 2.423442 / 3.745712 (-1.322270) | 2.897812 / 5.269862 (-2.372050) | 1.796458 / 4.565676 (-2.769219) | 0.063649 / 0.424275 (-0.360626) | 0.005038 / 0.007607 (-0.002570) | 0.357819 / 0.226044 (0.131774) | 3.535478 / 2.268929 (1.266549) | 1.831764 / 55.444624 (-53.612861) | 1.545035 / 6.876477 (-5.331442) | 1.585919 / 2.142072 (-0.556154) | 0.643333 / 4.805227 (-4.161894) | 0.120319 / 6.500664 (-6.380345) | 0.043031 / 0.075469 (-0.032438) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981155 / 1.841788 (-0.860633) | 12.136069 / 8.074308 (4.061760) | 10.579923 / 10.191392 (0.388531) | 0.152963 / 0.680424 (-0.527461) | 0.014783 / 0.534201 (-0.519418) | 0.289177 / 0.579283 (-0.290106) | 0.271784 / 0.434364 (-0.162580) | 0.322381 / 0.540337 (-0.217956) | 0.420034 / 1.386936 (-0.966902) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005315 / 0.011353 (-0.006038) | 0.003584 / 0.011008 (-0.007424) | 0.048596 / 0.038508 (0.010088) | 0.055940 / 0.023109 (0.032830) | 0.277687 / 0.275898 (0.001789) | 0.301545 / 0.323480 (-0.021935) | 0.004150 / 0.007986 (-0.003836) | 0.002699 / 0.004328 (-0.001629) | 0.047661 / 0.004250 (0.043410) | 0.040618 / 0.037052 (0.003565) | 0.279173 / 0.258489 (0.020684) | 0.306105 / 0.293841 (0.012264) | 0.030099 / 0.128546 (-0.098447) | 0.010784 / 0.075646 (-0.064862) | 0.057418 / 0.419271 (-0.361853) | 0.032632 / 0.043533 (-0.010901) | 0.276064 / 0.255139 (0.020925) | 0.307194 / 0.283200 (0.023995) | 0.017416 / 0.141683 (-0.124267) | 1.107749 / 1.452155 (-0.344406) | 1.161104 / 1.492716 (-0.331612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102395 / 0.018006 (0.084389) | 0.316933 / 0.000490 (0.316443) | 0.000246 / 0.000200 (0.000046) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022833 / 0.037411 (-0.014579) | 0.069372 / 0.014526 (0.054846) | 0.082139 / 0.176557 (-0.094418) | 0.121666 / 0.737135 (-0.615469) | 0.084039 / 0.296338 (-0.212300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298775 / 0.215209 (0.083566) | 2.973898 / 2.077655 (0.896244) | 1.614436 / 1.504120 (0.110316) | 1.476112 / 1.541195 (-0.065083) | 1.502031 / 1.468490 (0.033541) | 0.580626 / 4.584777 (-4.004151) | 2.493428 / 3.745712 (-1.252285) | 2.931050 / 5.269862 (-2.338811) | 1.823603 / 4.565676 (-2.742073) | 0.064736 / 0.424275 (-0.359539) | 0.004963 / 0.007607 (-0.002644) | 0.355096 / 0.226044 (0.129052) | 3.522801 / 2.268929 (1.253872) | 1.968690 / 55.444624 (-53.475935) | 1.698624 / 6.876477 (-5.177853) | 1.714166 / 2.142072 (-0.427906) | 0.681734 / 4.805227 (-4.123493) | 0.118940 / 6.500664 (-6.381724) | 0.041960 / 0.075469 (-0.033509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985311 / 1.841788 (-0.856476) | 12.785393 / 8.074308 (4.711085) | 11.289459 / 10.191392 (1.098067) | 0.145297 / 0.680424 (-0.535127) | 0.016125 / 0.534201 (-0.518076) | 0.289445 / 0.579283 (-0.289838) | 0.278974 / 0.434364 (-0.155390) | 0.322456 / 0.540337 (-0.217881) | 0.418218 / 1.386936 (-0.968718) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#66cef090c55d3561412468d94cb545b47fb000fb \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005142 / 0.011353 (-0.006211) | 0.004180 / 0.011008 (-0.006829) | 0.062647 / 0.038508 (0.024139) | 0.055072 / 0.023109 (0.031962) | 0.254681 / 0.275898 (-0.021217) | 0.282650 / 0.323480 (-0.040830) | 0.003950 / 0.007986 (-0.004035) | 0.002862 / 0.004328 (-0.001466) | 0.048420 / 0.004250 (0.044170) | 0.038447 / 0.037052 (0.001394) | 0.258160 / 0.258489 (-0.000329) | 0.288596 / 0.293841 (-0.005245) | 0.027898 / 0.128546 (-0.100648) | 0.011165 / 0.075646 (-0.064482) | 0.206844 / 0.419271 (-0.212427) | 0.036312 / 0.043533 (-0.007221) | 0.257957 / 0.255139 (0.002819) | 0.277387 / 0.283200 (-0.005812) | 0.018205 / 0.141683 (-0.123478) | 1.109870 / 1.452155 (-0.342284) | 1.175005 / 1.492716 (-0.317712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096692 / 0.018006 (0.078686) | 0.307463 / 0.000490 (0.306973) | 0.000218 / 0.000200 (0.000018) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018602 / 0.037411 (-0.018809) | 0.061489 / 0.014526 (0.046964) | 0.072936 / 0.176557 (-0.103620) | 0.119863 / 0.737135 (-0.617272) | 0.073983 / 0.296338 (-0.222355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291444 / 0.215209 (0.076235) | 2.849024 / 2.077655 (0.771369) | 1.533121 / 1.504120 (0.029001) | 1.402148 / 1.541195 (-0.139046) | 1.406397 / 
1.468490 (-0.062094) | 0.564241 / 4.584777 (-4.020536) | 2.402052 / 3.745712 (-1.343660) | 2.772639 / 5.269862 (-2.497223) | 1.732342 / 4.565676 (-2.833334) | 0.062361 / 0.424275 (-0.361914) | 0.004945 / 0.007607 (-0.002662) | 0.355841 / 0.226044 (0.129797) | 3.426931 / 2.268929 (1.158003) | 1.865412 / 55.444624 (-53.579212) | 1.592628 / 6.876477 (-5.283849) | 1.662364 / 2.142072 (-0.479708) | 0.653278 / 4.805227 (-4.151949) | 0.118626 / 6.500664 (-6.382038) | 0.042961 / 0.075469 (-0.032508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956279 / 1.841788 (-0.885509) | 11.635540 / 8.074308 (3.561232) | 10.719590 / 10.191392 (0.528198) | 0.130015 / 0.680424 (-0.550409) | 0.014424 / 0.534201 (-0.519777) | 0.288135 / 0.579283 (-0.291148) | 0.270819 / 0.434364 (-0.163545) | 0.320238 / 0.540337 (-0.220099) | 0.421044 / 1.386936 (-0.965892) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005201 / 0.011353 (-0.006152) | 0.003467 / 0.011008 (-0.007541) | 0.048939 / 0.038508 (0.010431) | 0.051841 / 0.023109 (0.028732) | 0.273708 / 0.275898 (-0.002190) | 0.293491 / 0.323480 (-0.029988) | 0.004830 / 0.007986 (-0.003156) | 0.002696 / 0.004328 (-0.001632) | 0.047727 / 0.004250 (0.043476) | 0.041319 / 0.037052 (0.004266) | 0.273837 / 0.258489 (0.015348) | 0.309860 / 0.293841 (0.016019) | 0.029054 / 0.128546 (-0.099492) | 0.010410 / 0.075646 (-0.065237) | 0.058139 / 0.419271 (-0.361133) | 0.032682 / 0.043533 (-0.010850) | 0.273244 / 0.255139 (0.018105) | 0.291579 / 0.283200 (0.008380) | 0.018262 / 0.141683 (-0.123421) | 1.144590 / 1.452155 (-0.307565) | 1.202474 / 1.492716 (-0.290243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097110 / 0.018006 (0.079104) | 0.307344 / 0.000490 (0.306854) | 0.000229 / 0.000200 (0.000029) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022263 / 0.037411 (-0.015148) | 0.070140 / 0.014526 (0.055614) | 0.081251 / 0.176557 (-0.095306) | 0.120839 / 0.737135 (-0.616297) | 0.083312 / 0.296338 (-0.213026) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297381 / 0.215209 (0.082172) | 2.895530 / 2.077655 (0.817875) | 1.608442 / 1.504120 (0.104322) | 1.476237 / 1.541195 (-0.064958) | 1.491306 / 1.468490 (0.022816) | 0.567272 / 4.584777 (-4.017505) | 2.463543 / 3.745712 (-1.282170) | 2.814764 / 5.269862 (-2.455098) | 1.725845 / 4.565676 (-2.839831) | 0.064149 / 0.424275 (-0.360126) | 0.004953 / 0.007607 (-0.002654) | 0.359629 / 0.226044 (0.133585) | 3.482414 / 2.268929 (1.213486) | 1.949897 / 55.444624 (-53.494727) | 1.677383 / 6.876477 (-5.199094) | 1.683655 / 2.142072 (-0.458418) | 0.645671 / 4.805227 (-4.159557) | 0.115612 / 6.500664 (-6.385053) | 0.041013 / 0.075469 (-0.034456) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967843 / 1.841788 (-0.873945) | 12.376877 / 8.074308 (4.302569) | 10.988174 / 10.191392 (0.796782) | 0.134660 / 0.680424 (-0.545764) | 0.015801 / 0.534201 (-0.518400) | 0.288699 / 0.579283 (-0.290584) | 0.284887 / 0.434364 (-0.149477) | 0.322000 / 0.540337 (-0.218337) | 0.412360 / 1.386936 (-0.974576) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#148454d48b7c36507a283217c7c0e3bcc0539f75 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005407 / 0.011353 (-0.005946) | 0.003496 / 0.011008 (-0.007512) | 0.062730 / 0.038508 (0.024222) | 0.051882 / 0.023109 (0.028773) | 0.244766 / 0.275898 (-0.031132) | 0.257963 / 0.323480 (-0.065516) | 0.002894 / 0.007986 (-0.005092) | 0.002567 / 0.004328 (-0.001761) | 0.048756 / 0.004250 (0.044506) | 0.039024 / 0.037052 (0.001971) | 0.247303 / 0.258489 (-0.011186) | 0.278341 / 0.293841 (-0.015500) | 0.026725 / 0.128546 (-0.101821) | 0.010577 / 0.075646 (-0.065069) | 0.210483 / 0.419271 (-0.208789) | 0.035230 / 0.043533 (-0.008303) | 0.246125 / 0.255139 (-0.009014) | 0.264039 / 0.283200 (-0.019160) | 0.019881 / 0.141683 (-0.121802) | 1.113475 / 1.452155 (-0.338679) | 1.149606 / 1.492716 (-0.343110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092946 / 0.018006 (0.074940) | 0.299985 / 0.000490 (0.299495) | 0.000215 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018421 / 0.037411 (-0.018991) | 0.060531 / 0.014526 (0.046005) | 0.074459 / 0.176557 (-0.102098) | 0.120369 / 0.737135 (-0.616766) | 0.075505 / 0.296338 (-0.220833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289497 / 0.215209 (0.074288) | 2.783139 / 2.077655 (0.705485) | 1.482533 / 1.504120 (-0.021587) | 1.371013 / 1.541195 (-0.170182) | 1.379114 / 
1.468490 (-0.089376) | 0.563953 / 4.584777 (-4.020824) | 2.389996 / 3.745712 (-1.355716) | 2.788067 / 5.269862 (-2.481795) | 1.751772 / 4.565676 (-2.813904) | 0.062680 / 0.424275 (-0.361595) | 0.004901 / 0.007607 (-0.002706) | 0.365193 / 0.226044 (0.139149) | 3.389181 / 2.268929 (1.120252) | 1.861659 / 55.444624 (-53.582965) | 1.558899 / 6.876477 (-5.317577) | 1.591079 / 2.142072 (-0.550993) | 0.648300 / 4.805227 (-4.156927) | 0.117486 / 6.500664 (-6.383178) | 0.041961 / 0.075469 (-0.033508) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.944391 / 1.841788 (-0.897396) | 11.500823 / 8.074308 (3.426515) | 10.580430 / 10.191392 (0.389038) | 0.142845 / 0.680424 (-0.537579) | 0.014305 / 0.534201 (-0.519896) | 0.290723 / 0.579283 (-0.288560) | 0.266206 / 0.434364 (-0.168158) | 0.325482 / 0.540337 (-0.214856) | 0.416224 / 1.386936 (-0.970712) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005363 / 0.011353 (-0.005990) | 0.003548 / 0.011008 (-0.007460) | 0.048704 / 0.038508 (0.010196) | 0.051025 / 0.023109 (0.027916) | 0.273037 / 0.275898 (-0.002861) | 0.297148 / 0.323480 (-0.026332) | 0.003985 / 0.007986 (-0.004001) | 0.002739 / 0.004328 (-0.001590) | 0.048108 / 0.004250 (0.043857) | 0.040244 / 0.037052 (0.003191) | 0.277825 / 0.258489 (0.019336) | 0.303704 / 0.293841 (0.009863) | 0.029460 / 0.128546 (-0.099086) | 0.010428 / 0.075646 (-0.065218) | 0.057022 / 0.419271 (-0.362249) | 0.032711 / 0.043533 (-0.010822) | 0.274462 / 0.255139 (0.019323) | 0.293499 / 0.283200 (0.010299) | 0.018266 / 0.141683 (-0.123417) | 1.158049 / 1.452155 (-0.294106) | 1.170097 / 1.492716 (-0.322620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093412 / 0.018006 (0.075406) | 0.301538 / 0.000490 (0.301049) | 0.000222 / 0.000200 (0.000022) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021698 / 0.037411 (-0.015713) | 0.068735 / 0.014526 (0.054209) | 0.083010 / 0.176557 (-0.093546) | 0.127491 / 0.737135 (-0.609644) | 0.083005 / 0.296338 (-0.213333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298299 / 0.215209 (0.083090) | 2.894209 / 2.077655 (0.816554) | 1.597455 / 1.504120 (0.093335) | 1.472953 / 1.541195 (-0.068241) | 1.491553 / 1.468490 (0.023063) | 0.556566 / 4.584777 (-4.028211) | 2.419429 / 3.745712 (-1.326283) | 2.788706 / 5.269862 (-2.481156) | 1.759888 / 4.565676 (-2.805789) | 0.062535 / 0.424275 (-0.361740) | 0.004959 / 0.007607 (-0.002648) | 0.345226 / 0.226044 (0.119182) | 3.438539 / 2.268929 (1.169611) | 1.943842 / 55.444624 (-53.500782) | 1.661080 / 6.876477 (-5.215397) | 1.687632 / 2.142072 (-0.454440) | 0.639971 / 4.805227 (-4.165256) | 0.116012 / 6.500664 (-6.384652) | 0.041723 / 0.075469 (-0.033746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965143 / 1.841788 (-0.876645) | 12.086547 / 8.074308 (4.012238) | 10.708787 / 10.191392 (0.517395) | 0.129506 / 0.680424 (-0.550918) | 0.015254 / 0.534201 (-0.518947) | 0.288326 / 0.579283 (-0.290957) | 0.271976 / 0.434364 (-0.162388) | 0.328402 / 0.540337 (-0.211936) | 0.418102 / 1.386936 (-0.968834) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#18b6f13ede3dccedf335bb2d8ff04db306dc710a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005375 / 0.011353 (-0.005978) | 0.003530 / 0.011008 (-0.007478) | 0.062521 / 0.038508 (0.024013) | 0.051514 / 0.023109 (0.028405) | 0.241623 / 0.275898 (-0.034275) | 0.269054 / 0.323480 (-0.054426) | 0.002877 / 0.007986 (-0.005109) | 0.002724 / 0.004328 (-0.001605) | 0.049045 / 0.004250 (0.044794) | 0.038560 / 0.037052 (0.001507) | 0.248437 / 0.258489 (-0.010052) | 0.276762 / 0.293841 (-0.017079) | 0.027522 / 0.128546 (-0.101024) | 0.010817 / 0.075646 (-0.064829) | 0.208686 / 0.419271 (-0.210585) | 0.035818 / 0.043533 (-0.007715) | 0.249398 / 0.255139 (-0.005741) | 0.268288 / 0.283200 (-0.014911) | 0.019039 / 0.141683 (-0.122644) | 1.135115 / 1.452155 (-0.317040) | 1.195531 / 1.492716 (-0.297185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093126 / 0.018006 (0.075120) | 0.301028 / 0.000490 (0.300539) | 0.000222 / 0.000200 (0.000023) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018385 / 0.037411 (-0.019027) | 0.060902 / 0.014526 (0.046376) | 0.073168 / 0.176557 (-0.103389) | 0.119216 / 0.737135 (-0.617919) | 0.074225 / 0.296338 (-0.222114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283749 / 0.215209 (0.068540) | 2.741609 / 2.077655 (0.663954) | 1.483439 / 1.504120 (-0.020681) | 1.352896 / 1.541195 (-0.188299) | 1.378824 / 
1.468490 (-0.089667) | 0.548731 / 4.584777 (-4.036046) | 2.342717 / 3.745712 (-1.402995) | 2.791592 / 5.269862 (-2.478269) | 1.740605 / 4.565676 (-2.825071) | 0.062059 / 0.424275 (-0.362216) | 0.005028 / 0.007607 (-0.002579) | 0.339205 / 0.226044 (0.113161) | 3.353386 / 2.268929 (1.084458) | 1.785717 / 55.444624 (-53.658907) | 1.523390 / 6.876477 (-5.353086) | 1.556999 / 2.142072 (-0.585073) | 0.636745 / 4.805227 (-4.168483) | 0.115821 / 6.500664 (-6.384843) | 0.042200 / 0.075469 (-0.033269) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948678 / 1.841788 (-0.893110) | 11.588670 / 8.074308 (3.514362) | 10.897130 / 10.191392 (0.705738) | 0.140068 / 0.680424 (-0.540356) | 0.014565 / 0.534201 (-0.519636) | 0.286336 / 0.579283 (-0.292947) | 0.265292 / 0.434364 (-0.169072) | 0.324146 / 0.540337 (-0.216192) | 0.413463 / 1.386936 (-0.973473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005187 / 0.011353 (-0.006165) | 0.003471 / 0.011008 (-0.007537) | 0.048968 / 0.038508 (0.010460) | 0.051285 / 0.023109 (0.028176) | 0.283286 / 0.275898 (0.007388) | 0.307046 / 0.323480 (-0.016434) | 0.004017 / 0.007986 (-0.003969) | 0.002655 / 0.004328 (-0.001673) | 0.047762 / 0.004250 (0.043512) | 0.039855 / 0.037052 (0.002803) | 0.283101 / 0.258489 (0.024612) | 0.312905 / 0.293841 (0.019064) | 0.028188 / 0.128546 (-0.100358) | 0.010849 / 0.075646 (-0.064797) | 0.058112 / 0.419271 (-0.361159) | 0.032163 / 0.043533 (-0.011369) | 0.280825 / 0.255139 (0.025686) | 0.300946 / 0.283200 (0.017747) | 0.017409 / 0.141683 (-0.124274) | 1.127360 / 1.452155 (-0.324795) | 1.180409 / 1.492716 (-0.312307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093186 / 0.018006 (0.075180) | 0.300827 / 0.000490 (0.300338) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021560 / 0.037411 (-0.015851) | 0.069158 / 0.014526 (0.054632) | 0.080953 / 0.176557 (-0.095603) | 0.119071 / 0.737135 (-0.618064) | 0.082817 / 0.296338 (-0.213521) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.307259 / 0.215209 (0.092050) | 2.996058 / 2.077655 (0.918404) | 1.627406 / 1.504120 (0.123286) | 1.500715 / 1.541195 (-0.040480) | 1.524278 / 1.468490 (0.055788) | 0.569711 / 4.584777 (-4.015066) | 2.436132 / 3.745712 (-1.309580) | 2.796995 / 5.269862 (-2.472866) | 1.760701 / 4.565676 (-2.804975) | 0.063521 / 0.424275 (-0.360754) | 0.004909 / 0.007607 (-0.002698) | 0.359129 / 0.226044 (0.133085) | 3.567278 / 2.268929 (1.298349) | 2.013821 / 55.444624 (-53.430804) | 1.708021 / 6.876477 (-5.168456) | 1.738959 / 2.142072 (-0.403114) | 0.648620 / 4.805227 (-4.156607) | 0.122016 / 6.500664 (-6.378648) | 0.041802 / 0.075469 (-0.033667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985208 / 1.841788 (-0.856579) | 12.307785 / 8.074308 (4.233477) | 10.587262 / 10.191392 (0.395870) | 0.130468 / 0.680424 (-0.549956) | 0.014912 / 0.534201 (-0.519289) | 0.293822 / 0.579283 (-0.285461) | 0.283021 / 0.434364 (-0.151343) | 0.329560 / 0.540337 (-0.210777) | 0.424741 / 1.386936 (-0.962195) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04426d9c8e0aa5c97af2826064287f8cab6bece0 \"CML watermark\")\n"
] | 2023-11-27T20:01:25 | 2023-11-28T16:29:58 | 2023-11-28T16:29:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6454",
"html_url": "https://github.com/huggingface/datasets/pull/6454",
"diff_url": "https://github.com/huggingface/datasets/pull/6454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6454.patch",
"merged_at": "2023-11-28T16:29:31"
} | Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way)
It makes the following improvements to the serialization API:
* consistent order of a `dict`'s keys
* support for hashing `torch.compile`-ed modules and functions
* deprecates `datasets.fingerprint.hashregister`, as the `hashregister`-ed reducers are never invoked anyway (the mechanism does not support nested data the way `pickle`/`dill` do)
~~TODO: optimize hashing of `pa.Table` and `datasets.table.Table`~~ The `pa_array.to_string` approach is faster for large arrays because it only outputs the first 10 and last 10 elements (by default). The problem is that this can produce identical hashes for non-identical arrays if the elements that differ fall in the elided middle section...
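A minimal sketch (not taken from the PR) of the consistent-key-order behavior described above, using the public `Hasher` helper from `datasets.fingerprint`; the dict contents here are arbitrary placeholders:
```
from datasets.fingerprint import Hasher

a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}  # same items, different insertion order

# With dict keys handled in a consistent order during hashing,
# both dicts should produce the same fingerprint.
assert Hasher.hash(a) == Hasher.hash(b)
```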
Fix https://github.com/huggingface/datasets/issues/6440, fix https://github.com/huggingface/datasets/issues/5839 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6454/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6453/comments | https://api.github.com/repos/huggingface/datasets/issues/6453/events | https://github.com/huggingface/datasets/pull/6453 | 2,011,907,787 | PR_kwDODunzps5ga0rv | 6,453 | Update hub-docs reference | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005119 / 0.011353 (-0.006234) | 0.003469 / 0.011008 (-0.007540) | 0.061791 / 0.038508 (0.023283) | 0.051655 / 0.023109 (0.028545) | 0.241157 / 0.275898 (-0.034741) | 0.265930 / 0.323480 (-0.057549) | 0.003851 / 0.007986 (-0.004134) | 0.002412 / 0.004328 (-0.001916) | 0.047498 / 0.004250 (0.043247) | 0.037328 / 0.037052 (0.000276) | 0.250418 / 0.258489 (-0.008071) | 0.277842 / 0.293841 (-0.015999) | 0.027626 / 0.128546 (-0.100920) | 0.009947 / 0.075646 (-0.065699) | 0.204549 / 0.419271 (-0.214722) | 0.037546 / 0.043533 (-0.005987) | 0.245383 / 0.255139 (-0.009756) | 0.263486 / 0.283200 (-0.019713) | 0.017792 / 0.141683 (-0.123891) | 1.158900 / 1.452155 (-0.293255) | 1.194060 / 1.492716 (-0.298657) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090607 / 0.018006 (0.072601) | 0.299909 / 0.000490 (0.299419) | 0.000206 / 0.000200 (0.000006) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018814 / 0.037411 (-0.018597) | 0.062068 / 0.014526 (0.047542) | 0.087221 / 0.176557 (-0.089336) | 0.119594 / 0.737135 (-0.617541) | 0.075485 / 0.296338 (-0.220853) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286093 / 0.215209 (0.070884) | 2.767396 / 2.077655 (0.689741) | 1.500472 / 1.504120 (-0.003648) | 1.389514 / 1.541195 (-0.151680) | 1.438933 / 
1.468490 (-0.029557) | 0.562545 / 4.584777 (-4.022232) | 2.383330 / 3.745712 (-1.362382) | 2.799215 / 5.269862 (-2.470647) | 1.732618 / 4.565676 (-2.833058) | 0.061282 / 0.424275 (-0.362993) | 0.005007 / 0.007607 (-0.002601) | 0.339769 / 0.226044 (0.113725) | 3.337146 / 2.268929 (1.068218) | 1.890789 / 55.444624 (-53.553836) | 1.593555 / 6.876477 (-5.282922) | 1.660016 / 2.142072 (-0.482057) | 0.632452 / 4.805227 (-4.172775) | 0.115503 / 6.500664 (-6.385161) | 0.041590 / 0.075469 (-0.033880) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941966 / 1.841788 (-0.899822) | 11.470271 / 8.074308 (3.395963) | 10.579454 / 10.191392 (0.388062) | 0.140970 / 0.680424 (-0.539454) | 0.014057 / 0.534201 (-0.520144) | 0.289326 / 0.579283 (-0.289957) | 0.265366 / 0.434364 (-0.168998) | 0.324612 / 0.540337 (-0.215726) | 0.415832 / 1.386936 (-0.971104) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005208 / 0.011353 (-0.006145) | 0.003199 / 0.011008 (-0.007809) | 0.048299 / 0.038508 (0.009791) | 0.050727 / 0.023109 (0.027618) | 0.274897 / 0.275898 (-0.001001) | 0.298328 / 0.323480 (-0.025152) | 0.003989 / 0.007986 (-0.003997) | 0.002439 / 0.004328 (-0.001890) | 0.047308 / 0.004250 (0.043058) | 0.039726 / 0.037052 (0.002673) | 0.276279 / 0.258489 (0.017790) | 0.303679 / 0.293841 (0.009838) | 0.028943 / 0.128546 (-0.099603) | 0.010223 / 0.075646 (-0.065423) | 0.056694 / 0.419271 (-0.362577) | 0.032283 / 0.043533 (-0.011250) | 0.275344 / 0.255139 (0.020205) | 0.296358 / 0.283200 (0.013158) | 0.017481 / 0.141683 (-0.124201) | 1.131063 / 1.452155 (-0.321092) | 1.181146 / 1.492716 (-0.311570) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092259 / 0.018006 (0.074253) | 0.299381 / 0.000490 (0.298891) | 0.000216 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021693 / 0.037411 (-0.015718) | 0.070441 / 0.014526 (0.055916) | 0.080648 / 0.176557 (-0.095908) | 0.119002 / 0.737135 (-0.618133) | 0.081412 / 0.296338 (-0.214926) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296475 / 0.215209 (0.081266) | 2.905098 / 2.077655 (0.827443) | 1.596321 / 1.504120 (0.092201) | 1.472640 / 1.541195 (-0.068555) | 1.484453 / 1.468490 (0.015963) | 0.565229 / 4.584777 (-4.019548) | 2.390631 / 3.745712 (-1.355081) | 2.765125 / 5.269862 (-2.504737) | 1.738993 / 4.565676 (-2.826683) | 0.063034 / 0.424275 (-0.361241) | 0.004891 / 0.007607 (-0.002716) | 0.350678 / 0.226044 (0.124633) | 3.530919 / 2.268929 (1.261990) | 1.943758 / 55.444624 (-53.500867) | 1.665553 / 6.876477 (-5.210924) | 1.656990 / 2.142072 (-0.485083) | 0.647027 / 4.805227 (-4.158201) | 0.116771 / 6.500664 (-6.383893) | 0.041012 / 0.075469 (-0.034457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.034226 / 1.841788 (-0.807561) | 12.036726 / 8.074308 (3.962418) | 10.934239 / 10.191392 (0.742847) | 0.130142 / 0.680424 (-0.550281) | 0.015537 / 0.534201 (-0.518664) | 0.286020 / 0.579283 (-0.293263) | 0.276739 / 0.434364 (-0.157625) | 0.326284 / 0.540337 (-0.214054) | 0.413392 / 1.386936 (-0.973544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4787c0022c8b59c15256021478b444a6c51fa984 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005400 / 0.011353 (-0.005953) | 0.003415 / 0.011008 (-0.007593) | 0.062416 / 0.038508 (0.023908) | 0.055962 / 0.023109 (0.032853) | 0.234725 / 0.275898 (-0.041173) | 0.261775 / 0.323480 (-0.061705) | 0.002868 / 0.007986 (-0.005118) | 0.002426 / 0.004328 (-0.001902) | 0.047989 / 0.004250 (0.043738) | 0.039214 / 0.037052 (0.002162) | 0.246068 / 0.258489 (-0.012421) | 0.270245 / 0.293841 (-0.023596) | 0.027558 / 0.128546 (-0.100988) | 0.010256 / 0.075646 (-0.065390) | 0.210988 / 0.419271 (-0.208283) | 0.035684 / 0.043533 (-0.007849) | 0.245254 / 0.255139 (-0.009885) | 0.255476 / 0.283200 (-0.027724) | 0.018495 / 0.141683 (-0.123188) | 1.115458 / 1.452155 (-0.336697) | 1.166149 / 1.492716 (-0.326567) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092736 / 0.018006 (0.074730) | 0.301040 / 0.000490 (0.300550) | 0.000213 / 0.000200 (0.000013) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018607 / 0.037411 (-0.018805) | 0.062189 / 0.014526 (0.047664) | 0.073782 / 0.176557 (-0.102775) | 0.119895 / 0.737135 (-0.617240) | 0.074907 / 0.296338 (-0.221431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283986 / 0.215209 (0.068777) | 2.824498 / 2.077655 (0.746844) | 1.505848 / 1.504120 (0.001728) | 1.358879 / 1.541195 (-0.182316) | 1.357087 / 
1.468490 (-0.111403) | 0.574307 / 4.584777 (-4.010470) | 2.416478 / 3.745712 (-1.329234) | 2.772909 / 5.269862 (-2.496953) | 1.750395 / 4.565676 (-2.815282) | 0.062465 / 0.424275 (-0.361810) | 0.004983 / 0.007607 (-0.002624) | 0.344490 / 0.226044 (0.118445) | 3.405062 / 2.268929 (1.136134) | 1.854972 / 55.444624 (-53.589653) | 1.572789 / 6.876477 (-5.303687) | 1.586109 / 2.142072 (-0.555963) | 0.647431 / 4.805227 (-4.157797) | 0.123079 / 6.500664 (-6.377585) | 0.042766 / 0.075469 (-0.032703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950493 / 1.841788 (-0.891295) | 11.814821 / 8.074308 (3.740513) | 10.494768 / 10.191392 (0.303376) | 0.131322 / 0.680424 (-0.549102) | 0.015253 / 0.534201 (-0.518948) | 0.287405 / 0.579283 (-0.291878) | 0.269664 / 0.434364 (-0.164699) | 0.322700 / 0.540337 (-0.217637) | 0.424103 / 1.386936 (-0.962833) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005264 / 0.011353 (-0.006088) | 0.003304 / 0.011008 (-0.007704) | 0.048531 / 0.038508 (0.010023) | 0.052752 / 0.023109 (0.029643) | 0.274435 / 0.275898 (-0.001463) | 0.297500 / 0.323480 (-0.025980) | 0.003977 / 0.007986 (-0.004009) | 0.002444 / 0.004328 (-0.001884) | 0.048464 / 0.004250 (0.044214) | 0.040192 / 0.037052 (0.003139) | 0.278256 / 0.258489 (0.019767) | 0.303627 / 0.293841 (0.009786) | 0.028709 / 0.128546 (-0.099837) | 0.010530 / 0.075646 (-0.065117) | 0.057427 / 0.419271 (-0.361844) | 0.032539 / 0.043533 (-0.010994) | 0.272237 / 0.255139 (0.017098) | 0.295288 / 0.283200 (0.012088) | 0.018820 / 0.141683 (-0.122862) | 1.116100 / 1.452155 (-0.336055) | 1.180124 / 1.492716 (-0.312592) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092651 / 0.018006 (0.074644) | 0.301481 / 0.000490 (0.300991) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022461 / 0.037411 (-0.014951) | 0.070623 / 0.014526 (0.056097) | 0.082642 / 0.176557 (-0.093915) | 0.120021 / 0.737135 (-0.617114) | 0.083387 / 0.296338 (-0.212952) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291451 / 0.215209 (0.076242) | 2.865602 / 2.077655 (0.787947) | 1.592051 / 1.504120 (0.087931) | 1.463521 / 1.541195 (-0.077673) | 1.498899 / 1.468490 (0.030409) | 0.570854 / 4.584777 (-4.013923) | 2.410002 / 3.745712 (-1.335710) | 2.768028 / 5.269862 (-2.501834) | 1.740463 / 4.565676 (-2.825214) | 0.063801 / 0.424275 (-0.360474) | 0.005019 / 0.007607 (-0.002588) | 0.348353 / 0.226044 (0.122309) | 3.425793 / 2.268929 (1.156864) | 1.957294 / 55.444624 (-53.487331) | 1.696121 / 6.876477 (-5.180355) | 1.691544 / 2.142072 (-0.450528) | 0.645528 / 4.805227 (-4.159700) | 0.118876 / 6.500664 (-6.381788) | 0.041001 / 0.075469 (-0.034469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983805 / 1.841788 (-0.857983) | 12.085909 / 8.074308 (4.011600) | 10.835395 / 10.191392 (0.644003) | 0.141971 / 0.680424 (-0.538453) | 0.015534 / 0.534201 (-0.518667) | 0.289289 / 0.579283 (-0.289994) | 0.276316 / 0.434364 (-0.158048) | 0.354577 / 0.540337 (-0.185761) | 0.421824 / 1.386936 (-0.965112) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#27d1fe52857c6a25a29cac63a296405136b2797c \"CML watermark\")\n"
] | 2023-11-27T09:57:20 | 2023-11-27T10:23:44 | 2023-11-27T10:17:34 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6453",
"html_url": "https://github.com/huggingface/datasets/pull/6453",
"diff_url": "https://github.com/huggingface/datasets/pull/6453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6453.patch",
"merged_at": "2023-11-27T10:17:34"
} | Follow-up to huggingface/huggingface.js#296 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6453/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6452/comments | https://api.github.com/repos/huggingface/datasets/issues/6452/events | https://github.com/huggingface/datasets/pull/6452 | 2,011,632,708 | PR_kwDODunzps5gZ5oe | 6,452 | Praveen_repo_pull_req | {
"login": "Praveenhh",
"id": 151713216,
"node_id": "U_kgDOCQr1wA",
"avatar_url": "https://avatars.githubusercontent.com/u/151713216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Praveenhh",
"html_url": "https://github.com/Praveenhh",
"followers_url": "https://api.github.com/users/Praveenhh/followers",
"following_url": "https://api.github.com/users/Praveenhh/following{/other_user}",
"gists_url": "https://api.github.com/users/Praveenhh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Praveenhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praveenhh/subscriptions",
"organizations_url": "https://api.github.com/users/Praveenhh/orgs",
"repos_url": "https://api.github.com/users/Praveenhh/repos",
"events_url": "https://api.github.com/users/Praveenhh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Praveenhh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-11-27T07:07:50 | 2023-11-27T09:28:00 | 2023-11-27T09:28:00 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6452",
"html_url": "https://github.com/huggingface/datasets/pull/6452",
"diff_url": "https://github.com/huggingface/datasets/pull/6452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6452.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6452/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6451/comments | https://api.github.com/repos/huggingface/datasets/issues/6451/events | https://github.com/huggingface/datasets/issues/6451 | 2,010,693,912 | I_kwDODunzps532MEY | 6,451 | Unable to read "marsyas/gtzan" data | {
"login": "gerald-wrona",
"id": 32300890,
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gerald-wrona",
"html_url": "https://github.com/gerald-wrona",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! We've merged a [PR](https://huggingface.co./datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.",
"I have transferred the discussion to the corresponding dataset: https://huggingface.co./datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.",
"@mariosasko @albertvillanova \r\n\r\nThank you both very much for the speedy resolution :)"
] | 2023-11-25T15:13:17 | 2023-12-01T12:53:46 | 2023-11-27T09:36:25 | NONE | null | null | null | Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both of the examples below work fine:
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co./datasets/marsyas/gtzan/tree/main | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6451/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6450/comments | https://api.github.com/repos/huggingface/datasets/issues/6450/events | https://github.com/huggingface/datasets/issues/6450 | 2,009,491,386 | I_kwDODunzps53xme6 | 6,450 | Support multiple image/audio columns in ImageFolder/AudioFolder | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | [
"A duplicate of https://github.com/huggingface/datasets/issues/5760"
] | 2023-11-24T10:34:09 | 2023-11-28T11:07:17 | 2023-11-24T17:24:38 | CONTRIBUTOR | null | null | null | ### Feature request
Support a metadata.csv file with multiple columns that point to relative image or audio files.
### Motivation
Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. In the same way, AudioFolder allows one column, called `file_name`, pointing to relative audio files.
But it's not possible to have two image columns, two audio columns, or one audio column and one image column. A hypothetical layout is sketched below.
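For illustration, a hypothetical `metadata.csv` layout for the requested behavior (the second column name and the file paths are made up, since no convention for multiple media columns exists yet):
```
file_name,file_name_2,caption
images/before/0001.png,images/after/0001.png,"roof repaired"
images/before/0002.png,images/after/0002.png,"window replaced"
```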
### Your contribution
no specific contribution | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6450/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6449/comments | https://api.github.com/repos/huggingface/datasets/issues/6449/events | https://github.com/huggingface/datasets/pull/6449 | 2,008,617,992 | PR_kwDODunzps5gQCVZ | 6,449 | Fix metadata file resolution when inferred pattern is `**` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005551 / 0.011353 (-0.005802) | 0.003297 / 0.011008 (-0.007711) | 0.062524 / 0.038508 (0.024016) | 0.058467 / 0.023109 (0.035358) | 0.255703 / 0.275898 (-0.020195) | 0.281420 / 0.323480 (-0.042060) | 0.003857 / 0.007986 (-0.004129) | 0.002460 / 0.004328 (-0.001868) | 0.047762 / 0.004250 (0.043512) | 0.038757 / 0.037052 (0.001705) | 0.259937 / 0.258489 (0.001448) | 0.290050 / 0.293841 (-0.003791) | 0.028433 / 0.128546 (-0.100113) | 0.010422 / 0.075646 (-0.065224) | 0.207135 / 0.419271 (-0.212136) | 0.036004 / 0.043533 (-0.007529) | 0.268137 / 0.255139 (0.012998) | 0.275020 / 0.283200 (-0.008179) | 0.018301 / 0.141683 (-0.123382) | 1.095479 / 1.452155 (-0.356676) | 1.145452 / 1.492716 (-0.347265) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092046 / 0.018006 (0.074040) | 0.299784 / 0.000490 (0.299294) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019071 / 0.037411 (-0.018340) | 0.072836 / 0.014526 (0.058310) | 0.073974 / 0.176557 (-0.102583) | 0.120903 / 0.737135 (-0.616232) | 0.075740 / 0.296338 (-0.220599) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276365 / 0.215209 (0.061156) | 2.671217 / 2.077655 (0.593563) | 1.438862 / 1.504120 (-0.065258) | 1.327348 / 1.541195 (-0.213847) | 1.349514 / 
1.468490 (-0.118976) | 0.548793 / 4.584777 (-4.035984) | 2.364458 / 3.745712 (-1.381255) | 2.716205 / 5.269862 (-2.553657) | 1.735714 / 4.565676 (-2.829963) | 0.061140 / 0.424275 (-0.363135) | 0.004926 / 0.007607 (-0.002681) | 0.330449 / 0.226044 (0.104404) | 3.255243 / 2.268929 (0.986315) | 1.824254 / 55.444624 (-53.620371) | 1.540262 / 6.876477 (-5.336215) | 1.535632 / 2.142072 (-0.606441) | 0.635224 / 4.805227 (-4.170003) | 0.116230 / 6.500664 (-6.384435) | 0.042706 / 0.075469 (-0.032763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948796 / 1.841788 (-0.892992) | 11.448403 / 8.074308 (3.374095) | 10.523862 / 10.191392 (0.332470) | 0.129694 / 0.680424 (-0.550730) | 0.014146 / 0.534201 (-0.520055) | 0.285706 / 0.579283 (-0.293577) | 0.262572 / 0.434364 (-0.171792) | 0.321251 / 0.540337 (-0.219087) | 0.417130 / 1.386936 (-0.969806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005266 / 0.011353 (-0.006086) | 0.003339 / 0.011008 (-0.007670) | 0.048411 / 0.038508 (0.009903) | 0.053951 / 0.023109 (0.030842) | 0.271228 / 0.275898 (-0.004670) | 0.290066 / 0.323480 (-0.033414) | 0.004087 / 0.007986 (-0.003898) | 0.002446 / 0.004328 (-0.001882) | 0.047049 / 0.004250 (0.042798) | 0.040866 / 0.037052 (0.003813) | 0.273711 / 0.258489 (0.015222) | 0.298192 / 0.293841 (0.004351) | 0.029025 / 0.128546 (-0.099521) | 0.010479 / 0.075646 (-0.065167) | 0.056941 / 0.419271 (-0.362330) | 0.032914 / 0.043533 (-0.010619) | 0.270432 / 0.255139 (0.015293) | 0.291274 / 0.283200 (0.008074) | 0.018602 / 0.141683 (-0.123081) | 1.136707 / 1.452155 (-0.315447) | 1.184704 / 1.492716 (-0.308012) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090041 / 0.018006 (0.072035) | 0.300185 / 0.000490 (0.299696) | 0.000221 / 0.000200 (0.000022) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022074 / 0.037411 (-0.015337) | 0.070763 / 0.014526 (0.056237) | 0.082141 / 0.176557 (-0.094415) | 0.120286 / 0.737135 (-0.616850) | 0.082680 / 0.296338 (-0.213659) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292223 / 0.215209 (0.077014) | 2.856711 / 2.077655 (0.779056) | 1.581194 / 1.504120 (0.077075) | 1.496567 / 1.541195 (-0.044628) | 1.485256 / 1.468490 (0.016766) | 0.550633 / 4.584777 (-4.034144) | 2.420281 / 3.745712 (-1.325431) | 2.764373 / 5.269862 (-2.505489) | 1.735958 / 4.565676 (-2.829719) | 0.062562 / 0.424275 (-0.361714) | 0.004918 / 0.007607 (-0.002689) | 0.346038 / 0.226044 (0.119994) | 3.443478 / 2.268929 (1.174550) | 1.949366 / 55.444624 (-53.495259) | 1.686140 / 6.876477 (-5.190337) | 1.683038 / 2.142072 (-0.459034) | 0.629270 / 4.805227 (-4.175958) | 0.114947 / 6.500664 (-6.385717) | 0.040635 / 0.075469 (-0.034834) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969746 / 1.841788 (-0.872041) | 11.922662 / 8.074308 (3.848354) | 10.441432 / 10.191392 (0.250040) | 0.128950 / 0.680424 (-0.551473) | 0.015964 / 0.534201 (-0.518237) | 0.289176 / 0.579283 (-0.290107) | 0.279203 / 0.434364 (-0.155161) | 0.323833 / 0.540337 (-0.216505) | 0.540297 / 1.386936 (-0.846639) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ed759d0f5aea6d166caa0532aa17c209bb3af79 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005288 / 0.011353 (-0.006065) | 0.003383 / 0.011008 (-0.007625) | 0.061926 / 0.038508 (0.023418) | 0.049080 / 0.023109 (0.025971) | 0.244852 / 0.275898 (-0.031046) | 0.263957 / 0.323480 (-0.059523) | 0.002810 / 0.007986 (-0.005175) | 0.002384 / 0.004328 (-0.001945) | 0.047807 / 0.004250 (0.043556) | 0.038374 / 0.037052 (0.001321) | 0.244414 / 0.258489 (-0.014075) | 0.272257 / 0.293841 (-0.021584) | 0.027356 / 0.128546 (-0.101190) | 0.010235 / 0.075646 (-0.065411) | 0.214896 / 0.419271 (-0.204375) | 0.035604 / 0.043533 (-0.007929) | 0.246584 / 0.255139 (-0.008555) | 0.263281 / 0.283200 (-0.019918) | 0.019689 / 0.141683 (-0.121994) | 1.114100 / 1.452155 (-0.338054) | 1.177644 / 1.492716 (-0.315073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088892 / 0.018006 (0.070886) | 0.298128 / 0.000490 (0.297639) | 0.000199 / 0.000200 (-0.000001) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019337 / 0.037411 (-0.018075) | 0.062096 / 0.014526 (0.047570) | 0.073019 / 0.176557 (-0.103537) | 0.118801 / 0.737135 (-0.618334) | 0.074779 / 0.296338 (-0.221559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289892 / 0.215209 (0.074683) | 2.824131 / 2.077655 (0.746476) | 1.466351 / 1.504120 (-0.037768) | 1.339528 / 1.541195 (-0.201667) | 1.369257 / 
1.468490 (-0.099233) | 0.561175 / 4.584777 (-4.023602) | 2.394174 / 3.745712 (-1.351538) | 2.749668 / 5.269862 (-2.520193) | 1.747146 / 4.565676 (-2.818530) | 0.063054 / 0.424275 (-0.361221) | 0.004970 / 0.007607 (-0.002637) | 0.342985 / 0.226044 (0.116941) | 3.334894 / 2.268929 (1.065966) | 1.838459 / 55.444624 (-53.606165) | 1.579755 / 6.876477 (-5.296722) | 1.560200 / 2.142072 (-0.581872) | 0.642643 / 4.805227 (-4.162585) | 0.117741 / 6.500664 (-6.382923) | 0.042440 / 0.075469 (-0.033029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937476 / 1.841788 (-0.904312) | 11.403556 / 8.074308 (3.329248) | 10.317207 / 10.191392 (0.125815) | 0.145277 / 0.680424 (-0.535147) | 0.015297 / 0.534201 (-0.518904) | 0.287511 / 0.579283 (-0.291772) | 0.263516 / 0.434364 (-0.170848) | 0.320803 / 0.540337 (-0.219534) | 0.415580 / 1.386936 (-0.971356) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003506 / 0.011008 (-0.007502) | 0.048635 / 0.038508 (0.010127) | 0.052067 / 0.023109 (0.028957) | 0.277526 / 0.275898 (0.001628) | 0.300536 / 0.323480 (-0.022944) | 0.003982 / 0.007986 (-0.004004) | 0.002413 / 0.004328 (-0.001915) | 0.046523 / 0.004250 (0.042273) | 0.039383 / 0.037052 (0.002331) | 0.281208 / 0.258489 (0.022719) | 0.306199 / 0.293841 (0.012359) | 0.028646 / 0.128546 (-0.099900) | 0.010664 / 0.075646 (-0.064982) | 0.057393 / 0.419271 (-0.361879) | 0.032171 / 0.043533 (-0.011362) | 0.277576 / 0.255139 (0.022437) | 0.296039 / 0.283200 (0.012840) | 0.017519 / 0.141683 (-0.124164) | 1.153172 / 1.452155 (-0.298982) | 1.180274 / 1.492716 (-0.312442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088287 / 0.018006 (0.070280) | 0.297922 / 0.000490 (0.297433) | 0.000216 / 0.000200 (0.000016) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021936 / 0.037411 (-0.015475) | 0.070181 / 0.014526 (0.055655) | 0.082068 / 0.176557 (-0.094488) | 0.119327 / 0.737135 (-0.617808) | 0.083642 / 0.296338 (-0.212697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299449 / 0.215209 (0.084240) | 2.914362 / 2.077655 (0.836707) | 1.611906 / 1.504120 (0.107786) | 1.488805 / 1.541195 (-0.052390) | 1.536010 / 1.468490 (0.067520) | 0.566772 / 4.584777 (-4.018004) | 2.397897 / 3.745712 (-1.347815) | 2.786048 / 5.269862 (-2.483814) | 1.745153 / 4.565676 (-2.820523) | 0.063870 / 0.424275 (-0.360405) | 0.004968 / 0.007607 (-0.002640) | 0.344455 / 0.226044 (0.118410) | 3.465772 / 2.268929 (1.196844) | 1.965761 / 55.444624 (-53.478863) | 1.687960 / 6.876477 (-5.188516) | 1.713987 / 2.142072 (-0.428085) | 0.643760 / 4.805227 (-4.161467) | 0.117623 / 6.500664 (-6.383042) | 0.041086 / 0.075469 (-0.034383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985129 / 1.841788 (-0.856659) | 11.986676 / 8.074308 (3.912368) | 10.493440 / 10.191392 (0.302048) | 0.130070 / 0.680424 (-0.550353) | 0.015293 / 0.534201 (-0.518908) | 0.285683 / 0.579283 (-0.293600) | 0.275656 / 0.434364 (-0.158708) | 0.328704 / 0.540337 (-0.211633) | 0.537249 / 1.386936 (-0.849687) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7ee58f322082d3af5f11863d1f809444910827a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005170 / 0.011353 (-0.006183) | 0.003267 / 0.011008 (-0.007741) | 0.061992 / 0.038508 (0.023484) | 0.053414 / 0.023109 (0.030305) | 0.245678 / 0.275898 (-0.030220) | 0.261320 / 0.323480 (-0.062160) | 0.003887 / 0.007986 (-0.004099) | 0.002543 / 0.004328 (-0.001786) | 0.048496 / 0.004250 (0.044246) | 0.037392 / 0.037052 (0.000340) | 0.243728 / 0.258489 (-0.014761) | 0.272524 / 0.293841 (-0.021317) | 0.027578 / 0.128546 (-0.100968) | 0.010530 / 0.075646 (-0.065116) | 0.206014 / 0.419271 (-0.213257) | 0.035987 / 0.043533 (-0.007546) | 0.243544 / 0.255139 (-0.011595) | 0.263872 / 0.283200 (-0.019327) | 0.017867 / 0.141683 (-0.123816) | 1.105159 / 1.452155 (-0.346996) | 1.186640 / 1.492716 (-0.306076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092888 / 0.018006 (0.074882) | 0.302024 / 0.000490 (0.301534) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019329 / 0.037411 (-0.018083) | 0.062135 / 0.014526 (0.047609) | 0.075125 / 0.176557 (-0.101431) | 0.120743 / 0.737135 (-0.616393) | 0.078687 / 0.296338 (-0.217652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279449 / 0.215209 (0.064240) | 2.727310 / 2.077655 (0.649656) | 1.442710 / 1.504120 (-0.061410) | 1.315271 / 1.541195 (-0.225923) | 1.360435 / 
1.468490 (-0.108055) | 0.567720 / 4.584777 (-4.017057) | 2.397049 / 3.745712 (-1.348663) | 2.891180 / 5.269862 (-2.378682) | 1.774179 / 4.565676 (-2.791497) | 0.063155 / 0.424275 (-0.361120) | 0.004963 / 0.007607 (-0.002644) | 0.337526 / 0.226044 (0.111482) | 3.266016 / 2.268929 (0.997088) | 1.808819 / 55.444624 (-53.635806) | 1.525326 / 6.876477 (-5.351151) | 1.566937 / 2.142072 (-0.575135) | 0.654226 / 4.805227 (-4.151001) | 0.118968 / 6.500664 (-6.381696) | 0.042666 / 0.075469 (-0.032803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940792 / 1.841788 (-0.900996) | 11.736380 / 8.074308 (3.662072) | 10.709538 / 10.191392 (0.518146) | 0.141390 / 0.680424 (-0.539034) | 0.014204 / 0.534201 (-0.519996) | 0.284842 / 0.579283 (-0.294441) | 0.266315 / 0.434364 (-0.168049) | 0.331619 / 0.540337 (-0.208718) | 0.416446 / 1.386936 (-0.970491) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005298 / 0.011353 (-0.006055) | 0.003507 / 0.011008 (-0.007501) | 0.048315 / 0.038508 (0.009807) | 0.054855 / 0.023109 (0.031746) | 0.271558 / 0.275898 (-0.004340) | 0.316851 / 0.323480 (-0.006628) | 0.004054 / 0.007986 (-0.003932) | 0.002433 / 0.004328 (-0.001896) | 0.046442 / 0.004250 (0.042191) | 0.040853 / 0.037052 (0.003801) | 0.272537 / 0.258489 (0.014048) | 0.293736 / 0.293841 (-0.000105) | 0.029112 / 0.128546 (-0.099434) | 0.010573 / 0.075646 (-0.065074) | 0.056501 / 0.419271 (-0.362771) | 0.032541 / 0.043533 (-0.010992) | 0.271004 / 0.255139 (0.015865) | 0.289276 / 0.283200 (0.006076) | 0.018618 / 0.141683 (-0.123065) | 1.149435 / 1.452155 (-0.302719) | 1.205113 / 1.492716 (-0.287604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094726 / 0.018006 (0.076720) | 0.304347 / 0.000490 (0.303857) | 0.000217 / 0.000200 (0.000017) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021374 / 0.037411 (-0.016037) | 0.070574 / 0.014526 (0.056049) | 0.081749 / 0.176557 (-0.094807) | 0.119829 / 0.737135 (-0.617306) | 0.082602 / 0.296338 (-0.213737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293378 / 0.215209 (0.078169) | 2.893607 / 2.077655 (0.815952) | 1.577734 / 1.504120 (0.073614) | 1.453670 / 1.541195 (-0.087525) | 1.467354 / 1.468490 (-0.001136) | 0.563415 / 4.584777 (-4.021362) | 2.438330 / 3.745712 (-1.307382) | 2.761822 / 5.269862 (-2.508040) | 1.730944 / 4.565676 (-2.834732) | 0.062251 / 0.424275 (-0.362024) | 0.004969 / 0.007607 (-0.002638) | 0.371238 / 0.226044 (0.145194) | 3.399831 / 2.268929 (1.130903) | 1.936156 / 55.444624 (-53.508469) | 1.649716 / 6.876477 (-5.226761) | 1.669107 / 2.142072 (-0.472965) | 0.633696 / 4.805227 (-4.171531) | 0.115857 / 6.500664 (-6.384807) | 0.041012 / 0.075469 (-0.034457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964777 / 1.841788 (-0.877010) | 12.037613 / 8.074308 (3.963305) | 10.579241 / 10.191392 (0.387849) | 0.130932 / 0.680424 (-0.549492) | 0.015621 / 0.534201 (-0.518580) | 0.286898 / 0.579283 (-0.292385) | 0.281139 / 0.434364 (-0.153225) | 0.325240 / 0.540337 (-0.215097) | 0.554302 / 1.386936 (-0.832635) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48d2378944a47987f96562ee856167aef1e78522 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005258 / 0.011353 (-0.006095) | 0.003863 / 0.011008 (-0.007145) | 0.064585 / 0.038508 (0.026077) | 0.058013 / 0.023109 (0.034904) | 0.249042 / 0.275898 (-0.026856) | 0.273434 / 0.323480 (-0.050046) | 0.004779 / 0.007986 (-0.003207) | 0.002550 / 0.004328 (-0.001778) | 0.048290 / 0.004250 (0.044040) | 0.038777 / 0.037052 (0.001725) | 0.253039 / 0.258489 (-0.005450) | 0.285365 / 0.293841 (-0.008476) | 0.028053 / 0.128546 (-0.100494) | 0.010521 / 0.075646 (-0.065125) | 0.210954 / 0.419271 (-0.208317) | 0.035720 / 0.043533 (-0.007813) | 0.252540 / 0.255139 (-0.002599) | 0.264786 / 0.283200 (-0.018414) | 0.018692 / 0.141683 (-0.122990) | 1.108971 / 1.452155 (-0.343183) | 1.201004 / 1.492716 (-0.291712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095936 / 0.018006 (0.077930) | 0.302979 / 0.000490 (0.302489) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018859 / 0.037411 (-0.018552) | 0.062559 / 0.014526 (0.048034) | 0.073545 / 0.176557 (-0.103012) | 0.120780 / 0.737135 (-0.616355) | 0.074998 / 0.296338 (-0.221340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276728 / 0.215209 (0.061519) | 2.715310 / 2.077655 (0.637655) | 1.444927 / 1.504120 (-0.059193) | 1.323867 / 1.541195 (-0.217328) | 1.364962 / 
1.468490 (-0.103528) | 0.556792 / 4.584777 (-4.027985) | 2.409151 / 3.745712 (-1.336561) | 2.811836 / 5.269862 (-2.458026) | 1.777369 / 4.565676 (-2.788308) | 0.061398 / 0.424275 (-0.362877) | 0.004924 / 0.007607 (-0.002683) | 0.341228 / 0.226044 (0.115183) | 3.369570 / 2.268929 (1.100641) | 1.858151 / 55.444624 (-53.586474) | 1.587352 / 6.876477 (-5.289125) | 1.625004 / 2.142072 (-0.517068) | 0.635317 / 4.805227 (-4.169910) | 0.117197 / 6.500664 (-6.383467) | 0.042672 / 0.075469 (-0.032797) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940419 / 1.841788 (-0.901368) | 12.156882 / 8.074308 (4.082574) | 10.646780 / 10.191392 (0.455388) | 0.129279 / 0.680424 (-0.551144) | 0.013967 / 0.534201 (-0.520234) | 0.287956 / 0.579283 (-0.291327) | 0.265250 / 0.434364 (-0.169114) | 0.323357 / 0.540337 (-0.216980) | 0.412045 / 1.386936 (-0.974891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005264 / 0.011353 (-0.006089) | 0.003575 / 0.011008 (-0.007433) | 0.049249 / 0.038508 (0.010741) | 0.057069 / 0.023109 (0.033959) | 0.327547 / 0.275898 (0.051649) | 0.299027 / 0.323480 (-0.024453) | 0.004768 / 0.007986 (-0.003217) | 0.002522 / 0.004328 (-0.001807) | 0.048020 / 0.004250 (0.043770) | 0.041328 / 0.037052 (0.004275) | 0.281385 / 0.258489 (0.022895) | 0.304957 / 0.293841 (0.011116) | 0.031371 / 0.128546 (-0.097175) | 0.010523 / 0.075646 (-0.065124) | 0.057073 / 0.419271 (-0.362198) | 0.032913 / 0.043533 (-0.010620) | 0.284963 / 0.255139 (0.029824) | 0.291997 / 0.283200 (0.008798) | 0.018325 / 0.141683 (-0.123357) | 1.126681 / 1.452155 (-0.325473) | 1.183011 / 1.492716 (-0.309705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092544 / 0.018006 (0.074538) | 0.299841 / 0.000490 (0.299351) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022279 / 0.037411 (-0.015133) | 0.072515 / 0.014526 (0.057989) | 0.083068 / 0.176557 (-0.093488) | 0.120600 / 0.737135 (-0.616536) | 0.083574 / 0.296338 (-0.212765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293393 / 0.215209 (0.078184) | 2.865420 / 2.077655 (0.787765) | 1.562419 / 1.504120 (0.058299) | 1.440846 / 1.541195 (-0.100349) | 1.471993 / 1.468490 (0.003503) | 0.572510 / 4.584777 (-4.012267) | 2.427417 / 3.745712 (-1.318295) | 2.895347 / 5.269862 (-2.374515) | 1.790578 / 4.565676 (-2.775098) | 0.064489 / 0.424275 (-0.359786) | 0.005044 / 0.007607 (-0.002564) | 0.340774 / 0.226044 (0.114730) | 3.391414 / 2.268929 (1.122486) | 1.939980 / 55.444624 (-53.504644) | 1.658514 / 6.876477 (-5.217963) | 1.741406 / 2.142072 (-0.400667) | 0.649033 / 4.805227 (-4.156194) | 0.117587 / 6.500664 (-6.383077) | 0.042042 / 0.075469 (-0.033427) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980490 / 1.841788 (-0.861298) | 12.664045 / 8.074308 (4.589737) | 10.944437 / 10.191392 (0.753045) | 0.142059 / 0.680424 (-0.538365) | 0.015914 / 0.534201 (-0.518287) | 0.288826 / 0.579283 (-0.290457) | 0.282351 / 0.434364 (-0.152013) | 0.325302 / 0.540337 (-0.215035) | 0.416900 / 1.386936 (-0.970036) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#59750317ad258a4380ab6a6d206932b8d482ece1 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005591 / 0.011353 (-0.005762) | 0.003445 / 0.011008 (-0.007563) | 0.064290 / 0.038508 (0.025782) | 0.053046 / 0.023109 (0.029936) | 0.229101 / 0.275898 (-0.046797) | 0.255515 / 0.323480 (-0.067964) | 0.002912 / 0.007986 (-0.005073) | 0.002466 / 0.004328 (-0.001863) | 0.049348 / 0.004250 (0.045098) | 0.039492 / 0.037052 (0.002440) | 0.236301 / 0.258489 (-0.022188) | 0.270109 / 0.293841 (-0.023732) | 0.027506 / 0.128546 (-0.101040) | 0.010381 / 0.075646 (-0.065265) | 0.209999 / 0.419271 (-0.209273) | 0.035827 / 0.043533 (-0.007705) | 0.237231 / 0.255139 (-0.017908) | 0.254345 / 0.283200 (-0.028854) | 0.019689 / 0.141683 (-0.121994) | 1.096103 / 1.452155 (-0.356052) | 1.172393 / 1.492716 (-0.320323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101749 / 0.018006 (0.083743) | 0.310913 / 0.000490 (0.310424) | 0.000217 / 0.000200 (0.000017) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018743 / 0.037411 (-0.018669) | 0.064190 / 0.014526 (0.049664) | 0.074575 / 0.176557 (-0.101982) | 0.124143 / 0.737135 (-0.612993) | 0.077415 / 0.296338 (-0.218924) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286175 / 0.215209 (0.070965) | 2.781169 / 2.077655 (0.703515) | 1.495130 / 1.504120 (-0.008990) | 1.379136 / 1.541195 (-0.162059) | 1.397548 / 
1.468490 (-0.070942) | 0.564467 / 4.584777 (-4.020310) | 2.408896 / 3.745712 (-1.336816) | 2.857771 / 5.269862 (-2.412091) | 1.776531 / 4.565676 (-2.789145) | 0.062700 / 0.424275 (-0.361575) | 0.004965 / 0.007607 (-0.002642) | 0.344026 / 0.226044 (0.117982) | 3.390829 / 2.268929 (1.121900) | 1.875258 / 55.444624 (-53.569366) | 1.602435 / 6.876477 (-5.274042) | 1.613619 / 2.142072 (-0.528454) | 0.639421 / 4.805227 (-4.165806) | 0.117697 / 6.500664 (-6.382967) | 0.042878 / 0.075469 (-0.032591) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957694 / 1.841788 (-0.884094) | 11.888917 / 8.074308 (3.814609) | 10.643389 / 10.191392 (0.451997) | 0.143358 / 0.680424 (-0.537066) | 0.014382 / 0.534201 (-0.519819) | 0.288731 / 0.579283 (-0.290552) | 0.270040 / 0.434364 (-0.164324) | 0.323586 / 0.540337 (-0.216751) | 0.415743 / 1.386936 (-0.971193) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005228 / 0.011353 (-0.006125) | 0.003445 / 0.011008 (-0.007563) | 0.051072 / 0.038508 (0.012563) | 0.053087 / 0.023109 (0.029978) | 0.273116 / 0.275898 (-0.002782) | 0.298633 / 0.323480 (-0.024847) | 0.004067 / 0.007986 (-0.003919) | 0.002537 / 0.004328 (-0.001791) | 0.049326 / 0.004250 (0.045075) | 0.041011 / 0.037052 (0.003959) | 0.277748 / 0.258489 (0.019258) | 0.304152 / 0.293841 (0.010311) | 0.029012 / 0.128546 (-0.099534) | 0.010589 / 0.075646 (-0.065057) | 0.057564 / 0.419271 (-0.361707) | 0.032785 / 0.043533 (-0.010747) | 0.272508 / 0.255139 (0.017369) | 0.294127 / 0.283200 (0.010927) | 0.018466 / 0.141683 (-0.123217) | 1.129341 / 1.452155 (-0.322814) | 1.194631 / 1.492716 (-0.298086) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098558 / 0.018006 (0.080552) | 0.312353 / 0.000490 (0.311863) | 0.000269 / 0.000200 (0.000069) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022148 / 0.037411 (-0.015263) | 0.070601 / 0.014526 (0.056075) | 0.081780 / 0.176557 (-0.094777) | 0.121993 / 0.737135 (-0.615142) | 0.084263 / 0.296338 (-0.212076) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300501 / 0.215209 (0.085292) | 2.927534 / 2.077655 (0.849879) | 1.595527 / 1.504120 (0.091407) | 1.475607 / 1.541195 (-0.065587) | 1.496707 / 1.468490 (0.028217) | 0.559051 / 4.584777 (-4.025726) | 2.427126 / 3.745712 (-1.318586) | 2.820908 / 5.269862 (-2.448953) | 1.757492 / 4.565676 (-2.808185) | 0.062391 / 0.424275 (-0.361884) | 0.004950 / 0.007607 (-0.002657) | 0.351204 / 0.226044 (0.125160) | 3.485068 / 2.268929 (1.216139) | 1.976418 / 55.444624 (-53.468207) | 1.682715 / 6.876477 (-5.193761) | 1.703457 / 2.142072 (-0.438616) | 0.643476 / 4.805227 (-4.161751) | 0.116321 / 6.500664 (-6.384343) | 0.040776 / 0.075469 (-0.034694) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974152 / 1.841788 (-0.867635) | 12.390170 / 8.074308 (4.315862) | 10.866283 / 10.191392 (0.674891) | 0.145049 / 0.680424 (-0.535375) | 0.016404 / 0.534201 (-0.517797) | 0.288799 / 0.579283 (-0.290484) | 0.285917 / 0.434364 (-0.148447) | 0.328455 / 0.540337 (-0.211883) | 0.417286 / 1.386936 (-0.969650) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#59750317ad258a4380ab6a6d206932b8d482ece1 \"CML watermark\")\n"
] | 2023-11-23T17:35:02 | 2023-11-27T10:02:56 | 2023-11-24T17:13:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6449",
"html_url": "https://github.com/huggingface/datasets/pull/6449",
"diff_url": "https://github.com/huggingface/datasets/pull/6449.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6449.patch",
"merged_at": "2023-11-24T17:13:02"
} | Refetch metadata files in case they were dropped by `filter_extensions` in the previous step.
Fix #6442
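
For illustration only, a minimal sketch of the idea described above, assuming a hypothetical resolver: after an extension filter drops files, the metadata files are looked up again from the original listing and appended back. The helper names, the `filter_extensions` signature, and the metadata filenames below are assumptions for the sketch, not the actual `datasets` implementation.

```python
# Hypothetical sketch, not the real `datasets` code path.
from pathlib import Path
from typing import List

# Assumed metadata filenames used only for this illustration.
METADATA_FILENAMES = {"metadata.csv", "metadata.jsonl"}

def filter_extensions(files: List[str], allowed_extensions: List[str]) -> List[str]:
    # Keep only files whose suffix matches one of the allowed extensions.
    return [f for f in files if Path(f).suffix.lstrip(".") in allowed_extensions]

def resolve_data_files(all_files: List[str], allowed_extensions: List[str]) -> List[str]:
    kept = filter_extensions(all_files, allowed_extensions)
    # "Refetch" metadata files that the extension filter may have dropped,
    # since they are still needed even when their extension is not allowed.
    dropped_metadata = [
        f for f in all_files
        if Path(f).name in METADATA_FILENAMES and f not in kept
    ]
    return kept + dropped_metadata

# Example: a parquet-only filter would otherwise drop metadata.jsonl
print(resolve_data_files(
    ["data/train-0000.parquet", "data/metadata.jsonl"], ["parquet"]
))  # -> ['data/train-0000.parquet', 'data/metadata.jsonl']
```

The key point of the sketch is that the metadata lookup runs against the unfiltered listing, so a restrictive extension filter cannot silently remove metadata files from the resolved set.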
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6449/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6448/comments | https://api.github.com/repos/huggingface/datasets/issues/6448/events | https://github.com/huggingface/datasets/pull/6448 | 2,008,614,985 | PR_kwDODunzps5gQBsE | 6,448 | Use parquet export if possible | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005177 / 0.011353 (-0.006176) | 0.003002 / 0.011008 (-0.008006) | 0.061915 / 0.038508 (0.023407) | 0.052065 / 0.023109 (0.028956) | 0.246114 / 0.275898 (-0.029784) | 0.273974 / 0.323480 (-0.049506) | 0.002983 / 0.007986 (-0.005003) | 0.002444 / 0.004328 (-0.001885) | 0.048424 / 0.004250 (0.044174) | 0.039609 / 0.037052 (0.002557) | 0.257771 / 0.258489 (-0.000718) | 0.286228 / 0.293841 (-0.007613) | 0.023925 / 0.128546 (-0.104621) | 0.007248 / 0.075646 (-0.068398) | 0.202205 / 0.419271 (-0.217067) | 0.037124 / 0.043533 (-0.006409) | 0.254872 / 0.255139 (-0.000267) | 0.275252 / 0.283200 (-0.007947) | 0.019251 / 0.141683 (-0.122432) | 1.074921 / 1.452155 (-0.377234) | 1.146515 / 1.492716 (-0.346202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091998 / 0.018006 (0.073992) | 0.299146 / 0.000490 (0.298656) | 0.000240 / 0.000200 (0.000040) | 0.000054 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019266 / 0.037411 (-0.018145) | 0.062560 / 0.014526 (0.048034) | 0.075012 / 0.176557 (-0.101544) | 0.120077 / 0.737135 (-0.617058) | 0.077851 / 0.296338 (-0.218488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290629 / 0.215209 (0.075420) | 2.823847 / 2.077655 (0.746192) | 1.516966 / 1.504120 (0.012846) | 1.393383 / 1.541195 (-0.147812) | 1.427688 / 
1.468490 (-0.040802) | 0.407456 / 4.584777 (-4.177321) | 2.378280 / 3.745712 (-1.367433) | 2.689800 / 5.269862 (-2.580061) | 1.588037 / 4.565676 (-2.977640) | 0.045837 / 0.424275 (-0.378438) | 0.004884 / 0.007607 (-0.002724) | 0.340464 / 0.226044 (0.114420) | 3.377158 / 2.268929 (1.108230) | 1.897854 / 55.444624 (-53.546771) | 1.588285 / 6.876477 (-5.288191) | 1.651708 / 2.142072 (-0.490364) | 0.482018 / 4.805227 (-4.323209) | 0.101583 / 6.500664 (-6.399081) | 0.042306 / 0.075469 (-0.033163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948659 / 1.841788 (-0.893128) | 11.809778 / 8.074308 (3.735470) | 10.481896 / 10.191392 (0.290504) | 0.143538 / 0.680424 (-0.536885) | 0.014105 / 0.534201 (-0.520096) | 0.272278 / 0.579283 (-0.307005) | 0.264241 / 0.434364 (-0.170123) | 0.307187 / 0.540337 (-0.233150) | 0.401270 / 1.386936 (-0.985666) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004831 / 0.011353 (-0.006521) | 0.002896 / 0.011008 (-0.008112) | 0.047479 / 0.038508 (0.008971) | 0.050665 / 0.023109 (0.027555) | 0.275243 / 0.275898 (-0.000655) | 0.296547 / 0.323480 (-0.026933) | 0.004022 / 0.007986 (-0.003963) | 0.002425 / 0.004328 (-0.001904) | 0.047086 / 0.004250 (0.042836) | 0.039611 / 0.037052 (0.002558) | 0.275272 / 0.258489 (0.016783) | 0.302429 / 0.293841 (0.008588) | 0.024308 / 0.128546 (-0.104238) | 0.007167 / 0.075646 (-0.068479) | 0.052825 / 0.419271 (-0.366446) | 0.032319 / 0.043533 (-0.011213) | 0.273334 / 0.255139 (0.018195) | 0.291161 / 0.283200 (0.007961) | 0.017918 / 0.141683 (-0.123764) | 1.110005 / 1.452155 (-0.342150) | 1.176616 / 1.492716 (-0.316100) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092478 / 0.018006 (0.074471) | 0.311431 / 0.000490 (0.310942) | 0.000237 / 0.000200 (0.000037) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021979 / 0.037411 (-0.015432) | 0.080617 / 0.014526 (0.066091) | 0.081534 / 0.176557 (-0.095023) | 0.121073 / 0.737135 (-0.616062) | 0.083235 / 0.296338 (-0.213104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289527 / 0.215209 (0.074318) | 2.839668 / 2.077655 (0.762013) | 1.601737 / 1.504120 (0.097617) | 1.496028 / 1.541195 (-0.045167) | 1.511933 / 1.468490 (0.043443) | 0.399819 / 4.584777 (-4.184958) | 2.394147 / 3.745712 (-1.351565) | 2.520767 / 5.269862 (-2.749095) | 1.589496 / 4.565676 (-2.976180) | 0.046673 / 0.424275 (-0.377602) | 0.004858 / 0.007607 (-0.002749) | 0.357986 / 0.226044 (0.131941) | 3.376217 / 2.268929 (1.107289) | 1.981853 / 55.444624 (-53.462771) | 1.682240 / 6.876477 (-5.194236) | 1.830643 / 2.142072 (-0.311429) | 0.478286 / 4.805227 (-4.326941) | 0.099589 / 6.500664 (-6.401075) | 0.041173 / 0.075469 (-0.034296) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985160 / 1.841788 (-0.856628) | 12.312963 / 8.074308 (4.238655) | 10.577225 / 10.191392 (0.385833) | 0.130167 / 0.680424 (-0.550257) | 0.016657 / 0.534201 (-0.517544) | 0.271330 / 0.579283 (-0.307953) | 0.276979 / 0.434364 (-0.157385) | 0.304904 / 0.540337 (-0.235434) | 0.412090 / 1.386936 (-0.974846) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1adc80151e892122ecb60f4e0b4572b136b2dd47 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6448). All of your documentation changes will be reflected on that endpoint.",
"hooray! very excited about this",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005039 / 0.011353 (-0.006314) | 0.003577 / 0.011008 (-0.007431) | 0.062892 / 0.038508 (0.024384) | 0.056334 / 0.023109 (0.033225) | 0.252281 / 0.275898 (-0.023617) | 0.274945 / 0.323480 (-0.048535) | 0.003906 / 0.007986 (-0.004080) | 0.002483 / 0.004328 (-0.001845) | 0.049006 / 0.004250 (0.044756) | 0.038375 / 0.037052 (0.001323) | 0.257376 / 0.258489 (-0.001113) | 0.292512 / 0.293841 (-0.001328) | 0.027134 / 0.128546 (-0.101412) | 0.010579 / 0.075646 (-0.065068) | 0.212021 / 0.419271 (-0.207250) | 0.035851 / 0.043533 (-0.007682) | 0.258076 / 0.255139 (0.002937) | 0.271758 / 0.283200 (-0.011442) | 0.018222 / 0.141683 (-0.123461) | 1.120481 / 1.452155 (-0.331674) | 1.187007 / 1.492716 (-0.305710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094986 / 0.018006 (0.076980) | 0.302121 / 0.000490 (0.301631) | 0.000211 / 0.000200 (0.000011) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019260 / 0.037411 (-0.018152) | 0.062909 / 0.014526 (0.048383) | 0.075644 / 0.176557 (-0.100912) | 0.120966 / 0.737135 (-0.616170) | 0.076678 / 0.296338 (-0.219661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286754 / 0.215209 (0.071545) | 2.797467 / 2.077655 (0.719812) | 1.436798 / 1.504120 (-0.067322) | 1.315032 / 1.541195 (-0.226163) | 1.367841 / 
1.468490 (-0.100649) | 0.578917 / 4.584777 (-4.005860) | 2.439773 / 3.745712 (-1.305939) | 2.932779 / 5.269862 (-2.337082) | 1.843895 / 4.565676 (-2.721782) | 0.063351 / 0.424275 (-0.360925) | 0.004998 / 0.007607 (-0.002610) | 0.347385 / 0.226044 (0.121340) | 3.449969 / 2.268929 (1.181040) | 1.857734 / 55.444624 (-53.586890) | 1.541341 / 6.876477 (-5.335136) | 1.574915 / 2.142072 (-0.567158) | 0.660178 / 4.805227 (-4.145049) | 0.117686 / 6.500664 (-6.382978) | 0.042602 / 0.075469 (-0.032867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937735 / 1.841788 (-0.904052) | 11.962091 / 8.074308 (3.887783) | 10.401715 / 10.191392 (0.210323) | 0.142200 / 0.680424 (-0.538224) | 0.014137 / 0.534201 (-0.520064) | 0.289853 / 0.579283 (-0.289430) | 0.267100 / 0.434364 (-0.167264) | 0.323401 / 0.540337 (-0.216936) | 0.418665 / 1.386936 (-0.968271) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005480 / 0.011353 (-0.005873) | 0.003401 / 0.011008 (-0.007607) | 0.049304 / 0.038508 (0.010796) | 0.062043 / 0.023109 (0.038934) | 0.270571 / 0.275898 (-0.005327) | 0.295226 / 0.323480 (-0.028254) | 0.004152 / 0.007986 (-0.003834) | 0.002511 / 0.004328 (-0.001817) | 0.048480 / 0.004250 (0.044229) | 0.043964 / 0.037052 (0.006912) | 0.273545 / 0.258489 (0.015056) | 0.295152 / 0.293841 (0.001311) | 0.029224 / 0.128546 (-0.099322) | 0.010629 / 0.075646 (-0.065018) | 0.057433 / 0.419271 (-0.361839) | 0.033115 / 0.043533 (-0.010418) | 0.269893 / 0.255139 (0.014754) | 0.288658 / 0.283200 (0.005459) | 0.018216 / 0.141683 (-0.123467) | 1.123039 / 1.452155 (-0.329116) | 1.182892 / 1.492716 (-0.309825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095948 / 0.018006 (0.077942) | 0.305811 / 0.000490 (0.305321) | 0.000221 / 0.000200 (0.000021) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022996 / 0.037411 (-0.014415) | 0.073836 / 0.014526 (0.059310) | 0.082658 / 0.176557 (-0.093899) | 0.121970 / 0.737135 (-0.615166) | 0.086096 / 0.296338 (-0.210242) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291032 / 0.215209 (0.075823) | 2.864613 / 2.077655 (0.786958) | 1.567530 / 1.504120 (0.063410) | 1.460291 / 1.541195 (-0.080903) | 1.527066 / 1.468490 (0.058576) | 0.571160 / 4.584777 (-4.013617) | 2.465261 / 3.745712 (-1.280451) | 2.915547 / 5.269862 (-2.354314) | 1.835822 / 4.565676 (-2.729855) | 0.064328 / 0.424275 (-0.359947) | 0.005061 / 0.007607 (-0.002546) | 0.357105 / 0.226044 (0.131061) | 3.491363 / 2.268929 (1.222435) | 1.943213 / 55.444624 (-53.501412) | 1.675778 / 6.876477 (-5.200699) | 1.719016 / 2.142072 (-0.423057) | 0.658993 / 4.805227 (-4.146235) | 0.122320 / 6.500664 (-6.378344) | 0.049030 / 0.075469 (-0.026439) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964762 / 1.841788 (-0.877025) | 12.367251 / 8.074308 (4.292943) | 10.886213 / 10.191392 (0.694821) | 0.141533 / 0.680424 (-0.538891) | 0.015646 / 0.534201 (-0.518555) | 0.288583 / 0.579283 (-0.290700) | 0.280353 / 0.434364 (-0.154010) | 0.329095 / 0.540337 (-0.211242) | 0.565118 / 1.386936 (-0.821818) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#493bf695dc3ee6cc81bfd0aae6a38f70547bb752 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006475 / 0.011353 (-0.004878) | 0.004080 / 0.011008 (-0.006928) | 0.066479 / 0.038508 (0.027971) | 0.073270 / 0.023109 (0.050161) | 0.244412 / 0.275898 (-0.031486) | 0.273778 / 0.323480 (-0.049702) | 0.003186 / 0.007986 (-0.004800) | 0.003419 / 0.004328 (-0.000910) | 0.049743 / 0.004250 (0.045492) | 0.043581 / 0.037052 (0.006529) | 0.248215 / 0.258489 (-0.010274) | 0.280873 / 0.293841 (-0.012967) | 0.029282 / 0.128546 (-0.099264) | 0.011241 / 0.075646 (-0.064405) | 0.215031 / 0.419271 (-0.204241) | 0.038764 / 0.043533 (-0.004769) | 0.259363 / 0.255139 (0.004224) | 0.279253 / 0.283200 (-0.003946) | 0.019524 / 0.141683 (-0.122159) | 1.104735 / 1.452155 (-0.347420) | 1.159823 / 1.492716 (-0.332894) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108383 / 0.018006 (0.090377) | 0.332904 / 0.000490 (0.332415) | 0.000222 / 0.000200 (0.000022) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020693 / 0.037411 (-0.016719) | 0.071764 / 0.014526 (0.057238) | 0.077073 / 0.176557 (-0.099484) | 0.124604 / 0.737135 (-0.612532) | 0.078057 / 0.296338 (-0.218282) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291014 / 0.215209 (0.075805) | 2.865885 / 2.077655 (0.788231) | 1.506141 / 1.504120 (0.002021) | 1.435924 / 1.541195 (-0.105271) | 1.461994 / 
1.468490 (-0.006497) | 0.571779 / 4.584777 (-4.012998) | 2.461950 / 3.745712 (-1.283762) | 3.079771 / 5.269862 (-2.190091) | 1.933337 / 4.565676 (-2.632339) | 0.063405 / 0.424275 (-0.360870) | 0.005203 / 0.007607 (-0.002404) | 0.345077 / 0.226044 (0.119032) | 3.487189 / 2.268929 (1.218261) | 1.903733 / 55.444624 (-53.540891) | 1.705596 / 6.876477 (-5.170880) | 1.718849 / 2.142072 (-0.423223) | 0.658745 / 4.805227 (-4.146482) | 0.120847 / 6.500664 (-6.379817) | 0.045670 / 0.075469 (-0.029799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965969 / 1.841788 (-0.875819) | 13.520489 / 8.074308 (5.446181) | 12.322363 / 10.191392 (2.130971) | 0.146605 / 0.680424 (-0.533819) | 0.015061 / 0.534201 (-0.519140) | 0.298125 / 0.579283 (-0.281159) | 0.276864 / 0.434364 (-0.157500) | 0.326787 / 0.540337 (-0.213550) | 0.436897 / 1.386936 (-0.950039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005862 / 0.011353 (-0.005491) | 0.003716 / 0.011008 (-0.007292) | 0.052849 / 0.038508 (0.014341) | 0.072114 / 0.023109 (0.049005) | 0.277800 / 0.275898 (0.001902) | 0.325321 / 0.323480 (0.001841) | 0.004428 / 0.007986 (-0.003557) | 0.002527 / 0.004328 (-0.001801) | 0.048847 / 0.004250 (0.044596) | 0.047355 / 0.037052 (0.010303) | 0.279331 / 0.258489 (0.020842) | 0.310477 / 0.293841 (0.016636) | 0.029661 / 0.128546 (-0.098886) | 0.010812 / 0.075646 (-0.064834) | 0.059803 / 0.419271 (-0.359469) | 0.033554 / 0.043533 (-0.009978) | 0.276890 / 0.255139 (0.021751) | 0.308911 / 0.283200 (0.025712) | 0.020752 / 0.141683 (-0.120931) | 1.120896 / 1.452155 (-0.331259) | 1.186428 / 1.492716 (-0.306288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.106551 / 0.018006 (0.088545) | 0.354455 / 0.000490 (0.353966) | 0.000353 / 0.000200 (0.000153) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023488 / 0.037411 (-0.013923) | 0.080548 / 0.014526 (0.066022) | 0.084431 / 0.176557 (-0.092126) | 0.140698 / 0.737135 (-0.596438) | 0.085692 / 0.296338 (-0.210647) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.314253 / 0.215209 (0.099044) | 2.993236 / 2.077655 (0.915582) | 1.639013 / 1.504120 (0.134893) | 1.543966 / 1.541195 (0.002771) | 1.567732 / 1.468490 (0.099242) | 0.565857 / 4.584777 (-4.018920) | 2.545339 / 3.745712 (-1.200373) | 3.134546 / 5.269862 (-2.135316) | 1.940350 / 4.565676 (-2.625326) | 0.063847 / 0.424275 (-0.360429) | 0.005079 / 0.007607 (-0.002528) | 0.365762 / 0.226044 (0.139718) | 3.610921 / 2.268929 (1.341993) | 2.035151 / 55.444624 (-53.409473) | 1.773409 / 6.876477 (-5.103068) | 1.790332 / 2.142072 (-0.351741) | 0.683019 / 4.805227 (-4.122209) | 0.119566 / 6.500664 (-6.381099) | 0.043578 / 0.075469 (-0.031891) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996568 / 1.841788 (-0.845219) | 14.094366 / 8.074308 (6.020058) | 12.433600 / 10.191392 (2.242208) | 0.139835 / 0.680424 (-0.540589) | 0.016454 / 0.534201 (-0.517747) | 0.294073 / 0.579283 (-0.285210) | 0.309032 / 0.434364 (-0.125332) | 0.330699 / 0.540337 (-0.209638) | 0.619392 / 1.386936 (-0.767544) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#026fbce1c93a30188b6d0646bb975da8f56e2a2f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005389 / 0.011353 (-0.005964) | 0.003209 / 0.011008 (-0.007799) | 0.061610 / 0.038508 (0.023102) | 0.049781 / 0.023109 (0.026672) | 0.240208 / 0.275898 (-0.035690) | 0.263307 / 0.323480 (-0.060173) | 0.002908 / 0.007986 (-0.005078) | 0.002375 / 0.004328 (-0.001953) | 0.047462 / 0.004250 (0.043212) | 0.038643 / 0.037052 (0.001591) | 0.246287 / 0.258489 (-0.012202) | 0.278715 / 0.293841 (-0.015126) | 0.027507 / 0.128546 (-0.101039) | 0.010168 / 0.075646 (-0.065479) | 0.204131 / 0.419271 (-0.215140) | 0.035452 / 0.043533 (-0.008081) | 0.251721 / 0.255139 (-0.003418) | 0.266642 / 0.283200 (-0.016558) | 0.017741 / 0.141683 (-0.123942) | 1.094672 / 1.452155 (-0.357482) | 1.162715 / 1.492716 (-0.330002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092154 / 0.018006 (0.074148) | 0.301376 / 0.000490 (0.300886) | 0.000217 / 0.000200 (0.000017) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018534 / 0.037411 (-0.018877) | 0.061995 / 0.014526 (0.047469) | 0.072654 / 0.176557 (-0.103903) | 0.119501 / 0.737135 (-0.617635) | 0.073756 / 0.296338 (-0.222583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280066 / 0.215209 (0.064857) | 2.744207 / 2.077655 (0.666553) | 1.483367 / 1.504120 (-0.020753) | 1.386173 / 1.541195 (-0.155022) | 1.381833 / 
1.468490 (-0.086657) | 0.552780 / 4.584777 (-4.031997) | 2.395541 / 3.745712 (-1.350171) | 2.747507 / 5.269862 (-2.522355) | 1.735074 / 4.565676 (-2.830602) | 0.062096 / 0.424275 (-0.362179) | 0.004905 / 0.007607 (-0.002702) | 0.338327 / 0.226044 (0.112283) | 3.365391 / 2.268929 (1.096462) | 1.839663 / 55.444624 (-53.604961) | 1.577535 / 6.876477 (-5.298942) | 1.558054 / 2.142072 (-0.584018) | 0.636520 / 4.805227 (-4.168708) | 0.116182 / 6.500664 (-6.384482) | 0.042078 / 0.075469 (-0.033391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938512 / 1.841788 (-0.903276) | 11.455749 / 8.074308 (3.381441) | 10.510985 / 10.191392 (0.319593) | 0.140865 / 0.680424 (-0.539559) | 0.014073 / 0.534201 (-0.520128) | 0.294747 / 0.579283 (-0.284536) | 0.266147 / 0.434364 (-0.168217) | 0.325354 / 0.540337 (-0.214984) | 0.422182 / 1.386936 (-0.964754) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005231 / 0.011353 (-0.006122) | 0.003032 / 0.011008 (-0.007977) | 0.049608 / 0.038508 (0.011099) | 0.051441 / 0.023109 (0.028332) | 0.273812 / 0.275898 (-0.002086) | 0.294318 / 0.323480 (-0.029162) | 0.003958 / 0.007986 (-0.004028) | 0.002384 / 0.004328 (-0.001944) | 0.047942 / 0.004250 (0.043691) | 0.039179 / 0.037052 (0.002127) | 0.277504 / 0.258489 (0.019014) | 0.299713 / 0.293841 (0.005872) | 0.028989 / 0.128546 (-0.099557) | 0.010267 / 0.075646 (-0.065379) | 0.058318 / 0.419271 (-0.360954) | 0.032214 / 0.043533 (-0.011318) | 0.277964 / 0.255139 (0.022825) | 0.293055 / 0.283200 (0.009856) | 0.018532 / 0.141683 (-0.123151) | 1.128620 / 1.452155 (-0.323535) | 1.187365 / 1.492716 (-0.305351) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092137 / 0.018006 (0.074130) | 0.299726 / 0.000490 (0.299236) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021342 / 0.037411 (-0.016070) | 0.069943 / 0.014526 (0.055417) | 0.079862 / 0.176557 (-0.096694) | 0.118917 / 0.737135 (-0.618218) | 0.081861 / 0.296338 (-0.214477) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295883 / 0.215209 (0.080674) | 2.881640 / 2.077655 (0.803986) | 1.597705 / 1.504120 (0.093585) | 1.473220 / 1.541195 (-0.067975) | 1.501006 / 1.468490 (0.032516) | 0.559409 / 4.584777 (-4.025368) | 2.442709 / 3.745712 (-1.303003) | 2.742139 / 5.269862 (-2.527723) | 1.726002 / 4.565676 (-2.839674) | 0.062436 / 0.424275 (-0.361840) | 0.004896 / 0.007607 (-0.002711) | 0.349203 / 0.226044 (0.123159) | 3.435175 / 2.268929 (1.166247) | 1.954888 / 55.444624 (-53.489737) | 1.666233 / 6.876477 (-5.210243) | 1.680852 / 2.142072 (-0.461221) | 0.644271 / 4.805227 (-4.160956) | 0.115160 / 6.500664 (-6.385504) | 0.040681 / 0.075469 (-0.034788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963810 / 1.841788 (-0.877977) | 11.860860 / 8.074308 (3.786552) | 10.541703 / 10.191392 (0.350311) | 0.131532 / 0.680424 (-0.548892) | 0.016790 / 0.534201 (-0.517411) | 0.286695 / 0.579283 (-0.292588) | 0.279628 / 0.434364 (-0.154735) | 0.324622 / 0.540337 (-0.215715) | 0.535507 / 1.386936 (-0.851429) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#11217347e4bcfe1aaf794d164a5dd9f085b2f682 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005672 / 0.011353 (-0.005681) | 0.003411 / 0.011008 (-0.007597) | 0.062528 / 0.038508 (0.024020) | 0.055209 / 0.023109 (0.032100) | 0.248366 / 0.275898 (-0.027532) | 0.279522 / 0.323480 (-0.043957) | 0.002907 / 0.007986 (-0.005079) | 0.002369 / 0.004328 (-0.001959) | 0.047982 / 0.004250 (0.043731) | 0.039009 / 0.037052 (0.001956) | 0.256422 / 0.258489 (-0.002067) | 0.288530 / 0.293841 (-0.005311) | 0.028164 / 0.128546 (-0.100382) | 0.010448 / 0.075646 (-0.065198) | 0.208863 / 0.419271 (-0.210408) | 0.036291 / 0.043533 (-0.007242) | 0.251642 / 0.255139 (-0.003497) | 0.275589 / 0.283200 (-0.007610) | 0.019839 / 0.141683 (-0.121844) | 1.092800 / 1.452155 (-0.359355) | 1.147950 / 1.492716 (-0.344766) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094920 / 0.018006 (0.076914) | 0.303049 / 0.000490 (0.302559) | 0.000199 / 0.000200 (-0.000001) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018820 / 0.037411 (-0.018591) | 0.063319 / 0.014526 (0.048793) | 0.073644 / 0.176557 (-0.102912) | 0.120045 / 0.737135 (-0.617091) | 0.076219 / 0.296338 (-0.220119) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283897 / 0.215209 (0.068688) | 2.822836 / 2.077655 (0.745182) | 1.490505 / 1.504120 (-0.013615) | 1.359777 / 1.541195 (-0.181418) | 1.420536 / 
1.468490 (-0.047954) | 0.562308 / 4.584777 (-4.022469) | 2.419249 / 3.745712 (-1.326463) | 2.827620 / 5.269862 (-2.442241) | 1.783171 / 4.565676 (-2.782505) | 0.063206 / 0.424275 (-0.361069) | 0.004966 / 0.007607 (-0.002641) | 0.339647 / 0.226044 (0.113602) | 3.378157 / 2.268929 (1.109229) | 1.873221 / 55.444624 (-53.571403) | 1.606367 / 6.876477 (-5.270109) | 1.624976 / 2.142072 (-0.517096) | 0.652653 / 4.805227 (-4.152574) | 0.117997 / 6.500664 (-6.382667) | 0.041955 / 0.075469 (-0.033514) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961420 / 1.841788 (-0.880368) | 11.807624 / 8.074308 (3.733316) | 10.668249 / 10.191392 (0.476857) | 0.141855 / 0.680424 (-0.538569) | 0.014451 / 0.534201 (-0.519750) | 0.289706 / 0.579283 (-0.289577) | 0.268392 / 0.434364 (-0.165972) | 0.323435 / 0.540337 (-0.216903) | 0.420667 / 1.386936 (-0.966269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005382 / 0.011353 (-0.005971) | 0.003361 / 0.011008 (-0.007647) | 0.048420 / 0.038508 (0.009912) | 0.053702 / 0.023109 (0.030593) | 0.286976 / 0.275898 (0.011078) | 0.296708 / 0.323480 (-0.026772) | 0.004013 / 0.007986 (-0.003972) | 0.002444 / 0.004328 (-0.001884) | 0.047797 / 0.004250 (0.043547) | 0.042361 / 0.037052 (0.005309) | 0.277543 / 0.258489 (0.019054) | 0.300736 / 0.293841 (0.006896) | 0.029894 / 0.128546 (-0.098653) | 0.014119 / 0.075646 (-0.061527) | 0.057636 / 0.419271 (-0.361636) | 0.032533 / 0.043533 (-0.010999) | 0.280963 / 0.255139 (0.025824) | 0.291305 / 0.283200 (0.008106) | 0.018391 / 0.141683 (-0.123292) | 1.140042 / 1.452155 (-0.312113) | 1.179485 / 1.492716 (-0.313231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094668 / 0.018006 (0.076661) | 0.301677 / 0.000490 (0.301187) | 0.000245 / 0.000200 (0.000045) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021376 / 0.037411 (-0.016036) | 0.070628 / 0.014526 (0.056102) | 0.082249 / 0.176557 (-0.094308) | 0.120423 / 0.737135 (-0.616712) | 0.083792 / 0.296338 (-0.212546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298884 / 0.215209 (0.083675) | 2.931849 / 2.077655 (0.854194) | 1.591888 / 1.504120 (0.087768) | 1.455781 / 1.541195 (-0.085414) | 1.500312 / 1.468490 (0.031822) | 0.558466 / 4.584777 (-4.026311) | 2.450449 / 3.745712 (-1.295263) | 2.842768 / 5.269862 (-2.427094) | 1.755614 / 4.565676 (-2.810062) | 0.063200 / 0.424275 (-0.361075) | 0.005022 / 0.007607 (-0.002585) | 0.358282 / 0.226044 (0.132238) | 3.575392 / 2.268929 (1.306464) | 1.960258 / 55.444624 (-53.484366) | 1.675518 / 6.876477 (-5.200959) | 1.696630 / 2.142072 (-0.445442) | 0.647185 / 4.805227 (-4.158042) | 0.117038 / 6.500664 (-6.383626) | 0.041622 / 0.075469 (-0.033848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962503 / 1.841788 (-0.879285) | 12.194950 / 8.074308 (4.120642) | 10.662233 / 10.191392 (0.470841) | 0.131618 / 0.680424 (-0.548806) | 0.016000 / 0.534201 (-0.518201) | 0.291546 / 0.579283 (-0.287737) | 0.279537 / 0.434364 (-0.154827) | 0.328716 / 0.540337 (-0.211622) | 0.547565 / 1.386936 (-0.839371) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4de8f5f09f60613d47b5d7eb901752321c7b6a49 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005209 / 0.011353 (-0.006144) | 0.003017 / 0.011008 (-0.007991) | 0.062017 / 0.038508 (0.023509) | 0.048268 / 0.023109 (0.025158) | 0.246384 / 0.275898 (-0.029514) | 0.270441 / 0.323480 (-0.053039) | 0.002763 / 0.007986 (-0.005222) | 0.003140 / 0.004328 (-0.001188) | 0.048720 / 0.004250 (0.044470) | 0.038175 / 0.037052 (0.001123) | 0.254184 / 0.258489 (-0.004306) | 0.275515 / 0.293841 (-0.018326) | 0.027309 / 0.128546 (-0.101238) | 0.010507 / 0.075646 (-0.065140) | 0.210315 / 0.419271 (-0.208956) | 0.035203 / 0.043533 (-0.008329) | 0.253015 / 0.255139 (-0.002124) | 0.271465 / 0.283200 (-0.011734) | 0.019543 / 0.141683 (-0.122140) | 1.119242 / 1.452155 (-0.332913) | 1.149359 / 1.492716 (-0.343357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088935 / 0.018006 (0.070928) | 0.293922 / 0.000490 (0.293432) | 0.000202 / 0.000200 (0.000002) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018174 / 0.037411 (-0.019237) | 0.060215 / 0.014526 (0.045689) | 0.072868 / 0.176557 (-0.103689) | 0.117998 / 0.737135 (-0.619137) | 0.074159 / 0.296338 (-0.222179) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289229 / 0.215209 (0.074020) | 2.840414 / 2.077655 (0.762759) | 1.468357 / 1.504120 (-0.035763) | 1.347714 / 1.541195 (-0.193481) | 1.363704 / 
1.468490 (-0.104786) | 0.572059 / 4.584777 (-4.012718) | 2.400631 / 3.745712 (-1.345081) | 2.755779 / 5.269862 (-2.514083) | 1.740937 / 4.565676 (-2.824739) | 0.063473 / 0.424275 (-0.360802) | 0.005012 / 0.007607 (-0.002595) | 0.336057 / 0.226044 (0.110012) | 3.382126 / 2.268929 (1.113197) | 1.807838 / 55.444624 (-53.636786) | 1.534594 / 6.876477 (-5.341883) | 1.529951 / 2.142072 (-0.612121) | 0.636661 / 4.805227 (-4.168566) | 0.117090 / 6.500664 (-6.383574) | 0.042310 / 0.075469 (-0.033160) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.924440 / 1.841788 (-0.917347) | 11.120517 / 8.074308 (3.046209) | 10.177210 / 10.191392 (-0.014182) | 0.139060 / 0.680424 (-0.541364) | 0.013818 / 0.534201 (-0.520383) | 0.285634 / 0.579283 (-0.293649) | 0.268657 / 0.434364 (-0.165706) | 0.325842 / 0.540337 (-0.214496) | 0.439902 / 1.386936 (-0.947034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005202 / 0.011353 (-0.006150) | 0.003002 / 0.011008 (-0.008006) | 0.048729 / 0.038508 (0.010221) | 0.048178 / 0.023109 (0.025069) | 0.288573 / 0.275898 (0.012675) | 0.311122 / 0.323480 (-0.012358) | 0.003953 / 0.007986 (-0.004033) | 0.002544 / 0.004328 (-0.001785) | 0.047762 / 0.004250 (0.043511) | 0.039711 / 0.037052 (0.002658) | 0.308389 / 0.258489 (0.049900) | 0.321913 / 0.293841 (0.028072) | 0.029166 / 0.128546 (-0.099380) | 0.010697 / 0.075646 (-0.064950) | 0.057758 / 0.419271 (-0.361514) | 0.032743 / 0.043533 (-0.010789) | 0.290933 / 0.255139 (0.035794) | 0.309404 / 0.283200 (0.026205) | 0.017691 / 0.141683 (-0.123992) | 1.157713 / 1.452155 (-0.294442) | 1.210485 / 1.492716 (-0.282231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088959 / 0.018006 (0.070953) | 0.298531 / 0.000490 (0.298041) | 0.000221 / 0.000200 (0.000021) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021129 / 0.037411 (-0.016283) | 0.068419 / 0.014526 (0.053893) | 0.079328 / 0.176557 (-0.097228) | 0.118603 / 0.737135 (-0.618532) | 0.080489 / 0.296338 (-0.215850) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292464 / 0.215209 (0.077254) | 2.898221 / 2.077655 (0.820566) | 1.600868 / 1.504120 (0.096748) | 1.485128 / 1.541195 (-0.056067) | 1.493091 / 1.468490 (0.024600) | 0.576117 / 4.584777 (-4.008660) | 2.450440 / 3.745712 (-1.295273) | 2.746026 / 5.269862 (-2.523836) | 1.722555 / 4.565676 (-2.843122) | 0.062869 / 0.424275 (-0.361406) | 0.004918 / 0.007607 (-0.002689) | 0.348470 / 0.226044 (0.122425) | 3.420267 / 2.268929 (1.151339) | 1.942973 / 55.444624 (-53.501651) | 1.667684 / 6.876477 (-5.208793) | 1.669618 / 2.142072 (-0.472454) | 0.630275 / 4.805227 (-4.174952) | 0.115072 / 6.500664 (-6.385592) | 0.040430 / 0.075469 (-0.035039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989827 / 1.841788 (-0.851961) | 11.578068 / 8.074308 (3.503760) | 10.636060 / 10.191392 (0.444668) | 0.131943 / 0.680424 (-0.548481) | 0.015915 / 0.534201 (-0.518286) | 0.287277 / 0.579283 (-0.292006) | 0.279451 / 0.434364 (-0.154913) | 0.325485 / 0.540337 (-0.214852) | 0.544635 / 1.386936 (-0.842301) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f22579be6c73867ac1a3c03e925abaf4872f8437 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005144 / 0.011353 (-0.006209) | 0.003686 / 0.011008 (-0.007322) | 0.064003 / 0.038508 (0.025495) | 0.058962 / 0.023109 (0.035853) | 0.233753 / 0.275898 (-0.042145) | 0.255802 / 0.323480 (-0.067677) | 0.003871 / 0.007986 (-0.004115) | 0.002609 / 0.004328 (-0.001719) | 0.048675 / 0.004250 (0.044425) | 0.037550 / 0.037052 (0.000498) | 0.240658 / 0.258489 (-0.017831) | 0.272303 / 0.293841 (-0.021538) | 0.027455 / 0.128546 (-0.101091) | 0.010706 / 0.075646 (-0.064941) | 0.210878 / 0.419271 (-0.208393) | 0.035763 / 0.043533 (-0.007770) | 0.239937 / 0.255139 (-0.015202) | 0.262520 / 0.283200 (-0.020680) | 0.017676 / 0.141683 (-0.124006) | 1.095036 / 1.452155 (-0.357118) | 1.178318 / 1.492716 (-0.314399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095310 / 0.018006 (0.077304) | 0.307485 / 0.000490 (0.306995) | 0.000212 / 0.000200 (0.000013) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018630 / 0.037411 (-0.018781) | 0.060461 / 0.014526 (0.045936) | 0.073117 / 0.176557 (-0.103440) | 0.119737 / 0.737135 (-0.617399) | 0.073909 / 0.296338 (-0.222430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280938 / 0.215209 (0.065729) | 2.755333 / 2.077655 (0.677679) | 1.468153 / 1.504120 (-0.035967) | 1.350247 / 1.541195 (-0.190948) | 1.379834 / 
1.468490 (-0.088656) | 0.564027 / 4.584777 (-4.020750) | 2.387794 / 3.745712 (-1.357918) | 2.768529 / 5.269862 (-2.501333) | 1.761994 / 4.565676 (-2.803682) | 0.062079 / 0.424275 (-0.362196) | 0.005018 / 0.007607 (-0.002589) | 0.337576 / 0.226044 (0.111532) | 3.345347 / 2.268929 (1.076418) | 1.821950 / 55.444624 (-53.622674) | 1.545471 / 6.876477 (-5.331006) | 1.534941 / 2.142072 (-0.607131) | 0.626560 / 4.805227 (-4.178668) | 0.116227 / 6.500664 (-6.384437) | 0.041722 / 0.075469 (-0.033747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950480 / 1.841788 (-0.891307) | 11.616355 / 8.074308 (3.542047) | 10.426687 / 10.191392 (0.235295) | 0.129967 / 0.680424 (-0.550457) | 0.013977 / 0.534201 (-0.520224) | 0.287150 / 0.579283 (-0.292133) | 0.264028 / 0.434364 (-0.170336) | 0.325061 / 0.540337 (-0.215277) | 0.441281 / 1.386936 (-0.945655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005436 / 0.011353 (-0.005917) | 0.003567 / 0.011008 (-0.007441) | 0.055275 / 0.038508 (0.016767) | 0.053216 / 0.023109 (0.030107) | 0.272826 / 0.275898 (-0.003072) | 0.298399 / 0.323480 (-0.025081) | 0.004803 / 0.007986 (-0.003183) | 0.002681 / 0.004328 (-0.001648) | 0.048704 / 0.004250 (0.044453) | 0.040048 / 0.037052 (0.002996) | 0.278200 / 0.258489 (0.019711) | 0.331167 / 0.293841 (0.037326) | 0.029282 / 0.128546 (-0.099265) | 0.010766 / 0.075646 (-0.064881) | 0.057370 / 0.419271 (-0.361902) | 0.032674 / 0.043533 (-0.010859) | 0.269430 / 0.255139 (0.014291) | 0.288256 / 0.283200 (0.005056) | 0.019340 / 0.141683 (-0.122343) | 1.118058 / 1.452155 (-0.334097) | 1.157811 / 1.492716 (-0.334906) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094091 / 0.018006 (0.076085) | 0.301833 / 0.000490 (0.301343) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021327 / 0.037411 (-0.016085) | 0.068636 / 0.014526 (0.054110) | 0.080246 / 0.176557 (-0.096311) | 0.120524 / 0.737135 (-0.616611) | 0.082226 / 0.296338 (-0.214113) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293579 / 0.215209 (0.078370) | 2.880281 / 2.077655 (0.802626) | 1.594647 / 1.504120 (0.090528) | 1.477152 / 1.541195 (-0.064043) | 1.498122 / 1.468490 (0.029632) | 0.555073 / 4.584777 (-4.029704) | 2.446743 / 3.745712 (-1.298970) | 2.794971 / 5.269862 (-2.474890) | 1.749730 / 4.565676 (-2.815947) | 0.062537 / 0.424275 (-0.361738) | 0.004908 / 0.007607 (-0.002699) | 0.350772 / 0.226044 (0.124727) | 3.486535 / 2.268929 (1.217607) | 1.957414 / 55.444624 (-53.487210) | 1.669169 / 6.876477 (-5.207308) | 1.682396 / 2.142072 (-0.459676) | 0.627379 / 4.805227 (-4.177848) | 0.117218 / 6.500664 (-6.383446) | 0.041000 / 0.075469 (-0.034469) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958248 / 1.841788 (-0.883539) | 12.022677 / 8.074308 (3.948369) | 10.331661 / 10.191392 (0.140269) | 0.129765 / 0.680424 (-0.550659) | 0.015073 / 0.534201 (-0.519128) | 0.287212 / 0.579283 (-0.292071) | 0.278310 / 0.434364 (-0.156054) | 0.328155 / 0.540337 (-0.212183) | 0.564990 / 1.386936 (-0.821946) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0c16e56371e50adae771288945e3389cb81a31fd \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005576 / 0.011353 (-0.005777) | 0.003430 / 0.011008 (-0.007578) | 0.062714 / 0.038508 (0.024206) | 0.051240 / 0.023109 (0.028131) | 0.236637 / 0.275898 (-0.039261) | 0.262660 / 0.323480 (-0.060820) | 0.002924 / 0.007986 (-0.005061) | 0.002712 / 0.004328 (-0.001616) | 0.048680 / 0.004250 (0.044430) | 0.038997 / 0.037052 (0.001945) | 0.241426 / 0.258489 (-0.017063) | 0.270652 / 0.293841 (-0.023189) | 0.027355 / 0.128546 (-0.101192) | 0.010640 / 0.075646 (-0.065006) | 0.207754 / 0.419271 (-0.211517) | 0.035921 / 0.043533 (-0.007612) | 0.247645 / 0.255139 (-0.007494) | 0.262933 / 0.283200 (-0.020266) | 0.019658 / 0.141683 (-0.122025) | 1.112576 / 1.452155 (-0.339578) | 1.177362 / 1.492716 (-0.315354) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098100 / 0.018006 (0.080093) | 0.310170 / 0.000490 (0.309680) | 0.000220 / 0.000200 (0.000020) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019626 / 0.037411 (-0.017785) | 0.065468 / 0.014526 (0.050942) | 0.074767 / 0.176557 (-0.101789) | 0.123619 / 0.737135 (-0.613516) | 0.077159 / 0.296338 (-0.219179) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288585 / 0.215209 (0.073376) | 2.771254 / 2.077655 (0.693599) | 1.457091 / 1.504120 (-0.047029) | 1.324341 / 1.541195 (-0.216854) | 1.361960 / 
1.468490 (-0.106530) | 0.574197 / 4.584777 (-4.010580) | 2.391440 / 3.745712 (-1.354273) | 2.935060 / 5.269862 (-2.334802) | 1.802792 / 4.565676 (-2.762884) | 0.063530 / 0.424275 (-0.360745) | 0.005129 / 0.007607 (-0.002478) | 0.345977 / 0.226044 (0.119933) | 3.368042 / 2.268929 (1.099113) | 1.789575 / 55.444624 (-53.655050) | 1.509165 / 6.876477 (-5.367312) | 1.579792 / 2.142072 (-0.562280) | 0.652136 / 4.805227 (-4.153091) | 0.117014 / 6.500664 (-6.383650) | 0.042385 / 0.075469 (-0.033084) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963967 / 1.841788 (-0.877821) | 11.847856 / 8.074308 (3.773548) | 10.584088 / 10.191392 (0.392696) | 0.143953 / 0.680424 (-0.536471) | 0.014355 / 0.534201 (-0.519846) | 0.286936 / 0.579283 (-0.292347) | 0.269039 / 0.434364 (-0.165325) | 0.324531 / 0.540337 (-0.215807) | 0.443187 / 1.386936 (-0.943749) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005448 / 0.011353 (-0.005905) | 0.003742 / 0.011008 (-0.007266) | 0.048808 / 0.038508 (0.010300) | 0.055409 / 0.023109 (0.032300) | 0.271574 / 0.275898 (-0.004324) | 0.295599 / 0.323480 (-0.027881) | 0.004208 / 0.007986 (-0.003778) | 0.002683 / 0.004328 (-0.001645) | 0.048813 / 0.004250 (0.044562) | 0.043672 / 0.037052 (0.006620) | 0.282173 / 0.258489 (0.023684) | 0.295447 / 0.293841 (0.001606) | 0.030461 / 0.128546 (-0.098086) | 0.010988 / 0.075646 (-0.064658) | 0.057050 / 0.419271 (-0.362221) | 0.033329 / 0.043533 (-0.010203) | 0.269700 / 0.255139 (0.014561) | 0.287099 / 0.283200 (0.003899) | 0.018203 / 0.141683 (-0.123480) | 1.142584 / 1.452155 (-0.309571) | 1.181848 / 1.492716 (-0.310869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096958 / 0.018006 (0.078952) | 0.310563 / 0.000490 (0.310074) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022213 / 0.037411 (-0.015199) | 0.072054 / 0.014526 (0.057528) | 0.086393 / 0.176557 (-0.090163) | 0.122431 / 0.737135 (-0.614704) | 0.085298 / 0.296338 (-0.211041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290823 / 0.215209 (0.075614) | 2.838026 / 2.077655 (0.760371) | 1.541425 / 1.504120 (0.037305) | 1.431903 / 1.541195 (-0.109292) | 1.476567 / 1.468490 (0.008077) | 0.557856 / 4.584777 (-4.026920) | 2.449101 / 3.745712 (-1.296611) | 2.924633 / 5.269862 (-2.345229) | 1.824420 / 4.565676 (-2.741256) | 0.063735 / 0.424275 (-0.360540) | 0.005025 / 0.007607 (-0.002582) | 0.349458 / 0.226044 (0.123413) | 3.468627 / 2.268929 (1.199699) | 1.925173 / 55.444624 (-53.519451) | 1.655038 / 6.876477 (-5.221439) | 1.698612 / 2.142072 (-0.443460) | 0.643623 / 4.805227 (-4.161604) | 0.116128 / 6.500664 (-6.384536) | 0.042283 / 0.075469 (-0.033186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963029 / 1.841788 (-0.878758) | 13.273985 / 8.074308 (5.199677) | 11.400884 / 10.191392 (1.209492) | 0.152635 / 0.680424 (-0.527788) | 0.016442 / 0.534201 (-0.517759) | 0.289272 / 0.579283 (-0.290012) | 0.285286 / 0.434364 (-0.149078) | 0.330028 / 0.540337 (-0.210310) | 0.596500 / 1.386936 (-0.790436) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c427c4b1dcf84c898ae62dc521bf446bb35e0e7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005124 / 0.011353 (-0.006229) | 0.003832 / 0.011008 (-0.007176) | 0.062806 / 0.038508 (0.024298) | 0.053137 / 0.023109 (0.030028) | 0.241155 / 0.275898 (-0.034743) | 0.260521 / 0.323480 (-0.062959) | 0.004005 / 0.007986 (-0.003981) | 0.002754 / 0.004328 (-0.001575) | 0.048934 / 0.004250 (0.044684) | 0.039438 / 0.037052 (0.002385) | 0.242534 / 0.258489 (-0.015955) | 0.275498 / 0.293841 (-0.018343) | 0.027338 / 0.128546 (-0.101208) | 0.010809 / 0.075646 (-0.064837) | 0.206986 / 0.419271 (-0.212285) | 0.035614 / 0.043533 (-0.007919) | 0.245780 / 0.255139 (-0.009359) | 0.259793 / 0.283200 (-0.023407) | 0.018108 / 0.141683 (-0.123575) | 1.103412 / 1.452155 (-0.348742) | 1.162940 / 1.492716 (-0.329776) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092463 / 0.018006 (0.074457) | 0.299516 / 0.000490 (0.299026) | 0.000210 / 0.000200 (0.000010) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018261 / 0.037411 (-0.019150) | 0.060178 / 0.014526 (0.045652) | 0.073043 / 0.176557 (-0.103513) | 0.120541 / 0.737135 (-0.616594) | 0.074972 / 0.296338 (-0.221367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287288 / 0.215209 (0.072078) | 2.814915 / 2.077655 (0.737260) | 1.520221 / 1.504120 (0.016101) | 1.396045 / 1.541195 (-0.145149) | 1.419662 / 
1.468490 (-0.048828) | 0.589247 / 4.584777 (-3.995530) | 2.411101 / 3.745712 (-1.334611) | 2.777709 / 5.269862 (-2.492153) | 1.750386 / 4.565676 (-2.815291) | 0.063734 / 0.424275 (-0.360541) | 0.005021 / 0.007607 (-0.002586) | 0.338817 / 0.226044 (0.112773) | 3.371218 / 2.268929 (1.102289) | 1.892691 / 55.444624 (-53.551934) | 1.599039 / 6.876477 (-5.277438) | 1.574726 / 2.142072 (-0.567346) | 0.665623 / 4.805227 (-4.139604) | 0.118628 / 6.500664 (-6.382036) | 0.041803 / 0.075469 (-0.033666) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948696 / 1.841788 (-0.893092) | 11.502916 / 8.074308 (3.428608) | 10.301174 / 10.191392 (0.109782) | 0.141752 / 0.680424 (-0.538672) | 0.014064 / 0.534201 (-0.520137) | 0.286701 / 0.579283 (-0.292583) | 0.265805 / 0.434364 (-0.168559) | 0.328420 / 0.540337 (-0.211917) | 0.433619 / 1.386936 (-0.953317) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005262 / 0.011353 (-0.006091) | 0.003361 / 0.011008 (-0.007648) | 0.049525 / 0.038508 (0.011016) | 0.048950 / 0.023109 (0.025841) | 0.273617 / 0.275898 (-0.002281) | 0.296614 / 0.323480 (-0.026866) | 0.004014 / 0.007986 (-0.003971) | 0.002630 / 0.004328 (-0.001698) | 0.048203 / 0.004250 (0.043952) | 0.040912 / 0.037052 (0.003860) | 0.279736 / 0.258489 (0.021247) | 0.301671 / 0.293841 (0.007830) | 0.028546 / 0.128546 (-0.100000) | 0.010440 / 0.075646 (-0.065206) | 0.057869 / 0.419271 (-0.361402) | 0.032876 / 0.043533 (-0.010657) | 0.277649 / 0.255139 (0.022510) | 0.296565 / 0.283200 (0.013365) | 0.017558 / 0.141683 (-0.124125) | 1.155005 / 1.452155 (-0.297149) | 1.204827 / 1.492716 (-0.287889) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093248 / 0.018006 (0.075242) | 0.302721 / 0.000490 (0.302231) | 0.000218 / 0.000200 (0.000018) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021882 / 0.037411 (-0.015530) | 0.068259 / 0.014526 (0.053733) | 0.080982 / 0.176557 (-0.095574) | 0.119386 / 0.737135 (-0.617750) | 0.081745 / 0.296338 (-0.214593) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297812 / 0.215209 (0.082603) | 2.909938 / 2.077655 (0.832283) | 1.603736 / 1.504120 (0.099616) | 1.482989 / 1.541195 (-0.058206) | 1.495107 / 1.468490 (0.026617) | 0.562275 / 4.584777 (-4.022502) | 2.424812 / 3.745712 (-1.320901) | 2.759127 / 5.269862 (-2.510735) | 1.733283 / 4.565676 (-2.832394) | 0.063144 / 0.424275 (-0.361131) | 0.004949 / 0.007607 (-0.002658) | 0.352756 / 0.226044 (0.126711) | 3.496028 / 2.268929 (1.227100) | 1.982804 / 55.444624 (-53.461820) | 1.689787 / 6.876477 (-5.186690) | 1.672699 / 2.142072 (-0.469373) | 0.660169 / 4.805227 (-4.145059) | 0.116535 / 6.500664 (-6.384129) | 0.040616 / 0.075469 (-0.034853) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975055 / 1.841788 (-0.866733) | 11.919295 / 8.074308 (3.844986) | 10.779188 / 10.191392 (0.587796) | 0.143106 / 0.680424 (-0.537318) | 0.015159 / 0.534201 (-0.519041) | 0.289734 / 0.579283 (-0.289549) | 0.278637 / 0.434364 (-0.155727) | 0.328159 / 0.540337 (-0.212178) | 0.570560 / 1.386936 (-0.816376) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#241500208da5fef64ad6ddc1cc5ab2be18f2f76d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005155 / 0.011353 (-0.006198) | 0.003589 / 0.011008 (-0.007419) | 0.064440 / 0.038508 (0.025932) | 0.051020 / 0.023109 (0.027911) | 0.246099 / 0.275898 (-0.029799) | 0.273383 / 0.323480 (-0.050097) | 0.003984 / 0.007986 (-0.004002) | 0.002791 / 0.004328 (-0.001537) | 0.049076 / 0.004250 (0.044826) | 0.037975 / 0.037052 (0.000922) | 0.253709 / 0.258489 (-0.004780) | 0.281730 / 0.293841 (-0.012111) | 0.028060 / 0.128546 (-0.100486) | 0.010808 / 0.075646 (-0.064838) | 0.206663 / 0.419271 (-0.212609) | 0.035989 / 0.043533 (-0.007544) | 0.252635 / 0.255139 (-0.002504) | 0.280042 / 0.283200 (-0.003158) | 0.016982 / 0.141683 (-0.124700) | 1.098679 / 1.452155 (-0.353475) | 1.157051 / 1.492716 (-0.335666) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098238 / 0.018006 (0.080232) | 0.311990 / 0.000490 (0.311501) | 0.000229 / 0.000200 (0.000029) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018270 / 0.037411 (-0.019141) | 0.062711 / 0.014526 (0.048186) | 0.074381 / 0.176557 (-0.102175) | 0.119946 / 0.737135 (-0.617189) | 0.075013 / 0.296338 (-0.221325) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282106 / 0.215209 (0.066897) | 2.752653 / 2.077655 (0.674999) | 1.488771 / 1.504120 (-0.015349) | 1.372552 / 1.541195 (-0.168643) | 1.390270 / 
1.468490 (-0.078220) | 0.558928 / 4.584777 (-4.025849) | 2.411821 / 3.745712 (-1.333891) | 2.771441 / 5.269862 (-2.498421) | 1.747507 / 4.565676 (-2.818169) | 0.061360 / 0.424275 (-0.362915) | 0.004956 / 0.007607 (-0.002652) | 0.332330 / 0.226044 (0.106286) | 3.301405 / 2.268929 (1.032476) | 1.786726 / 55.444624 (-53.657899) | 1.529974 / 6.876477 (-5.346502) | 1.538412 / 2.142072 (-0.603660) | 0.637590 / 4.805227 (-4.167637) | 0.117215 / 6.500664 (-6.383449) | 0.042186 / 0.075469 (-0.033283) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945574 / 1.841788 (-0.896213) | 11.616152 / 8.074308 (3.541844) | 10.365114 / 10.191392 (0.173722) | 0.130358 / 0.680424 (-0.550066) | 0.013587 / 0.534201 (-0.520614) | 0.306024 / 0.579283 (-0.273259) | 0.270577 / 0.434364 (-0.163787) | 0.340768 / 0.540337 (-0.199569) | 0.460841 / 1.386936 (-0.926095) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005254 / 0.011353 (-0.006099) | 0.003137 / 0.011008 (-0.007871) | 0.048302 / 0.038508 (0.009794) | 0.051952 / 0.023109 (0.028843) | 0.269078 / 0.275898 (-0.006820) | 0.292044 / 0.323480 (-0.031436) | 0.003985 / 0.007986 (-0.004000) | 0.002597 / 0.004328 (-0.001732) | 0.049998 / 0.004250 (0.045747) | 0.040227 / 0.037052 (0.003174) | 0.274714 / 0.258489 (0.016225) | 0.298160 / 0.293841 (0.004319) | 0.028857 / 0.128546 (-0.099690) | 0.010545 / 0.075646 (-0.065101) | 0.057234 / 0.419271 (-0.362038) | 0.032515 / 0.043533 (-0.011018) | 0.271526 / 0.255139 (0.016387) | 0.288556 / 0.283200 (0.005356) | 0.018155 / 0.141683 (-0.123527) | 1.201906 / 1.452155 (-0.250248) | 1.220068 / 1.492716 (-0.272648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100098 / 0.018006 (0.082092) | 0.311081 / 0.000490 (0.310591) | 0.000231 / 0.000200 (0.000032) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022349 / 0.037411 (-0.015062) | 0.069698 / 0.014526 (0.055172) | 0.081334 / 0.176557 (-0.095222) | 0.120847 / 0.737135 (-0.616289) | 0.082091 / 0.296338 (-0.214248) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293810 / 0.215209 (0.078601) | 2.844191 / 2.077655 (0.766536) | 1.594494 / 1.504120 (0.090374) | 1.486531 / 1.541195 (-0.054664) | 1.506307 / 1.468490 (0.037817) | 0.560247 / 4.584777 (-4.024530) | 2.478309 / 3.745712 (-1.267403) | 2.759024 / 5.269862 (-2.510837) | 1.733063 / 4.565676 (-2.832613) | 0.061838 / 0.424275 (-0.362438) | 0.004869 / 0.007607 (-0.002738) | 0.347267 / 0.226044 (0.121222) | 3.407737 / 2.268929 (1.138808) | 1.944420 / 55.444624 (-53.500204) | 1.660060 / 6.876477 (-5.216417) | 1.704219 / 2.142072 (-0.437854) | 0.646969 / 4.805227 (-4.158258) | 0.115750 / 6.500664 (-6.384914) | 0.041614 / 0.075469 (-0.033855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972537 / 1.841788 (-0.869251) | 12.013530 / 8.074308 (3.939222) | 10.650215 / 10.191392 (0.458823) | 0.132877 / 0.680424 (-0.547547) | 0.016828 / 0.534201 (-0.517372) | 0.288321 / 0.579283 (-0.290962) | 0.284203 / 0.434364 (-0.150161) | 0.324016 / 0.540337 (-0.216321) | 0.575403 / 1.386936 (-0.811533) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#17ec1a7a610adba3db44f316a930b979872d4ef7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005925 / 0.011353 (-0.005427) | 0.005138 / 0.011008 (-0.005870) | 0.069865 / 0.038508 (0.031356) | 0.067181 / 0.023109 (0.044072) | 0.309642 / 0.275898 (0.033743) | 0.302919 / 0.323480 (-0.020561) | 0.003365 / 0.007986 (-0.004620) | 0.003148 / 0.004328 (-0.001180) | 0.054102 / 0.004250 (0.049852) | 0.044196 / 0.037052 (0.007143) | 0.306882 / 0.258489 (0.048393) | 0.315153 / 0.293841 (0.021313) | 0.030458 / 0.128546 (-0.098089) | 0.011773 / 0.075646 (-0.063874) | 0.235075 / 0.419271 (-0.184196) | 0.040840 / 0.043533 (-0.002693) | 0.279897 / 0.255139 (0.024758) | 0.316334 / 0.283200 (0.033135) | 0.020128 / 0.141683 (-0.121555) | 1.237327 / 1.452155 (-0.214828) | 1.290386 / 1.492716 (-0.202331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.118540 / 0.018006 (0.100534) | 0.363282 / 0.000490 (0.362792) | 0.000266 / 0.000200 (0.000066) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021435 / 0.037411 (-0.015977) | 0.068124 / 0.014526 (0.053598) | 0.082747 / 0.176557 (-0.093809) | 0.137179 / 0.737135 (-0.599956) | 0.084815 / 0.296338 (-0.211523) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.307836 / 0.215209 (0.092626) | 2.983444 / 2.077655 (0.905790) | 1.616430 / 1.504120 (0.112310) | 1.466843 / 1.541195 (-0.074351) | 1.512440 / 
1.468490 (0.043950) | 0.652311 / 4.584777 (-3.932466) | 2.676420 / 3.745712 (-1.069292) | 3.265747 / 5.269862 (-2.004115) | 2.028586 / 4.565676 (-2.537090) | 0.071997 / 0.424275 (-0.352278) | 0.007068 / 0.007607 (-0.000539) | 0.367199 / 0.226044 (0.141155) | 3.617970 / 2.268929 (1.349042) | 1.991345 / 55.444624 (-53.453280) | 1.670015 / 6.876477 (-5.206462) | 1.720515 / 2.142072 (-0.421557) | 0.724649 / 4.805227 (-4.080579) | 0.134888 / 6.500664 (-6.365776) | 0.048325 / 0.075469 (-0.027144) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.051058 / 1.841788 (-0.790730) | 13.772809 / 8.074308 (5.698501) | 11.813879 / 10.191392 (1.622487) | 0.160065 / 0.680424 (-0.520359) | 0.016256 / 0.534201 (-0.517945) | 0.320393 / 0.579283 (-0.258890) | 0.314462 / 0.434364 (-0.119901) | 0.371911 / 0.540337 (-0.168427) | 0.506864 / 1.386936 (-0.880072) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005857 / 0.011353 (-0.005496) | 0.004077 / 0.011008 (-0.006931) | 0.056033 / 0.038508 (0.017525) | 0.067622 / 0.023109 (0.044513) | 0.298956 / 0.275898 (0.023058) | 0.323484 / 0.323480 (0.000004) | 0.004825 / 0.007986 (-0.003160) | 0.003120 / 0.004328 (-0.001208) | 0.055227 / 0.004250 (0.050976) | 0.048439 / 0.037052 (0.011387) | 0.303207 / 0.258489 (0.044718) | 0.329478 / 0.293841 (0.035637) | 0.032516 / 0.128546 (-0.096031) | 0.012260 / 0.075646 (-0.063386) | 0.065037 / 0.419271 (-0.354234) | 0.038799 / 0.043533 (-0.004734) | 0.299102 / 0.255139 (0.043963) | 0.318248 / 0.283200 (0.035048) | 0.020190 / 0.141683 (-0.121493) | 1.263479 / 1.452155 (-0.188676) | 1.329788 / 1.492716 (-0.162928) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.119801 / 0.018006 (0.101794) | 0.359618 / 0.000490 (0.359129) | 0.000260 / 0.000200 (0.000060) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026876 / 0.037411 (-0.010535) | 0.080637 / 0.014526 (0.066111) | 0.092260 / 0.176557 (-0.084297) | 0.137260 / 0.737135 (-0.599875) | 0.093309 / 0.296338 (-0.203029) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.329327 / 0.215209 (0.114118) | 3.193014 / 2.077655 (1.115359) | 1.755838 / 1.504120 (0.251718) | 1.612279 / 1.541195 (0.071084) | 1.631958 / 1.468490 (0.163468) | 0.630886 / 4.584777 (-3.953891) | 2.739731 / 3.745712 (-1.005981) | 3.186745 / 5.269862 (-2.083117) | 1.987125 / 4.565676 (-2.578552) | 0.070694 / 0.424275 (-0.353581) | 0.006461 / 0.007607 (-0.001146) | 0.386367 / 0.226044 (0.160323) | 3.815837 / 2.268929 (1.546908) | 2.155904 / 55.444624 (-53.288720) | 1.832575 / 6.876477 (-5.043902) | 1.842097 / 2.142072 (-0.299975) | 0.716394 / 4.805227 (-4.088833) | 0.130796 / 6.500664 (-6.369869) | 0.045674 / 0.075469 (-0.029795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.109117 / 1.841788 (-0.732671) | 14.116582 / 8.074308 (6.042274) | 11.926356 / 10.191392 (1.734964) | 0.150543 / 0.680424 (-0.529881) | 0.017426 / 0.534201 (-0.516775) | 0.323058 / 0.579283 (-0.256225) | 0.330228 / 0.434364 (-0.104136) | 0.372533 / 0.540337 (-0.167804) | 0.661348 / 1.386936 (-0.725588) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04ffd22a30ecc7545234559edd9d23c85c6d84d9 \"CML watermark\")\n",
"Thanks for the review, I took your comments into account !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005477 / 0.011353 (-0.005876) | 0.003509 / 0.011008 (-0.007499) | 0.062884 / 0.038508 (0.024376) | 0.051042 / 0.023109 (0.027933) | 0.285180 / 0.275898 (0.009282) | 0.315353 / 0.323480 (-0.008127) | 0.002943 / 0.007986 (-0.005043) | 0.003286 / 0.004328 (-0.001042) | 0.048885 / 0.004250 (0.044635) | 0.038591 / 0.037052 (0.001539) | 0.288527 / 0.258489 (0.030038) | 0.316102 / 0.293841 (0.022261) | 0.028252 / 0.128546 (-0.100295) | 0.010622 / 0.075646 (-0.065024) | 0.205573 / 0.419271 (-0.213699) | 0.035764 / 0.043533 (-0.007769) | 0.285729 / 0.255139 (0.030590) | 0.304578 / 0.283200 (0.021378) | 0.019862 / 0.141683 (-0.121821) | 1.102866 / 1.452155 (-0.349288) | 1.175161 / 1.492716 (-0.317555) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095253 / 0.018006 (0.077246) | 0.302290 / 0.000490 (0.301800) | 0.000243 / 0.000200 (0.000043) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018680 / 0.037411 (-0.018731) | 0.060375 / 0.014526 (0.045849) | 0.074033 / 0.176557 (-0.102524) | 0.120290 / 0.737135 (-0.616845) | 0.075350 / 0.296338 (-0.220989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277617 / 0.215209 (0.062408) | 2.718201 / 2.077655 (0.640546) | 1.462952 / 1.504120 (-0.041168) | 1.339199 / 1.541195 (-0.201996) | 1.375805 / 
1.468490 (-0.092685) | 0.559956 / 4.584777 (-4.024821) | 2.373865 / 3.745712 (-1.371847) | 2.795732 / 5.269862 (-2.474129) | 1.755490 / 4.565676 (-2.810186) | 0.062002 / 0.424275 (-0.362273) | 0.004935 / 0.007607 (-0.002672) | 0.334786 / 0.226044 (0.108741) | 3.237499 / 2.268929 (0.968571) | 1.787561 / 55.444624 (-53.657064) | 1.513300 / 6.876477 (-5.363176) | 1.549797 / 2.142072 (-0.592275) | 0.643587 / 4.805227 (-4.161640) | 0.117275 / 6.500664 (-6.383389) | 0.042184 / 0.075469 (-0.033285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933366 / 1.841788 (-0.908421) | 11.792282 / 8.074308 (3.717973) | 10.466608 / 10.191392 (0.275216) | 0.142148 / 0.680424 (-0.538275) | 0.014084 / 0.534201 (-0.520117) | 0.287233 / 0.579283 (-0.292050) | 0.266022 / 0.434364 (-0.168342) | 0.326854 / 0.540337 (-0.213483) | 0.451348 / 1.386936 (-0.935588) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005384 / 0.011353 (-0.005969) | 0.003562 / 0.011008 (-0.007446) | 0.049014 / 0.038508 (0.010506) | 0.057480 / 0.023109 (0.034371) | 0.274456 / 0.275898 (-0.001442) | 0.298387 / 0.323480 (-0.025093) | 0.003909 / 0.007986 (-0.004076) | 0.002646 / 0.004328 (-0.001683) | 0.048374 / 0.004250 (0.044124) | 0.040907 / 0.037052 (0.003854) | 0.278267 / 0.258489 (0.019778) | 0.299862 / 0.293841 (0.006021) | 0.029108 / 0.128546 (-0.099439) | 0.010752 / 0.075646 (-0.064894) | 0.057523 / 0.419271 (-0.361749) | 0.032692 / 0.043533 (-0.010841) | 0.276288 / 0.255139 (0.021149) | 0.291572 / 0.283200 (0.008372) | 0.017818 / 0.141683 (-0.123865) | 1.129517 / 1.452155 (-0.322638) | 1.186630 / 1.492716 (-0.306086) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093405 / 0.018006 (0.075399) | 0.301254 / 0.000490 (0.300764) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021793 / 0.037411 (-0.015618) | 0.069033 / 0.014526 (0.054508) | 0.083502 / 0.176557 (-0.093055) | 0.122149 / 0.737135 (-0.614986) | 0.083801 / 0.296338 (-0.212537) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299149 / 0.215209 (0.083940) | 2.936550 / 2.077655 (0.858895) | 1.595766 / 1.504120 (0.091647) | 1.487117 / 1.541195 (-0.054078) | 1.494606 / 1.468490 (0.026116) | 0.569346 / 4.584777 (-4.015431) | 2.445642 / 3.745712 (-1.300070) | 2.805696 / 5.269862 (-2.464165) | 1.743796 / 4.565676 (-2.821881) | 0.062695 / 0.424275 (-0.361580) | 0.004885 / 0.007607 (-0.002723) | 0.354186 / 0.226044 (0.128142) | 3.487926 / 2.268929 (1.218997) | 1.965703 / 55.444624 (-53.478922) | 1.682284 / 6.876477 (-5.194193) | 1.705586 / 2.142072 (-0.436487) | 0.655099 / 4.805227 (-4.150128) | 0.116441 / 6.500664 (-6.384223) | 0.040851 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967361 / 1.841788 (-0.874427) | 12.037718 / 8.074308 (3.963409) | 10.599761 / 10.191392 (0.408369) | 0.143127 / 0.680424 (-0.537297) | 0.015063 / 0.534201 (-0.519138) | 0.286894 / 0.579283 (-0.292389) | 0.301505 / 0.434364 (-0.132859) | 0.324339 / 0.540337 (-0.215999) | 0.591782 / 1.386936 (-0.795154) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b96ff08d4aa6dbafc8a10a9d03dfabe236378bcd \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005337 / 0.011353 (-0.006015) | 0.004074 / 0.011008 (-0.006934) | 0.062653 / 0.038508 (0.024145) | 0.054295 / 0.023109 (0.031186) | 0.248284 / 0.275898 (-0.027614) | 0.271604 / 0.323480 (-0.051876) | 0.003931 / 0.007986 (-0.004055) | 0.002907 / 0.004328 (-0.001422) | 0.047991 / 0.004250 (0.043740) | 0.042842 / 0.037052 (0.005790) | 0.253648 / 0.258489 (-0.004841) | 0.282546 / 0.293841 (-0.011295) | 0.028005 / 0.128546 (-0.100541) | 0.010734 / 0.075646 (-0.064912) | 0.210023 / 0.419271 (-0.209248) | 0.035940 / 0.043533 (-0.007592) | 0.250766 / 0.255139 (-0.004373) | 0.267644 / 0.283200 (-0.015556) | 0.020451 / 0.141683 (-0.121232) | 1.114972 / 1.452155 (-0.337183) | 1.159823 / 1.492716 (-0.332893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095527 / 0.018006 (0.077521) | 0.303321 / 0.000490 (0.302831) | 0.000216 / 0.000200 (0.000016) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018725 / 0.037411 (-0.018686) | 0.062537 / 0.014526 (0.048011) | 0.073091 / 0.176557 (-0.103466) | 0.119570 / 0.737135 (-0.617565) | 0.074863 / 0.296338 (-0.221476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284936 / 0.215209 (0.069727) | 2.802498 / 2.077655 (0.724843) | 1.493316 / 1.504120 (-0.010804) | 1.372319 / 1.541195 (-0.168875) | 1.403657 / 
1.468490 (-0.064833) | 0.569303 / 4.584777 (-4.015474) | 2.402498 / 3.745712 (-1.343214) | 2.834778 / 5.269862 (-2.435084) | 1.791312 / 4.565676 (-2.774365) | 0.062526 / 0.424275 (-0.361749) | 0.004947 / 0.007607 (-0.002660) | 0.345141 / 0.226044 (0.119097) | 3.371863 / 2.268929 (1.102934) | 1.846023 / 55.444624 (-53.598602) | 1.596368 / 6.876477 (-5.280109) | 1.615902 / 2.142072 (-0.526170) | 0.644333 / 4.805227 (-4.160894) | 0.119460 / 6.500664 (-6.381204) | 0.049122 / 0.075469 (-0.026347) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951839 / 1.841788 (-0.889948) | 11.677074 / 8.074308 (3.602766) | 10.562586 / 10.191392 (0.371194) | 0.143633 / 0.680424 (-0.536791) | 0.014157 / 0.534201 (-0.520044) | 0.289141 / 0.579283 (-0.290142) | 0.264719 / 0.434364 (-0.169645) | 0.327862 / 0.540337 (-0.212476) | 0.451215 / 1.386936 (-0.935721) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005343 / 0.011353 (-0.006010) | 0.003522 / 0.011008 (-0.007486) | 0.049354 / 0.038508 (0.010846) | 0.051441 / 0.023109 (0.028332) | 0.259350 / 0.275898 (-0.016548) | 0.288946 / 0.323480 (-0.034534) | 0.004052 / 0.007986 (-0.003934) | 0.002690 / 0.004328 (-0.001639) | 0.049996 / 0.004250 (0.045746) | 0.040224 / 0.037052 (0.003171) | 0.264588 / 0.258489 (0.006099) | 0.296474 / 0.293841 (0.002633) | 0.028868 / 0.128546 (-0.099679) | 0.010917 / 0.075646 (-0.064730) | 0.057866 / 0.419271 (-0.361405) | 0.032610 / 0.043533 (-0.010923) | 0.260657 / 0.255139 (0.005518) | 0.276947 / 0.283200 (-0.006253) | 0.018877 / 0.141683 (-0.122806) | 1.126205 / 1.452155 (-0.325949) | 1.206173 / 1.492716 (-0.286543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094464 / 0.018006 (0.076458) | 0.304473 / 0.000490 (0.303984) | 0.000231 / 0.000200 (0.000031) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021472 / 0.037411 (-0.015939) | 0.070864 / 0.014526 (0.056338) | 0.086607 / 0.176557 (-0.089950) | 0.120679 / 0.737135 (-0.616456) | 0.084271 / 0.296338 (-0.212068) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296448 / 0.215209 (0.081239) | 2.893996 / 2.077655 (0.816341) | 1.573409 / 1.504120 (0.069289) | 1.438799 / 1.541195 (-0.102396) | 1.461241 / 1.468490 (-0.007249) | 0.566737 / 4.584777 (-4.018040) | 2.425709 / 3.745712 (-1.320003) | 2.826764 / 5.269862 (-2.443098) | 1.785330 / 4.565676 (-2.780347) | 0.063721 / 0.424275 (-0.360554) | 0.005158 / 0.007607 (-0.002449) | 0.354961 / 0.226044 (0.128916) | 3.457499 / 2.268929 (1.188570) | 1.931374 / 55.444624 (-53.513251) | 1.646515 / 6.876477 (-5.229962) | 1.629891 / 2.142072 (-0.512182) | 0.648922 / 4.805227 (-4.156305) | 0.114953 / 6.500664 (-6.385711) | 0.040997 / 0.075469 (-0.034472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951049 / 1.841788 (-0.890739) | 12.258298 / 8.074308 (4.183990) | 10.663309 / 10.191392 (0.471917) | 0.142933 / 0.680424 (-0.537491) | 0.015927 / 0.534201 (-0.518273) | 0.286914 / 0.579283 (-0.292369) | 0.286600 / 0.434364 (-0.147764) | 0.324464 / 0.540337 (-0.215874) | 0.575075 / 1.386936 (-0.811861) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed47b9d5e9c6aa03a0aa07d8abfd3fa8241da353 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005298 / 0.011353 (-0.006055) | 0.003645 / 0.011008 (-0.007363) | 0.061629 / 0.038508 (0.023121) | 0.052322 / 0.023109 (0.029212) | 0.242579 / 0.275898 (-0.033319) | 0.263525 / 0.323480 (-0.059955) | 0.002794 / 0.007986 (-0.005192) | 0.002152 / 0.004328 (-0.002177) | 0.048301 / 0.004250 (0.044050) | 0.038177 / 0.037052 (0.001125) | 0.247724 / 0.258489 (-0.010765) | 0.274455 / 0.293841 (-0.019386) | 0.026992 / 0.128546 (-0.101555) | 0.010110 / 0.075646 (-0.065536) | 0.205662 / 0.419271 (-0.213609) | 0.034901 / 0.043533 (-0.008632) | 0.241920 / 0.255139 (-0.013219) | 0.262048 / 0.283200 (-0.021152) | 0.019111 / 0.141683 (-0.122572) | 1.127600 / 1.452155 (-0.324555) | 1.193931 / 1.492716 (-0.298786) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090321 / 0.018006 (0.072315) | 0.299046 / 0.000490 (0.298556) | 0.000197 / 0.000200 (-0.000003) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018278 / 0.037411 (-0.019133) | 0.060114 / 0.014526 (0.045588) | 0.073602 / 0.176557 (-0.102954) | 0.119676 / 0.737135 (-0.617459) | 0.074786 / 0.296338 (-0.221552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280385 / 0.215209 (0.065176) | 2.764259 / 2.077655 (0.686604) | 1.501027 / 1.504120 (-0.003093) | 1.376900 / 1.541195 (-0.164295) | 1.390587 / 
1.468490 (-0.077903) | 0.555180 / 4.584777 (-4.029597) | 2.354307 / 3.745712 (-1.391405) | 2.755862 / 5.269862 (-2.514000) | 1.714771 / 4.565676 (-2.850906) | 0.062507 / 0.424275 (-0.361768) | 0.004974 / 0.007607 (-0.002633) | 0.333900 / 0.226044 (0.107856) | 3.266922 / 2.268929 (0.997994) | 1.805401 / 55.444624 (-53.639223) | 1.526970 / 6.876477 (-5.349507) | 1.539425 / 2.142072 (-0.602647) | 0.629364 / 4.805227 (-4.175863) | 0.114929 / 6.500664 (-6.385735) | 0.041258 / 0.075469 (-0.034211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968601 / 1.841788 (-0.873187) | 11.260937 / 8.074308 (3.186629) | 10.393839 / 10.191392 (0.202447) | 0.127988 / 0.680424 (-0.552436) | 0.014564 / 0.534201 (-0.519637) | 0.286560 / 0.579283 (-0.292723) | 0.260493 / 0.434364 (-0.173871) | 0.330949 / 0.540337 (-0.209388) | 0.435798 / 1.386936 (-0.951138) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003030 / 0.011008 (-0.007978) | 0.048513 / 0.038508 (0.010005) | 0.049501 / 0.023109 (0.026392) | 0.270545 / 0.275898 (-0.005353) | 0.289128 / 0.323480 (-0.034352) | 0.003925 / 0.007986 (-0.004061) | 0.002568 / 0.004328 (-0.001761) | 0.047692 / 0.004250 (0.043442) | 0.039854 / 0.037052 (0.002802) | 0.272654 / 0.258489 (0.014165) | 0.296275 / 0.293841 (0.002434) | 0.029027 / 0.128546 (-0.099519) | 0.010335 / 0.075646 (-0.065311) | 0.056726 / 0.419271 (-0.362546) | 0.033257 / 0.043533 (-0.010275) | 0.272672 / 0.255139 (0.017533) | 0.286298 / 0.283200 (0.003098) | 0.017877 / 0.141683 (-0.123806) | 1.150322 / 1.452155 (-0.301833) | 1.221031 / 1.492716 (-0.271685) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102838 / 0.018006 (0.084832) | 0.298810 / 0.000490 (0.298320) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021232 / 0.037411 (-0.016180) | 0.067949 / 0.014526 (0.053423) | 0.116487 / 0.176557 (-0.060070) | 0.124035 / 0.737135 (-0.613100) | 0.081075 / 0.296338 (-0.215263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289098 / 0.215209 (0.073889) | 2.844476 / 2.077655 (0.766821) | 1.609576 / 1.504120 (0.105456) | 1.480453 / 1.541195 (-0.060742) | 1.489672 / 1.468490 (0.021182) | 0.589661 / 4.584777 (-3.995116) | 2.453804 / 3.745712 (-1.291908) | 2.722381 / 5.269862 (-2.547480) | 1.720251 / 4.565676 (-2.845425) | 0.066085 / 0.424275 (-0.358190) | 0.004943 / 0.007607 (-0.002664) | 0.355149 / 0.226044 (0.129104) | 3.444323 / 2.268929 (1.175395) | 1.971157 / 55.444624 (-53.473467) | 1.683029 / 6.876477 (-5.193448) | 1.672798 / 2.142072 (-0.469274) | 0.644812 / 4.805227 (-4.160416) | 0.115098 / 6.500664 (-6.385566) | 0.039883 / 0.075469 (-0.035586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960454 / 1.841788 (-0.881334) | 11.604732 / 8.074308 (3.530424) | 10.405481 / 10.191392 (0.214089) | 0.129146 / 0.680424 (-0.551278) | 0.014945 / 0.534201 (-0.519256) | 0.286239 / 0.579283 (-0.293044) | 0.281041 / 0.434364 (-0.153323) | 0.320448 / 0.540337 (-0.219890) | 0.554304 / 1.386936 (-0.832632) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b2cfb7859b029654829c4dfee230812ddab1f104 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005510 / 0.011353 (-0.005843) | 0.003575 / 0.011008 (-0.007433) | 0.062232 / 0.038508 (0.023724) | 0.051115 / 0.023109 (0.028006) | 0.250709 / 0.275898 (-0.025189) | 0.274837 / 0.323480 (-0.048642) | 0.002972 / 0.007986 (-0.005014) | 0.002708 / 0.004328 (-0.001621) | 0.048088 / 0.004250 (0.043838) | 0.038588 / 0.037052 (0.001535) | 0.252550 / 0.258489 (-0.005939) | 0.285238 / 0.293841 (-0.008603) | 0.027867 / 0.128546 (-0.100679) | 0.011000 / 0.075646 (-0.064646) | 0.206918 / 0.419271 (-0.212354) | 0.035711 / 0.043533 (-0.007822) | 0.255306 / 0.255139 (0.000167) | 0.298636 / 0.283200 (0.015436) | 0.018222 / 0.141683 (-0.123461) | 1.122276 / 1.452155 (-0.329879) | 1.196471 / 1.492716 (-0.296245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092072 / 0.018006 (0.074066) | 0.301469 / 0.000490 (0.300979) | 0.000225 / 0.000200 (0.000025) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018672 / 0.037411 (-0.018739) | 0.060235 / 0.014526 (0.045709) | 0.074036 / 0.176557 (-0.102521) | 0.119578 / 0.737135 (-0.617557) | 0.073605 / 0.296338 (-0.222734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286474 / 0.215209 (0.071264) | 2.779427 / 2.077655 (0.701772) | 1.478746 / 1.504120 (-0.025373) | 1.362692 / 1.541195 (-0.178503) | 1.388194 / 
1.468490 (-0.080296) | 0.560707 / 4.584777 (-4.024070) | 2.352846 / 3.745712 (-1.392866) | 2.784400 / 5.269862 (-2.485461) | 1.775642 / 4.565676 (-2.790035) | 0.062324 / 0.424275 (-0.361951) | 0.004938 / 0.007607 (-0.002669) | 0.334149 / 0.226044 (0.108105) | 3.319446 / 2.268929 (1.050517) | 1.810369 / 55.444624 (-53.634255) | 1.559462 / 6.876477 (-5.317014) | 1.611199 / 2.142072 (-0.530873) | 0.655984 / 4.805227 (-4.149244) | 0.118508 / 6.500664 (-6.382156) | 0.043661 / 0.075469 (-0.031808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935046 / 1.841788 (-0.906742) | 11.413501 / 8.074308 (3.339192) | 10.392314 / 10.191392 (0.200922) | 0.131507 / 0.680424 (-0.548917) | 0.014827 / 0.534201 (-0.519374) | 0.289069 / 0.579283 (-0.290214) | 0.268288 / 0.434364 (-0.166076) | 0.326843 / 0.540337 (-0.213495) | 0.441283 / 1.386936 (-0.945653) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005375 / 0.011353 (-0.005978) | 0.003549 / 0.011008 (-0.007459) | 0.048996 / 0.038508 (0.010488) | 0.051408 / 0.023109 (0.028298) | 0.272265 / 0.275898 (-0.003633) | 0.293228 / 0.323480 (-0.030252) | 0.004147 / 0.007986 (-0.003839) | 0.002673 / 0.004328 (-0.001655) | 0.048116 / 0.004250 (0.043865) | 0.039926 / 0.037052 (0.002874) | 0.276987 / 0.258489 (0.018498) | 0.302955 / 0.293841 (0.009115) | 0.029488 / 0.128546 (-0.099058) | 0.010797 / 0.075646 (-0.064849) | 0.057552 / 0.419271 (-0.361720) | 0.032827 / 0.043533 (-0.010706) | 0.270888 / 0.255139 (0.015749) | 0.289136 / 0.283200 (0.005937) | 0.018815 / 0.141683 (-0.122868) | 1.148624 / 1.452155 (-0.303530) | 1.191184 / 1.492716 (-0.301532) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091712 / 0.018006 (0.073706) | 0.311198 / 0.000490 (0.310708) | 0.000226 / 0.000200 (0.000026) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022097 / 0.037411 (-0.015314) | 0.070641 / 0.014526 (0.056116) | 0.080084 / 0.176557 (-0.096472) | 0.118998 / 0.737135 (-0.618137) | 0.081827 / 0.296338 (-0.214512) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298599 / 0.215209 (0.083390) | 2.884759 / 2.077655 (0.807105) | 1.630794 / 1.504120 (0.126674) | 1.454309 / 1.541195 (-0.086886) | 1.466795 / 1.468490 (-0.001695) | 0.565405 / 4.584777 (-4.019372) | 2.460883 / 3.745712 (-1.284829) | 2.764193 / 5.269862 (-2.505668) | 1.734270 / 4.565676 (-2.831407) | 0.063408 / 0.424275 (-0.360867) | 0.004887 / 0.007607 (-0.002720) | 0.347762 / 0.226044 (0.121717) | 3.458385 / 2.268929 (1.189457) | 1.965434 / 55.444624 (-53.479190) | 1.671047 / 6.876477 (-5.205430) | 1.665642 / 2.142072 (-0.476430) | 0.640665 / 4.805227 (-4.164562) | 0.116025 / 6.500664 (-6.384639) | 0.040147 / 0.075469 (-0.035322) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982194 / 1.841788 (-0.859593) | 11.983487 / 8.074308 (3.909179) | 10.660605 / 10.191392 (0.469213) | 0.140647 / 0.680424 (-0.539777) | 0.015870 / 0.534201 (-0.518331) | 0.287032 / 0.579283 (-0.292251) | 0.276629 / 0.434364 (-0.157735) | 0.331171 / 0.540337 (-0.209166) | 0.575346 / 1.386936 (-0.811590) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#56433c2f6a42d5fcc5acb46c6275911c29afc371 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005014 / 0.011353 (-0.006339) | 0.003434 / 0.011008 (-0.007574) | 0.063283 / 0.038508 (0.024775) | 0.048068 / 0.023109 (0.024959) | 0.239521 / 0.275898 (-0.036377) | 0.265294 / 0.323480 (-0.058186) | 0.003790 / 0.007986 (-0.004196) | 0.002577 / 0.004328 (-0.001751) | 0.048618 / 0.004250 (0.044368) | 0.037427 / 0.037052 (0.000375) | 0.245263 / 0.258489 (-0.013226) | 0.276618 / 0.293841 (-0.017223) | 0.026615 / 0.128546 (-0.101931) | 0.010378 / 0.075646 (-0.065268) | 0.205670 / 0.419271 (-0.213601) | 0.035076 / 0.043533 (-0.008457) | 0.245062 / 0.255139 (-0.010077) | 0.264584 / 0.283200 (-0.018616) | 0.017760 / 0.141683 (-0.123922) | 1.148061 / 1.452155 (-0.304094) | 1.192762 / 1.492716 (-0.299955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090870 / 0.018006 (0.072864) | 0.305458 / 0.000490 (0.304968) | 0.000207 / 0.000200 (0.000007) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018597 / 0.037411 (-0.018814) | 0.060349 / 0.014526 (0.045823) | 0.074854 / 0.176557 (-0.101702) | 0.123243 / 0.737135 (-0.613892) | 0.075843 / 0.296338 (-0.220496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275855 / 0.215209 (0.060645) | 2.723965 / 2.077655 (0.646311) | 1.436010 / 1.504120 (-0.068110) | 1.323495 / 1.541195 (-0.217700) | 1.356234 / 
1.468490 (-0.112256) | 0.564388 / 4.584777 (-4.020389) | 2.390180 / 3.745712 (-1.355532) | 2.782863 / 5.269862 (-2.486998) | 1.765048 / 4.565676 (-2.800628) | 0.062680 / 0.424275 (-0.361595) | 0.004929 / 0.007607 (-0.002678) | 0.337578 / 0.226044 (0.111533) | 3.316780 / 2.268929 (1.047851) | 1.803829 / 55.444624 (-53.640795) | 1.524585 / 6.876477 (-5.351891) | 1.549695 / 2.142072 (-0.592377) | 0.638053 / 4.805227 (-4.167174) | 0.116983 / 6.500664 (-6.383681) | 0.042251 / 0.075469 (-0.033218) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946978 / 1.841788 (-0.894810) | 11.809483 / 8.074308 (3.735175) | 10.459974 / 10.191392 (0.268582) | 0.130015 / 0.680424 (-0.550409) | 0.013843 / 0.534201 (-0.520358) | 0.286972 / 0.579283 (-0.292311) | 0.268904 / 0.434364 (-0.165460) | 0.325591 / 0.540337 (-0.214746) | 0.439233 / 1.386936 (-0.947703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005804 / 0.011353 (-0.005549) | 0.003431 / 0.011008 (-0.007577) | 0.049041 / 0.038508 (0.010533) | 0.054758 / 0.023109 (0.031649) | 0.262330 / 0.275898 (-0.013568) | 0.288872 / 0.323480 (-0.034608) | 0.004016 / 0.007986 (-0.003970) | 0.002606 / 0.004328 (-0.001722) | 0.047878 / 0.004250 (0.043628) | 0.045066 / 0.037052 (0.008013) | 0.266310 / 0.258489 (0.007820) | 0.290072 / 0.293841 (-0.003768) | 0.028738 / 0.128546 (-0.099809) | 0.010667 / 0.075646 (-0.064979) | 0.057300 / 0.419271 (-0.361972) | 0.032715 / 0.043533 (-0.010818) | 0.264043 / 0.255139 (0.008904) | 0.278652 / 0.283200 (-0.004547) | 0.017873 / 0.141683 (-0.123810) | 1.125981 / 1.452155 (-0.326174) | 1.168548 / 1.492716 (-0.324168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090997 / 0.018006 (0.072991) | 0.300807 / 0.000490 (0.300317) | 0.000223 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021510 / 0.037411 (-0.015901) | 0.068251 / 0.014526 (0.053725) | 0.082073 / 0.176557 (-0.094484) | 0.120071 / 0.737135 (-0.617064) | 0.082245 / 0.296338 (-0.214093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290601 / 0.215209 (0.075392) | 2.871855 / 2.077655 (0.794200) | 1.558239 / 1.504120 (0.054119) | 1.447767 / 1.541195 (-0.093427) | 1.446851 / 1.468490 (-0.021639) | 0.573990 / 4.584777 (-4.010787) | 2.439859 / 3.745712 (-1.305853) | 2.795899 / 5.269862 (-2.473963) | 1.746751 / 4.565676 (-2.818926) | 0.062100 / 0.424275 (-0.362175) | 0.004948 / 0.007607 (-0.002659) | 0.344281 / 0.226044 (0.118236) | 3.427499 / 2.268929 (1.158570) | 1.940348 / 55.444624 (-53.504276) | 1.660926 / 6.876477 (-5.215551) | 1.669485 / 2.142072 (-0.472588) | 0.634034 / 4.805227 (-4.171193) | 0.114748 / 6.500664 (-6.385916) | 0.041617 / 0.075469 (-0.033852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966411 / 1.841788 (-0.875376) | 12.040753 / 8.074308 (3.966445) | 10.506542 / 10.191392 (0.315150) | 0.129659 / 0.680424 (-0.550764) | 0.015691 / 0.534201 (-0.518510) | 0.286911 / 0.579283 (-0.292372) | 0.273588 / 0.434364 (-0.160776) | 0.333642 / 0.540337 (-0.206695) | 0.568550 / 1.386936 (-0.818386) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b38ed4705263df92ae06d89baab0932ae10065e0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005023 / 0.011353 (-0.006330) | 0.003492 / 0.011008 (-0.007516) | 0.062808 / 0.038508 (0.024300) | 0.051649 / 0.023109 (0.028540) | 0.246871 / 0.275898 (-0.029027) | 0.273430 / 0.323480 (-0.050050) | 0.003851 / 0.007986 (-0.004135) | 0.002643 / 0.004328 (-0.001686) | 0.048499 / 0.004250 (0.044248) | 0.037713 / 0.037052 (0.000661) | 0.256431 / 0.258489 (-0.002058) | 0.306956 / 0.293841 (0.013116) | 0.027116 / 0.128546 (-0.101430) | 0.010769 / 0.075646 (-0.064877) | 0.206218 / 0.419271 (-0.213053) | 0.035592 / 0.043533 (-0.007941) | 0.249629 / 0.255139 (-0.005510) | 0.268438 / 0.283200 (-0.014761) | 0.018557 / 0.141683 (-0.123125) | 1.123988 / 1.452155 (-0.328167) | 1.158196 / 1.492716 (-0.334520) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090221 / 0.018006 (0.072215) | 0.300892 / 0.000490 (0.300402) | 0.000209 / 0.000200 (0.000009) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018305 / 0.037411 (-0.019106) | 0.060294 / 0.014526 (0.045769) | 0.073330 / 0.176557 (-0.103227) | 0.119620 / 0.737135 (-0.617515) | 0.074611 / 0.296338 (-0.221727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285347 / 0.215209 (0.070138) | 2.795144 / 2.077655 (0.717490) | 1.468321 / 1.504120 (-0.035799) | 1.343848 / 1.541195 (-0.197347) | 1.388998 / 
1.468490 (-0.079492) | 0.559609 / 4.584777 (-4.025168) | 2.355056 / 3.745712 (-1.390656) | 2.798763 / 5.269862 (-2.471099) | 1.764371 / 4.565676 (-2.801305) | 0.062563 / 0.424275 (-0.361712) | 0.005101 / 0.007607 (-0.002506) | 0.339205 / 0.226044 (0.113161) | 3.336729 / 2.268929 (1.067800) | 1.801987 / 55.444624 (-53.642637) | 1.526720 / 6.876477 (-5.349757) | 1.539324 / 2.142072 (-0.602749) | 0.635805 / 4.805227 (-4.169422) | 0.138762 / 6.500664 (-6.361902) | 0.042092 / 0.075469 (-0.033377) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.928755 / 1.841788 (-0.913032) | 11.468224 / 8.074308 (3.393916) | 10.784568 / 10.191392 (0.593176) | 0.130332 / 0.680424 (-0.550092) | 0.014203 / 0.534201 (-0.519998) | 0.287125 / 0.579283 (-0.292158) | 0.263921 / 0.434364 (-0.170443) | 0.327824 / 0.540337 (-0.212513) | 0.434679 / 1.386936 (-0.952257) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003411 / 0.011008 (-0.007598) | 0.050122 / 0.038508 (0.011614) | 0.049378 / 0.023109 (0.026269) | 0.272980 / 0.275898 (-0.002918) | 0.298047 / 0.323480 (-0.025433) | 0.003945 / 0.007986 (-0.004041) | 0.002633 / 0.004328 (-0.001696) | 0.048935 / 0.004250 (0.044685) | 0.040157 / 0.037052 (0.003104) | 0.277056 / 0.258489 (0.018567) | 0.299824 / 0.293841 (0.005983) | 0.028997 / 0.128546 (-0.099550) | 0.010868 / 0.075646 (-0.064779) | 0.057895 / 0.419271 (-0.361377) | 0.033522 / 0.043533 (-0.010010) | 0.274912 / 0.255139 (0.019773) | 0.288902 / 0.283200 (0.005702) | 0.018016 / 0.141683 (-0.123667) | 1.116669 / 1.452155 (-0.335485) | 1.175007 / 1.492716 (-0.317710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090169 / 0.018006 (0.072163) | 0.310577 / 0.000490 (0.310087) | 0.000215 / 0.000200 (0.000015) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020448 / 0.037411 (-0.016963) | 0.068216 / 0.014526 (0.053690) | 0.081798 / 0.176557 (-0.094759) | 0.119151 / 0.737135 (-0.617985) | 0.085197 / 0.296338 (-0.211142) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294957 / 0.215209 (0.079748) | 2.874065 / 2.077655 (0.796410) | 1.590963 / 1.504120 (0.086843) | 1.459596 / 1.541195 (-0.081599) | 1.467931 / 1.468490 (-0.000559) | 0.562832 / 4.584777 (-4.021944) | 2.426384 / 3.745712 (-1.319328) | 2.767749 / 5.269862 (-2.502112) | 1.746702 / 4.565676 (-2.818975) | 0.063353 / 0.424275 (-0.360922) | 0.005073 / 0.007607 (-0.002534) | 0.348258 / 0.226044 (0.122213) | 3.390351 / 2.268929 (1.121423) | 1.950092 / 55.444624 (-53.494532) | 1.671227 / 6.876477 (-5.205250) | 1.683349 / 2.142072 (-0.458723) | 0.637613 / 4.805227 (-4.167614) | 0.115172 / 6.500664 (-6.385492) | 0.040202 / 0.075469 (-0.035267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963085 / 1.841788 (-0.878702) | 11.895384 / 8.074308 (3.821076) | 10.609906 / 10.191392 (0.418513) | 0.130865 / 0.680424 (-0.549559) | 0.016020 / 0.534201 (-0.518181) | 0.287540 / 0.579283 (-0.291743) | 0.278204 / 0.434364 (-0.156160) | 0.326007 / 0.540337 (-0.214330) | 0.590881 / 1.386936 (-0.796055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c291e330a7d460ff09d867377de1d4c53fd5394c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005266 / 0.011353 (-0.006087) | 0.003751 / 0.011008 (-0.007257) | 0.063835 / 0.038508 (0.025327) | 0.052688 / 0.023109 (0.029579) | 0.261957 / 0.275898 (-0.013941) | 0.284264 / 0.323480 (-0.039216) | 0.003958 / 0.007986 (-0.004027) | 0.002696 / 0.004328 (-0.001633) | 0.052791 / 0.004250 (0.048540) | 0.038294 / 0.037052 (0.001242) | 0.259488 / 0.258489 (0.000999) | 0.298368 / 0.293841 (0.004528) | 0.028309 / 0.128546 (-0.100237) | 0.010819 / 0.075646 (-0.064827) | 0.208221 / 0.419271 (-0.211050) | 0.036373 / 0.043533 (-0.007160) | 0.257000 / 0.255139 (0.001861) | 0.273108 / 0.283200 (-0.010092) | 0.019674 / 0.141683 (-0.122009) | 1.119196 / 1.452155 (-0.332958) | 1.161613 / 1.492716 (-0.331104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093408 / 0.018006 (0.075401) | 0.302278 / 0.000490 (0.301788) | 0.000212 / 0.000200 (0.000012) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019417 / 0.037411 (-0.017995) | 0.060847 / 0.014526 (0.046321) | 0.075399 / 0.176557 (-0.101158) | 0.121233 / 0.737135 (-0.615902) | 0.076916 / 0.296338 (-0.219422) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281265 / 0.215209 (0.066056) | 2.651726 / 2.077655 (0.574072) | 1.457726 / 1.504120 (-0.046394) | 1.339250 / 1.541195 (-0.201945) | 1.398529 / 
1.468490 (-0.069961) | 0.566574 / 4.584777 (-4.018203) | 2.431576 / 3.745712 (-1.314136) | 2.845884 / 5.269862 (-2.423977) | 1.798051 / 4.565676 (-2.767626) | 0.063619 / 0.424275 (-0.360656) | 0.005286 / 0.007607 (-0.002321) | 0.332834 / 0.226044 (0.106789) | 3.293222 / 2.268929 (1.024293) | 1.837810 / 55.444624 (-53.606815) | 1.568511 / 6.876477 (-5.307966) | 1.627518 / 2.142072 (-0.514555) | 0.643520 / 4.805227 (-4.161708) | 0.118482 / 6.500664 (-6.382182) | 0.049563 / 0.075469 (-0.025906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947767 / 1.841788 (-0.894021) | 11.994999 / 8.074308 (3.920691) | 10.662651 / 10.191392 (0.471259) | 0.142070 / 0.680424 (-0.538354) | 0.014276 / 0.534201 (-0.519925) | 0.288455 / 0.579283 (-0.290828) | 0.266335 / 0.434364 (-0.168029) | 0.328455 / 0.540337 (-0.211883) | 0.440740 / 1.386936 (-0.946196) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005636 / 0.011353 (-0.005717) | 0.003664 / 0.011008 (-0.007344) | 0.050340 / 0.038508 (0.011832) | 0.062795 / 0.023109 (0.039685) | 0.280874 / 0.275898 (0.004976) | 0.314056 / 0.323480 (-0.009424) | 0.004089 / 0.007986 (-0.003897) | 0.002780 / 0.004328 (-0.001548) | 0.048468 / 0.004250 (0.044218) | 0.042924 / 0.037052 (0.005871) | 0.281381 / 0.258489 (0.022892) | 0.308232 / 0.293841 (0.014391) | 0.030294 / 0.128546 (-0.098252) | 0.011098 / 0.075646 (-0.064548) | 0.057535 / 0.419271 (-0.361736) | 0.034217 / 0.043533 (-0.009316) | 0.283022 / 0.255139 (0.027883) | 0.298425 / 0.283200 (0.015225) | 0.019285 / 0.141683 (-0.122398) | 1.117722 / 1.452155 (-0.334433) | 1.185878 / 1.492716 (-0.306839) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094915 / 0.018006 (0.076909) | 0.311782 / 0.000490 (0.311293) | 0.000217 / 0.000200 (0.000017) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022652 / 0.037411 (-0.014759) | 0.069766 / 0.014526 (0.055240) | 0.084495 / 0.176557 (-0.092061) | 0.121295 / 0.737135 (-0.615841) | 0.082447 / 0.296338 (-0.213891) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294286 / 0.215209 (0.079077) | 2.863694 / 2.077655 (0.786039) | 1.578338 / 1.504120 (0.074219) | 1.478737 / 1.541195 (-0.062458) | 1.528569 / 1.468490 (0.060079) | 0.576944 / 4.584777 (-4.007833) | 2.438730 / 3.745712 (-1.306982) | 2.956138 / 5.269862 (-2.313723) | 1.844484 / 4.565676 (-2.721192) | 0.065980 / 0.424275 (-0.358295) | 0.004998 / 0.007607 (-0.002609) | 0.352063 / 0.226044 (0.126019) | 3.456355 / 2.268929 (1.187426) | 1.971582 / 55.444624 (-53.473042) | 1.684536 / 6.876477 (-5.191940) | 1.726823 / 2.142072 (-0.415250) | 0.660235 / 4.805227 (-4.144992) | 0.119029 / 6.500664 (-6.381635) | 0.042497 / 0.075469 (-0.032972) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971817 / 1.841788 (-0.869970) | 12.900324 / 8.074308 (4.826015) | 10.957495 / 10.191392 (0.766103) | 0.133705 / 0.680424 (-0.546718) | 0.015669 / 0.534201 (-0.518532) | 0.287340 / 0.579283 (-0.291943) | 0.280380 / 0.434364 (-0.153984) | 0.330369 / 0.540337 (-0.209969) | 0.581793 / 1.386936 (-0.805143) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c2af5efae1985499d6a0a1b6ab4120337eebf776 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005038 / 0.011353 (-0.006315) | 0.003737 / 0.011008 (-0.007272) | 0.063118 / 0.038508 (0.024610) | 0.050120 / 0.023109 (0.027011) | 0.240722 / 0.275898 (-0.035176) | 0.263128 / 0.323480 (-0.060352) | 0.003839 / 0.007986 (-0.004147) | 0.002718 / 0.004328 (-0.001610) | 0.047869 / 0.004250 (0.043618) | 0.038092 / 0.037052 (0.001040) | 0.245759 / 0.258489 (-0.012730) | 0.277728 / 0.293841 (-0.016113) | 0.027466 / 0.128546 (-0.101081) | 0.011767 / 0.075646 (-0.063879) | 0.205505 / 0.419271 (-0.213766) | 0.035429 / 0.043533 (-0.008104) | 0.241665 / 0.255139 (-0.013474) | 0.260908 / 0.283200 (-0.022292) | 0.017133 / 0.141683 (-0.124550) | 1.107725 / 1.452155 (-0.344429) | 1.169707 / 1.492716 (-0.323009) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094112 / 0.018006 (0.076106) | 0.302596 / 0.000490 (0.302106) | 0.000237 / 0.000200 (0.000037) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017923 / 0.037411 (-0.019488) | 0.060356 / 0.014526 (0.045830) | 0.073708 / 0.176557 (-0.102849) | 0.119952 / 0.737135 (-0.617183) | 0.075350 / 0.296338 (-0.220989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289253 / 0.215209 (0.074044) | 2.800772 / 2.077655 (0.723117) | 1.538368 / 1.504120 (0.034248) | 1.401037 / 1.541195 (-0.140158) | 1.427170 / 
1.468490 (-0.041320) | 0.560497 / 4.584777 (-4.024280) | 2.417844 / 3.745712 (-1.327868) | 2.798377 / 5.269862 (-2.471484) | 1.756517 / 4.565676 (-2.809160) | 0.063897 / 0.424275 (-0.360378) | 0.005323 / 0.007607 (-0.002284) | 0.339881 / 0.226044 (0.113836) | 3.354858 / 2.268929 (1.085929) | 1.877233 / 55.444624 (-53.567391) | 1.578713 / 6.876477 (-5.297764) | 1.631898 / 2.142072 (-0.510175) | 0.640303 / 4.805227 (-4.164924) | 0.116731 / 6.500664 (-6.383933) | 0.041978 / 0.075469 (-0.033491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963259 / 1.841788 (-0.878529) | 11.983646 / 8.074308 (3.909338) | 10.561596 / 10.191392 (0.370204) | 0.135863 / 0.680424 (-0.544561) | 0.015607 / 0.534201 (-0.518594) | 0.295164 / 0.579283 (-0.284119) | 0.283366 / 0.434364 (-0.150998) | 0.341848 / 0.540337 (-0.198489) | 0.448359 / 1.386936 (-0.938577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005366 / 0.011353 (-0.005987) | 0.003621 / 0.011008 (-0.007387) | 0.048615 / 0.038508 (0.010107) | 0.053950 / 0.023109 (0.030841) | 0.273112 / 0.275898 (-0.002786) | 0.295655 / 0.323480 (-0.027825) | 0.004066 / 0.007986 (-0.003920) | 0.002700 / 0.004328 (-0.001628) | 0.047899 / 0.004250 (0.043648) | 0.041633 / 0.037052 (0.004581) | 0.277760 / 0.258489 (0.019271) | 0.302068 / 0.293841 (0.008227) | 0.028879 / 0.128546 (-0.099668) | 0.010756 / 0.075646 (-0.064891) | 0.057190 / 0.419271 (-0.362082) | 0.032555 / 0.043533 (-0.010978) | 0.272045 / 0.255139 (0.016906) | 0.289330 / 0.283200 (0.006130) | 0.018466 / 0.141683 (-0.123216) | 1.180435 / 1.452155 (-0.271720) | 1.192228 / 1.492716 (-0.300488) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094871 / 0.018006 (0.076864) | 0.302552 / 0.000490 (0.302062) | 0.000224 / 0.000200 (0.000024) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022008 / 0.037411 (-0.015403) | 0.068528 / 0.014526 (0.054002) | 0.081735 / 0.176557 (-0.094821) | 0.120990 / 0.737135 (-0.616145) | 0.083155 / 0.296338 (-0.213184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305030 / 0.215209 (0.089821) | 3.009812 / 2.077655 (0.932158) | 1.677773 / 1.504120 (0.173654) | 1.552280 / 1.541195 (0.011085) | 1.606248 / 1.468490 (0.137758) | 0.557093 / 4.584777 (-4.027684) | 2.418292 / 3.745712 (-1.327420) | 2.813049 / 5.269862 (-2.456813) | 1.764507 / 4.565676 (-2.801169) | 0.065089 / 0.424275 (-0.359186) | 0.004944 / 0.007607 (-0.002663) | 0.360672 / 0.226044 (0.134628) | 3.525850 / 2.268929 (1.256921) | 2.030091 / 55.444624 (-53.414533) | 1.754669 / 6.876477 (-5.121807) | 1.772673 / 2.142072 (-0.369399) | 0.642904 / 4.805227 (-4.162324) | 0.116018 / 6.500664 (-6.384646) | 0.041308 / 0.075469 (-0.034161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986386 / 1.841788 (-0.855401) | 12.291623 / 8.074308 (4.217315) | 10.655932 / 10.191392 (0.464540) | 0.141736 / 0.680424 (-0.538688) | 0.016669 / 0.534201 (-0.517532) | 0.286875 / 0.579283 (-0.292408) | 0.281898 / 0.434364 (-0.152466) | 0.325206 / 0.540337 (-0.215132) | 0.577607 / 1.386936 (-0.809329) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1cf33502493fb9760ea8cc8e51622bf94d0c9e31 \"CML watermark\")\n",
"Alright tests are passing (except one on temp dir cleanup windows but I don't think it's related to this PR ?)\r\n\r\n```\r\nFAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\RUNNER~1\\\\AppData\\\\Local\\\\Temp\\\\tmpqy3f2ft_\\\\hf-internal-testing___dataset_with_script\\\\default\\\\0.0.0\\\\c240e2be3370bdbd\\\\dataset_with_script-train.arrow'\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005072 / 0.011353 (-0.006281) | 0.003449 / 0.011008 (-0.007559) | 0.062630 / 0.038508 (0.024122) | 0.054276 / 0.023109 (0.031167) | 0.253345 / 0.275898 (-0.022553) | 0.273460 / 0.323480 (-0.050020) | 0.003859 / 0.007986 (-0.004127) | 0.002646 / 0.004328 (-0.001683) | 0.048289 / 0.004250 (0.044038) | 0.037943 / 0.037052 (0.000891) | 0.256569 / 0.258489 (-0.001920) | 0.287809 / 0.293841 (-0.006032) | 0.027675 / 0.128546 (-0.100872) | 0.010554 / 0.075646 (-0.065092) | 0.205157 / 0.419271 (-0.214115) | 0.035464 / 0.043533 (-0.008069) | 0.254300 / 0.255139 (-0.000839) | 0.272907 / 0.283200 (-0.010292) | 0.018146 / 0.141683 (-0.123537) | 1.110528 / 1.452155 (-0.341626) | 1.170156 / 1.492716 (-0.322560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093151 / 0.018006 (0.075144) | 0.302087 / 0.000490 (0.301598) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018744 / 0.037411 (-0.018667) | 0.059843 / 0.014526 (0.045317) | 0.073165 / 0.176557 (-0.103391) | 0.120464 / 0.737135 (-0.616671) | 0.074992 / 0.296338 (-0.221347) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285103 / 0.215209 (0.069894) | 2.820254 / 2.077655 (0.742600) | 1.505336 / 1.504120 (0.001216) | 1.368631 / 1.541195 (-0.172564) | 1.404140 / 
1.468490 (-0.064350) | 0.563906 / 4.584777 (-4.020871) | 2.411871 / 3.745712 (-1.333841) | 2.788390 / 5.269862 (-2.481471) | 1.749788 / 4.565676 (-2.815888) | 0.062171 / 0.424275 (-0.362104) | 0.004918 / 0.007607 (-0.002689) | 0.339615 / 0.226044 (0.113571) | 3.337789 / 2.268929 (1.068861) | 1.808445 / 55.444624 (-53.636180) | 1.541015 / 6.876477 (-5.335462) | 1.572389 / 2.142072 (-0.569683) | 0.641739 / 4.805227 (-4.163488) | 0.115844 / 6.500664 (-6.384820) | 0.042504 / 0.075469 (-0.032965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942463 / 1.841788 (-0.899325) | 11.602364 / 8.074308 (3.528056) | 10.628921 / 10.191392 (0.437529) | 0.136154 / 0.680424 (-0.544270) | 0.013842 / 0.534201 (-0.520359) | 0.287085 / 0.579283 (-0.292198) | 0.269860 / 0.434364 (-0.164503) | 0.329525 / 0.540337 (-0.210812) | 0.441287 / 1.386936 (-0.945649) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005215 / 0.011353 (-0.006138) | 0.003549 / 0.011008 (-0.007460) | 0.049199 / 0.038508 (0.010691) | 0.051655 / 0.023109 (0.028545) | 0.272150 / 0.275898 (-0.003748) | 0.291978 / 0.323480 (-0.031502) | 0.003985 / 0.007986 (-0.004001) | 0.002668 / 0.004328 (-0.001661) | 0.048524 / 0.004250 (0.044274) | 0.039824 / 0.037052 (0.002772) | 0.275566 / 0.258489 (0.017077) | 0.298076 / 0.293841 (0.004235) | 0.029508 / 0.128546 (-0.099038) | 0.010673 / 0.075646 (-0.064973) | 0.057327 / 0.419271 (-0.361944) | 0.032590 / 0.043533 (-0.010943) | 0.273295 / 0.255139 (0.018156) | 0.289127 / 0.283200 (0.005928) | 0.017694 / 0.141683 (-0.123989) | 1.134502 / 1.452155 (-0.317653) | 1.185603 / 1.492716 (-0.307114) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098403 / 0.018006 (0.080396) | 0.302735 / 0.000490 (0.302245) | 0.000228 / 0.000200 (0.000028) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025192 / 0.037411 (-0.012219) | 0.068149 / 0.014526 (0.053623) | 0.082220 / 0.176557 (-0.094336) | 0.119491 / 0.737135 (-0.617645) | 0.082484 / 0.296338 (-0.213855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295339 / 0.215209 (0.080130) | 2.868411 / 2.077655 (0.790757) | 1.590665 / 1.504120 (0.086545) | 1.465995 / 1.541195 (-0.075200) | 1.489205 / 1.468490 (0.020715) | 0.562503 / 4.584777 (-4.022274) | 2.480100 / 3.745712 (-1.265613) | 2.774216 / 5.269862 (-2.495646) | 1.733129 / 4.565676 (-2.832548) | 0.062698 / 0.424275 (-0.361577) | 0.004910 / 0.007607 (-0.002697) | 0.354766 / 0.226044 (0.128722) | 3.435541 / 2.268929 (1.166613) | 1.953357 / 55.444624 (-53.491267) | 1.673584 / 6.876477 (-5.202893) | 1.677749 / 2.142072 (-0.464323) | 0.632601 / 4.805227 (-4.172626) | 0.114875 / 6.500664 (-6.385789) | 0.040577 / 0.075469 (-0.034892) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967003 / 1.841788 (-0.874785) | 11.964490 / 8.074308 (3.890181) | 10.493812 / 10.191392 (0.302420) | 0.132177 / 0.680424 (-0.548247) | 0.015149 / 0.534201 (-0.519052) | 0.289011 / 0.579283 (-0.290272) | 0.285479 / 0.434364 (-0.148885) | 0.327090 / 0.540337 (-0.213248) | 0.571747 / 1.386936 (-0.815189) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4c9b4cb7ee4720415261216d72051e2a3320fe41 \"CML watermark\")\n"
] | 2023-11-23T17:31:57 | 2023-12-01T17:57:17 | 2023-12-01T17:50:59 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6448",
"html_url": "https://github.com/huggingface/datasets/pull/6448",
"diff_url": "https://github.com/huggingface/datasets/pull/6448.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6448.patch",
"merged_at": "2023-12-01T17:50:59"
} | The idea is to make this code work for datasets with scripts if they have a Parquet export
```python
from datasets import load_dataset

ds = load_dataset("squad", trust_remote_code=False)
```
And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts).
I also added a `config.USE_PARQUET_EXPORT` variable to use in the datasets-server parquet conversion job
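For illustration, a minimal sketch of how such a flag could be toggled before loading (the actual default value and semantics are the ones defined by this PR; `datasets.config` is assumed to expose the attribute):
```python
from datasets import config, load_dataset

# Hypothetical toggle: opt in or out of the Parquet-export code path.
# The actual default and behaviour are defined by the PR, not by this snippet.
config.USE_PARQUET_EXPORT = True
ds = load_dataset("squad")
```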
- [x] Needs https://github.com/huggingface/datasets/pull/6429 to be merged first
cc @severo I use the /parquet and /info endpoints from datasets-server | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6448/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6448/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6447/comments | https://api.github.com/repos/huggingface/datasets/issues/6447/events | https://github.com/huggingface/datasets/issues/6447 | 2,008,195,298 | I_kwDODunzps53sqDi | 6,447 | Support one dataset loader per config when using YAML | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [] | 2023-11-23T13:03:07 | 2023-11-23T13:03:07 | null | CONTRIBUTOR | null | null | null | ### Feature request
See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1
I would like to use the CSV loader for the "csv" config, the JSONL loader for the "jsonl" config, etc.
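For illustration, a rough sketch of what this currently requires on the loading side with the generic builders (the file names below are made up):
```python
from datasets import load_dataset

# Today each format has to go through its generic builder explicitly;
# the request is to be able to declare the builder per config in the YAML instead.
csv_ds = load_dataset("csv", data_files={"train": "data/train.csv"})
jsonl_ds = load_dataset("json", data_files={"train": "data/train.jsonl"})
```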
### Motivation
It would be more flexible for the users
### Your contribution
No specific contribution | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6447/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6446/comments | https://api.github.com/repos/huggingface/datasets/issues/6446/events | https://github.com/huggingface/datasets/issues/6446 | 2,007,092,708 | I_kwDODunzps53oc3k | 6,446 | Speech Commands v2 dataset doesn't match AST-v2 config | {
"login": "vymao",
"id": 18024303,
"node_id": "MDQ6VXNlcjE4MDI0MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18024303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vymao",
"html_url": "https://github.com/vymao",
"followers_url": "https://api.github.com/users/vymao/followers",
"following_url": "https://api.github.com/users/vymao/following{/other_user}",
"gists_url": "https://api.github.com/users/vymao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vymao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vymao/subscriptions",
"organizations_url": "https://api.github.com/users/vymao/orgs",
"repos_url": "https://api.github.com/users/vymao/repos",
"events_url": "https://api.github.com/users/vymao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vymao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can use `.align_labels_with_mapping` on the dataset to align the labels with the model config.\r\n\r\nRegarding the number of labels, only the special `_silence_` label corresponding to noise is missing, which is consistent with the model paper (reports training on 35 labels). You can run a `.filter` to drop it.\r\n\r\nPS: You should create a discussion on a model/dataset repo (on the Hub) for these kinds of questions",
"Thanks, will keep that in mind. But I tried running `dataset_aligned = dataset.align_labels_with_mapping(model.config.id2label, 'label')`, and received this error: \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in align_labels_with_mapping\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\n File \"/Users/victor/anaconda3/envs/transformers-v2/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 5928, in <dictcomp>\r\n label2id = {k.lower(): v for k, v in label2id.items()}\r\nAttributeError: 'int' object has no attribute 'lower'\r\n```\r\nMy guess is that the dataset `label` column is purely an int ID, and I'm not sure there's a way to identify which class label the ID belongs to in the dataset easily.",
"Replacing `model.config.id2label` with `model.config.label2id` should fix the issue.\r\n\r\nSo, the full code to align the labels with the model config is as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoFeatureExtractor, AutoModelForAudioClassification\r\n\r\n# extractor = AutoFeatureExtractor.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\nmodel = AutoModelForAudioClassification.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\n\r\nds = load_dataset(\"speech_commands\", \"v0.02\")\r\nds = ds.filter(lambda label: label != ds[\"train\"].features[\"label\"].str2int(\"_silence_\"), input_columns=\"label\")\r\nds = ds.align_labels_with_mapping(model.config.label2id, \"label\")\r\n```"
] | 2023-11-22T20:46:36 | 2023-11-28T14:46:08 | 2023-11-28T14:46:08 | NONE | null | null | null | ### Describe the bug
[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine-tune `MIT/ast-finetuned-speech-commands-v2`.
### Steps to reproduce the bug
```
>>> from datasets import load_dataset
>>> from transformers import ASTForAudioClassification
>>> import torch
>>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
>>> model.config.id2label
{0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'}
>>> dataset = load_dataset("speech_commands", "v0.02", split="test")
>>> torch.unique(torch.Tensor(dataset['label']))
tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,
14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,
28., 29., 30., 31., 32., 33., 34., 35.])
```
If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id-to-label mapping does not match what is provided by `model.config.id2label`.
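A quick way to surface the mismatch directly (a sketch; it assumes the `label` feature is a `ClassLabel` and that `model` and `dataset` are loaded as above):
```python
dataset_labels = set(dataset.features["label"].names)
model_labels = set(model.config.id2label.values())

print(len(dataset_labels), len(model_labels))  # 36 vs. 35
print(dataset_labels - model_labels)           # expected to contain '_silence_'
```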
### Expected behavior
The labels should match completely and there should be the same number of label classes between the model config and the dataset itself.
### Environment info
datasets = 2.14.6, transformers = 4.33.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6446/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6445/comments | https://api.github.com/repos/huggingface/datasets/issues/6445/events | https://github.com/huggingface/datasets/pull/6445 | 2,006,958,595 | PR_kwDODunzps5gKg2d | 6,445 | Use `filelock` package for file locking | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005431 / 0.011353 (-0.005922) | 0.003255 / 0.011008 (-0.007753) | 0.062867 / 0.038508 (0.024359) | 0.051917 / 0.023109 (0.028808) | 0.254229 / 0.275898 (-0.021669) | 0.276949 / 0.323480 (-0.046531) | 0.002868 / 0.007986 (-0.005117) | 0.002539 / 0.004328 (-0.001789) | 0.048366 / 0.004250 (0.044115) | 0.038497 / 0.037052 (0.001445) | 0.252158 / 0.258489 (-0.006332) | 0.288868 / 0.293841 (-0.004973) | 0.027956 / 0.128546 (-0.100591) | 0.010500 / 0.075646 (-0.065147) | 0.209263 / 0.419271 (-0.210008) | 0.035415 / 0.043533 (-0.008118) | 0.253104 / 0.255139 (-0.002035) | 0.274646 / 0.283200 (-0.008554) | 0.019923 / 0.141683 (-0.121760) | 1.081870 / 1.452155 (-0.370285) | 1.157159 / 1.492716 (-0.335557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097420 / 0.018006 (0.079414) | 0.315021 / 0.000490 (0.314531) | 0.000218 / 0.000200 (0.000018) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018826 / 0.037411 (-0.018585) | 0.061921 / 0.014526 (0.047395) | 0.086825 / 0.176557 (-0.089731) | 0.120606 / 0.737135 (-0.616529) | 0.074344 / 0.296338 (-0.221994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283238 / 0.215209 (0.068028) | 2.771817 / 2.077655 (0.694162) | 1.500194 / 1.504120 (-0.003926) | 1.379286 / 1.541195 (-0.161908) | 1.447747 / 
1.468490 (-0.020743) | 0.587176 / 4.584777 (-3.997601) | 2.411260 / 3.745712 (-1.334452) | 2.897682 / 5.269862 (-2.372180) | 1.821720 / 4.565676 (-2.743957) | 0.063299 / 0.424275 (-0.360976) | 0.004969 / 0.007607 (-0.002639) | 0.346417 / 0.226044 (0.120373) | 3.432936 / 2.268929 (1.164007) | 1.898662 / 55.444624 (-53.545963) | 1.624339 / 6.876477 (-5.252138) | 1.641653 / 2.142072 (-0.500419) | 0.655773 / 4.805227 (-4.149454) | 0.118588 / 6.500664 (-6.382076) | 0.043919 / 0.075469 (-0.031551) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949466 / 1.841788 (-0.892322) | 12.378025 / 8.074308 (4.303717) | 10.750942 / 10.191392 (0.559550) | 0.146575 / 0.680424 (-0.533849) | 0.015453 / 0.534201 (-0.518748) | 0.290608 / 0.579283 (-0.288676) | 0.273000 / 0.434364 (-0.161364) | 0.328019 / 0.540337 (-0.212318) | 0.417396 / 1.386936 (-0.969540) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005363 / 0.011353 (-0.005990) | 0.003421 / 0.011008 (-0.007587) | 0.049429 / 0.038508 (0.010920) | 0.052774 / 0.023109 (0.029664) | 0.274058 / 0.275898 (-0.001840) | 0.297307 / 0.323480 (-0.026173) | 0.004000 / 0.007986 (-0.003986) | 0.002463 / 0.004328 (-0.001866) | 0.048824 / 0.004250 (0.044574) | 0.041064 / 0.037052 (0.004012) | 0.279066 / 0.258489 (0.020577) | 0.302420 / 0.293841 (0.008579) | 0.029665 / 0.128546 (-0.098881) | 0.010628 / 0.075646 (-0.065018) | 0.057678 / 0.419271 (-0.361594) | 0.032731 / 0.043533 (-0.010802) | 0.274662 / 0.255139 (0.019523) | 0.291878 / 0.283200 (0.008678) | 0.018820 / 0.141683 (-0.122863) | 1.124042 / 1.452155 (-0.328112) | 1.175020 / 1.492716 (-0.317697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099419 / 0.018006 (0.081413) | 0.311511 / 0.000490 (0.311022) | 0.000228 / 0.000200 (0.000028) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022478 / 0.037411 (-0.014933) | 0.071955 / 0.014526 (0.057429) | 0.081423 / 0.176557 (-0.095134) | 0.119574 / 0.737135 (-0.617561) | 0.084724 / 0.296338 (-0.211615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295537 / 0.215209 (0.080328) | 2.893855 / 2.077655 (0.816201) | 1.602065 / 1.504120 (0.097945) | 1.478193 / 1.541195 (-0.063002) | 1.508250 / 1.468490 (0.039760) | 0.566140 / 4.584777 (-4.018637) | 2.455474 / 3.745712 (-1.290238) | 2.849525 / 5.269862 (-2.420337) | 1.763830 / 4.565676 (-2.801846) | 0.062375 / 0.424275 (-0.361900) | 0.004992 / 0.007607 (-0.002615) | 0.346068 / 0.226044 (0.120023) | 3.452421 / 2.268929 (1.183492) | 1.970346 / 55.444624 (-53.474278) | 1.690865 / 6.876477 (-5.185612) | 1.705358 / 2.142072 (-0.436714) | 0.644261 / 4.805227 (-4.160967) | 0.120596 / 6.500664 (-6.380068) | 0.042699 / 0.075469 (-0.032770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980506 / 1.841788 (-0.861281) | 12.401901 / 8.074308 (4.327593) | 11.169413 / 10.191392 (0.978021) | 0.142540 / 0.680424 (-0.537884) | 0.015730 / 0.534201 (-0.518471) | 0.288871 / 0.579283 (-0.290412) | 0.287487 / 0.434364 (-0.146877) | 0.325133 / 0.540337 (-0.215204) | 0.417979 / 1.386936 (-0.968957) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#965685891db0d06979490aaebab72d5dc628e42b \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005062 / 0.011353 (-0.006291) | 0.003024 / 0.011008 (-0.007984) | 0.061801 / 0.038508 (0.023293) | 0.048934 / 0.023109 (0.025825) | 0.248024 / 0.275898 (-0.027874) | 0.265665 / 0.323480 (-0.057815) | 0.003885 / 0.007986 (-0.004100) | 0.002371 / 0.004328 (-0.001957) | 0.047895 / 0.004250 (0.043644) | 0.039015 / 0.037052 (0.001963) | 0.252320 / 0.258489 (-0.006169) | 0.286533 / 0.293841 (-0.007308) | 0.027694 / 0.128546 (-0.100852) | 0.010254 / 0.075646 (-0.065392) | 0.206586 / 0.419271 (-0.212685) | 0.035681 / 0.043533 (-0.007852) | 0.251645 / 0.255139 (-0.003494) | 0.285462 / 0.283200 (0.002262) | 0.017326 / 0.141683 (-0.124357) | 1.086927 / 1.452155 (-0.365228) | 1.153172 / 1.492716 (-0.339545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093020 / 0.018006 (0.075014) | 0.300018 / 0.000490 (0.299528) | 0.000208 / 0.000200 (0.000008) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018828 / 0.037411 (-0.018584) | 0.062569 / 0.014526 (0.048043) | 0.074130 / 0.176557 (-0.102427) | 0.119304 / 0.737135 (-0.617832) | 0.076409 / 0.296338 (-0.219930) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285938 / 0.215209 (0.070729) | 2.780662 / 2.077655 (0.703007) | 1.522401 / 1.504120 (0.018281) | 1.392475 / 1.541195 (-0.148720) | 1.412517 / 
1.468490 (-0.055973) | 0.562768 / 4.584777 (-4.022009) | 2.421406 / 3.745712 (-1.324306) | 2.786271 / 5.269862 (-2.483591) | 1.737193 / 4.565676 (-2.828484) | 0.062775 / 0.424275 (-0.361500) | 0.004908 / 0.007607 (-0.002699) | 0.345070 / 0.226044 (0.119026) | 3.383700 / 2.268929 (1.114771) | 1.795974 / 55.444624 (-53.648651) | 1.527656 / 6.876477 (-5.348820) | 1.514035 / 2.142072 (-0.628037) | 0.647652 / 4.805227 (-4.157575) | 0.120121 / 6.500664 (-6.380543) | 0.042259 / 0.075469 (-0.033210) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948951 / 1.841788 (-0.892837) | 11.514971 / 8.074308 (3.440663) | 10.722668 / 10.191392 (0.531276) | 0.143034 / 0.680424 (-0.537390) | 0.014800 / 0.534201 (-0.519401) | 0.286189 / 0.579283 (-0.293094) | 0.270735 / 0.434364 (-0.163629) | 0.323907 / 0.540337 (-0.216430) | 0.417569 / 1.386936 (-0.969367) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005670 / 0.011353 (-0.005683) | 0.003238 / 0.011008 (-0.007770) | 0.048520 / 0.038508 (0.010012) | 0.051341 / 0.023109 (0.028232) | 0.273883 / 0.275898 (-0.002015) | 0.295165 / 0.323480 (-0.028315) | 0.004755 / 0.007986 (-0.003231) | 0.002471 / 0.004328 (-0.001857) | 0.047487 / 0.004250 (0.043237) | 0.040225 / 0.037052 (0.003172) | 0.276758 / 0.258489 (0.018269) | 0.301182 / 0.293841 (0.007341) | 0.029749 / 0.128546 (-0.098797) | 0.010340 / 0.075646 (-0.065306) | 0.057193 / 0.419271 (-0.362079) | 0.033067 / 0.043533 (-0.010466) | 0.272716 / 0.255139 (0.017577) | 0.292301 / 0.283200 (0.009101) | 0.019075 / 0.141683 (-0.122608) | 1.101778 / 1.452155 (-0.350376) | 1.173573 / 1.492716 (-0.319143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091008 / 0.018006 (0.073002) | 0.300749 / 0.000490 (0.300259) | 0.000218 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021760 / 0.037411 (-0.015651) | 0.071407 / 0.014526 (0.056881) | 0.081151 / 0.176557 (-0.095406) | 0.120140 / 0.737135 (-0.616995) | 0.082408 / 0.296338 (-0.213931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294828 / 0.215209 (0.079619) | 2.880701 / 2.077655 (0.803047) | 1.604187 / 1.504120 (0.100068) | 1.479236 / 1.541195 (-0.061959) | 1.498875 / 1.468490 (0.030385) | 0.561950 / 4.584777 (-4.022827) | 2.462531 / 3.745712 (-1.283181) | 2.800905 / 5.269862 (-2.468957) | 1.746535 / 4.565676 (-2.819141) | 0.062732 / 0.424275 (-0.361544) | 0.004932 / 0.007607 (-0.002675) | 0.347125 / 0.226044 (0.121081) | 3.431343 / 2.268929 (1.162415) | 1.964999 / 55.444624 (-53.479625) | 1.669709 / 6.876477 (-5.206768) | 1.675148 / 2.142072 (-0.466924) | 0.635436 / 4.805227 (-4.169792) | 0.116598 / 6.500664 (-6.384066) | 0.041447 / 0.075469 (-0.034022) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975751 / 1.841788 (-0.866037) | 12.060246 / 8.074308 (3.985938) | 10.871641 / 10.191392 (0.680249) | 0.142936 / 0.680424 (-0.537488) | 0.015779 / 0.534201 (-0.518422) | 0.287120 / 0.579283 (-0.292163) | 0.283963 / 0.434364 (-0.150401) | 0.341231 / 0.540337 (-0.199107) | 0.419518 / 1.386936 (-0.967418) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0943ff0072dcef473530d8a494f314048f3a3d51 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005105 / 0.011353 (-0.006248) | 0.002855 / 0.011008 (-0.008153) | 0.062044 / 0.038508 (0.023536) | 0.052948 / 0.023109 (0.029839) | 0.249841 / 0.275898 (-0.026057) | 0.276687 / 0.323480 (-0.046792) | 0.003792 / 0.007986 (-0.004194) | 0.002385 / 0.004328 (-0.001943) | 0.048648 / 0.004250 (0.044398) | 0.038317 / 0.037052 (0.001264) | 0.255235 / 0.258489 (-0.003254) | 0.287870 / 0.293841 (-0.005971) | 0.027429 / 0.128546 (-0.101117) | 0.010182 / 0.075646 (-0.065464) | 0.206980 / 0.419271 (-0.212291) | 0.035444 / 0.043533 (-0.008089) | 0.255073 / 0.255139 (-0.000066) | 0.270636 / 0.283200 (-0.012563) | 0.018003 / 0.141683 (-0.123680) | 1.124691 / 1.452155 (-0.327463) | 1.191872 / 1.492716 (-0.300844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088824 / 0.018006 (0.070818) | 0.302771 / 0.000490 (0.302281) | 0.000210 / 0.000200 (0.000010) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018102 / 0.037411 (-0.019310) | 0.062131 / 0.014526 (0.047605) | 0.073230 / 0.176557 (-0.103327) | 0.119789 / 0.737135 (-0.617346) | 0.074804 / 0.296338 (-0.221534) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293244 / 0.215209 (0.078035) | 2.891401 / 2.077655 (0.813746) | 1.504481 / 1.504120 (0.000361) | 1.381251 / 1.541195 (-0.159944) | 1.387245 / 
1.468490 (-0.081245) | 0.552732 / 4.584777 (-4.032045) | 2.386439 / 3.745712 (-1.359273) | 2.718918 / 5.269862 (-2.550944) | 1.725401 / 4.565676 (-2.840275) | 0.061946 / 0.424275 (-0.362329) | 0.004957 / 0.007607 (-0.002650) | 0.342776 / 0.226044 (0.116731) | 3.418911 / 2.268929 (1.149983) | 1.838283 / 55.444624 (-53.606341) | 1.538013 / 6.876477 (-5.338464) | 1.545144 / 2.142072 (-0.596928) | 0.637857 / 4.805227 (-4.167370) | 0.116451 / 6.500664 (-6.384213) | 0.042228 / 0.075469 (-0.033241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943575 / 1.841788 (-0.898212) | 11.492939 / 8.074308 (3.418631) | 10.601605 / 10.191392 (0.410212) | 0.139084 / 0.680424 (-0.541340) | 0.013691 / 0.534201 (-0.520510) | 0.286696 / 0.579283 (-0.292587) | 0.259979 / 0.434364 (-0.174385) | 0.322578 / 0.540337 (-0.217759) | 0.411950 / 1.386936 (-0.974986) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005168 / 0.011353 (-0.006185) | 0.003238 / 0.011008 (-0.007770) | 0.049028 / 0.038508 (0.010520) | 0.052930 / 0.023109 (0.029821) | 0.274750 / 0.275898 (-0.001148) | 0.294023 / 0.323480 (-0.029457) | 0.003829 / 0.007986 (-0.004157) | 0.002372 / 0.004328 (-0.001956) | 0.048689 / 0.004250 (0.044439) | 0.040056 / 0.037052 (0.003003) | 0.280147 / 0.258489 (0.021658) | 0.304871 / 0.293841 (0.011030) | 0.028734 / 0.128546 (-0.099812) | 0.010624 / 0.075646 (-0.065022) | 0.058705 / 0.419271 (-0.360566) | 0.032140 / 0.043533 (-0.011393) | 0.276702 / 0.255139 (0.021563) | 0.293186 / 0.283200 (0.009987) | 0.018124 / 0.141683 (-0.123559) | 1.139398 / 1.452155 (-0.312757) | 1.174862 / 1.492716 (-0.317855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087627 / 0.018006 (0.069620) | 0.298376 / 0.000490 (0.297886) | 0.000238 / 0.000200 (0.000038) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021344 / 0.037411 (-0.016067) | 0.070208 / 0.014526 (0.055682) | 0.081177 / 0.176557 (-0.095380) | 0.120170 / 0.737135 (-0.616965) | 0.082472 / 0.296338 (-0.213866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293227 / 0.215209 (0.078018) | 2.844619 / 2.077655 (0.766964) | 1.586922 / 1.504120 (0.082803) | 1.460256 / 1.541195 (-0.080938) | 1.475955 / 1.468490 (0.007465) | 0.553226 / 4.584777 (-4.031551) | 2.418869 / 3.745712 (-1.326843) | 2.709256 / 5.269862 (-2.560606) | 1.705935 / 4.565676 (-2.859741) | 0.062391 / 0.424275 (-0.361884) | 0.004929 / 0.007607 (-0.002678) | 0.350358 / 0.226044 (0.124313) | 3.448824 / 2.268929 (1.179896) | 1.929451 / 55.444624 (-53.515174) | 1.669438 / 6.876477 (-5.207038) | 1.660923 / 2.142072 (-0.481150) | 0.633107 / 4.805227 (-4.172120) | 0.114657 / 6.500664 (-6.386007) | 0.041256 / 0.075469 (-0.034214) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.968408 / 1.841788 (-0.873380) | 11.749754 / 8.074308 (3.675446) | 10.796670 / 10.191392 (0.605278) | 0.128881 / 0.680424 (-0.551543) | 0.015326 / 0.534201 (-0.518875) | 0.286407 / 0.579283 (-0.292876) | 0.276324 / 0.434364 (-0.158040) | 0.326201 / 0.540337 (-0.214136) | 0.419854 / 1.386936 (-0.967082) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1731d5a8cd103533ef6b438b4429ab51d3a6a0ce \"CML watermark\")\n"
] | 2023-11-22T19:04:45 | 2023-11-23T18:47:30 | 2023-11-23T18:41:23 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6445",
"html_url": "https://github.com/huggingface/datasets/pull/6445",
"diff_url": "https://github.com/huggingface/datasets/pull/6445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6445.patch",
"merged_at": "2023-11-23T18:41:22"
} | Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities π.
(Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so this should be okay)
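For reference, a minimal sketch of the `filelock` API this standardizes on (the lock path below is made up):
```python
from filelock import FileLock

lock = FileLock("/tmp/example_cache.arrow.lock")  # hypothetical lock file
with lock:
    # Critical section: read or write the cached file without racing other processes.
    ...
```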
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6445/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6444/comments | https://api.github.com/repos/huggingface/datasets/issues/6444/events | https://github.com/huggingface/datasets/pull/6444 | 2,006,842,179 | PR_kwDODunzps5gKG_e | 6,444 | Remove `Table.__getstate__` and `Table.__setstate__` | {
"login": "LZHgrla",
"id": 36994684,
"node_id": "MDQ6VXNlcjM2OTk0Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZHgrla",
"html_url": "https://github.com/LZHgrla",
"followers_url": "https://api.github.com/users/LZHgrla/followers",
"following_url": "https://api.github.com/users/LZHgrla/following{/other_user}",
"gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions",
"organizations_url": "https://api.github.com/users/LZHgrla/orgs",
"repos_url": "https://api.github.com/users/LZHgrla/repos",
"events_url": "https://api.github.com/users/LZHgrla/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZHgrla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for working on this! The [issue](https://bugs.python.org/issue24658) with pickling objects larger than 4GB seems to be patched in Python 3.8 (the minimal supported version was 3.6 at the time of implementing this), so a simple solution would be removing the `Table.__setstate__` and `Table.__getstate__` overrides.",
"@mariosasko \r\nCool!\r\nI removed these overrides, and it worked.\r\n\r\nAll modifications are committed. Ready for review!",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005251 / 0.011353 (-0.006102) | 0.003804 / 0.011008 (-0.007204) | 0.063143 / 0.038508 (0.024635) | 0.059409 / 0.023109 (0.036300) | 0.255319 / 0.275898 (-0.020579) | 0.279194 / 0.323480 (-0.044285) | 0.004643 / 0.007986 (-0.003343) | 0.002560 / 0.004328 (-0.001768) | 0.047490 / 0.004250 (0.043240) | 0.039034 / 0.037052 (0.001982) | 0.257352 / 0.258489 (-0.001137) | 0.293029 / 0.293841 (-0.000812) | 0.027548 / 0.128546 (-0.100998) | 0.011307 / 0.075646 (-0.064339) | 0.210325 / 0.419271 (-0.208946) | 0.035161 / 0.043533 (-0.008372) | 0.253491 / 0.255139 (-0.001648) | 0.272085 / 0.283200 (-0.011115) | 0.018924 / 0.141683 (-0.122759) | 1.111148 / 1.452155 (-0.341007) | 1.178076 / 1.492716 (-0.314641) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092447 / 0.018006 (0.074441) | 0.303680 / 0.000490 (0.303190) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019087 / 0.037411 (-0.018325) | 0.062663 / 0.014526 (0.048137) | 0.074651 / 0.176557 (-0.101905) | 0.121334 / 0.737135 (-0.615802) | 0.076703 / 0.296338 (-0.219636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286505 / 0.215209 (0.071295) | 2.804942 / 2.077655 (0.727287) | 1.481930 / 1.504120 (-0.022190) | 1.369485 / 1.541195 (-0.171710) | 1.424467 / 
1.468490 (-0.044023) | 0.556810 / 4.584777 (-4.027967) | 2.416338 / 3.745712 (-1.329374) | 2.901869 / 5.269862 (-2.367992) | 1.827007 / 4.565676 (-2.738669) | 0.062252 / 0.424275 (-0.362024) | 0.005076 / 0.007607 (-0.002531) | 0.343850 / 0.226044 (0.117805) | 3.377611 / 2.268929 (1.108683) | 1.860214 / 55.444624 (-53.584410) | 1.595146 / 6.876477 (-5.281331) | 1.627234 / 2.142072 (-0.514838) | 0.651027 / 4.805227 (-4.154200) | 0.119214 / 6.500664 (-6.381450) | 0.043342 / 0.075469 (-0.032127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.942863 / 1.841788 (-0.898924) | 12.484633 / 8.074308 (4.410324) | 10.560668 / 10.191392 (0.369276) | 0.144647 / 0.680424 (-0.535777) | 0.014734 / 0.534201 (-0.519466) | 0.286575 / 0.579283 (-0.292708) | 0.270913 / 0.434364 (-0.163451) | 0.323792 / 0.540337 (-0.216545) | 0.419186 / 1.386936 (-0.967750) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005315 / 0.011353 (-0.006038) | 0.003548 / 0.011008 (-0.007460) | 0.049271 / 0.038508 (0.010763) | 0.055198 / 0.023109 (0.032089) | 0.275940 / 0.275898 (0.000042) | 0.307637 / 0.323480 (-0.015843) | 0.003997 / 0.007986 (-0.003988) | 0.002544 / 0.004328 (-0.001785) | 0.050381 / 0.004250 (0.046130) | 0.041158 / 0.037052 (0.004105) | 0.281519 / 0.258489 (0.023030) | 0.308085 / 0.293841 (0.014244) | 0.030464 / 0.128546 (-0.098083) | 0.010690 / 0.075646 (-0.064957) | 0.057458 / 0.419271 (-0.361814) | 0.032814 / 0.043533 (-0.010719) | 0.282435 / 0.255139 (0.027296) | 0.301342 / 0.283200 (0.018142) | 0.017556 / 0.141683 (-0.124127) | 1.159423 / 1.452155 (-0.292732) | 1.177344 / 1.492716 (-0.315372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091086 / 0.018006 (0.073079) | 0.305316 / 0.000490 (0.304826) | 0.000218 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021832 / 0.037411 (-0.015579) | 0.071055 / 0.014526 (0.056529) | 0.082982 / 0.176557 (-0.093574) | 0.119966 / 0.737135 (-0.617169) | 0.083539 / 0.296338 (-0.212800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302501 / 0.215209 (0.087292) | 2.936347 / 2.077655 (0.858692) | 1.601658 / 1.504120 (0.097538) | 1.467267 / 1.541195 (-0.073928) | 1.514656 / 1.468490 (0.046166) | 0.563934 / 4.584777 (-4.020843) | 2.513715 / 3.745712 (-1.231997) | 2.813014 / 5.269862 (-2.456847) | 1.773243 / 4.565676 (-2.792433) | 0.063208 / 0.424275 (-0.361067) | 0.004979 / 0.007607 (-0.002628) | 0.360694 / 0.226044 (0.134650) | 3.520578 / 2.268929 (1.251650) | 1.975369 / 55.444624 (-53.469255) | 1.691257 / 6.876477 (-5.185220) | 1.730872 / 2.142072 (-0.411200) | 0.655366 / 4.805227 (-4.149861) | 0.146043 / 6.500664 (-6.354621) | 0.041386 / 0.075469 (-0.034083) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979840 / 1.841788 (-0.861948) | 12.456924 / 8.074308 (4.382616) | 10.938595 / 10.191392 (0.747203) | 0.133853 / 0.680424 (-0.546571) | 0.015744 / 0.534201 (-0.518457) | 0.289585 / 0.579283 (-0.289698) | 0.291143 / 0.434364 (-0.143221) | 0.328109 / 0.540337 (-0.212228) | 0.561897 / 1.386936 (-0.825039) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05ec66cc1abc20bd13d02c681b7be372ae084a4f \"CML watermark\")\n"
] | 2023-11-22T17:55:10 | 2023-11-23T15:19:43 | 2023-11-23T15:13:28 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6444",
"html_url": "https://github.com/huggingface/datasets/pull/6444",
"diff_url": "https://github.com/huggingface/datasets/pull/6444.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6444.patch",
"merged_at": "2023-11-23T15:13:28"
} | When using distributed training, the `os.remove(filename)` call may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'`
```python
from torch import distributed as dist

if dist.get_rank() == 0:
    # Only rank 0 builds the dataset; the result is then shared with the other ranks.
    dataset = process_dataset(*args, **kwargs)
    objects = [dataset]
else:
    objects = [None]
# broadcast_object_list pickles `dataset` on rank 0 and unpickles it on every other rank.
dist.broadcast_object_list(objects, src=0)
dataset = objects[0]
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6444/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6444/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6443/comments | https://api.github.com/repos/huggingface/datasets/issues/6443/events | https://github.com/huggingface/datasets/issues/6443 | 2,006,568,368 | I_kwDODunzps53mc2w | 6,443 | Trouble loading files defined in YAML explicitly | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | [
"There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` π. ",
"wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?"
] | 2023-11-22T15:18:10 | 2023-11-23T09:06:20 | null | CONTRIBUTOR | null | null | null | Look at https://huggingface.co/datasets/severo/doc-yaml-2
It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration
```
You can select multiple files per split using a list of paths:
my_dataset_repository/
βββ README.md
βββ data/
β βββ abc.csv
β βββ def.csv
βββ holdout/
βββ ghi.csv
---
configs:
- config_name: default
data_files:
- split: train
path:
- "data/abc.csv"
- "data/def.csv"
- split: test
path: "holdout/ghi.csv"
---
```
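Loading the repo with the standard API is enough to reproduce it, e.g.:
```python
from datasets import load_dataset

ds = load_dataset("severo/doc-yaml-2")
```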
It raises the following error:
```
Error code: ConfigNamesError
Exception: FileNotFoundError
Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6443/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6442/comments | https://api.github.com/repos/huggingface/datasets/issues/6442/events | https://github.com/huggingface/datasets/issues/6442 | 2,006,086,907 | I_kwDODunzps53knT7 | 6,442 | Trouble loading image folder with additional features - metadata file ignored | {
"login": "linoytsaban",
"id": 57615435,
"node_id": "MDQ6VXNlcjU3NjE1NDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/linoytsaban",
"html_url": "https://github.com/linoytsaban",
"followers_url": "https://api.github.com/users/linoytsaban/followers",
"following_url": "https://api.github.com/users/linoytsaban/following{/other_user}",
"gists_url": "https://api.github.com/users/linoytsaban/gists{/gist_id}",
"starred_url": "https://api.github.com/users/linoytsaban/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linoytsaban/subscriptions",
"organizations_url": "https://api.github.com/users/linoytsaban/orgs",
"repos_url": "https://api.github.com/users/linoytsaban/repos",
"events_url": "https://api.github.com/users/linoytsaban/events{/privacy}",
"received_events_url": "https://api.github.com/users/linoytsaban/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co./datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co./datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co./datasets/severo/doc-image-5)"
] | 2023-11-22T11:01:35 | 2023-11-24T17:13:03 | 2023-11-24T17:13:03 | NONE | null | null | null | ### Describe the bug
Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions.
When loading a local image folder with captions using `datasets==2.13.0`
```
from datasets import load_dataset
data = load_dataset(<image_folder_path>)
data.column_names
```
yields
`{'train': ['image', 'prompt']}`
but when using `datasets==2.15.0`
yields
`{'train': ['image']}`
Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and
yields
`{'train': ['image', 'prompt']}`
### Steps to reproduce the bug
1. create a folder `<image_folder_path>` that contains images and a `metadata.jsonl` file with additional features, e.g. "prompt" (see the sketch after step 2)
2. run:
```
from datasets import load_dataset
data = load_dataset("<image_folder_path>")
data.column_names
```
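For step 1, a minimal sketch of such a folder (the folder name, image contents and prompts are illustrative assumptions; `file_name` is the column `imagefolder` uses to link metadata rows to image files):
```python
import json
import pathlib

from PIL import Image

folder = pathlib.Path("my_image_folder")  # hypothetical <image_folder_path>
folder.mkdir(exist_ok=True)

rows = []
for i in range(3):
    name = f"img_{i}.png"
    Image.new("RGB", (8, 8), color=(40 * i, 0, 0)).save(folder / name)  # dummy images
    rows.append({"file_name": name, "prompt": f"dummy prompt {i}"})

# metadata.jsonl sits next to the images; each line links one image to its extra features
(folder / "metadata.jsonl").write_text("\n".join(json.dumps(r) for r in rows) + "\n")
```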
### Expected behavior
`{'train': ['image', 'prompt']}`
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6442/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6441/comments | https://api.github.com/repos/huggingface/datasets/issues/6441/events | https://github.com/huggingface/datasets/issues/6441 | 2,004,985,857 | I_kwDODunzps53gagB | 6,441 | Trouble Loading a Gated Dataset For User with Granted Permission | {
"login": "e-trop",
"id": 124715309,
"node_id": "U_kgDOB28BLQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124715309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-trop",
"html_url": "https://github.com/e-trop",
"followers_url": "https://api.github.com/users/e-trop/followers",
"following_url": "https://api.github.com/users/e-trop/following{/other_user}",
"gists_url": "https://api.github.com/users/e-trop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-trop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-trop/subscriptions",
"organizations_url": "https://api.github.com/users/e-trop/orgs",
"repos_url": "https://api.github.com/users/e-trop/repos",
"events_url": "https://api.github.com/users/e-trop/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-trop/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)",
"Could you report this to https://discuss.huggingface.co/c/hub/23, providing the URL of the dataset, or at least if the dataset is public or private?",
"Thanks for the reply! I've created an issue on the hub's board here: https://discuss.huggingface.co/t/trouble-loading-a-gated-dataset-for-user-with-granted-permission/65565"
] | 2023-11-21T19:24:36 | 2023-12-13T08:27:16 | 2023-12-13T08:27:16 | NONE | null | null | null | ### Describe the bug
I have granted permissions to several users to access a gated Hugging Face dataset. The users accepted the invite, and when trying to load the dataset using their access token they get
`FileNotFoundError: Couldn't find a dataset script at .....`. Also, when they try to click the URL link for the dataset, they get a 404 error.
### Steps to reproduce the bug
1. Grant access to gated dataset for specific users
2. Users accept invitation
3. Users log in to the Hugging Face Hub using `huggingface-cli login`
4. Users run `load_dataset` (see the sketch below)
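For reference, a minimal sketch of steps 3 and 4 (the repository id and token are hypothetical placeholders, not the actual gated dataset):
```python
from datasets import load_dataset

# Assumes the user has already accepted the gating terms; authentication comes either
# from a prior `huggingface-cli login` or from an explicit read token as shown here.
ds = load_dataset("some-org/some-gated-dataset", token="hf_xxx")  # hypothetical repo id and token
print(ds)
```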
### Expected behavior
Dataset is loaded normally for users who were granted access to the gated dataset.
### Environment info
datasets==2.15.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6441/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6440/comments | https://api.github.com/repos/huggingface/datasets/issues/6440/events | https://github.com/huggingface/datasets/issues/6440 | 2,004,509,301 | I_kwDODunzps53emJ1 | 6,440 | `.map` not hashing under python 3.9 | {
"login": "changyeli",
"id": 9058204,
"node_id": "MDQ6VXNlcjkwNTgyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changyeli",
"html_url": "https://github.com/changyeli",
"followers_url": "https://api.github.com/users/changyeli/followers",
"following_url": "https://api.github.com/users/changyeli/following{/other_user}",
"gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changyeli/subscriptions",
"organizations_url": "https://api.github.com/users/changyeli/orgs",
"repos_url": "https://api.github.com/users/changyeli/repos",
"events_url": "https://api.github.com/users/changyeli/events{/privacy}",
"received_events_url": "https://api.github.com/users/changyeli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.",
"Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/5867 to support hashing `torch.compile`-ed models/functions. \r\n\r\nI've started refactoring the hashing logic and plan to incorporate a fix for `torch.compile` as part of it, so this should be addressed soon (probably this or next week). "
] | 2023-11-21T15:14:54 | 2023-11-28T16:29:33 | 2023-11-28T16:29:33 | NONE | null | null | null | ### Describe the bug
The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message:
`Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
### Steps to reproduce the bug
```python
def map_to_pred(batch):
"""
Perform inference on an audio batch
Parameters:
batch (dict): A dictionary containing audio data and other related information.
Returns:
dict: The input batch dictionary with added prediction and transcription fields.
"""
audio = batch['audio']
input_features = processor(
audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features
input_features = input_features.to('cuda')
with torch.no_grad():
predicted_ids = model.generate(input_features)
preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
batch['prediction'] = processor.tokenizer._normalize(preds)
batch["transcription"] = processor.tokenizer._normalize(batch['transcription'])
return batch
MODEL_CARD = "openai/whisper-small"
MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1]
model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD)
processor = AutoProcessor.from_pretrained(
MODEL_CARD, language="english", task="transcribe")
model = torch.compile(model)
dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test")
dt = dt.cast_column("audio", Audio(sampling_rate=16000))
result = dt.map(map_to_pred, num_proc=16)
```
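Until the hashing refactor mentioned in the comments lands, one possible workaround (a sketch, not a verified fix for this exact setup) is to skip the fingerprint hashing entirely by passing an explicit `new_fingerprint` to `.map`, at the cost of managing cache invalidation yourself:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})

# Stand-in for a transform that dill cannot hash (e.g. one closing over a torch.compile-d model).
def transform(batch):
    return {"y": [v + 1 for v in batch["x"]]}

# With new_fingerprint set, `datasets` skips hashing the function for fingerprinting, so the
# "couldn't be hashed properly" warning goes away; bump the string whenever the transform changes.
out = ds.map(transform, batched=True, num_proc=2, new_fingerprint="map_to_pred_v1")
```
Applied to the script above, that would mean something like `dt.map(map_to_pred, num_proc=16, new_fingerprint="whisper_small_preds_v1")`.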
### Expected behavior
Hashed and cached dataset starts inferencing
### Environment info
- `transformers` version: 4.35.0
- Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6440/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6439/comments | https://api.github.com/repos/huggingface/datasets/issues/6439/events | https://github.com/huggingface/datasets/issues/6439 | 2,002,916,514 | I_kwDODunzps53YhSi | 6,439 | Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding | {
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-11-20T20:07:23 | 2023-11-20T20:07:37 | null | NONE | null | null | null | ### Describe the bug
I am working with a dataset I am trying to publish.
The path is Antreas/TALI.
It's a fairly large dataset, and contains images, video, audio and text.
I have been having multiple problems when the dataset is downloaded using the `load_dataset` function -- even with 64 workers it takes more than 7 days to process.
With snapshot download it takes 12 hours, and that includes the dataset preparation done using load_dataset and passing the dataset parquet file paths.
Find the script I am using below:
```python
import multiprocessing as mp
import pathlib
from typing import Optional
import datasets
from rich import print
from tqdm import tqdm
def download_dataset_via_hub(
dataset_name: str,
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
):
import huggingface_hub as hf_hub
download_folder = hf_hub.snapshot_download(
repo_id=dataset_name,
repo_type="dataset",
cache_dir=dataset_download_path,
resume_download=True,
max_workers=num_download_workers,
ignore_patterns=[],
)
return pathlib.Path(download_folder) / "data"
def load_dataset_via_hub(
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
dataset_name: Optional[str] = None,
):
from dataclasses import dataclass, field
from datasets import ClassLabel, Features, Image, Sequence, Value
dataset_path = download_dataset_via_hub(
dataset_download_path=dataset_download_path,
num_download_workers=num_download_workers,
dataset_name=dataset_name,
)
# Building a list of file paths for validation set
train_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "train" in file.as_posix()
]
val_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "val" in file.as_posix()
]
test_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "test" in file.as_posix()
]
print(
f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set"
)
data_files = {
"test": test_files,
"val": val_files,
"train": train_files,
}
features = Features(
{
"image": Image(
decode=True
), # Set `decode=True` if you want to decode the images, otherwise `decode=False`
"image_url": Value("string"),
"item_idx": Value("int64"),
"wit_features": Sequence(
{
"attribution_passes_lang_id": Value("bool"),
"caption_alt_text_description": Value("string"),
"caption_reference_description": Value("string"),
"caption_title_and_reference_description": Value("string"),
"context_page_description": Value("string"),
"context_section_description": Value("string"),
"hierarchical_section_title": Value("string"),
"is_main_image": Value("bool"),
"language": Value("string"),
"page_changed_recently": Value("bool"),
"page_title": Value("string"),
"page_url": Value("string"),
"section_title": Value("string"),
}
),
"wit_idx": Value("int64"),
"youtube_title_text": Value("string"),
"youtube_description_text": Value("string"),
"youtube_video_content": Value("binary"),
"youtube_video_starting_time": Value("string"),
"youtube_subtitle_text": Value("string"),
"youtube_video_size": Value("int64"),
"youtube_video_file_path": Value("string"),
}
)
dataset = datasets.load_dataset(
"parquet" if dataset_name is None else dataset_name,
data_files=data_files,
features=features,
num_proc=1,
cache_dir=dataset_download_path / "cache",
)
return dataset
if __name__ == "__main__":
dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/")
dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")[
"test"
]
for sample in tqdm(dataset):
print(list(sample.keys()))
```
Also, streaming this dataset has been painfully slow. Streaming the train set takes 15 minutes to start, and streaming the test and val sets takes 3 hours to start!
### Steps to reproduce the bug
1. Run the code I provided to get a sense of how fast snapshot + manual is
2. Run datasets.load_dataset("Antreas/TALI") to get a sense of the speed of that OP.
3. You should now have an appreciation of how long these things take.
### Expected behavior
The load dataset function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files. Not 20 times slower.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6439/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6438/comments | https://api.github.com/repos/huggingface/datasets/issues/6438/events | https://github.com/huggingface/datasets/issues/6438 | 2,002,032,804 | I_kwDODunzps53VJik | 6,438 | Support GeoParquet | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"Thank you, @severo ! I would be more than happy to help in any way I can. I am not familiar with this repo's codebase, but I would be eager to contribute. :)\r\n\r\nFor the preview in Datasets Hub, I think it makes sense to just display the geospatial column as text. If there were a dataset loader, though, I think it should be able to support the geospatial components. Geopandas is probably the most user-friendly interface for that. I'm not sure if it's currently relevant in the context of geoparquet, but I think the pyogrio driver is faster than fiona.\r\n\r\nBut the whole gdal dependency thing can be a real pain. If anything, it would need to be an optional dependency. Maybe it would be best if the loader tries importing relevant geospatial libraries, and in the event of an ImportError, falls back to text for the geometry column.\r\n\r\nPlease let me know if I can be of assistance, and thanks again for creating this Issue. :)"
] | 2023-11-20T11:54:58 | 2023-11-20T14:10:23 | null | CONTRIBUTOR | null | null | null | ### Feature request
Support the GeoParquet format
### Motivation
GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns.
It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub (see https://huggingface.co./datasets/joshuasundance/govgis_nov2023-slim-spatial/discussions/1).
### Your contribution
I would be happy to help work on a PR (but I don't think I can do one on my own).
Also, we have to define what we want to support:
- load all the columns, but get the "geospatial" column in text-only mode for now
- or, fully support the spatial features, maybe taking inspiration from (or depending upon) https://geopandas.org/en/stable/index.html (which itself depends on https://fiona.readthedocs.io/en/stable/, which requires a local install of https://gdal.org/) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6438/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6437/comments | https://api.github.com/repos/huggingface/datasets/issues/6437/events | https://github.com/huggingface/datasets/issues/6437 | 2,001,272,606 | I_kwDODunzps53SP8e | 6,437 | Problem in training iterable dataset | {
"login": "21Timothy",
"id": 38107672,
"node_id": "MDQ6VXNlcjM4MTA3Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/38107672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/21Timothy",
"html_url": "https://github.com/21Timothy",
"followers_url": "https://api.github.com/users/21Timothy/followers",
"following_url": "https://api.github.com/users/21Timothy/following{/other_user}",
"gists_url": "https://api.github.com/users/21Timothy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/21Timothy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/21Timothy/subscriptions",
"organizations_url": "https://api.github.com/users/21Timothy/orgs",
"repos_url": "https://api.github.com/users/21Timothy/repos",
"events_url": "https://api.github.com/users/21Timothy/events{/privacy}",
"received_events_url": "https://api.github.com/users/21Timothy/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Has anyone ever encountered this problem before?",
"`split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the number of workers, then the shards will be evenly distributed among workers. If the shards don't contain the same number of examples, then some workers might end up with more examples than others.\r\n\r\nHowever if you use a Dataset you'll end up with the same amount of data, because we know the length of the dataset we can split it exactly where we want. Also Dataset objects don't load the full dataset in memory; instead it memory maps Arrow files from disk."
] | 2023-11-20T03:04:02 | 2023-11-29T11:11:15 | null | NONE | null | null | null | ### Describe the bug
I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have noticed that this distribution results in different processes having different amounts of data to train on. As a result, when the earliest process finishes training and starts predicting on the test set, other processes are still training, causing the overall training speed to be very slow.
### Steps to reproduce the bug
```
def train(args, model, device, train_loader, optimizer, criterion, epoch, length):
model.train()
idx_length = 0
for batch_idx, data in enumerate(train_loader):
s_time = time.time()
X = data['X']
target = data['y'].reshape(-1, 28)
X, target = X.to(device), target.to(device)
optimizer.zero_grad()
output = model(X)
loss = criterion(output, target)
loss.backward()
optimizer.step()
idx_length += 1
if batch_idx % args.log_interval == 0:
# print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
# epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
# 100. * batch_idx * len(
# X) * torch.distributed.get_world_size() / length, loss.item()))
print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\t'.format(
epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(),
100. * batch_idx * len(
X) * torch.distributed.get_world_size() / length))
if args.dry_run:
break
print('Process %s length: %s time: %s' % (torch.distributed.get_rank(), idx_length, datetime.datetime.now()))
train_iterable_dataset = load_dataset("parquet", data_files=data_files, split="train", streaming=True)
test_iterable_dataset = load_dataset("parquet", data_files=data_files, split="test", streaming=True)
train_iterable_dataset = train_iterable_dataset.map(process_fn)
test_iterable_dataset = test_iterable_dataset.map(process_fn)
train_iterable_dataset = train_iterable_dataset.map(scale)
test_iterable_dataset = test_iterable_dataset.map(scale)
train_iterable_dataset = datasets.distributed.split_dataset_by_node(train_iterable_dataset,
world_size=world_size, rank=local_rank).shuffle(seed=1234)
test_iterable_dataset = datasets.distributed.split_dataset_by_node(test_iterable_dataset,
world_size=world_size, rank=local_rank).shuffle(seed=1234)
print(torch.distributed.get_rank(), train_iterable_dataset.n_shards, test_iterable_dataset.n_shards)
train_kwargs = {'batch_size': args.batch_size}
test_kwargs = {'batch_size': args.test_batch_size}
if use_cuda:
cuda_kwargs = {'num_workers': 3,#ngpus_per_node,
'pin_memory': True,
'shuffle': False}
train_kwargs.update(cuda_kwargs)
test_kwargs.update(cuda_kwargs)
train_loader = torch.utils.data.DataLoader(train_iterable_dataset, **train_kwargs,
# sampler=torch.utils.data.distributed.DistributedSampler(
# train_iterable_dataset,
# num_replicas=ngpus_per_node,
# rank=0)
)
test_loader = torch.utils.data.DataLoader(test_iterable_dataset, **test_kwargs,
# sampler=torch.utils.data.distributed.DistributedSampler(
# test_iterable_dataset,
# num_replicas=ngpus_per_node,
# rank=0)
)
for epoch in range(1, args.epochs + 1):
start_time = time.time()
train_iterable_dataset.set_epoch(epoch)
test_iterable_dataset.set_epoch(epoch)
train(args, model, device, train_loader, optimizer, criterion, epoch, train_len)
test(args, model, device, criterion2, test_loader)
```
And here's part of the output:
```
Train Epoch: 1 Batch_idx: 5000 Process: 0 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 1 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5000 Process: 2 [320000/4710975.0 (7%)]
Train Epoch: 1 Batch_idx: 5862 Process: 3 Data_length: 12 coststime: 0.04095172882080078
Train Epoch: 1 Batch_idx: 5862 Process: 0 Data_length: 3 coststime: 0.0751960277557373
Train Epoch: 1 Batch_idx: 5867 Process: 3 Data_length: 49 coststime: 0.0032558441162109375
Train Epoch: 1 Batch_idx: 5872 Process: 1 Data_length: 2 coststime: 0.022842884063720703
Train Epoch: 1 Batch_idx: 5876 Process: 3 Data_length: 63 coststime: 0.002694845199584961
Process 3 length: 5877 time: 2023-11-17 17:03:26.582317
Train epoch 1 costTime: 241.72063446044922s . Process 3 Start to test.
3 0 tensor(45508.8516, device='cuda:3')
3 100 tensor(45309.0469, device='cuda:3')
3 200 tensor(45675.3047, device='cuda:3')
3 300 tensor(45263.0273, device='cuda:3')
Process 3 Reduce metrics.
Train Epoch: 2 Batch_idx: 0 Process: 3 [0/4710975.0 (0%)]
Train Epoch: 1 Batch_idx: 5882 Process: 1 Data_length: 63 coststime: 0.05185818672180176
Train Epoch: 1 Batch_idx: 5887 Process: 1 Data_length: 12 coststime: 0.006895303726196289
Process 1 length: 5888 time: 2023-11-17 17:20:48.578204
Train epoch 1 costTime: 1285.7279663085938s . Process 1 Start to test.
1 0 tensor(45265.9141, device='cuda:1')
```
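As noted in the comments above, `split_dataset_by_node` assigns whole shards (data files) of an iterable dataset to nodes, so a quick sanity check is whether `n_shards` is a multiple of the world size (sketch below; the parquet paths and world size are hypothetical):
```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

world_size = 4                                                 # illustrative
data_files = [f"data/part-{i:05d}.parquet" for i in range(8)]  # hypothetical paths: 8 shards over 4 nodes

ds = load_dataset("parquet", data_files=data_files, split="train", streaming=True)
assert ds.n_shards % world_size == 0, "shards cannot be distributed evenly across nodes"

per_rank = [split_dataset_by_node(ds, rank=r, world_size=world_size) for r in range(world_size)]
print([d.n_shards for d in per_rank])  # each rank gets n_shards / world_size shards
# Note: even with an even shard count, shards holding different numbers of rows
# can still leave ranks with different numbers of examples.
```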
### Expected behavior
I'd like to know how to fix this problem.
### Environment info
```
torch==2.0
datasets==2.14.0
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6437/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6436/comments | https://api.github.com/repos/huggingface/datasets/issues/6436/events | https://github.com/huggingface/datasets/issues/6436 | 2,000,844,474 | I_kwDODunzps53Qna6 | 6,436 | TypeError: <lambda>() takes 0 positional arguments but 1 was given | {
"login": "ahmadmustafaanis",
"id": 47111429,
"node_id": "MDQ6VXNlcjQ3MTExNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/47111429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmadmustafaanis",
"html_url": "https://github.com/ahmadmustafaanis",
"followers_url": "https://api.github.com/users/ahmadmustafaanis/followers",
"following_url": "https://api.github.com/users/ahmadmustafaanis/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmadmustafaanis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmadmustafaanis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmadmustafaanis/subscriptions",
"organizations_url": "https://api.github.com/users/ahmadmustafaanis/orgs",
"repos_url": "https://api.github.com/users/ahmadmustafaanis/repos",
"events_url": "https://api.github.com/users/ahmadmustafaanis/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmadmustafaanis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks like a problem with your environment rather than `datasets`."
] | 2023-11-19T13:10:20 | 2023-11-29T16:28:34 | 2023-11-29T16:28:34 | NONE | null | null | null | ### Describe the bug
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import Dataset
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.15.0"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
61 import pyarrow.compute as pc
62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi
---> 63 from multiprocess import Pool
64 from requests import HTTPError
65
[/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module>
31
32 import sys
---> 33 from . import context
34
35 #
[/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module>
4
5 from . import process
----> 6 from . import reduction
7
8 __all__ = ()
[/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module>
14 import os
15 try:
---> 16 import dill as pickle
17 except ImportError:
18 import pickle
[/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module>
24
25
---> 26 from ._dill import (
27 dump, dumps, load, loads, copy,
28 Pickler, Unpickler, register, pickle, pickles, check,
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module>
166 try:
167 from _pyio import open as _open
--> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open)
169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open)
170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open)
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs)
154 def get_file_type(*args, **kwargs):
155 open = kwargs.pop("open", __builtin__.open)
--> 156 f = open(os.devnull, *args, **kwargs)
157 t = type(f)
158 f.close()
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener)
280 return result
281 encoding = text_encoding(encoding)
--> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering)
283 result = text
284 text.mode = mode
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through)
2043 encoding = "utf-8"
2044 else:
-> 2045 encoding = locale.getpreferredencoding(False)
2046
2047 if not isinstance(encoding, str):
TypeError: <lambda>() takes 0 positional arguments but 1 was given
```
or
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-36-652e886d387f>](https://localhost:8080/#) in <cell line: 1>()
----> 1 import datasets
9 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
20 __version__ = "2.15.0"
21
---> 22 from .arrow_dataset import Dataset
23 from .arrow_reader import ReadInstruction
24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
61 import pyarrow.compute as pc
62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi
---> 63 from multiprocess import Pool
64 from requests import HTTPError
65
[/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module>
31
32 import sys
---> 33 from . import context
34
35 #
[/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module>
4
5 from . import process
----> 6 from . import reduction
7
8 __all__ = ()
[/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module>
14 import os
15 try:
---> 16 import dill as pickle
17 except ImportError:
18 import pickle
[/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module>
24
25
---> 26 from ._dill import (
27 dump, dumps, load, loads, copy,
28 Pickler, Unpickler, register, pickle, pickles, check,
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module>
166 try:
167 from _pyio import open as _open
--> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open)
169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open)
170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open)
[/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs)
154 def get_file_type(*args, **kwargs):
155 open = kwargs.pop("open", __builtin__.open)
--> 156 f = open(os.devnull, *args, **kwargs)
157 t = type(f)
158 f.close()
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener)
280 return result
281 encoding = text_encoding(encoding)
--> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering)
283 result = text
284 text.mode = mode
[/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through)
2043 encoding = "utf-8"
2044 else:
-> 2045 encoding = locale.getpreferredencoding(False)
2046
2047 if not isinstance(encoding, str):
TypeError: <lambda>() takes 0 positional arguments but 1 was given
```
### Steps to reproduce the bug
`import datasets` on colab
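For reference, the tracebacks above fail inside `locale.getpreferredencoding(False)`, which suggests something in the Colab environment has replaced that function with a zero-argument lambda; a commonly reported workaround (an assumption about the root cause, not a `datasets` fix) is to restore a compatible signature before the import:
```python
import locale

# Restore a callable that accepts the positional argument `_pyio` passes in.
locale.getpreferredencoding = lambda do_setlocale=True: "UTF-8"

import datasets  # should now import cleanly if the patched locale function was the culprit
print(datasets.__version__)
```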
### Expected behavior
work fine
### Environment info
colab
`!pip install datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6436/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6435/comments | https://api.github.com/repos/huggingface/datasets/issues/6435/events | https://github.com/huggingface/datasets/issues/6435 | 2,000,690,513 | I_kwDODunzps53QB1R | 6,435 | Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method | {
"login": "kopyl",
"id": 17604849,
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kopyl",
"html_url": "https://github.com/kopyl",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"repos_url": "https://api.github.com/users/kopyl/repos",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"[This doc section](https://huggingface.co./docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.",
"@mariosasko thank you very much, i'll check it"
] | 2023-11-19T04:21:16 | 2023-12-04T16:57:44 | 2023-12-04T16:57:43 | NONE | null | null | null | ### Describe the bug
1. I ran dataset mapping with `num_proc=6` and got this error:
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
I can't actually find a way to run multi-GPU dataset mapping. Can you help?
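For what it's worth, a minimal sketch of the pattern from the `datasets` multiprocessing docs, i.e. using the `spawn` start method together with `with_rank=True` so each worker initializes CUDA on its own GPU (the dataset and the mapped computation are illustrative stand-ins):
```python
import torch
from multiprocess import set_start_method
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})  # stand-in for the real dataset

def gpu_fn(batch, rank):
    # Each worker picks a GPU from its rank; this is only safe with the "spawn" start method.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    t = torch.tensor(batch["x"], dtype=torch.float32, device=device)
    return {"y": (t * 2).cpu().tolist()}

if __name__ == "__main__":
    set_start_method("spawn")  # avoids "Cannot re-initialize CUDA in forked subprocess"
    ds = ds.map(gpu_fn, batched=True, with_rank=True, num_proc=torch.cuda.device_count())
```
With 6 GPUs, `num_proc=torch.cuda.device_count()` gives each of the 6 workers its own device, matching the `num_proc=6` setup above.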
### Steps to reproduce the bug
1. Run SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py
### Expected behavior
Should work well
### Environment info
6x A100 SXM, Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6435/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6434/comments | https://api.github.com/repos/huggingface/datasets/issues/6434/events | https://github.com/huggingface/datasets/pull/6434 | 1,999,554,915 | PR_kwDODunzps5fxgUO | 6,434 | Use `ruff` for formatting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004293 / 0.011353 (-0.007060) | 0.002953 / 0.011008 (-0.008055) | 0.063712 / 0.038508 (0.025204) | 0.029963 / 0.023109 (0.006854) | 0.248574 / 0.275898 (-0.027324) | 0.272757 / 0.323480 (-0.050723) | 0.003878 / 0.007986 (-0.004108) | 0.002456 / 0.004328 (-0.001872) | 0.047959 / 0.004250 (0.043709) | 0.043277 / 0.037052 (0.006224) | 0.255071 / 0.258489 (-0.003418) | 0.283934 / 0.293841 (-0.009907) | 0.022870 / 0.128546 (-0.105676) | 0.007224 / 0.075646 (-0.068422) | 0.221595 / 0.419271 (-0.197677) | 0.053468 / 0.043533 (0.009935) | 0.249906 / 0.255139 (-0.005233) | 0.274894 / 0.283200 (-0.008305) | 0.017246 / 0.141683 (-0.124437) | 1.112440 / 1.452155 (-0.339714) | 1.167293 / 1.492716 (-0.325424) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092684 / 0.018006 (0.074677) | 0.301721 / 0.000490 (0.301231) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018289 / 0.037411 (-0.019122) | 0.061898 / 0.014526 (0.047372) | 0.072904 / 0.176557 (-0.103653) | 0.118515 / 0.737135 (-0.618621) | 0.074000 / 0.296338 (-0.222338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287044 / 0.215209 (0.071835) | 2.818091 / 2.077655 (0.740436) | 1.502401 / 1.504120 (-0.001719) | 1.374688 / 1.541195 (-0.166506) | 1.410254 / 
1.468490 (-0.058236) | 0.407519 / 4.584777 (-4.177258) | 2.379199 / 3.745712 (-1.366513) | 2.585745 / 5.269862 (-2.684117) | 1.562336 / 4.565676 (-3.003341) | 0.045977 / 0.424275 (-0.378299) | 0.004809 / 0.007607 (-0.002798) | 0.347942 / 0.226044 (0.121897) | 3.383318 / 2.268929 (1.114390) | 1.844784 / 55.444624 (-53.599841) | 1.561949 / 6.876477 (-5.314528) | 1.571082 / 2.142072 (-0.570990) | 0.482469 / 4.805227 (-4.322758) | 0.099357 / 6.500664 (-6.401307) | 0.041039 / 0.075469 (-0.034430) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.944236 / 1.841788 (-0.897551) | 11.519623 / 8.074308 (3.445315) | 10.353829 / 10.191392 (0.162437) | 0.137530 / 0.680424 (-0.542894) | 0.014454 / 0.534201 (-0.519747) | 0.268657 / 0.579283 (-0.310626) | 0.265165 / 0.434364 (-0.169199) | 0.302626 / 0.540337 (-0.237712) | 0.426923 / 1.386936 (-0.960013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004711 / 0.011353 (-0.006641) | 0.002504 / 0.011008 (-0.008504) | 0.047671 / 0.038508 (0.009163) | 0.051147 / 0.023109 (0.028037) | 0.272848 / 0.275898 (-0.003050) | 0.291705 / 0.323480 (-0.031775) | 0.004002 / 0.007986 (-0.003984) | 0.002382 / 0.004328 (-0.001947) | 0.047583 / 0.004250 (0.043332) | 0.038203 / 0.037052 (0.001150) | 0.278536 / 0.258489 (0.020047) | 0.305872 / 0.293841 (0.012031) | 0.023890 / 0.128546 (-0.104657) | 0.006954 / 0.075646 (-0.068693) | 0.053716 / 0.419271 (-0.365556) | 0.032158 / 0.043533 (-0.011375) | 0.273939 / 0.255139 (0.018800) | 0.290722 / 0.283200 (0.007522) | 0.016946 / 0.141683 (-0.124737) | 1.102726 / 1.452155 (-0.349429) | 1.169356 / 1.492716 (-0.323360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092520 / 0.018006 (0.074514) | 0.301949 / 0.000490 (0.301459) | 0.000248 / 0.000200 (0.000048) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021013 / 0.037411 (-0.016399) | 0.069965 / 0.014526 (0.055439) | 0.080105 / 0.176557 (-0.096451) | 0.119802 / 0.737135 (-0.617334) | 0.081615 / 0.296338 (-0.214724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301170 / 0.215209 (0.085960) | 2.884817 / 2.077655 (0.807162) | 1.596376 / 1.504120 (0.092256) | 1.471205 / 1.541195 (-0.069990) | 1.499061 / 1.468490 (0.030571) | 0.407729 / 4.584777 (-4.177048) | 2.432824 / 3.745712 (-1.312888) | 2.561905 / 5.269862 (-2.707957) | 1.535364 / 4.565676 (-3.030313) | 0.046592 / 0.424275 (-0.377683) | 0.004773 / 0.007607 (-0.002834) | 0.350872 / 0.226044 (0.124828) | 3.474874 / 2.268929 (1.205945) | 1.963114 / 55.444624 (-53.481510) | 1.688213 / 6.876477 (-5.188263) | 1.686325 / 2.142072 (-0.455748) | 0.487151 / 4.805227 (-4.318076) | 0.104253 / 6.500664 (-6.396411) | 0.043499 / 0.075469 (-0.031970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980395 / 1.841788 (-0.861393) | 11.907393 / 8.074308 (3.833085) | 10.983688 / 10.191392 (0.792296) | 0.142875 / 0.680424 (-0.537549) | 0.015375 / 0.534201 (-0.518826) | 0.270043 / 0.579283 (-0.309240) | 0.295092 / 0.434364 (-0.139272) | 0.309466 / 0.540337 (-0.230871) | 0.409812 / 1.386936 (-0.977124) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#17f97ca8ec66f6664d3e9b7ceb84fe3ca49a9c18 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004703 / 0.011353 (-0.006650) | 0.002767 / 0.011008 (-0.008241) | 0.063162 / 0.038508 (0.024654) | 0.052241 / 0.023109 (0.029132) | 0.237138 / 0.275898 (-0.038760) | 0.262793 / 0.323480 (-0.060687) | 0.003873 / 0.007986 (-0.004113) | 0.002433 / 0.004328 (-0.001896) | 0.048647 / 0.004250 (0.044397) | 0.037887 / 0.037052 (0.000834) | 0.244939 / 0.258489 (-0.013551) | 0.304015 / 0.293841 (0.010174) | 0.022859 / 0.128546 (-0.105688) | 0.006763 / 0.075646 (-0.068883) | 0.202728 / 0.419271 (-0.216544) | 0.035369 / 0.043533 (-0.008164) | 0.240785 / 0.255139 (-0.014354) | 0.255109 / 0.283200 (-0.028091) | 0.017951 / 0.141683 (-0.123732) | 1.096103 / 1.452155 (-0.356052) | 1.167662 / 1.492716 (-0.325054) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092285 / 0.018006 (0.074279) | 0.300201 / 0.000490 (0.299711) | 0.000222 / 0.000200 (0.000022) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018271 / 0.037411 (-0.019140) | 0.062306 / 0.014526 (0.047780) | 0.072615 / 0.176557 (-0.103942) | 0.119357 / 0.737135 (-0.617779) | 0.073365 / 0.296338 (-0.222974) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278763 / 0.215209 (0.063554) | 2.714943 / 2.077655 (0.637288) | 1.426318 / 1.504120 (-0.077802) | 1.313296 / 1.541195 (-0.227898) | 1.330920 / 
1.468490 (-0.137570) | 0.391466 / 4.584777 (-4.193311) | 2.380521 / 3.745712 (-1.365191) | 2.545042 / 5.269862 (-2.724819) | 1.549696 / 4.565676 (-3.015980) | 0.044661 / 0.424275 (-0.379614) | 0.005269 / 0.007607 (-0.002338) | 0.331112 / 0.226044 (0.105068) | 3.241120 / 2.268929 (0.972192) | 1.783771 / 55.444624 (-53.660853) | 1.506205 / 6.876477 (-5.370272) | 1.521062 / 2.142072 (-0.621010) | 0.462339 / 4.805227 (-4.342888) | 0.097646 / 6.500664 (-6.403018) | 0.041365 / 0.075469 (-0.034104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939653 / 1.841788 (-0.902135) | 11.415472 / 8.074308 (3.341164) | 10.338961 / 10.191392 (0.147569) | 0.128543 / 0.680424 (-0.551881) | 0.013997 / 0.534201 (-0.520204) | 0.270034 / 0.579283 (-0.309249) | 0.266766 / 0.434364 (-0.167598) | 0.305290 / 0.540337 (-0.235047) | 0.395969 / 1.386936 (-0.990967) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004869 / 0.011353 (-0.006484) | 0.002445 / 0.011008 (-0.008563) | 0.051256 / 0.038508 (0.012748) | 0.050871 / 0.023109 (0.027761) | 0.271044 / 0.275898 (-0.004854) | 0.294138 / 0.323480 (-0.029342) | 0.003974 / 0.007986 (-0.004012) | 0.002423 / 0.004328 (-0.001906) | 0.048277 / 0.004250 (0.044027) | 0.039685 / 0.037052 (0.002632) | 0.277092 / 0.258489 (0.018603) | 0.302097 / 0.293841 (0.008256) | 0.024515 / 0.128546 (-0.104031) | 0.006892 / 0.075646 (-0.068754) | 0.053528 / 0.419271 (-0.365744) | 0.032243 / 0.043533 (-0.011290) | 0.272098 / 0.255139 (0.016959) | 0.291678 / 0.283200 (0.008479) | 0.018368 / 0.141683 (-0.123315) | 1.160151 / 1.452155 (-0.292004) | 1.193643 / 1.492716 (-0.299073) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096669 / 0.018006 (0.078663) | 0.299043 / 0.000490 (0.298553) | 0.000227 / 0.000200 (0.000027) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021557 / 0.037411 (-0.015855) | 0.069875 / 0.014526 (0.055349) | 0.080952 / 0.176557 (-0.095605) | 0.119509 / 0.737135 (-0.617626) | 0.082030 / 0.296338 (-0.214308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303062 / 0.215209 (0.087853) | 2.943823 / 2.077655 (0.866169) | 1.607816 / 1.504120 (0.103696) | 1.479773 / 1.541195 (-0.061422) | 1.482663 / 1.468490 (0.014173) | 0.411923 / 4.584777 (-4.172854) | 2.450138 / 3.745712 (-1.295574) | 2.466111 / 5.269862 (-2.803751) | 1.543852 / 4.565676 (-3.021825) | 0.046256 / 0.424275 (-0.378019) | 0.004787 / 0.007607 (-0.002820) | 0.353673 / 0.226044 (0.127628) | 3.528218 / 2.268929 (1.259289) | 1.984663 / 55.444624 (-53.459962) | 1.675785 / 6.876477 (-5.200691) | 1.775646 / 2.142072 (-0.366426) | 0.483277 / 4.805227 (-4.321950) | 0.097781 / 6.500664 (-6.402883) | 0.040291 / 0.075469 (-0.035178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975458 / 1.841788 (-0.866330) | 11.961966 / 8.074308 (3.887658) | 10.558559 / 10.191392 (0.367167) | 0.131372 / 0.680424 (-0.549052) | 0.016156 / 0.534201 (-0.518045) | 0.269254 / 0.579283 (-0.310029) | 0.274896 / 0.434364 (-0.159468) | 0.304672 / 0.540337 (-0.235665) | 0.517652 / 1.386936 (-0.869284) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1a1e7416892dcb71097b47120bc9b26b3d90f06a \"CML watermark\")\n"
] | 2023-11-17T16:53:22 | 2023-11-21T14:19:21 | 2023-11-21T14:13:13 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6434",
"html_url": "https://github.com/huggingface/datasets/pull/6434",
"diff_url": "https://github.com/huggingface/datasets/pull/6434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6434.patch",
"merged_at": "2023-11-21T14:13:13"
} | Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6434/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6433/comments | https://api.github.com/repos/huggingface/datasets/issues/6433/events | https://github.com/huggingface/datasets/pull/6433 | 1,999,419,105 | PR_kwDODunzps5fxDoG | 6,433 | Better `tqdm` wrapper | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003251 / 0.011008 (-0.007757) | 0.061528 / 0.038508 (0.023020) | 0.055386 / 0.023109 (0.032276) | 0.248536 / 0.275898 (-0.027362) | 0.272346 / 0.323480 (-0.051134) | 0.003875 / 0.007986 (-0.004111) | 0.002396 / 0.004328 (-0.001933) | 0.047659 / 0.004250 (0.043409) | 0.037448 / 0.037052 (0.000396) | 0.251101 / 0.258489 (-0.007388) | 0.282353 / 0.293841 (-0.011488) | 0.027784 / 0.128546 (-0.100762) | 0.010534 / 0.075646 (-0.065113) | 0.206025 / 0.419271 (-0.213246) | 0.035410 / 0.043533 (-0.008123) | 0.250626 / 0.255139 (-0.004513) | 0.266801 / 0.283200 (-0.016399) | 0.017704 / 0.141683 (-0.123979) | 1.089970 / 1.452155 (-0.362185) | 1.171683 / 1.492716 (-0.321033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092700 / 0.018006 (0.074694) | 0.301314 / 0.000490 (0.300824) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018385 / 0.037411 (-0.019026) | 0.062364 / 0.014526 (0.047838) | 0.075887 / 0.176557 (-0.100670) | 0.119484 / 0.737135 (-0.617651) | 0.074490 / 0.296338 (-0.221849) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283893 / 0.215209 (0.068684) | 2.746772 / 2.077655 (0.669118) | 1.486568 / 1.504120 (-0.017552) | 1.376451 / 1.541195 (-0.164744) | 1.377928 / 
1.468490 (-0.090562) | 0.572393 / 4.584777 (-4.012384) | 2.383282 / 3.745712 (-1.362430) | 2.791614 / 5.269862 (-2.478248) | 1.753373 / 4.565676 (-2.812303) | 0.063539 / 0.424275 (-0.360736) | 0.005014 / 0.007607 (-0.002593) | 0.341300 / 0.226044 (0.115256) | 3.376960 / 2.268929 (1.108032) | 1.914162 / 55.444624 (-53.530462) | 1.590188 / 6.876477 (-5.286289) | 1.618420 / 2.142072 (-0.523652) | 0.648723 / 4.805227 (-4.156504) | 0.117745 / 6.500664 (-6.382919) | 0.048858 / 0.075469 (-0.026611) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.944422 / 1.841788 (-0.897366) | 11.603590 / 8.074308 (3.529282) | 10.707000 / 10.191392 (0.515608) | 0.130779 / 0.680424 (-0.549645) | 0.015126 / 0.534201 (-0.519075) | 0.284869 / 0.579283 (-0.294414) | 0.266778 / 0.434364 (-0.167585) | 0.320646 / 0.540337 (-0.219691) | 0.417167 / 1.386936 (-0.969769) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005384 / 0.011353 (-0.005969) | 0.003311 / 0.011008 (-0.007698) | 0.049933 / 0.038508 (0.011425) | 0.052791 / 0.023109 (0.029681) | 0.277061 / 0.275898 (0.001162) | 0.302149 / 0.323480 (-0.021331) | 0.004006 / 0.007986 (-0.003979) | 0.002394 / 0.004328 (-0.001934) | 0.049020 / 0.004250 (0.044770) | 0.040168 / 0.037052 (0.003116) | 0.278625 / 0.258489 (0.020136) | 0.308641 / 0.293841 (0.014800) | 0.029808 / 0.128546 (-0.098738) | 0.010873 / 0.075646 (-0.064774) | 0.058040 / 0.419271 (-0.361231) | 0.032706 / 0.043533 (-0.010827) | 0.277254 / 0.255139 (0.022115) | 0.295208 / 0.283200 (0.012008) | 0.017769 / 0.141683 (-0.123914) | 1.126416 / 1.452155 (-0.325739) | 1.169046 / 1.492716 (-0.323670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094776 / 0.018006 (0.076770) | 0.306262 / 0.000490 (0.305772) | 0.000223 / 0.000200 (0.000023) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022279 / 0.037411 (-0.015132) | 0.086784 / 0.014526 (0.072258) | 0.082268 / 0.176557 (-0.094289) | 0.120131 / 0.737135 (-0.617004) | 0.082862 / 0.296338 (-0.213476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300565 / 0.215209 (0.085356) | 2.923424 / 2.077655 (0.845769) | 1.594836 / 1.504120 (0.090716) | 1.504323 / 1.541195 (-0.036872) | 1.498495 / 1.468490 (0.030005) | 0.570969 / 4.584777 (-4.013808) | 2.476966 / 3.745712 (-1.268746) | 2.785190 / 5.269862 (-2.484672) | 1.749839 / 4.565676 (-2.815837) | 0.062809 / 0.424275 (-0.361466) | 0.004908 / 0.007607 (-0.002699) | 0.361513 / 0.226044 (0.135469) | 3.587135 / 2.268929 (1.318207) | 1.952030 / 55.444624 (-53.492595) | 1.661552 / 6.876477 (-5.214925) | 1.678673 / 2.142072 (-0.463399) | 0.645083 / 4.805227 (-4.160144) | 0.117098 / 6.500664 (-6.383566) | 0.041630 / 0.075469 (-0.033839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987883 / 1.841788 (-0.853904) | 12.300764 / 8.074308 (4.226456) | 10.962068 / 10.191392 (0.770675) | 0.143200 / 0.680424 (-0.537224) | 0.015743 / 0.534201 (-0.518458) | 0.289733 / 0.579283 (-0.289550) | 0.276384 / 0.434364 (-0.157979) | 0.328542 / 0.540337 (-0.211795) | 0.552175 / 1.386936 (-0.834761) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#81a65a57cf9fd0abdf85b664a144c9a06cb2896d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005110 / 0.011353 (-0.006243) | 0.003311 / 0.011008 (-0.007697) | 0.061962 / 0.038508 (0.023454) | 0.050250 / 0.023109 (0.027140) | 0.245313 / 0.275898 (-0.030585) | 0.268748 / 0.323480 (-0.054732) | 0.004693 / 0.007986 (-0.003293) | 0.002465 / 0.004328 (-0.001863) | 0.047698 / 0.004250 (0.043447) | 0.037314 / 0.037052 (0.000262) | 0.250370 / 0.258489 (-0.008119) | 0.286023 / 0.293841 (-0.007818) | 0.027891 / 0.128546 (-0.100655) | 0.010574 / 0.075646 (-0.065072) | 0.204895 / 0.419271 (-0.214376) | 0.036014 / 0.043533 (-0.007519) | 0.250959 / 0.255139 (-0.004180) | 0.266710 / 0.283200 (-0.016489) | 0.018492 / 0.141683 (-0.123191) | 1.115340 / 1.452155 (-0.336815) | 1.176488 / 1.492716 (-0.316229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099409 / 0.018006 (0.081402) | 0.310151 / 0.000490 (0.309661) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018132 / 0.037411 (-0.019279) | 0.061820 / 0.014526 (0.047294) | 0.074960 / 0.176557 (-0.101596) | 0.119793 / 0.737135 (-0.617342) | 0.074132 / 0.296338 (-0.222206) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286388 / 0.215209 (0.071179) | 2.830791 / 2.077655 (0.753137) | 1.514588 / 1.504120 (0.010468) | 1.376514 / 1.541195 (-0.164681) | 1.405080 / 
1.468490 (-0.063410) | 0.555297 / 4.584777 (-4.029480) | 2.364838 / 3.745712 (-1.380874) | 2.806050 / 5.269862 (-2.463812) | 1.756114 / 4.565676 (-2.809562) | 0.062254 / 0.424275 (-0.362022) | 0.005020 / 0.007607 (-0.002588) | 0.346272 / 0.226044 (0.120227) | 3.453195 / 2.268929 (1.184266) | 1.837810 / 55.444624 (-53.606814) | 1.577984 / 6.876477 (-5.298493) | 1.560821 / 2.142072 (-0.581251) | 0.633930 / 4.805227 (-4.171297) | 0.116414 / 6.500664 (-6.384250) | 0.042007 / 0.075469 (-0.033462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941322 / 1.841788 (-0.900466) | 11.740927 / 8.074308 (3.666618) | 10.450543 / 10.191392 (0.259151) | 0.128820 / 0.680424 (-0.551604) | 0.014856 / 0.534201 (-0.519345) | 0.285636 / 0.579283 (-0.293647) | 0.270051 / 0.434364 (-0.164313) | 0.321244 / 0.540337 (-0.219093) | 0.415486 / 1.386936 (-0.971450) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005333 / 0.011353 (-0.006020) | 0.003370 / 0.011008 (-0.007638) | 0.049046 / 0.038508 (0.010538) | 0.055767 / 0.023109 (0.032658) | 0.273463 / 0.275898 (-0.002435) | 0.292909 / 0.323480 (-0.030571) | 0.004102 / 0.007986 (-0.003883) | 0.002460 / 0.004328 (-0.001868) | 0.048025 / 0.004250 (0.043775) | 0.040342 / 0.037052 (0.003290) | 0.275114 / 0.258489 (0.016625) | 0.295988 / 0.293841 (0.002147) | 0.029461 / 0.128546 (-0.099085) | 0.010654 / 0.075646 (-0.064992) | 0.057196 / 0.419271 (-0.362076) | 0.033238 / 0.043533 (-0.010295) | 0.275885 / 0.255139 (0.020746) | 0.288566 / 0.283200 (0.005366) | 0.018058 / 0.141683 (-0.123625) | 1.130513 / 1.452155 (-0.321642) | 1.173608 / 1.492716 (-0.319108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097751 / 0.018006 (0.079745) | 0.312106 / 0.000490 (0.311616) | 0.000232 / 0.000200 (0.000032) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021201 / 0.037411 (-0.016211) | 0.070150 / 0.014526 (0.055624) | 0.081073 / 0.176557 (-0.095484) | 0.119520 / 0.737135 (-0.617615) | 0.084479 / 0.296338 (-0.211859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292322 / 0.215209 (0.077113) | 2.844070 / 2.077655 (0.766415) | 1.581838 / 1.504120 (0.077718) | 1.462665 / 1.541195 (-0.078530) | 1.483013 / 1.468490 (0.014523) | 0.558705 / 4.584777 (-4.026072) | 2.422368 / 3.745712 (-1.323344) | 2.772274 / 5.269862 (-2.497587) | 1.725901 / 4.565676 (-2.839775) | 0.062993 / 0.424275 (-0.361282) | 0.004982 / 0.007607 (-0.002625) | 0.344336 / 0.226044 (0.118292) | 3.425230 / 2.268929 (1.156302) | 1.947199 / 55.444624 (-53.497425) | 1.670362 / 6.876477 (-5.206115) | 1.674112 / 2.142072 (-0.467961) | 0.633857 / 4.805227 (-4.171370) | 0.114837 / 6.500664 (-6.385827) | 0.042558 / 0.075469 (-0.032911) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979474 / 1.841788 (-0.862314) | 12.110856 / 8.074308 (4.036548) | 10.605998 / 10.191392 (0.414606) | 0.130769 / 0.680424 (-0.549654) | 0.016057 / 0.534201 (-0.518144) | 0.296448 / 0.579283 (-0.282835) | 0.278078 / 0.434364 (-0.156286) | 0.320809 / 0.540337 (-0.219528) | 0.570756 / 1.386936 (-0.816180) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eeb9727cc680a8f8172a012920bf84f285fec5a0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005181 / 0.011353 (-0.006172) | 0.003434 / 0.011008 (-0.007574) | 0.062333 / 0.038508 (0.023825) | 0.058544 / 0.023109 (0.035435) | 0.233794 / 0.275898 (-0.042104) | 0.258774 / 0.323480 (-0.064706) | 0.003869 / 0.007986 (-0.004117) | 0.002478 / 0.004328 (-0.001850) | 0.047871 / 0.004250 (0.043620) | 0.037997 / 0.037052 (0.000945) | 0.241269 / 0.258489 (-0.017220) | 0.270103 / 0.293841 (-0.023738) | 0.027710 / 0.128546 (-0.100836) | 0.010683 / 0.075646 (-0.064963) | 0.213204 / 0.419271 (-0.206067) | 0.036156 / 0.043533 (-0.007377) | 0.240061 / 0.255139 (-0.015078) | 0.253627 / 0.283200 (-0.029573) | 0.017880 / 0.141683 (-0.123803) | 1.102965 / 1.452155 (-0.349189) | 1.176919 / 1.492716 (-0.315797) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093206 / 0.018006 (0.075200) | 0.300960 / 0.000490 (0.300470) | 0.000214 / 0.000200 (0.000014) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019417 / 0.037411 (-0.017994) | 0.061948 / 0.014526 (0.047422) | 0.073560 / 0.176557 (-0.102997) | 0.120682 / 0.737135 (-0.616453) | 0.074925 / 0.296338 (-0.221413) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280157 / 0.215209 (0.064948) | 2.760648 / 2.077655 (0.682994) | 1.482129 / 1.504120 (-0.021991) | 1.364091 / 1.541195 (-0.177104) | 1.415680 / 
1.468490 (-0.052810) | 0.564697 / 4.584777 (-4.020080) | 2.364080 / 3.745712 (-1.381633) | 2.794018 / 5.269862 (-2.475844) | 1.752157 / 4.565676 (-2.813520) | 0.062234 / 0.424275 (-0.362041) | 0.004927 / 0.007607 (-0.002680) | 0.337835 / 0.226044 (0.111790) | 3.313819 / 2.268929 (1.044891) | 1.834095 / 55.444624 (-53.610530) | 1.559964 / 6.876477 (-5.316513) | 1.598489 / 2.142072 (-0.543584) | 0.636829 / 4.805227 (-4.168399) | 0.116436 / 6.500664 (-6.384228) | 0.042506 / 0.075469 (-0.032963) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951508 / 1.841788 (-0.890280) | 11.599532 / 8.074308 (3.525224) | 10.492355 / 10.191392 (0.300963) | 0.151582 / 0.680424 (-0.528842) | 0.014356 / 0.534201 (-0.519845) | 0.288448 / 0.579283 (-0.290835) | 0.265607 / 0.434364 (-0.168757) | 0.324455 / 0.540337 (-0.215883) | 0.416718 / 1.386936 (-0.970218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005489 / 0.011353 (-0.005864) | 0.003481 / 0.011008 (-0.007527) | 0.048952 / 0.038508 (0.010444) | 0.054650 / 0.023109 (0.031540) | 0.280853 / 0.275898 (0.004955) | 0.298089 / 0.323480 (-0.025391) | 0.004762 / 0.007986 (-0.003224) | 0.002500 / 0.004328 (-0.001828) | 0.048503 / 0.004250 (0.044253) | 0.042048 / 0.037052 (0.004995) | 0.281729 / 0.258489 (0.023240) | 0.303625 / 0.293841 (0.009785) | 0.028887 / 0.128546 (-0.099659) | 0.010687 / 0.075646 (-0.064960) | 0.058093 / 0.419271 (-0.361178) | 0.032366 / 0.043533 (-0.011167) | 0.281987 / 0.255139 (0.026848) | 0.295554 / 0.283200 (0.012355) | 0.019242 / 0.141683 (-0.122441) | 1.127760 / 1.452155 (-0.324395) | 1.166868 / 1.492716 (-0.325848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092367 / 0.018006 (0.074361) | 0.300195 / 0.000490 (0.299706) | 0.000222 / 0.000200 (0.000022) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022062 / 0.037411 (-0.015349) | 0.069955 / 0.014526 (0.055429) | 0.081224 / 0.176557 (-0.095333) | 0.120183 / 0.737135 (-0.616953) | 0.082846 / 0.296338 (-0.213492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295880 / 0.215209 (0.080671) | 2.902508 / 2.077655 (0.824853) | 1.616311 / 1.504120 (0.112191) | 1.491320 / 1.541195 (-0.049875) | 1.517333 / 1.468490 (0.048843) | 0.566824 / 4.584777 (-4.017953) | 2.428397 / 3.745712 (-1.317315) | 2.807241 / 5.269862 (-2.462620) | 1.786364 / 4.565676 (-2.779312) | 0.065253 / 0.424275 (-0.359022) | 0.004971 / 0.007607 (-0.002636) | 0.350095 / 0.226044 (0.124051) | 3.422226 / 2.268929 (1.153297) | 1.972955 / 55.444624 (-53.471670) | 1.686142 / 6.876477 (-5.190335) | 1.694539 / 2.142072 (-0.447533) | 0.645709 / 4.805227 (-4.159518) | 0.117774 / 6.500664 (-6.382890) | 0.041679 / 0.075469 (-0.033790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976835 / 1.841788 (-0.864952) | 12.358039 / 8.074308 (4.283730) | 10.774304 / 10.191392 (0.582912) | 0.130442 / 0.680424 (-0.549982) | 0.016071 / 0.534201 (-0.518130) | 0.289911 / 0.579283 (-0.289372) | 0.280693 / 0.434364 (-0.153671) | 0.325598 / 0.540337 (-0.214739) | 0.549618 / 1.386936 (-0.837318) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1570235228b67a15dce1ed535e905edd7442117f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005176 / 0.011353 (-0.006177) | 0.003297 / 0.011008 (-0.007711) | 0.061673 / 0.038508 (0.023165) | 0.052174 / 0.023109 (0.029065) | 0.245897 / 0.275898 (-0.030001) | 0.273102 / 0.323480 (-0.050377) | 0.003870 / 0.007986 (-0.004115) | 0.002385 / 0.004328 (-0.001943) | 0.047675 / 0.004250 (0.043424) | 0.037722 / 0.037052 (0.000670) | 0.250780 / 0.258489 (-0.007709) | 0.279464 / 0.293841 (-0.014376) | 0.028107 / 0.128546 (-0.100439) | 0.010460 / 0.075646 (-0.065187) | 0.205522 / 0.419271 (-0.213750) | 0.035781 / 0.043533 (-0.007752) | 0.246526 / 0.255139 (-0.008613) | 0.263919 / 0.283200 (-0.019281) | 0.018634 / 0.141683 (-0.123049) | 1.103845 / 1.452155 (-0.348310) | 1.175536 / 1.492716 (-0.317181) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091696 / 0.018006 (0.073690) | 0.301284 / 0.000490 (0.300794) | 0.000213 / 0.000200 (0.000013) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019153 / 0.037411 (-0.018258) | 0.063846 / 0.014526 (0.049320) | 0.073635 / 0.176557 (-0.102922) | 0.119625 / 0.737135 (-0.617511) | 0.075161 / 0.296338 (-0.221177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285637 / 0.215209 (0.070428) | 2.751787 / 2.077655 (0.674132) | 1.465098 / 1.504120 (-0.039022) | 1.341676 / 1.541195 (-0.199519) | 1.390636 / 
1.468490 (-0.077854) | 0.567663 / 4.584777 (-4.017114) | 2.378183 / 3.745712 (-1.367529) | 2.801830 / 5.269862 (-2.468032) | 1.750125 / 4.565676 (-2.815551) | 0.063705 / 0.424275 (-0.360570) | 0.004967 / 0.007607 (-0.002640) | 0.373302 / 0.226044 (0.147258) | 3.301847 / 2.268929 (1.032918) | 1.830117 / 55.444624 (-53.614508) | 1.564360 / 6.876477 (-5.312117) | 1.551766 / 2.142072 (-0.590306) | 0.654424 / 4.805227 (-4.150803) | 0.120656 / 6.500664 (-6.380008) | 0.042383 / 0.075469 (-0.033086) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.931815 / 1.841788 (-0.909973) | 11.755904 / 8.074308 (3.681596) | 10.571707 / 10.191392 (0.380315) | 0.131118 / 0.680424 (-0.549306) | 0.015241 / 0.534201 (-0.518960) | 0.287137 / 0.579283 (-0.292146) | 0.265651 / 0.434364 (-0.168713) | 0.329083 / 0.540337 (-0.211254) | 0.417501 / 1.386936 (-0.969435) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005355 / 0.011353 (-0.005998) | 0.003305 / 0.011008 (-0.007703) | 0.048289 / 0.038508 (0.009781) | 0.059223 / 0.023109 (0.036114) | 0.267213 / 0.275898 (-0.008685) | 0.290151 / 0.323480 (-0.033329) | 0.004683 / 0.007986 (-0.003303) | 0.002413 / 0.004328 (-0.001916) | 0.047982 / 0.004250 (0.043732) | 0.040943 / 0.037052 (0.003891) | 0.270967 / 0.258489 (0.012478) | 0.297644 / 0.293841 (0.003803) | 0.029309 / 0.128546 (-0.099237) | 0.010624 / 0.075646 (-0.065023) | 0.057359 / 0.419271 (-0.361913) | 0.032716 / 0.043533 (-0.010816) | 0.268602 / 0.255139 (0.013463) | 0.286016 / 0.283200 (0.002817) | 0.018578 / 0.141683 (-0.123105) | 1.120275 / 1.452155 (-0.331880) | 1.195514 / 1.492716 (-0.297202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092590 / 0.018006 (0.074584) | 0.302589 / 0.000490 (0.302099) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022439 / 0.037411 (-0.014972) | 0.070914 / 0.014526 (0.056388) | 0.084927 / 0.176557 (-0.091629) | 0.123154 / 0.737135 (-0.613981) | 0.085527 / 0.296338 (-0.210812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292652 / 0.215209 (0.077443) | 2.843736 / 2.077655 (0.766081) | 1.561289 / 1.504120 (0.057169) | 1.439500 / 1.541195 (-0.101695) | 1.485074 / 1.468490 (0.016584) | 0.570520 / 4.584777 (-4.014257) | 2.436611 / 3.745712 (-1.309102) | 2.925600 / 5.269862 (-2.344261) | 1.796518 / 4.565676 (-2.769159) | 0.065075 / 0.424275 (-0.359200) | 0.004995 / 0.007607 (-0.002612) | 0.349976 / 0.226044 (0.123932) | 3.442535 / 2.268929 (1.173607) | 1.919002 / 55.444624 (-53.525622) | 1.659222 / 6.876477 (-5.217255) | 1.648370 / 2.142072 (-0.493703) | 0.643119 / 4.805227 (-4.162108) | 0.118015 / 6.500664 (-6.382649) | 0.041229 / 0.075469 (-0.034240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986226 / 1.841788 (-0.855562) | 12.302487 / 8.074308 (4.228179) | 10.528848 / 10.191392 (0.337456) | 0.143911 / 0.680424 (-0.536513) | 0.015265 / 0.534201 (-0.518936) | 0.287692 / 0.579283 (-0.291591) | 0.277011 / 0.434364 (-0.157353) | 0.327650 / 0.540337 (-0.212688) | 0.552951 / 1.386936 (-0.833985) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0af18e68664db94e863f0dcde4b0f3a7adcc80e7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005234 / 0.011353 (-0.006119) | 0.003324 / 0.011008 (-0.007684) | 0.062429 / 0.038508 (0.023921) | 0.051619 / 0.023109 (0.028510) | 0.256850 / 0.275898 (-0.019048) | 0.260566 / 0.323480 (-0.062914) | 0.002914 / 0.007986 (-0.005071) | 0.003093 / 0.004328 (-0.001235) | 0.047947 / 0.004250 (0.043696) | 0.038753 / 0.037052 (0.001701) | 0.246810 / 0.258489 (-0.011679) | 0.275128 / 0.293841 (-0.018713) | 0.027171 / 0.128546 (-0.101375) | 0.010290 / 0.075646 (-0.065356) | 0.206069 / 0.419271 (-0.213203) | 0.035514 / 0.043533 (-0.008019) | 0.240645 / 0.255139 (-0.014494) | 0.259693 / 0.283200 (-0.023507) | 0.019722 / 0.141683 (-0.121961) | 1.128534 / 1.452155 (-0.323620) | 1.139602 / 1.492716 (-0.353115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095837 / 0.018006 (0.077830) | 0.304754 / 0.000490 (0.304264) | 0.000204 / 0.000200 (0.000004) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018349 / 0.037411 (-0.019063) | 0.062763 / 0.014526 (0.048237) | 0.074443 / 0.176557 (-0.102113) | 0.120607 / 0.737135 (-0.616528) | 0.077721 / 0.296338 (-0.218617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281852 / 0.215209 (0.066643) | 2.770806 / 2.077655 (0.693151) | 1.466255 / 1.504120 (-0.037864) | 1.349611 / 1.541195 (-0.191584) | 1.385463 / 
1.468490 (-0.083027) | 0.566489 / 4.584777 (-4.018288) | 2.420932 / 3.745712 (-1.324780) | 2.809397 / 5.269862 (-2.460464) | 1.749734 / 4.565676 (-2.815942) | 0.063407 / 0.424275 (-0.360868) | 0.005038 / 0.007607 (-0.002569) | 0.379121 / 0.226044 (0.153077) | 3.500938 / 2.268929 (1.232010) | 1.852207 / 55.444624 (-53.592417) | 1.570474 / 6.876477 (-5.306002) | 1.555222 / 2.142072 (-0.586850) | 0.657198 / 4.805227 (-4.148030) | 0.119835 / 6.500664 (-6.380829) | 0.042453 / 0.075469 (-0.033016) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949953 / 1.841788 (-0.891835) | 11.736811 / 8.074308 (3.662503) | 10.558049 / 10.191392 (0.366657) | 0.146230 / 0.680424 (-0.534194) | 0.014922 / 0.534201 (-0.519279) | 0.289100 / 0.579283 (-0.290183) | 0.267130 / 0.434364 (-0.167234) | 0.320055 / 0.540337 (-0.220282) | 0.417244 / 1.386936 (-0.969692) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005309 / 0.011353 (-0.006044) | 0.003329 / 0.011008 (-0.007679) | 0.048576 / 0.038508 (0.010068) | 0.055219 / 0.023109 (0.032110) | 0.271522 / 0.275898 (-0.004376) | 0.294435 / 0.323480 (-0.029045) | 0.004018 / 0.007986 (-0.003968) | 0.002456 / 0.004328 (-0.001873) | 0.047939 / 0.004250 (0.043689) | 0.041195 / 0.037052 (0.004143) | 0.274819 / 0.258489 (0.016330) | 0.299407 / 0.293841 (0.005566) | 0.029145 / 0.128546 (-0.099401) | 0.010680 / 0.075646 (-0.064966) | 0.057238 / 0.419271 (-0.362034) | 0.032722 / 0.043533 (-0.010810) | 0.272066 / 0.255139 (0.016927) | 0.289223 / 0.283200 (0.006023) | 0.017826 / 0.141683 (-0.123857) | 1.119079 / 1.452155 (-0.333076) | 1.179109 / 1.492716 (-0.313608) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095662 / 0.018006 (0.077656) | 0.307652 / 0.000490 (0.307162) | 0.000213 / 0.000200 (0.000013) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022263 / 0.037411 (-0.015149) | 0.070224 / 0.014526 (0.055698) | 0.081477 / 0.176557 (-0.095079) | 0.120763 / 0.737135 (-0.616372) | 0.083152 / 0.296338 (-0.213187) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295780 / 0.215209 (0.080571) | 2.926623 / 2.077655 (0.848968) | 1.605901 / 1.504120 (0.101781) | 1.482874 / 1.541195 (-0.058321) | 1.501467 / 1.468490 (0.032977) | 0.569566 / 4.584777 (-4.015211) | 2.474948 / 3.745712 (-1.270764) | 2.831877 / 5.269862 (-2.437985) | 1.761229 / 4.565676 (-2.804448) | 0.064129 / 0.424275 (-0.360147) | 0.004964 / 0.007607 (-0.002643) | 0.350081 / 0.226044 (0.124037) | 3.446766 / 2.268929 (1.177837) | 1.974998 / 55.444624 (-53.469627) | 1.683381 / 6.876477 (-5.193095) | 1.711543 / 2.142072 (-0.430530) | 0.648695 / 4.805227 (-4.156532) | 0.118224 / 6.500664 (-6.382440) | 0.040895 / 0.075469 (-0.034574) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960208 / 1.841788 (-0.881580) | 12.164941 / 8.074308 (4.090633) | 10.860573 / 10.191392 (0.669181) | 0.133525 / 0.680424 (-0.546899) | 0.015643 / 0.534201 (-0.518558) | 0.290898 / 0.579283 (-0.288386) | 0.289612 / 0.434364 (-0.144752) | 0.325836 / 0.540337 (-0.214501) | 0.565592 / 1.386936 (-0.821344) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d19a315920c6d4293f8226273d99bf3de5c1d4e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006097 / 0.011353 (-0.005256) | 0.004386 / 0.011008 (-0.006622) | 0.064481 / 0.038508 (0.025973) | 0.059983 / 0.023109 (0.036873) | 0.268177 / 0.275898 (-0.007721) | 0.296207 / 0.323480 (-0.027273) | 0.002986 / 0.007986 (-0.005000) | 0.002923 / 0.004328 (-0.001406) | 0.048798 / 0.004250 (0.044547) | 0.039945 / 0.037052 (0.002893) | 0.271234 / 0.258489 (0.012745) | 0.295461 / 0.293841 (0.001620) | 0.028771 / 0.128546 (-0.099775) | 0.011104 / 0.075646 (-0.064542) | 0.207471 / 0.419271 (-0.211800) | 0.036955 / 0.043533 (-0.006578) | 0.254761 / 0.255139 (-0.000378) | 0.275933 / 0.283200 (-0.007267) | 0.021232 / 0.141683 (-0.120451) | 1.170771 / 1.452155 (-0.281384) | 1.188900 / 1.492716 (-0.303816) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092328 / 0.018006 (0.074322) | 0.302591 / 0.000490 (0.302102) | 0.000220 / 0.000200 (0.000020) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019207 / 0.037411 (-0.018204) | 0.070247 / 0.014526 (0.055721) | 0.074963 / 0.176557 (-0.101593) | 0.124301 / 0.737135 (-0.612834) | 0.077356 / 0.296338 (-0.218982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283321 / 0.215209 (0.068112) | 2.800448 / 2.077655 (0.722793) | 1.510278 / 1.504120 (0.006158) | 1.390353 / 1.541195 (-0.150842) | 1.387881 / 
1.468490 (-0.080609) | 0.563927 / 4.584777 (-4.020850) | 2.387753 / 3.745712 (-1.357959) | 2.776655 / 5.269862 (-2.493207) | 1.767383 / 4.565676 (-2.798293) | 0.064864 / 0.424275 (-0.359411) | 0.004999 / 0.007607 (-0.002608) | 0.351173 / 0.226044 (0.125129) | 3.459446 / 2.268929 (1.190517) | 1.873078 / 55.444624 (-53.571547) | 1.602831 / 6.876477 (-5.273646) | 1.595612 / 2.142072 (-0.546460) | 0.648786 / 4.805227 (-4.156441) | 0.118720 / 6.500664 (-6.381944) | 0.042821 / 0.075469 (-0.032649) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970738 / 1.841788 (-0.871049) | 12.273548 / 8.074308 (4.199240) | 11.191375 / 10.191392 (0.999983) | 0.131903 / 0.680424 (-0.548521) | 0.014512 / 0.534201 (-0.519689) | 0.289382 / 0.579283 (-0.289901) | 0.269449 / 0.434364 (-0.164915) | 0.327557 / 0.540337 (-0.212781) | 0.427052 / 1.386936 (-0.959884) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003380 / 0.011008 (-0.007628) | 0.050677 / 0.038508 (0.012169) | 0.059606 / 0.023109 (0.036497) | 0.275798 / 0.275898 (-0.000100) | 0.303733 / 0.323480 (-0.019747) | 0.004187 / 0.007986 (-0.003799) | 0.002657 / 0.004328 (-0.001672) | 0.048713 / 0.004250 (0.044463) | 0.043501 / 0.037052 (0.006449) | 0.278845 / 0.258489 (0.020356) | 0.305322 / 0.293841 (0.011481) | 0.030665 / 0.128546 (-0.097881) | 0.010600 / 0.075646 (-0.065047) | 0.058923 / 0.419271 (-0.360349) | 0.032936 / 0.043533 (-0.010596) | 0.272835 / 0.255139 (0.017696) | 0.293975 / 0.283200 (0.010775) | 0.018193 / 0.141683 (-0.123490) | 1.144903 / 1.452155 (-0.307251) | 1.192220 / 1.492716 (-0.300497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094519 / 0.018006 (0.076513) | 0.305591 / 0.000490 (0.305101) | 0.000221 / 0.000200 (0.000021) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022108 / 0.037411 (-0.015303) | 0.070184 / 0.014526 (0.055658) | 0.081640 / 0.176557 (-0.094916) | 0.124661 / 0.737135 (-0.612474) | 0.082229 / 0.296338 (-0.214110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303710 / 0.215209 (0.088501) | 2.966478 / 2.077655 (0.888824) | 1.646066 / 1.504120 (0.141946) | 1.551454 / 1.541195 (0.010259) | 1.557995 / 1.468490 (0.089505) | 0.577723 / 4.584777 (-4.007054) | 2.510321 / 3.745712 (-1.235391) | 2.951343 / 5.269862 (-2.318519) | 1.857550 / 4.565676 (-2.708127) | 0.064079 / 0.424275 (-0.360196) | 0.004971 / 0.007607 (-0.002636) | 0.359022 / 0.226044 (0.132978) | 3.628716 / 2.268929 (1.359788) | 2.011380 / 55.444624 (-53.433245) | 1.710407 / 6.876477 (-5.166070) | 1.756235 / 2.142072 (-0.385838) | 0.659185 / 4.805227 (-4.146042) | 0.120245 / 6.500664 (-6.380419) | 0.042751 / 0.075469 (-0.032718) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026794 / 1.841788 (-0.814993) | 12.695125 / 8.074308 (4.620816) | 10.864908 / 10.191392 (0.673516) | 0.136128 / 0.680424 (-0.544295) | 0.016824 / 0.534201 (-0.517377) | 0.289717 / 0.579283 (-0.289567) | 0.282919 / 0.434364 (-0.151445) | 0.323345 / 0.540337 (-0.216992) | 0.556375 / 1.386936 (-0.830561) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#52207295162f734235b71428d13e6a42c6fdc370 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005407 / 0.011353 (-0.005946) | 0.003464 / 0.011008 (-0.007544) | 0.062084 / 0.038508 (0.023576) | 0.052582 / 0.023109 (0.029472) | 0.251239 / 0.275898 (-0.024659) | 0.276675 / 0.323480 (-0.046805) | 0.002894 / 0.007986 (-0.005092) | 0.003850 / 0.004328 (-0.000479) | 0.047789 / 0.004250 (0.043538) | 0.038955 / 0.037052 (0.001903) | 0.258333 / 0.258489 (-0.000156) | 0.290103 / 0.293841 (-0.003738) | 0.027291 / 0.128546 (-0.101256) | 0.010575 / 0.075646 (-0.065071) | 0.207208 / 0.419271 (-0.212063) | 0.035848 / 0.043533 (-0.007685) | 0.253918 / 0.255139 (-0.001221) | 0.269870 / 0.283200 (-0.013330) | 0.019830 / 0.141683 (-0.121853) | 1.085332 / 1.452155 (-0.366823) | 1.171385 / 1.492716 (-0.321331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094956 / 0.018006 (0.076950) | 0.301104 / 0.000490 (0.300614) | 0.000204 / 0.000200 (0.000004) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019045 / 0.037411 (-0.018367) | 0.070815 / 0.014526 (0.056289) | 0.073763 / 0.176557 (-0.102794) | 0.120668 / 0.737135 (-0.616467) | 0.075197 / 0.296338 (-0.221141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286072 / 0.215209 (0.070863) | 2.762868 / 2.077655 (0.685213) | 1.504481 / 1.504120 (0.000361) | 1.390301 / 1.541195 (-0.150894) | 1.449571 / 
1.468490 (-0.018919) | 0.555598 / 4.584777 (-4.029179) | 2.404975 / 3.745712 (-1.340737) | 2.864359 / 5.269862 (-2.405503) | 1.764913 / 4.565676 (-2.800763) | 0.062956 / 0.424275 (-0.361320) | 0.005116 / 0.007607 (-0.002491) | 0.344027 / 0.226044 (0.117983) | 3.426781 / 2.268929 (1.157852) | 1.891040 / 55.444624 (-53.553584) | 1.599972 / 6.876477 (-5.276505) | 1.603464 / 2.142072 (-0.538608) | 0.638136 / 4.805227 (-4.167091) | 0.117808 / 6.500664 (-6.382857) | 0.043740 / 0.075469 (-0.031730) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.934654 / 1.841788 (-0.907133) | 12.243698 / 8.074308 (4.169390) | 10.566791 / 10.191392 (0.375399) | 0.130440 / 0.680424 (-0.549983) | 0.014019 / 0.534201 (-0.520182) | 0.285453 / 0.579283 (-0.293831) | 0.266121 / 0.434364 (-0.168243) | 0.325962 / 0.540337 (-0.214375) | 0.422181 / 1.386936 (-0.964755) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005151 / 0.011353 (-0.006202) | 0.003704 / 0.011008 (-0.007304) | 0.049483 / 0.038508 (0.010975) | 0.055147 / 0.023109 (0.032038) | 0.277589 / 0.275898 (0.001691) | 0.301274 / 0.323480 (-0.022206) | 0.004031 / 0.007986 (-0.003955) | 0.002568 / 0.004328 (-0.001760) | 0.048830 / 0.004250 (0.044580) | 0.040391 / 0.037052 (0.003339) | 0.281031 / 0.258489 (0.022541) | 0.304263 / 0.293841 (0.010422) | 0.029237 / 0.128546 (-0.099309) | 0.010598 / 0.075646 (-0.065048) | 0.058089 / 0.419271 (-0.361182) | 0.032529 / 0.043533 (-0.011004) | 0.275761 / 0.255139 (0.020622) | 0.294427 / 0.283200 (0.011227) | 0.017227 / 0.141683 (-0.124456) | 1.138036 / 1.452155 (-0.314119) | 1.201946 / 1.492716 (-0.290770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094241 / 0.018006 (0.076234) | 0.301622 / 0.000490 (0.301132) | 0.000229 / 0.000200 (0.000029) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022731 / 0.037411 (-0.014680) | 0.071217 / 0.014526 (0.056691) | 0.082619 / 0.176557 (-0.093937) | 0.123308 / 0.737135 (-0.613827) | 0.083552 / 0.296338 (-0.212787) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295770 / 0.215209 (0.080561) | 2.886069 / 2.077655 (0.808414) | 1.597686 / 1.504120 (0.093566) | 1.458612 / 1.541195 (-0.082583) | 1.501171 / 1.468490 (0.032680) | 0.575653 / 4.584777 (-4.009124) | 2.444021 / 3.745712 (-1.301691) | 2.860192 / 5.269862 (-2.409669) | 1.758896 / 4.565676 (-2.806780) | 0.063334 / 0.424275 (-0.360941) | 0.004913 / 0.007607 (-0.002694) | 0.341828 / 0.226044 (0.115783) | 3.420310 / 2.268929 (1.151381) | 1.996099 / 55.444624 (-53.448525) | 1.680112 / 6.876477 (-5.196365) | 1.693418 / 2.142072 (-0.448654) | 0.697321 / 4.805227 (-4.107906) | 0.120474 / 6.500664 (-6.380190) | 0.042192 / 0.075469 (-0.033277) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975876 / 1.841788 (-0.865912) | 12.174933 / 8.074308 (4.100625) | 10.400906 / 10.191392 (0.209514) | 0.162244 / 0.680424 (-0.518180) | 0.016443 / 0.534201 (-0.517758) | 0.293430 / 0.579283 (-0.285853) | 0.285664 / 0.434364 (-0.148700) | 0.332322 / 0.540337 (-0.208015) | 0.609815 / 1.386936 (-0.777121) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2c417d087d232b5abf9054ffb10305cc06c5440 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005155 / 0.011353 (-0.006198) | 0.003226 / 0.011008 (-0.007782) | 0.062651 / 0.038508 (0.024143) | 0.051314 / 0.023109 (0.028205) | 0.246075 / 0.275898 (-0.029823) | 0.266859 / 0.323480 (-0.056621) | 0.003895 / 0.007986 (-0.004091) | 0.002462 / 0.004328 (-0.001866) | 0.048097 / 0.004250 (0.043846) | 0.037313 / 0.037052 (0.000261) | 0.253208 / 0.258489 (-0.005281) | 0.280255 / 0.293841 (-0.013585) | 0.027052 / 0.128546 (-0.101494) | 0.010276 / 0.075646 (-0.065370) | 0.205663 / 0.419271 (-0.213608) | 0.035111 / 0.043533 (-0.008422) | 0.253757 / 0.255139 (-0.001382) | 0.265466 / 0.283200 (-0.017733) | 0.017873 / 0.141683 (-0.123810) | 1.118906 / 1.452155 (-0.333249) | 1.176384 / 1.492716 (-0.316332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094921 / 0.018006 (0.076914) | 0.300459 / 0.000490 (0.299970) | 0.000214 / 0.000200 (0.000014) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018430 / 0.037411 (-0.018981) | 0.062690 / 0.014526 (0.048165) | 0.074215 / 0.176557 (-0.102342) | 0.119969 / 0.737135 (-0.617166) | 0.075846 / 0.296338 (-0.220493) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273492 / 0.215209 (0.058283) | 2.667937 / 2.077655 (0.590282) | 1.405912 / 1.504120 (-0.098208) | 1.269041 / 1.541195 (-0.272153) | 1.313461 / 
1.468490 (-0.155029) | 0.554633 / 4.584777 (-4.030144) | 2.325552 / 3.745712 (-1.420160) | 2.825580 / 5.269862 (-2.444282) | 1.745432 / 4.565676 (-2.820245) | 0.062497 / 0.424275 (-0.361778) | 0.004935 / 0.007607 (-0.002673) | 0.337045 / 0.226044 (0.111001) | 3.246360 / 2.268929 (0.977432) | 1.775329 / 55.444624 (-53.669296) | 1.491812 / 6.876477 (-5.384665) | 1.499783 / 2.142072 (-0.642290) | 0.636768 / 4.805227 (-4.168459) | 0.116471 / 6.500664 (-6.384193) | 0.041838 / 0.075469 (-0.033631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937388 / 1.841788 (-0.904400) | 11.950930 / 8.074308 (3.876622) | 10.532062 / 10.191392 (0.340670) | 0.129490 / 0.680424 (-0.550934) | 0.013907 / 0.534201 (-0.520294) | 0.287503 / 0.579283 (-0.291780) | 0.270548 / 0.434364 (-0.163816) | 0.324321 / 0.540337 (-0.216016) | 0.427639 / 1.386936 (-0.959297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003413 / 0.011008 (-0.007595) | 0.049800 / 0.038508 (0.011292) | 0.055978 / 0.023109 (0.032868) | 0.274365 / 0.275898 (-0.001533) | 0.293414 / 0.323480 (-0.030066) | 0.003994 / 0.007986 (-0.003992) | 0.002480 / 0.004328 (-0.001848) | 0.048787 / 0.004250 (0.044537) | 0.040520 / 0.037052 (0.003468) | 0.276198 / 0.258489 (0.017709) | 0.301085 / 0.293841 (0.007244) | 0.028352 / 0.128546 (-0.100194) | 0.010631 / 0.075646 (-0.065015) | 0.057103 / 0.419271 (-0.362168) | 0.032277 / 0.043533 (-0.011256) | 0.274472 / 0.255139 (0.019333) | 0.289953 / 0.283200 (0.006754) | 0.018048 / 0.141683 (-0.123635) | 1.120329 / 1.452155 (-0.331826) | 1.175784 / 1.492716 (-0.316932) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102519 / 0.018006 (0.084512) | 0.322030 / 0.000490 (0.321540) | 0.000234 / 0.000200 (0.000034) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023084 / 0.037411 (-0.014327) | 0.069592 / 0.014526 (0.055066) | 0.081293 / 0.176557 (-0.095264) | 0.119546 / 0.737135 (-0.617589) | 0.083249 / 0.296338 (-0.213090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294997 / 0.215209 (0.079788) | 2.925517 / 2.077655 (0.847863) | 1.607824 / 1.504120 (0.103705) | 1.469586 / 1.541195 (-0.071608) | 1.492350 / 1.468490 (0.023860) | 0.561351 / 4.584777 (-4.023426) | 2.446741 / 3.745712 (-1.298972) | 2.842588 / 5.269862 (-2.427273) | 1.789189 / 4.565676 (-2.776487) | 0.064064 / 0.424275 (-0.360211) | 0.005011 / 0.007607 (-0.002597) | 0.351059 / 0.226044 (0.125015) | 3.485277 / 2.268929 (1.216348) | 1.981821 / 55.444624 (-53.462803) | 1.671846 / 6.876477 (-5.204631) | 1.702014 / 2.142072 (-0.440058) | 0.645205 / 4.805227 (-4.160023) | 0.117358 / 6.500664 (-6.383306) | 0.041633 / 0.075469 (-0.033836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963281 / 1.841788 (-0.878506) | 12.141256 / 8.074308 (4.066947) | 10.595207 / 10.191392 (0.403815) | 0.130401 / 0.680424 (-0.550023) | 0.015490 / 0.534201 (-0.518710) | 0.284201 / 0.579283 (-0.295082) | 0.280244 / 0.434364 (-0.154120) | 0.323545 / 0.540337 (-0.216792) | 0.561246 / 1.386936 (-0.825690) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3193829cf0dd9888c42bd7640a71d9d656cba2a \"CML watermark\")\n"
] | 2023-11-17T15:45:15 | 2023-11-22T16:48:18 | 2023-11-22T16:42:08 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6433",
"html_url": "https://github.com/huggingface/datasets/pull/6433",
"diff_url": "https://github.com/huggingface/datasets/pull/6433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6433.patch",
"merged_at": "2023-11-22T16:42:08"
} | This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current logic is error-prone.
Additionally, it improves the documentation page on the `datasets` utilities and the handling of local `fsspec` paths in `cached_path`.
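As a hedged illustration only (not code from this PR's diff), here is a minimal sketch of how globally toggling progress bars could look once the `tqdm` logic is aligned; the `disable_progress_bars`/`enable_progress_bars` helper names are assumed to mirror `huggingface_hub` and may differ from the final API:

```python
# Sketch under the assumption that `datasets` exposes progress-bar toggles
# mirroring huggingface_hub's enable_progress_bars()/disable_progress_bars().
import datasets

datasets.disable_progress_bars()  # silence tqdm bars globally while loading
ds = datasets.load_dataset("json", data_files="data.json", split="train")  # hypothetical local file
datasets.enable_progress_bars()   # restore progress bars afterwards
```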
Fix #6409 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6433/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6432/comments | https://api.github.com/repos/huggingface/datasets/issues/6432/events | https://github.com/huggingface/datasets/issues/6432 | 1,999,258,140 | I_kwDODunzps53KkIc | 6,432 | load_dataset does not load all of the data in my input file | {
"login": "demongolem-biz2",
"id": 121301001,
"node_id": "U_kgDOBzroCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/121301001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demongolem-biz2",
"html_url": "https://github.com/demongolem-biz2",
"followers_url": "https://api.github.com/users/demongolem-biz2/followers",
"following_url": "https://api.github.com/users/demongolem-biz2/following{/other_user}",
"gists_url": "https://api.github.com/users/demongolem-biz2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demongolem-biz2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demongolem-biz2/subscriptions",
"organizations_url": "https://api.github.com/users/demongolem-biz2/orgs",
"repos_url": "https://api.github.com/users/demongolem-biz2/repos",
"events_url": "https://api.github.com/users/demongolem-biz2/events{/privacy}",
"received_events_url": "https://api.github.com/users/demongolem-biz2/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves."
] | 2023-11-17T14:28:50 | 2023-11-22T17:34:58 | null | NONE | null | null | null | ### Describe the bug
I have 127 elements in my input dataset. When I call len on the dataset after it is loaded, it reports only 124 elements.
### Steps to reproduce the bug
train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.VALIDATION)
logger.info(len(train_dataset))
logger.info(len(valid_dataset))
Both the train and valid input files contain 127 items; however, each loads only 124. The input format is JSON. Ultimately, I am trying to create .pt files.
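A hedged sketch of the switch to `datasets.load_dataset` suggested in the comments above; the file paths below are hypothetical placeholders rather than values from this report:

```python
# Minimal sketch (assumed paths): load the same JSON splits with `datasets`
# instead of the outdated `nlp` package and check the row counts.
from datasets import load_dataset

data_files = {"train": "path/to/train.json", "validation": "path/to/valid.json"}
dataset = load_dataset("json", data_files=data_files)
print(len(dataset["train"]), len(dataset["validation"]))  # expected: 127 and 127
```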
### Expected behavior
All 127 elements are present in the dataset when calling len on it.
### Environment info
Python 3.10. CentOS operating system. nlp==0.40, datasets==2.14.5, transformers==4.26.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6432/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6431/comments | https://api.github.com/repos/huggingface/datasets/issues/6431/events | https://github.com/huggingface/datasets/pull/6431 | 1,997,202,770 | PR_kwDODunzps5fpfos | 6,431 | Create DatasetNotFoundError and DataFilesNotFoundError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004459 / 0.011353 (-0.006894) | 0.002883 / 0.011008 (-0.008125) | 0.062434 / 0.038508 (0.023925) | 0.030353 / 0.023109 (0.007244) | 0.256696 / 0.275898 (-0.019202) | 0.280557 / 0.323480 (-0.042923) | 0.003903 / 0.007986 (-0.004083) | 0.002424 / 0.004328 (-0.001905) | 0.048509 / 0.004250 (0.044259) | 0.043583 / 0.037052 (0.006531) | 0.253900 / 0.258489 (-0.004590) | 0.309146 / 0.293841 (0.015305) | 0.023253 / 0.128546 (-0.105294) | 0.007073 / 0.075646 (-0.068573) | 0.204118 / 0.419271 (-0.215154) | 0.056429 / 0.043533 (0.012897) | 0.247331 / 0.255139 (-0.007808) | 0.271581 / 0.283200 (-0.011619) | 0.017021 / 0.141683 (-0.124662) | 1.115057 / 1.452155 (-0.337098) | 1.209947 / 1.492716 (-0.282770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093141 / 0.018006 (0.075134) | 0.295987 / 0.000490 (0.295497) | 0.000221 / 0.000200 (0.000021) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019182 / 0.037411 (-0.018230) | 0.062049 / 0.014526 (0.047523) | 0.073824 / 0.176557 (-0.102733) | 0.120175 / 0.737135 (-0.616960) | 0.074700 / 0.296338 (-0.221639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280036 / 0.215209 (0.064827) | 2.731512 / 2.077655 (0.653857) | 1.414606 / 1.504120 (-0.089514) | 1.302433 / 1.541195 (-0.238761) | 1.313012 / 
1.468490 (-0.155478) | 0.399722 / 4.584777 (-4.185055) | 2.371249 / 3.745712 (-1.374463) | 2.582520 / 5.269862 (-2.687342) | 1.558505 / 4.565676 (-3.007171) | 0.045765 / 0.424275 (-0.378510) | 0.004748 / 0.007607 (-0.002859) | 0.327623 / 0.226044 (0.101578) | 3.258742 / 2.268929 (0.989814) | 1.756798 / 55.444624 (-53.687826) | 1.494551 / 6.876477 (-5.381925) | 1.518161 / 2.142072 (-0.623911) | 0.468560 / 4.805227 (-4.336667) | 0.101034 / 6.500664 (-6.399630) | 0.048259 / 0.075469 (-0.027210) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938146 / 1.841788 (-0.903642) | 11.636387 / 8.074308 (3.562078) | 10.638909 / 10.191392 (0.447517) | 0.128340 / 0.680424 (-0.552084) | 0.015194 / 0.534201 (-0.519007) | 0.275961 / 0.579283 (-0.303322) | 0.264629 / 0.434364 (-0.169735) | 0.308580 / 0.540337 (-0.231758) | 0.433658 / 1.386936 (-0.953278) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004797 / 0.011353 (-0.006556) | 0.002801 / 0.011008 (-0.008208) | 0.048101 / 0.038508 (0.009593) | 0.056406 / 0.023109 (0.033296) | 0.274966 / 0.275898 (-0.000932) | 0.298310 / 0.323480 (-0.025170) | 0.004115 / 0.007986 (-0.003871) | 0.002437 / 0.004328 (-0.001891) | 0.047921 / 0.004250 (0.043671) | 0.038812 / 0.037052 (0.001760) | 0.279594 / 0.258489 (0.021105) | 0.313703 / 0.293841 (0.019862) | 0.024485 / 0.128546 (-0.104061) | 0.007095 / 0.075646 (-0.068551) | 0.053398 / 0.419271 (-0.365874) | 0.032306 / 0.043533 (-0.011227) | 0.278014 / 0.255139 (0.022875) | 0.301156 / 0.283200 (0.017956) | 0.017353 / 0.141683 (-0.124330) | 1.150168 / 1.452155 (-0.301987) | 1.190822 / 1.492716 (-0.301894) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092162 / 0.018006 (0.074156) | 0.301031 / 0.000490 (0.300541) | 0.000244 / 0.000200 (0.000044) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020918 / 0.037411 (-0.016494) | 0.072030 / 0.014526 (0.057504) | 0.081813 / 0.176557 (-0.094743) | 0.120233 / 0.737135 (-0.616903) | 0.082874 / 0.296338 (-0.213465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291659 / 0.215209 (0.076450) | 2.841978 / 2.077655 (0.764323) | 1.594207 / 1.504120 (0.090087) | 1.473941 / 1.541195 (-0.067254) | 1.514393 / 1.468490 (0.045903) | 0.393393 / 4.584777 (-4.191384) | 2.443663 / 3.745712 (-1.302050) | 2.545747 / 5.269862 (-2.724114) | 1.521130 / 4.565676 (-3.044546) | 0.046246 / 0.424275 (-0.378030) | 0.004826 / 0.007607 (-0.002781) | 0.340909 / 0.226044 (0.114865) | 3.319474 / 2.268929 (1.050546) | 1.933110 / 55.444624 (-53.511515) | 1.662463 / 6.876477 (-5.214014) | 1.670331 / 2.142072 (-0.471742) | 0.458062 / 4.805227 (-4.347165) | 0.098397 / 6.500664 (-6.402267) | 0.041339 / 0.075469 (-0.034130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973718 / 1.841788 (-0.868070) | 12.095266 / 8.074308 (4.020957) | 10.761212 / 10.191392 (0.569820) | 0.142352 / 0.680424 (-0.538072) | 0.015423 / 0.534201 (-0.518778) | 0.270912 / 0.579283 (-0.308371) | 0.276618 / 0.434364 (-0.157746) | 0.309120 / 0.540337 (-0.231217) | 0.415330 / 1.386936 (-0.971606) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf4ba6f0e2641056774c01f62984aef5de5d68f1 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004676 / 0.011353 (-0.006677) | 0.003101 / 0.011008 (-0.007907) | 0.062260 / 0.038508 (0.023752) | 0.030012 / 0.023109 (0.006903) | 0.253704 / 0.275898 (-0.022194) | 0.276404 / 0.323480 (-0.047075) | 0.004060 / 0.007986 (-0.003926) | 0.002467 / 0.004328 (-0.001861) | 0.047921 / 0.004250 (0.043670) | 0.045760 / 0.037052 (0.008708) | 0.254529 / 0.258489 (-0.003960) | 0.286283 / 0.293841 (-0.007558) | 0.023301 / 0.128546 (-0.105246) | 0.007407 / 0.075646 (-0.068239) | 0.204541 / 0.419271 (-0.214730) | 0.056387 / 0.043533 (0.012854) | 0.252120 / 0.255139 (-0.003019) | 0.275795 / 0.283200 (-0.007404) | 0.018648 / 0.141683 (-0.123034) | 1.113484 / 1.452155 (-0.338671) | 1.168685 / 1.492716 (-0.324031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098286 / 0.018006 (0.080280) | 0.304619 / 0.000490 (0.304129) | 0.000225 / 0.000200 (0.000025) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019183 / 0.037411 (-0.018229) | 0.062183 / 0.014526 (0.047657) | 0.074288 / 0.176557 (-0.102269) | 0.120576 / 0.737135 (-0.616560) | 0.074833 / 0.296338 (-0.221505) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280512 / 0.215209 (0.065303) | 2.770052 / 2.077655 (0.692397) | 1.471234 / 1.504120 (-0.032886) | 1.352080 / 1.541195 (-0.189114) | 1.374518 / 
1.468490 (-0.093973) | 0.407108 / 4.584777 (-4.177669) | 2.400581 / 3.745712 (-1.345131) | 2.677507 / 5.269862 (-2.592355) | 1.578042 / 4.565676 (-2.987635) | 0.048539 / 0.424275 (-0.375736) | 0.004905 / 0.007607 (-0.002703) | 0.346676 / 0.226044 (0.120631) | 3.367732 / 2.268929 (1.098803) | 1.844405 / 55.444624 (-53.600220) | 1.576883 / 6.876477 (-5.299594) | 1.666986 / 2.142072 (-0.475086) | 0.495872 / 4.805227 (-4.309355) | 0.103142 / 6.500664 (-6.397522) | 0.044037 / 0.075469 (-0.031432) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980865 / 1.841788 (-0.860923) | 12.268525 / 8.074308 (4.194217) | 10.756554 / 10.191392 (0.565162) | 0.129954 / 0.680424 (-0.550470) | 0.013864 / 0.534201 (-0.520337) | 0.267653 / 0.579283 (-0.311630) | 0.265120 / 0.434364 (-0.169244) | 0.309050 / 0.540337 (-0.231288) | 0.423877 / 1.386936 (-0.963059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005074 / 0.011353 (-0.006279) | 0.003001 / 0.011008 (-0.008007) | 0.048271 / 0.038508 (0.009763) | 0.061206 / 0.023109 (0.038097) | 0.279268 / 0.275898 (0.003370) | 0.302592 / 0.323480 (-0.020888) | 0.004177 / 0.007986 (-0.003809) | 0.002452 / 0.004328 (-0.001876) | 0.048259 / 0.004250 (0.044009) | 0.040032 / 0.037052 (0.002979) | 0.281398 / 0.258489 (0.022909) | 0.314121 / 0.293841 (0.020280) | 0.025137 / 0.128546 (-0.103409) | 0.007230 / 0.075646 (-0.068416) | 0.054537 / 0.419271 (-0.364735) | 0.033266 / 0.043533 (-0.010267) | 0.277305 / 0.255139 (0.022166) | 0.295993 / 0.283200 (0.012794) | 0.019278 / 0.141683 (-0.122405) | 1.131700 / 1.452155 (-0.320454) | 1.183848 / 1.492716 (-0.308868) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092258 / 0.018006 (0.074251) | 0.310668 / 0.000490 (0.310178) | 0.000219 / 0.000200 (0.000019) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021838 / 0.037411 (-0.015574) | 0.071382 / 0.014526 (0.056857) | 0.081389 / 0.176557 (-0.095168) | 0.120389 / 0.737135 (-0.616746) | 0.084135 / 0.296338 (-0.212203) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291676 / 0.215209 (0.076467) | 2.840623 / 2.077655 (0.762968) | 1.565748 / 1.504120 (0.061628) | 1.452529 / 1.541195 (-0.088666) | 1.490633 / 1.468490 (0.022143) | 0.402878 / 4.584777 (-4.181899) | 2.486192 / 3.745712 (-1.259520) | 2.520563 / 5.269862 (-2.749299) | 1.518550 / 4.565676 (-3.047127) | 0.047423 / 0.424275 (-0.376852) | 0.004823 / 0.007607 (-0.002784) | 0.353122 / 0.226044 (0.127078) | 3.452136 / 2.268929 (1.183208) | 1.973798 / 55.444624 (-53.470827) | 1.669569 / 6.876477 (-5.206907) | 1.654910 / 2.142072 (-0.487163) | 0.486746 / 4.805227 (-4.318481) | 0.097260 / 6.500664 (-6.403404) | 0.040608 / 0.075469 (-0.034861) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989705 / 1.841788 (-0.852083) | 12.114386 / 8.074308 (4.040077) | 11.284551 / 10.191392 (1.093159) | 0.141408 / 0.680424 (-0.539016) | 0.015275 / 0.534201 (-0.518926) | 0.267407 / 0.579283 (-0.311877) | 0.281007 / 0.434364 (-0.153357) | 0.309617 / 0.540337 (-0.230720) | 0.414033 / 1.386936 (-0.972903) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6f3f3e3feec9d7d4d36111401787eb7b5fd51836 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004888 / 0.011353 (-0.006465) | 0.002775 / 0.011008 (-0.008233) | 0.062000 / 0.038508 (0.023492) | 0.050694 / 0.023109 (0.027584) | 0.257063 / 0.275898 (-0.018835) | 0.282743 / 0.323480 (-0.040736) | 0.002862 / 0.007986 (-0.005124) | 0.002305 / 0.004328 (-0.002023) | 0.049549 / 0.004250 (0.045299) | 0.038754 / 0.037052 (0.001701) | 0.264047 / 0.258489 (0.005558) | 0.310162 / 0.293841 (0.016321) | 0.022901 / 0.128546 (-0.105645) | 0.006894 / 0.075646 (-0.068752) | 0.202467 / 0.419271 (-0.216805) | 0.035901 / 0.043533 (-0.007631) | 0.262344 / 0.255139 (0.007205) | 0.285563 / 0.283200 (0.002364) | 0.017070 / 0.141683 (-0.124613) | 1.113972 / 1.452155 (-0.338182) | 1.176261 / 1.492716 (-0.316455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092912 / 0.018006 (0.074906) | 0.302610 / 0.000490 (0.302120) | 0.000204 / 0.000200 (0.000005) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018232 / 0.037411 (-0.019179) | 0.062367 / 0.014526 (0.047841) | 0.074570 / 0.176557 (-0.101987) | 0.120468 / 0.737135 (-0.616668) | 0.075187 / 0.296338 (-0.221151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279760 / 0.215209 (0.064551) | 2.715372 / 2.077655 (0.637717) | 1.461636 / 1.504120 (-0.042484) | 1.324220 / 1.541195 (-0.216975) | 1.350724 / 
1.468490 (-0.117766) | 0.395648 / 4.584777 (-4.189129) | 2.376548 / 3.745712 (-1.369164) | 2.594662 / 5.269862 (-2.675200) | 1.553528 / 4.565676 (-3.012148) | 0.047875 / 0.424275 (-0.376400) | 0.005287 / 0.007607 (-0.002321) | 0.334734 / 0.226044 (0.108689) | 3.294753 / 2.268929 (1.025825) | 1.797901 / 55.444624 (-53.646724) | 1.510907 / 6.876477 (-5.365570) | 1.536070 / 2.142072 (-0.606003) | 0.474672 / 4.805227 (-4.330555) | 0.099323 / 6.500664 (-6.401341) | 0.041703 / 0.075469 (-0.033766) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947441 / 1.841788 (-0.894347) | 11.451378 / 8.074308 (3.377070) | 10.283213 / 10.191392 (0.091821) | 0.131032 / 0.680424 (-0.549392) | 0.014423 / 0.534201 (-0.519777) | 0.272568 / 0.579283 (-0.306715) | 0.267127 / 0.434364 (-0.167237) | 0.307361 / 0.540337 (-0.232976) | 0.403858 / 1.386936 (-0.983078) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004836 / 0.011353 (-0.006517) | 0.002544 / 0.011008 (-0.008464) | 0.047979 / 0.038508 (0.009471) | 0.052211 / 0.023109 (0.029102) | 0.273394 / 0.275898 (-0.002504) | 0.291202 / 0.323480 (-0.032277) | 0.004094 / 0.007986 (-0.003891) | 0.002415 / 0.004328 (-0.001914) | 0.048057 / 0.004250 (0.043807) | 0.039756 / 0.037052 (0.002703) | 0.277301 / 0.258489 (0.018812) | 0.297626 / 0.293841 (0.003785) | 0.024641 / 0.128546 (-0.103905) | 0.006957 / 0.075646 (-0.068690) | 0.053574 / 0.419271 (-0.365697) | 0.036532 / 0.043533 (-0.007001) | 0.273753 / 0.255139 (0.018614) | 0.294254 / 0.283200 (0.011054) | 0.022252 / 0.141683 (-0.119431) | 1.128609 / 1.452155 (-0.323546) | 1.217322 / 1.492716 (-0.275394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091050 / 0.018006 (0.073044) | 0.300089 / 0.000490 (0.299600) | 0.000215 / 0.000200 (0.000015) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021423 / 0.037411 (-0.015988) | 0.069892 / 0.014526 (0.055366) | 0.081125 / 0.176557 (-0.095432) | 0.118725 / 0.737135 (-0.618411) | 0.081357 / 0.296338 (-0.214981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295046 / 0.215209 (0.079837) | 2.868813 / 2.077655 (0.791159) | 1.579613 / 1.504120 (0.075493) | 1.449308 / 1.541195 (-0.091887) | 1.478804 / 1.468490 (0.010314) | 0.416916 / 4.584777 (-4.167861) | 2.461093 / 3.745712 (-1.284619) | 2.449792 / 5.269862 (-2.820070) | 1.573930 / 4.565676 (-2.991746) | 0.046808 / 0.424275 (-0.377467) | 0.004811 / 0.007607 (-0.002796) | 0.352805 / 0.226044 (0.126761) | 3.495034 / 2.268929 (1.226105) | 1.952019 / 55.444624 (-53.492606) | 1.642607 / 6.876477 (-5.233869) | 1.775235 / 2.142072 (-0.366837) | 0.482196 / 4.805227 (-4.323032) | 0.099562 / 6.500664 (-6.401102) | 0.040709 / 0.075469 (-0.034760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972750 / 1.841788 (-0.869038) | 11.905172 / 8.074308 (3.830864) | 10.613847 / 10.191392 (0.422455) | 0.129892 / 0.680424 (-0.550532) | 0.015611 / 0.534201 (-0.518590) | 0.271884 / 0.579283 (-0.307400) | 0.275270 / 0.434364 (-0.159094) | 0.303213 / 0.540337 (-0.237125) | 0.402338 / 1.386936 (-0.984598) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf8fa7ad7609ad34d4cc689f529ea606dd2560e0 \"CML watermark\")\n",
"I think this PR can be merged.",
"you already have an approval, feel free to merge!\r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004826 / 0.011353 (-0.006527) | 0.002979 / 0.011008 (-0.008029) | 0.062055 / 0.038508 (0.023547) | 0.056574 / 0.023109 (0.033465) | 0.244342 / 0.275898 (-0.031556) | 0.278040 / 0.323480 (-0.045439) | 0.004020 / 0.007986 (-0.003965) | 0.002474 / 0.004328 (-0.001855) | 0.048451 / 0.004250 (0.044200) | 0.038633 / 0.037052 (0.001580) | 0.251389 / 0.258489 (-0.007100) | 0.282739 / 0.293841 (-0.011102) | 0.023298 / 0.128546 (-0.105248) | 0.007513 / 0.075646 (-0.068134) | 0.203014 / 0.419271 (-0.216257) | 0.036216 / 0.043533 (-0.007317) | 0.250988 / 0.255139 (-0.004151) | 0.281228 / 0.283200 (-0.001972) | 0.018259 / 0.141683 (-0.123424) | 1.121200 / 1.452155 (-0.330955) | 1.184298 / 1.492716 (-0.308419) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093730 / 0.018006 (0.075724) | 0.301716 / 0.000490 (0.301226) | 0.000223 / 0.000200 (0.000023) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019238 / 0.037411 (-0.018173) | 0.064329 / 0.014526 (0.049803) | 0.075657 / 0.176557 (-0.100899) | 0.122616 / 0.737135 (-0.614519) | 0.077459 / 0.296338 (-0.218880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280153 / 0.215209 (0.064944) | 2.715488 / 2.077655 (0.637833) | 1.449666 / 1.504120 (-0.054454) | 1.331903 / 1.541195 (-0.209292) | 1.396200 / 
1.468490 (-0.072290) | 0.398861 / 4.584777 (-4.185916) | 2.402814 / 3.745712 (-1.342898) | 2.664033 / 5.269862 (-2.605829) | 1.619589 / 4.565676 (-2.946088) | 0.044798 / 0.424275 (-0.379477) | 0.004989 / 0.007607 (-0.002618) | 0.336822 / 0.226044 (0.110777) | 3.245604 / 2.268929 (0.976676) | 1.815633 / 55.444624 (-53.628991) | 1.557975 / 6.876477 (-5.318501) | 1.603655 / 2.142072 (-0.538417) | 0.462980 / 4.805227 (-4.342247) | 0.098340 / 6.500664 (-6.402324) | 0.042750 / 0.075469 (-0.032719) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973785 / 1.841788 (-0.868003) | 12.379356 / 8.074308 (4.305048) | 10.540164 / 10.191392 (0.348772) | 0.144803 / 0.680424 (-0.535621) | 0.013875 / 0.534201 (-0.520326) | 0.270192 / 0.579283 (-0.309091) | 0.264614 / 0.434364 (-0.169750) | 0.313454 / 0.540337 (-0.226883) | 0.402310 / 1.386936 (-0.984626) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004987 / 0.011353 (-0.006366) | 0.003017 / 0.011008 (-0.007992) | 0.048592 / 0.038508 (0.010084) | 0.059370 / 0.023109 (0.036261) | 0.277536 / 0.275898 (0.001638) | 0.300592 / 0.323480 (-0.022888) | 0.004870 / 0.007986 (-0.003115) | 0.002452 / 0.004328 (-0.001876) | 0.047972 / 0.004250 (0.043721) | 0.042336 / 0.037052 (0.005283) | 0.277570 / 0.258489 (0.019081) | 0.304739 / 0.293841 (0.010898) | 0.025313 / 0.128546 (-0.103233) | 0.007219 / 0.075646 (-0.068427) | 0.053967 / 0.419271 (-0.365304) | 0.033314 / 0.043533 (-0.010219) | 0.273908 / 0.255139 (0.018769) | 0.291913 / 0.283200 (0.008713) | 0.019440 / 0.141683 (-0.122243) | 1.111047 / 1.452155 (-0.341107) | 1.191276 / 1.492716 (-0.301440) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093985 / 0.018006 (0.075979) | 0.303105 / 0.000490 (0.302615) | 0.000235 / 0.000200 (0.000035) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022226 / 0.037411 (-0.015186) | 0.072151 / 0.014526 (0.057625) | 0.081700 / 0.176557 (-0.094857) | 0.121407 / 0.737135 (-0.615729) | 0.083217 / 0.296338 (-0.213121) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297286 / 0.215209 (0.082077) | 2.913392 / 2.077655 (0.835738) | 1.591758 / 1.504120 (0.087638) | 1.463339 / 1.541195 (-0.077856) | 1.495095 / 1.468490 (0.026605) | 0.414341 / 4.584777 (-4.170436) | 2.412438 / 3.745712 (-1.333275) | 2.611452 / 5.269862 (-2.658410) | 1.658545 / 4.565676 (-2.907132) | 0.047269 / 0.424275 (-0.377007) | 0.004872 / 0.007607 (-0.002735) | 0.350746 / 0.226044 (0.124701) | 3.491482 / 2.268929 (1.222554) | 1.999009 / 55.444624 (-53.445616) | 1.672862 / 6.876477 (-5.203615) | 1.863095 / 2.142072 (-0.278977) | 0.484746 / 4.805227 (-4.320481) | 0.100774 / 6.500664 (-6.399890) | 0.042519 / 0.075469 (-0.032950) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984497 / 1.841788 (-0.857291) | 12.972576 / 8.074308 (4.898268) | 10.886021 / 10.191392 (0.694629) | 0.141639 / 0.680424 (-0.538785) | 0.015726 / 0.534201 (-0.518475) | 0.284160 / 0.579283 (-0.295123) | 0.291437 / 0.434364 (-0.142927) | 0.314121 / 0.540337 (-0.226217) | 0.420439 / 1.386936 (-0.966497) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87ad7c7767b9cda62113c207f0ff42506a8f27c0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004881 / 0.011353 (-0.006472) | 0.002550 / 0.011008 (-0.008458) | 0.062171 / 0.038508 (0.023663) | 0.055341 / 0.023109 (0.032232) | 0.243132 / 0.275898 (-0.032766) | 0.265174 / 0.323480 (-0.058306) | 0.002934 / 0.007986 (-0.005052) | 0.002233 / 0.004328 (-0.002096) | 0.049302 / 0.004250 (0.045052) | 0.039491 / 0.037052 (0.002439) | 0.252776 / 0.258489 (-0.005713) | 0.280923 / 0.293841 (-0.012918) | 0.022585 / 0.128546 (-0.105962) | 0.006888 / 0.075646 (-0.068759) | 0.202751 / 0.419271 (-0.216521) | 0.035250 / 0.043533 (-0.008283) | 0.251745 / 0.255139 (-0.003394) | 0.267431 / 0.283200 (-0.015768) | 0.019486 / 0.141683 (-0.122197) | 1.161783 / 1.452155 (-0.290372) | 1.194254 / 1.492716 (-0.298463) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097772 / 0.018006 (0.079766) | 0.309137 / 0.000490 (0.308647) | 0.000225 / 0.000200 (0.000025) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018719 / 0.037411 (-0.018693) | 0.062211 / 0.014526 (0.047686) | 0.074291 / 0.176557 (-0.102266) | 0.119436 / 0.737135 (-0.617699) | 0.075519 / 0.296338 (-0.220820) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279778 / 0.215209 (0.064569) | 2.730678 / 2.077655 (0.653023) | 1.413922 / 1.504120 (-0.090198) | 1.286747 / 1.541195 (-0.254447) | 1.299835 / 
1.468490 (-0.168656) | 0.392516 / 4.584777 (-4.192261) | 2.381816 / 3.745712 (-1.363896) | 2.616944 / 5.269862 (-2.652918) | 1.606152 / 4.565676 (-2.959525) | 0.044867 / 0.424275 (-0.379408) | 0.004915 / 0.007607 (-0.002692) | 0.334078 / 0.226044 (0.108034) | 3.388096 / 2.268929 (1.119167) | 1.756666 / 55.444624 (-53.687958) | 1.497211 / 6.876477 (-5.379266) | 1.496787 / 2.142072 (-0.645285) | 0.469145 / 4.805227 (-4.336082) | 0.097821 / 6.500664 (-6.402843) | 0.041850 / 0.075469 (-0.033619) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956878 / 1.841788 (-0.884910) | 11.520184 / 8.074308 (3.445875) | 10.659216 / 10.191392 (0.467824) | 0.143687 / 0.680424 (-0.536737) | 0.014118 / 0.534201 (-0.520083) | 0.270990 / 0.579283 (-0.308293) | 0.270057 / 0.434364 (-0.164306) | 0.311109 / 0.540337 (-0.229229) | 0.407042 / 1.386936 (-0.979894) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004816 / 0.011353 (-0.006537) | 0.002898 / 0.011008 (-0.008110) | 0.048540 / 0.038508 (0.010032) | 0.055286 / 0.023109 (0.032176) | 0.279086 / 0.275898 (0.003187) | 0.298950 / 0.323480 (-0.024529) | 0.004090 / 0.007986 (-0.003896) | 0.002497 / 0.004328 (-0.001832) | 0.049160 / 0.004250 (0.044910) | 0.040612 / 0.037052 (0.003560) | 0.287832 / 0.258489 (0.029343) | 0.305617 / 0.293841 (0.011776) | 0.023936 / 0.128546 (-0.104610) | 0.007565 / 0.075646 (-0.068081) | 0.054037 / 0.419271 (-0.365235) | 0.032389 / 0.043533 (-0.011144) | 0.283031 / 0.255139 (0.027892) | 0.295411 / 0.283200 (0.012212) | 0.018466 / 0.141683 (-0.123217) | 1.134660 / 1.452155 (-0.317495) | 1.196212 / 1.492716 (-0.296504) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099961 / 0.018006 (0.081955) | 0.310831 / 0.000490 (0.310342) | 0.000238 / 0.000200 (0.000038) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021566 / 0.037411 (-0.015845) | 0.070255 / 0.014526 (0.055729) | 0.081221 / 0.176557 (-0.095336) | 0.119404 / 0.737135 (-0.617732) | 0.083005 / 0.296338 (-0.213333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302788 / 0.215209 (0.087579) | 2.928876 / 2.077655 (0.851221) | 1.601221 / 1.504120 (0.097101) | 1.485147 / 1.541195 (-0.056047) | 1.508698 / 1.468490 (0.040207) | 0.402783 / 4.584777 (-4.181994) | 2.432151 / 3.745712 (-1.313561) | 2.476848 / 5.269862 (-2.793013) | 1.585487 / 4.565676 (-2.980189) | 0.045965 / 0.424275 (-0.378310) | 0.004818 / 0.007607 (-0.002789) | 0.354847 / 0.226044 (0.128803) | 3.500670 / 2.268929 (1.231742) | 1.951904 / 55.444624 (-53.492720) | 1.675152 / 6.876477 (-5.201325) | 1.795971 / 2.142072 (-0.346101) | 0.470625 / 4.805227 (-4.334602) | 0.126080 / 6.500664 (-6.374584) | 0.040506 / 0.075469 (-0.034963) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985251 / 1.841788 (-0.856536) | 12.316710 / 8.074308 (4.242402) | 10.674437 / 10.191392 (0.483045) | 0.133622 / 0.680424 (-0.546802) | 0.016756 / 0.534201 (-0.517445) | 0.269318 / 0.579283 (-0.309965) | 0.282258 / 0.434364 (-0.152106) | 0.309941 / 0.540337 (-0.230396) | 0.403189 / 1.386936 (-0.983747) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#08ceb927025575c453228cab31291b74043dba1a \"CML watermark\")\n",
"I am merging this PR because we need it by `datasets-server`.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004935 / 0.011353 (-0.006418) | 0.002643 / 0.011008 (-0.008365) | 0.064449 / 0.038508 (0.025941) | 0.053110 / 0.023109 (0.030001) | 0.261576 / 0.275898 (-0.014322) | 0.270866 / 0.323480 (-0.052614) | 0.002895 / 0.007986 (-0.005091) | 0.002349 / 0.004328 (-0.001979) | 0.047620 / 0.004250 (0.043370) | 0.038699 / 0.037052 (0.001647) | 0.246663 / 0.258489 (-0.011826) | 0.282021 / 0.293841 (-0.011820) | 0.022807 / 0.128546 (-0.105739) | 0.007242 / 0.075646 (-0.068404) | 0.204236 / 0.419271 (-0.215035) | 0.035429 / 0.043533 (-0.008104) | 0.241684 / 0.255139 (-0.013455) | 0.262343 / 0.283200 (-0.020857) | 0.020036 / 0.141683 (-0.121647) | 1.112687 / 1.452155 (-0.339467) | 1.167086 / 1.492716 (-0.325630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107059 / 0.018006 (0.089053) | 0.301036 / 0.000490 (0.300546) | 0.000224 / 0.000200 (0.000024) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018464 / 0.037411 (-0.018947) | 0.063822 / 0.014526 (0.049296) | 0.073562 / 0.176557 (-0.102994) | 0.120136 / 0.737135 (-0.616999) | 0.074934 / 0.296338 (-0.221405) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275474 / 0.215209 (0.060265) | 2.714239 / 2.077655 (0.636584) | 1.455535 / 1.504120 (-0.048585) | 1.336530 / 1.541195 (-0.204665) | 1.359607 / 
1.468490 (-0.108883) | 0.396303 / 4.584777 (-4.188474) | 2.366076 / 3.745712 (-1.379636) | 2.600755 / 5.269862 (-2.669107) | 1.572382 / 4.565676 (-2.993294) | 0.045795 / 0.424275 (-0.378480) | 0.004932 / 0.007607 (-0.002675) | 0.332175 / 0.226044 (0.106130) | 3.257843 / 2.268929 (0.988915) | 1.799021 / 55.444624 (-53.645603) | 1.532813 / 6.876477 (-5.343663) | 1.552279 / 2.142072 (-0.589794) | 0.471369 / 4.805227 (-4.333858) | 0.098931 / 6.500664 (-6.401733) | 0.042735 / 0.075469 (-0.032734) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960779 / 1.841788 (-0.881009) | 11.741631 / 8.074308 (3.667322) | 10.355721 / 10.191392 (0.164329) | 0.129025 / 0.680424 (-0.551399) | 0.013794 / 0.534201 (-0.520407) | 0.267268 / 0.579283 (-0.312015) | 0.265582 / 0.434364 (-0.168782) | 0.306242 / 0.540337 (-0.234095) | 0.400367 / 1.386936 (-0.986569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004966 / 0.011353 (-0.006387) | 0.002846 / 0.011008 (-0.008163) | 0.049104 / 0.038508 (0.010596) | 0.055436 / 0.023109 (0.032327) | 0.273892 / 0.275898 (-0.002006) | 0.300207 / 0.323480 (-0.023273) | 0.004017 / 0.007986 (-0.003969) | 0.002465 / 0.004328 (-0.001863) | 0.048088 / 0.004250 (0.043837) | 0.040037 / 0.037052 (0.002984) | 0.279918 / 0.258489 (0.021429) | 0.305378 / 0.293841 (0.011537) | 0.024326 / 0.128546 (-0.104220) | 0.006992 / 0.075646 (-0.068654) | 0.053545 / 0.419271 (-0.365726) | 0.032312 / 0.043533 (-0.011221) | 0.272899 / 0.255139 (0.017760) | 0.289683 / 0.283200 (0.006483) | 0.019121 / 0.141683 (-0.122562) | 1.133296 / 1.452155 (-0.318858) | 1.220989 / 1.492716 (-0.271728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093193 / 0.018006 (0.075187) | 0.307658 / 0.000490 (0.307168) | 0.000224 / 0.000200 (0.000024) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022906 / 0.037411 (-0.014506) | 0.080931 / 0.014526 (0.066405) | 0.081442 / 0.176557 (-0.095115) | 0.121150 / 0.737135 (-0.615986) | 0.083387 / 0.296338 (-0.212952) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294979 / 0.215209 (0.079770) | 2.900090 / 2.077655 (0.822435) | 1.610061 / 1.504120 (0.105941) | 1.455118 / 1.541195 (-0.086077) | 1.456599 / 1.468490 (-0.011891) | 0.397919 / 4.584777 (-4.186858) | 2.421010 / 3.745712 (-1.324702) | 2.486527 / 5.269862 (-2.783334) | 1.573854 / 4.565676 (-2.991822) | 0.046199 / 0.424275 (-0.378076) | 0.004888 / 0.007607 (-0.002719) | 0.342183 / 0.226044 (0.116139) | 3.392068 / 2.268929 (1.123140) | 1.963688 / 55.444624 (-53.480936) | 1.667611 / 6.876477 (-5.208866) | 1.833706 / 2.142072 (-0.308367) | 0.509421 / 4.805227 (-4.295806) | 0.099669 / 6.500664 (-6.400995) | 0.041004 / 0.075469 (-0.034465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956314 / 1.841788 (-0.885474) | 12.190194 / 8.074308 (4.115886) | 10.417839 / 10.191392 (0.226447) | 0.144139 / 0.680424 (-0.536285) | 0.015841 / 0.534201 (-0.518359) | 0.270436 / 0.579283 (-0.308847) | 0.273952 / 0.434364 (-0.160412) | 0.303018 / 0.540337 (-0.237319) | 0.410163 / 1.386936 (-0.976773) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aa8558fc7fe1f9f7675c7c5d21a14d1a19598296 \"CML watermark\")\n"
] | 2023-11-16T16:02:55 | 2023-11-22T15:18:51 | 2023-11-22T15:12:33 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6431",
"html_url": "https://github.com/huggingface/datasets/pull/6431",
"diff_url": "https://github.com/huggingface/datasets/pull/6431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6431.patch",
"merged_at": "2023-11-22T15:12:33"
} | Create `DatasetNotFoundError` and `DataFilesNotFoundError`.
Fix #6397.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6431/timeline | null | null | true |
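The record above documents PR #6431, whose body introduces `DatasetNotFoundError` and `DataFilesNotFoundError` and notes that `datasets-server` needs the change. As a hedged illustration only, the sketch below shows how a downstream caller might tell the two failure modes apart; the import path `datasets.exceptions`, the exact conditions under which each error is raised, and the repository id used are assumptions, not details taken from this record.

```python
# Minimal sketch (assumptions noted above): catch the two error classes that
# PR #6431 describes, so a caller can distinguish "repository missing" from
# "repository present but no usable data files".
from datasets import load_dataset
from datasets.exceptions import DataFilesNotFoundError, DatasetNotFoundError  # assumed import path


def try_load(repo_id: str):
    try:
        return load_dataset(repo_id)
    except DataFilesNotFoundError:
        # Assumed meaning: the repository exists but no supported data files were resolved.
        print(f"{repo_id}: no data files found")
    except DatasetNotFoundError:
        # Assumed meaning: the dataset repository could not be found (or is inaccessible).
        print(f"{repo_id}: dataset not found")
    return None


if __name__ == "__main__":
    # Hypothetical repository id, used only to exercise the error paths.
    try_load("some-user/nonexistent-dataset")
```

The `except` clauses put `DataFilesNotFoundError` first so the sketch stays correct even if that class turns out to subclass `DatasetNotFoundError` in the library's exception hierarchy.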
https://api.github.com/repos/huggingface/datasets/issues/6429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6429/comments | https://api.github.com/repos/huggingface/datasets/issues/6429/events | https://github.com/huggingface/datasets/pull/6429 | 1,996,723,698 | PR_kwDODunzps5fn1r_ | 6,429 | Add trust_remote_code argument | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004947 / 0.011353 (-0.006405) | 0.002961 / 0.011008 (-0.008047) | 0.063474 / 0.038508 (0.024966) | 0.030162 / 0.023109 (0.007053) | 0.232388 / 0.275898 (-0.043511) | 0.257654 / 0.323480 (-0.065826) | 0.002969 / 0.007986 (-0.005017) | 0.002336 / 0.004328 (-0.001993) | 0.049724 / 0.004250 (0.045473) | 0.045608 / 0.037052 (0.008555) | 0.236079 / 0.258489 (-0.022410) | 0.267809 / 0.293841 (-0.026032) | 0.023805 / 0.128546 (-0.104741) | 0.007177 / 0.075646 (-0.068470) | 0.202167 / 0.419271 (-0.217104) | 0.056181 / 0.043533 (0.012648) | 0.256464 / 0.255139 (0.001325) | 0.271908 / 0.283200 (-0.011292) | 0.020211 / 0.141683 (-0.121472) | 1.114112 / 1.452155 (-0.338042) | 1.174879 / 1.492716 (-0.317837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093457 / 0.018006 (0.075451) | 0.307643 / 0.000490 (0.307154) | 0.000212 / 0.000200 (0.000012) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018635 / 0.037411 (-0.018777) | 0.062099 / 0.014526 (0.047573) | 0.073619 / 0.176557 (-0.102938) | 0.119986 / 0.737135 (-0.617149) | 0.075439 / 0.296338 (-0.220899) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280142 / 0.215209 (0.064933) | 2.733790 / 2.077655 (0.656136) | 1.457633 / 1.504120 (-0.046487) | 1.336288 / 1.541195 (-0.204907) | 1.363191 / 
1.468490 (-0.105299) | 0.399331 / 4.584777 (-4.185446) | 2.343099 / 3.745712 (-1.402614) | 2.617059 / 5.269862 (-2.652802) | 1.575912 / 4.565676 (-2.989765) | 0.045621 / 0.424275 (-0.378655) | 0.004825 / 0.007607 (-0.002782) | 0.346669 / 0.226044 (0.120625) | 3.225982 / 2.268929 (0.957054) | 1.787067 / 55.444624 (-53.657557) | 1.503883 / 6.876477 (-5.372593) | 1.527593 / 2.142072 (-0.614479) | 0.466806 / 4.805227 (-4.338421) | 0.098537 / 6.500664 (-6.402127) | 0.042028 / 0.075469 (-0.033441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945040 / 1.841788 (-0.896748) | 11.970022 / 8.074308 (3.895714) | 10.261176 / 10.191392 (0.069784) | 0.138231 / 0.680424 (-0.542193) | 0.013933 / 0.534201 (-0.520268) | 0.270640 / 0.579283 (-0.308643) | 0.263185 / 0.434364 (-0.171178) | 0.306686 / 0.540337 (-0.233651) | 0.423164 / 1.386936 (-0.963772) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004765 / 0.011353 (-0.006588) | 0.003158 / 0.011008 (-0.007850) | 0.047813 / 0.038508 (0.009305) | 0.053363 / 0.023109 (0.030254) | 0.278570 / 0.275898 (0.002671) | 0.291500 / 0.323480 (-0.031980) | 0.003987 / 0.007986 (-0.003998) | 0.002430 / 0.004328 (-0.001898) | 0.048059 / 0.004250 (0.043809) | 0.038595 / 0.037052 (0.001542) | 0.276383 / 0.258489 (0.017894) | 0.304234 / 0.293841 (0.010393) | 0.024402 / 0.128546 (-0.104144) | 0.007303 / 0.075646 (-0.068343) | 0.055091 / 0.419271 (-0.364180) | 0.032735 / 0.043533 (-0.010797) | 0.270905 / 0.255139 (0.015766) | 0.287181 / 0.283200 (0.003981) | 0.018919 / 0.141683 (-0.122764) | 1.153814 / 1.452155 (-0.298341) | 1.197009 / 1.492716 (-0.295707) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093743 / 0.018006 (0.075737) | 0.302877 / 0.000490 (0.302387) | 0.000223 / 0.000200 (0.000023) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021279 / 0.037411 (-0.016133) | 0.070886 / 0.014526 (0.056360) | 0.081628 / 0.176557 (-0.094928) | 0.119721 / 0.737135 (-0.617414) | 0.083093 / 0.296338 (-0.213245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297788 / 0.215209 (0.082579) | 2.915235 / 2.077655 (0.837580) | 1.587580 / 1.504120 (0.083460) | 1.461699 / 1.541195 (-0.079495) | 1.520609 / 1.468490 (0.052119) | 0.398363 / 4.584777 (-4.186413) | 2.408415 / 3.745712 (-1.337297) | 2.552776 / 5.269862 (-2.717086) | 1.508219 / 4.565676 (-3.057457) | 0.045884 / 0.424275 (-0.378391) | 0.004842 / 0.007607 (-0.002765) | 0.341376 / 0.226044 (0.115331) | 3.420192 / 2.268929 (1.151264) | 1.974938 / 55.444624 (-53.469686) | 1.678283 / 6.876477 (-5.198194) | 1.702439 / 2.142072 (-0.439633) | 0.467056 / 4.805227 (-4.338172) | 0.098684 / 6.500664 (-6.401980) | 0.041052 / 0.075469 (-0.034417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990145 / 1.841788 (-0.851643) | 12.143198 / 8.074308 (4.068890) | 10.911039 / 10.191392 (0.719647) | 0.130384 / 0.680424 (-0.550040) | 0.015602 / 0.534201 (-0.518599) | 0.270799 / 0.579283 (-0.308484) | 0.279060 / 0.434364 (-0.155304) | 0.315108 / 0.540337 (-0.225230) | 0.413576 / 1.386936 (-0.973360) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d99b8225e28cca88ed9c2d9b1d8e0342762c4ece \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004911 / 0.011353 (-0.006442) | 0.002808 / 0.011008 (-0.008200) | 0.061367 / 0.038508 (0.022859) | 0.050154 / 0.023109 (0.027045) | 0.250403 / 0.275898 (-0.025495) | 0.273831 / 0.323480 (-0.049649) | 0.002914 / 0.007986 (-0.005071) | 0.002493 / 0.004328 (-0.001836) | 0.048288 / 0.004250 (0.044037) | 0.039219 / 0.037052 (0.002167) | 0.260043 / 0.258489 (0.001554) | 0.288177 / 0.293841 (-0.005664) | 0.023123 / 0.128546 (-0.105423) | 0.006981 / 0.075646 (-0.068666) | 0.201306 / 0.419271 (-0.217965) | 0.035670 / 0.043533 (-0.007863) | 0.255237 / 0.255139 (0.000098) | 0.283701 / 0.283200 (0.000502) | 0.019349 / 0.141683 (-0.122334) | 1.100963 / 1.452155 (-0.351192) | 1.152725 / 1.492716 (-0.339992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.106350 / 0.018006 (0.088344) | 0.300577 / 0.000490 (0.300087) | 0.000206 / 0.000200 (0.000006) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019028 / 0.037411 (-0.018384) | 0.062643 / 0.014526 (0.048118) | 0.072771 / 0.176557 (-0.103786) | 0.119873 / 0.737135 (-0.617263) | 0.074470 / 0.296338 (-0.221869) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287032 / 0.215209 (0.071823) | 2.826134 / 2.077655 (0.748480) | 1.507362 / 1.504120 (0.003242) | 1.382929 / 1.541195 (-0.158266) | 1.385361 / 
1.468490 (-0.083129) | 0.412081 / 4.584777 (-4.172696) | 2.384289 / 3.745712 (-1.361423) | 2.551316 / 5.269862 (-2.718546) | 1.562954 / 4.565676 (-3.002722) | 0.046669 / 0.424275 (-0.377606) | 0.004804 / 0.007607 (-0.002803) | 0.337751 / 0.226044 (0.111707) | 3.378894 / 2.268929 (1.109965) | 1.848817 / 55.444624 (-53.595807) | 1.564560 / 6.876477 (-5.311917) | 1.579577 / 2.142072 (-0.562496) | 0.484531 / 4.805227 (-4.320697) | 0.101157 / 6.500664 (-6.399507) | 0.042272 / 0.075469 (-0.033197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948289 / 1.841788 (-0.893498) | 11.490877 / 8.074308 (3.416569) | 10.492787 / 10.191392 (0.301395) | 0.128575 / 0.680424 (-0.551849) | 0.013716 / 0.534201 (-0.520485) | 0.271075 / 0.579283 (-0.308208) | 0.269749 / 0.434364 (-0.164615) | 0.306378 / 0.540337 (-0.233959) | 0.400204 / 1.386936 (-0.986732) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004821 / 0.011353 (-0.006532) | 0.002773 / 0.011008 (-0.008235) | 0.048934 / 0.038508 (0.010426) | 0.049490 / 0.023109 (0.026380) | 0.271107 / 0.275898 (-0.004791) | 0.291472 / 0.323480 (-0.032008) | 0.004734 / 0.007986 (-0.003252) | 0.002437 / 0.004328 (-0.001892) | 0.048840 / 0.004250 (0.044590) | 0.039757 / 0.037052 (0.002704) | 0.276037 / 0.258489 (0.017548) | 0.298220 / 0.293841 (0.004379) | 0.024595 / 0.128546 (-0.103952) | 0.007320 / 0.075646 (-0.068327) | 0.054693 / 0.419271 (-0.364578) | 0.032672 / 0.043533 (-0.010861) | 0.271555 / 0.255139 (0.016416) | 0.287685 / 0.283200 (0.004485) | 0.017159 / 0.141683 (-0.124524) | 1.118496 / 1.452155 (-0.333659) | 1.177389 / 1.492716 (-0.315327) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090469 / 0.018006 (0.072463) | 0.306014 / 0.000490 (0.305525) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021452 / 0.037411 (-0.015960) | 0.070014 / 0.014526 (0.055488) | 0.081917 / 0.176557 (-0.094639) | 0.120615 / 0.737135 (-0.616520) | 0.081745 / 0.296338 (-0.214593) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294049 / 0.215209 (0.078840) | 2.886802 / 2.077655 (0.809147) | 1.607817 / 1.504120 (0.103697) | 1.474172 / 1.541195 (-0.067023) | 1.474744 / 1.468490 (0.006254) | 0.398178 / 4.584777 (-4.186599) | 2.455908 / 3.745712 (-1.289804) | 2.463003 / 5.269862 (-2.806858) | 1.560402 / 4.565676 (-3.005275) | 0.046208 / 0.424275 (-0.378067) | 0.004862 / 0.007607 (-0.002745) | 0.350862 / 0.226044 (0.124817) | 3.463958 / 2.268929 (1.195030) | 1.934696 / 55.444624 (-53.509928) | 1.660090 / 6.876477 (-5.216387) | 1.770920 / 2.142072 (-0.371153) | 0.468409 / 4.805227 (-4.336819) | 0.096812 / 6.500664 (-6.403852) | 0.040580 / 0.075469 (-0.034889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978102 / 1.841788 (-0.863686) | 11.943265 / 8.074308 (3.868957) | 10.684995 / 10.191392 (0.493603) | 0.131554 / 0.680424 (-0.548870) | 0.015608 / 0.534201 (-0.518593) | 0.271449 / 0.579283 (-0.307834) | 0.282485 / 0.434364 (-0.151879) | 0.302376 / 0.540337 (-0.237962) | 0.524908 / 1.386936 (-0.862028) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2bb0b21e37a57257a7d428f8744c862ca92c0c7e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004926 / 0.011353 (-0.006427) | 0.003020 / 0.011008 (-0.007988) | 0.061899 / 0.038508 (0.023391) | 0.063836 / 0.023109 (0.040726) | 0.239252 / 0.275898 (-0.036646) | 0.268320 / 0.323480 (-0.055160) | 0.003939 / 0.007986 (-0.004046) | 0.002557 / 0.004328 (-0.001772) | 0.048469 / 0.004250 (0.044219) | 0.038707 / 0.037052 (0.001655) | 0.247563 / 0.258489 (-0.010926) | 0.281171 / 0.293841 (-0.012670) | 0.023564 / 0.128546 (-0.104983) | 0.007699 / 0.075646 (-0.067948) | 0.207561 / 0.419271 (-0.211710) | 0.036362 / 0.043533 (-0.007171) | 0.248324 / 0.255139 (-0.006814) | 0.269673 / 0.283200 (-0.013527) | 0.018841 / 0.141683 (-0.122842) | 1.123407 / 1.452155 (-0.328748) | 1.170422 / 1.492716 (-0.322295) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096278 / 0.018006 (0.078272) | 0.311477 / 0.000490 (0.310988) | 0.000217 / 0.000200 (0.000017) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019470 / 0.037411 (-0.017942) | 0.071888 / 0.014526 (0.057362) | 0.074264 / 0.176557 (-0.102292) | 0.124413 / 0.737135 (-0.612723) | 0.075602 / 0.296338 (-0.220737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284755 / 0.215209 (0.069546) | 2.770789 / 2.077655 (0.693135) | 1.478276 / 1.504120 (-0.025843) | 1.375287 / 1.541195 (-0.165907) | 1.398032 / 
1.468490 (-0.070458) | 0.420457 / 4.584777 (-4.164320) | 2.445929 / 3.745712 (-1.299783) | 2.819548 / 5.269862 (-2.450313) | 1.628506 / 4.565676 (-2.937171) | 0.047687 / 0.424275 (-0.376588) | 0.004861 / 0.007607 (-0.002746) | 0.340173 / 0.226044 (0.114129) | 3.340703 / 2.268929 (1.071774) | 1.882803 / 55.444624 (-53.561821) | 1.587206 / 6.876477 (-5.289271) | 1.645298 / 2.142072 (-0.496774) | 0.490957 / 4.805227 (-4.314270) | 0.102779 / 6.500664 (-6.397885) | 0.048372 / 0.075469 (-0.027098) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958311 / 1.841788 (-0.883477) | 12.354981 / 8.074308 (4.280673) | 10.864826 / 10.191392 (0.673434) | 0.149053 / 0.680424 (-0.531371) | 0.015078 / 0.534201 (-0.519123) | 0.270117 / 0.579283 (-0.309166) | 0.274495 / 0.434364 (-0.159869) | 0.307584 / 0.540337 (-0.232753) | 0.405603 / 1.386936 (-0.981333) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004996 / 0.011353 (-0.006357) | 0.002995 / 0.011008 (-0.008014) | 0.047897 / 0.038508 (0.009389) | 0.056413 / 0.023109 (0.033303) | 0.277669 / 0.275898 (0.001771) | 0.300679 / 0.323480 (-0.022801) | 0.004094 / 0.007986 (-0.003892) | 0.002519 / 0.004328 (-0.001810) | 0.049536 / 0.004250 (0.045285) | 0.042341 / 0.037052 (0.005288) | 0.281533 / 0.258489 (0.023044) | 0.306771 / 0.293841 (0.012930) | 0.025379 / 0.128546 (-0.103167) | 0.007495 / 0.075646 (-0.068152) | 0.054453 / 0.419271 (-0.364818) | 0.032616 / 0.043533 (-0.010917) | 0.277844 / 0.255139 (0.022705) | 0.296265 / 0.283200 (0.013065) | 0.019462 / 0.141683 (-0.122221) | 1.115841 / 1.452155 (-0.336313) | 1.169662 / 1.492716 (-0.323054) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095459 / 0.018006 (0.077453) | 0.301590 / 0.000490 (0.301100) | 0.000230 / 0.000200 (0.000030) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022182 / 0.037411 (-0.015229) | 0.085367 / 0.014526 (0.070842) | 0.084006 / 0.176557 (-0.092550) | 0.121260 / 0.737135 (-0.615876) | 0.084137 / 0.296338 (-0.212202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.310335 / 0.215209 (0.095126) | 3.002531 / 2.077655 (0.924876) | 1.642282 / 1.504120 (0.138162) | 1.573044 / 1.541195 (0.031849) | 1.572076 / 1.468490 (0.103586) | 0.422037 / 4.584777 (-4.162740) | 2.495295 / 3.745712 (-1.250417) | 2.523707 / 5.269862 (-2.746155) | 1.725824 / 4.565676 (-2.839853) | 0.047814 / 0.424275 (-0.376461) | 0.004868 / 0.007607 (-0.002739) | 0.352833 / 0.226044 (0.126789) | 3.477241 / 2.268929 (1.208313) | 1.983888 / 55.444624 (-53.460736) | 1.696883 / 6.876477 (-5.179594) | 1.831665 / 2.142072 (-0.310407) | 0.502976 / 4.805227 (-4.302251) | 0.101264 / 6.500664 (-6.399400) | 0.041779 / 0.075469 (-0.033690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981629 / 1.841788 (-0.860159) | 12.550634 / 8.074308 (4.476326) | 11.113382 / 10.191392 (0.921990) | 0.136565 / 0.680424 (-0.543859) | 0.016742 / 0.534201 (-0.517459) | 0.274316 / 0.579283 (-0.304967) | 0.284687 / 0.434364 (-0.149676) | 0.309966 / 0.540337 (-0.230372) | 0.557990 / 1.386936 (-0.828946) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0c30facb87af83107a645eeffcd18c0775afe11 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004980 / 0.011353 (-0.006373) | 0.002786 / 0.011008 (-0.008222) | 0.062460 / 0.038508 (0.023952) | 0.051811 / 0.023109 (0.028702) | 0.231734 / 0.275898 (-0.044164) | 0.254075 / 0.323480 (-0.069405) | 0.002884 / 0.007986 (-0.005102) | 0.002317 / 0.004328 (-0.002011) | 0.049044 / 0.004250 (0.044793) | 0.038984 / 0.037052 (0.001931) | 0.241193 / 0.258489 (-0.017296) | 0.272091 / 0.293841 (-0.021750) | 0.023098 / 0.128546 (-0.105448) | 0.007190 / 0.075646 (-0.068456) | 0.201409 / 0.419271 (-0.217863) | 0.036100 / 0.043533 (-0.007433) | 0.238185 / 0.255139 (-0.016954) | 0.257127 / 0.283200 (-0.026072) | 0.019542 / 0.141683 (-0.122141) | 1.127925 / 1.452155 (-0.324230) | 1.174354 / 1.492716 (-0.318362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099608 / 0.018006 (0.081601) | 0.315046 / 0.000490 (0.314556) | 0.000282 / 0.000200 (0.000082) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018710 / 0.037411 (-0.018701) | 0.062557 / 0.014526 (0.048031) | 0.074021 / 0.176557 (-0.102536) | 0.119670 / 0.737135 (-0.617465) | 0.076491 / 0.296338 (-0.219847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282940 / 0.215209 (0.067731) | 2.788542 / 2.077655 (0.710887) | 1.496039 / 1.504120 (-0.008080) | 1.367542 / 1.541195 (-0.173653) | 1.393705 / 
1.468490 (-0.074785) | 0.405910 / 4.584777 (-4.178867) | 2.422544 / 3.745712 (-1.323168) | 2.602822 / 5.269862 (-2.667039) | 1.586853 / 4.565676 (-2.978823) | 0.045440 / 0.424275 (-0.378836) | 0.004792 / 0.007607 (-0.002815) | 0.342059 / 0.226044 (0.116015) | 3.366880 / 2.268929 (1.097952) | 1.810566 / 55.444624 (-53.634058) | 1.527112 / 6.876477 (-5.349364) | 1.548906 / 2.142072 (-0.593166) | 0.479491 / 4.805227 (-4.325736) | 0.099807 / 6.500664 (-6.400857) | 0.041951 / 0.075469 (-0.033518) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953723 / 1.841788 (-0.888065) | 11.837240 / 8.074308 (3.762932) | 10.562979 / 10.191392 (0.371587) | 0.145064 / 0.680424 (-0.535360) | 0.014285 / 0.534201 (-0.519916) | 0.270605 / 0.579283 (-0.308678) | 0.264086 / 0.434364 (-0.170278) | 0.308000 / 0.540337 (-0.232337) | 0.403916 / 1.386936 (-0.983020) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004796 / 0.011353 (-0.006557) | 0.002997 / 0.011008 (-0.008011) | 0.048702 / 0.038508 (0.010193) | 0.053377 / 0.023109 (0.030267) | 0.271852 / 0.275898 (-0.004046) | 0.293366 / 0.323480 (-0.030114) | 0.004041 / 0.007986 (-0.003945) | 0.002459 / 0.004328 (-0.001869) | 0.048197 / 0.004250 (0.043947) | 0.040094 / 0.037052 (0.003042) | 0.275837 / 0.258489 (0.017348) | 0.301174 / 0.293841 (0.007333) | 0.024433 / 0.128546 (-0.104113) | 0.007203 / 0.075646 (-0.068444) | 0.054080 / 0.419271 (-0.365192) | 0.033237 / 0.043533 (-0.010295) | 0.271177 / 0.255139 (0.016038) | 0.293062 / 0.283200 (0.009862) | 0.018399 / 0.141683 (-0.123284) | 1.149527 / 1.452155 (-0.302628) | 1.202717 / 1.492716 (-0.290000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093168 / 0.018006 (0.075162) | 0.290536 / 0.000490 (0.290046) | 0.000290 / 0.000200 (0.000090) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021191 / 0.037411 (-0.016221) | 0.069990 / 0.014526 (0.055465) | 0.080636 / 0.176557 (-0.095920) | 0.120151 / 0.737135 (-0.616984) | 0.082944 / 0.296338 (-0.213395) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289673 / 0.215209 (0.074463) | 2.828419 / 2.077655 (0.750764) | 1.590741 / 1.504120 (0.086621) | 1.480969 / 1.541195 (-0.060226) | 1.512761 / 1.468490 (0.044271) | 0.398328 / 4.584777 (-4.186449) | 2.441134 / 3.745712 (-1.304578) | 2.487606 / 5.269862 (-2.782256) | 1.586604 / 4.565676 (-2.979073) | 0.045578 / 0.424275 (-0.378697) | 0.004842 / 0.007607 (-0.002766) | 0.344556 / 0.226044 (0.118512) | 3.395982 / 2.268929 (1.127053) | 1.963354 / 55.444624 (-53.481271) | 1.680496 / 6.876477 (-5.195980) | 1.827916 / 2.142072 (-0.314157) | 0.476203 / 4.805227 (-4.329024) | 0.098016 / 6.500664 (-6.402648) | 0.041234 / 0.075469 (-0.034235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977820 / 1.841788 (-0.863968) | 12.139614 / 8.074308 (4.065306) | 10.643071 / 10.191392 (0.451679) | 0.130928 / 0.680424 (-0.549496) | 0.015341 / 0.534201 (-0.518860) | 0.271304 / 0.579283 (-0.307979) | 0.284671 / 0.434364 (-0.149693) | 0.306210 / 0.540337 (-0.234128) | 0.546498 / 1.386936 (-0.840438) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bf7408a171db4a744d1760a9e32ba21deb8d41d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004748 / 0.011353 (-0.006605) | 0.002942 / 0.011008 (-0.008066) | 0.061298 / 0.038508 (0.022790) | 0.052873 / 0.023109 (0.029764) | 0.250573 / 0.275898 (-0.025325) | 0.270636 / 0.323480 (-0.052844) | 0.002925 / 0.007986 (-0.005061) | 0.003126 / 0.004328 (-0.001203) | 0.047340 / 0.004250 (0.043090) | 0.038662 / 0.037052 (0.001609) | 0.252151 / 0.258489 (-0.006338) | 0.284700 / 0.293841 (-0.009141) | 0.025145 / 0.128546 (-0.103402) | 0.007075 / 0.075646 (-0.068572) | 0.200501 / 0.419271 (-0.218771) | 0.035623 / 0.043533 (-0.007910) | 0.249657 / 0.255139 (-0.005482) | 0.272384 / 0.283200 (-0.010815) | 0.018331 / 0.141683 (-0.123351) | 1.095064 / 1.452155 (-0.357091) | 1.145304 / 1.492716 (-0.347412) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092548 / 0.018006 (0.074542) | 0.299338 / 0.000490 (0.298848) | 0.000212 / 0.000200 (0.000012) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018723 / 0.037411 (-0.018688) | 0.062226 / 0.014526 (0.047700) | 0.072840 / 0.176557 (-0.103717) | 0.120073 / 0.737135 (-0.617063) | 0.074536 / 0.296338 (-0.221802) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284862 / 0.215209 (0.069653) | 2.791842 / 2.077655 (0.714188) | 1.506481 / 1.504120 (0.002361) | 1.368952 / 1.541195 (-0.172243) | 1.372555 / 
1.468490 (-0.095935) | 0.408292 / 4.584777 (-4.176485) | 2.381155 / 3.745712 (-1.364558) | 2.613617 / 5.269862 (-2.656244) | 1.575892 / 4.565676 (-2.989785) | 0.047526 / 0.424275 (-0.376749) | 0.004792 / 0.007607 (-0.002815) | 0.344818 / 0.226044 (0.118773) | 3.344965 / 2.268929 (1.076036) | 1.883659 / 55.444624 (-53.560965) | 1.596039 / 6.876477 (-5.280437) | 1.584410 / 2.142072 (-0.557662) | 0.486672 / 4.805227 (-4.318555) | 0.101464 / 6.500664 (-6.399200) | 0.041824 / 0.075469 (-0.033645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930491 / 1.841788 (-0.911296) | 11.636526 / 8.074308 (3.562218) | 10.371829 / 10.191392 (0.180437) | 0.138181 / 0.680424 (-0.542243) | 0.014307 / 0.534201 (-0.519894) | 0.268322 / 0.579283 (-0.310961) | 0.264173 / 0.434364 (-0.170191) | 0.303649 / 0.540337 (-0.236688) | 0.399958 / 1.386936 (-0.986978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004802 / 0.011353 (-0.006551) | 0.002861 / 0.011008 (-0.008147) | 0.048843 / 0.038508 (0.010335) | 0.053887 / 0.023109 (0.030778) | 0.278690 / 0.275898 (0.002792) | 0.302729 / 0.323480 (-0.020751) | 0.003929 / 0.007986 (-0.004057) | 0.002376 / 0.004328 (-0.001953) | 0.048146 / 0.004250 (0.043896) | 0.039842 / 0.037052 (0.002790) | 0.281595 / 0.258489 (0.023106) | 0.305813 / 0.293841 (0.011972) | 0.024214 / 0.128546 (-0.104333) | 0.007201 / 0.075646 (-0.068446) | 0.053604 / 0.419271 (-0.365667) | 0.032841 / 0.043533 (-0.010691) | 0.276168 / 0.255139 (0.021029) | 0.293869 / 0.283200 (0.010669) | 0.017550 / 0.141683 (-0.124132) | 1.121508 / 1.452155 (-0.330647) | 1.177694 / 1.492716 (-0.315022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091805 / 0.018006 (0.073799) | 0.299026 / 0.000490 (0.298536) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021094 / 0.037411 (-0.016318) | 0.069769 / 0.014526 (0.055243) | 0.081191 / 0.176557 (-0.095366) | 0.118884 / 0.737135 (-0.618252) | 0.081955 / 0.296338 (-0.214383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292159 / 0.215209 (0.076950) | 2.874473 / 2.077655 (0.796819) | 1.614695 / 1.504120 (0.110575) | 1.492123 / 1.541195 (-0.049071) | 1.505293 / 1.468490 (0.036803) | 0.394498 / 4.584777 (-4.190279) | 2.455539 / 3.745712 (-1.290173) | 2.458184 / 5.269862 (-2.811677) | 1.569108 / 4.565676 (-2.996569) | 0.046576 / 0.424275 (-0.377699) | 0.005093 / 0.007607 (-0.002514) | 0.346142 / 0.226044 (0.120098) | 3.398171 / 2.268929 (1.129242) | 1.971953 / 55.444624 (-53.472672) | 1.695275 / 6.876477 (-5.181201) | 1.840511 / 2.142072 (-0.301562) | 0.465932 / 4.805227 (-4.339295) | 0.098578 / 6.500664 (-6.402086) | 0.040456 / 0.075469 (-0.035013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977636 / 1.841788 (-0.864152) | 12.083585 / 8.074308 (4.009277) | 10.509082 / 10.191392 (0.317690) | 0.130717 / 0.680424 (-0.549707) | 0.015958 / 0.534201 (-0.518243) | 0.273504 / 0.579283 (-0.305780) | 0.276498 / 0.434364 (-0.157866) | 0.306139 / 0.540337 (-0.234199) | 0.522521 / 1.386936 (-0.864415) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6e17dd8acec9a958ba82a5f753276b842eaadf52 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004859 / 0.011353 (-0.006493) | 0.002423 / 0.011008 (-0.008585) | 0.060969 / 0.038508 (0.022461) | 0.048758 / 0.023109 (0.025649) | 0.245400 / 0.275898 (-0.030498) | 0.263686 / 0.323480 (-0.059794) | 0.002852 / 0.007986 (-0.005134) | 0.002273 / 0.004328 (-0.002055) | 0.047648 / 0.004250 (0.043398) | 0.038310 / 0.037052 (0.001258) | 0.249849 / 0.258489 (-0.008640) | 0.279305 / 0.293841 (-0.014536) | 0.022897 / 0.128546 (-0.105649) | 0.006882 / 0.075646 (-0.068764) | 0.202793 / 0.419271 (-0.216478) | 0.034557 / 0.043533 (-0.008976) | 0.252147 / 0.255139 (-0.002992) | 0.267414 / 0.283200 (-0.015785) | 0.019956 / 0.141683 (-0.121727) | 1.106181 / 1.452155 (-0.345973) | 1.158423 / 1.492716 (-0.334293) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.086848 / 0.018006 (0.068842) | 0.295235 / 0.000490 (0.294745) | 0.000211 / 0.000200 (0.000011) | 0.000041 / 0.000054 (-0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018209 / 0.037411 (-0.019203) | 0.061967 / 0.014526 (0.047441) | 0.071551 / 0.176557 (-0.105005) | 0.117525 / 0.737135 (-0.619611) | 0.073401 / 0.296338 (-0.222937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272388 / 0.215209 (0.057179) | 2.689797 / 2.077655 (0.612143) | 1.440897 / 1.504120 (-0.063223) | 1.334689 / 1.541195 (-0.206505) | 1.356395 / 
1.468490 (-0.112095) | 0.387201 / 4.584777 (-4.197576) | 2.342908 / 3.745712 (-1.402804) | 2.480156 / 5.269862 (-2.789706) | 1.512342 / 4.565676 (-3.053335) | 0.042324 / 0.424275 (-0.381951) | 0.004744 / 0.007607 (-0.002863) | 0.323568 / 0.226044 (0.097523) | 3.190021 / 2.268929 (0.921093) | 1.765046 / 55.444624 (-53.679578) | 1.513958 / 6.876477 (-5.362519) | 1.504943 / 2.142072 (-0.637129) | 0.452302 / 4.805227 (-4.352925) | 0.094728 / 6.500664 (-6.405936) | 0.038641 / 0.075469 (-0.036828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939721 / 1.841788 (-0.902067) | 11.174180 / 8.074308 (3.099872) | 10.046717 / 10.191392 (-0.144675) | 0.124877 / 0.680424 (-0.555547) | 0.013687 / 0.534201 (-0.520514) | 0.261002 / 0.579283 (-0.318282) | 0.267349 / 0.434364 (-0.167015) | 0.306545 / 0.540337 (-0.233792) | 0.389322 / 1.386936 (-0.997614) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004702 / 0.011353 (-0.006651) | 0.002431 / 0.011008 (-0.008577) | 0.046138 / 0.038508 (0.007630) | 0.048356 / 0.023109 (0.025246) | 0.272154 / 0.275898 (-0.003744) | 0.292676 / 0.323480 (-0.030804) | 0.003870 / 0.007986 (-0.004115) | 0.002294 / 0.004328 (-0.002035) | 0.048129 / 0.004250 (0.043879) | 0.039026 / 0.037052 (0.001974) | 0.273900 / 0.258489 (0.015411) | 0.295927 / 0.293841 (0.002086) | 0.024044 / 0.128546 (-0.104502) | 0.006906 / 0.075646 (-0.068740) | 0.053268 / 0.419271 (-0.366004) | 0.032360 / 0.043533 (-0.011173) | 0.273470 / 0.255139 (0.018331) | 0.286207 / 0.283200 (0.003007) | 0.017580 / 0.141683 (-0.124103) | 1.091064 / 1.452155 (-0.361091) | 1.159645 / 1.492716 (-0.333071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087149 / 0.018006 (0.069143) | 0.293489 / 0.000490 (0.293000) | 0.000217 / 0.000200 (0.000017) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021779 / 0.037411 (-0.015632) | 0.066453 / 0.014526 (0.051928) | 0.078517 / 0.176557 (-0.098039) | 0.117317 / 0.737135 (-0.619819) | 0.079828 / 0.296338 (-0.216511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287605 / 0.215209 (0.072396) | 2.811094 / 2.077655 (0.733439) | 1.572474 / 1.504120 (0.068354) | 1.450294 / 1.541195 (-0.090900) | 1.456052 / 1.468490 (-0.012438) | 0.402095 / 4.584777 (-4.182682) | 2.444709 / 3.745712 (-1.301003) | 2.390837 / 5.269862 (-2.879024) | 1.530519 / 4.565676 (-3.035157) | 0.043520 / 0.424275 (-0.380755) | 0.004788 / 0.007607 (-0.002819) | 0.337436 / 0.226044 (0.111391) | 3.326111 / 2.268929 (1.057182) | 1.889273 / 55.444624 (-53.555352) | 1.624423 / 6.876477 (-5.252054) | 1.715766 / 2.142072 (-0.426307) | 0.484570 / 4.805227 (-4.320657) | 0.091691 / 6.500664 (-6.408973) | 0.038278 / 0.075469 (-0.037191) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961708 / 1.841788 (-0.880079) | 11.496471 / 8.074308 (3.422162) | 10.211589 / 10.191392 (0.020197) | 0.127584 / 0.680424 (-0.552840) | 0.015178 / 0.534201 (-0.519023) | 0.267290 / 0.579283 (-0.311993) | 0.259305 / 0.434364 (-0.175059) | 0.303433 / 0.540337 (-0.236905) | 0.508016 / 1.386936 (-0.878920) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#72880aa8a3e4b49438db72b13fb9a2541331820b \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004558 / 0.011353 (-0.006795) | 0.002563 / 0.011008 (-0.008445) | 0.061314 / 0.038508 (0.022806) | 0.049312 / 0.023109 (0.026203) | 0.240988 / 0.275898 (-0.034910) | 0.260548 / 0.323480 (-0.062932) | 0.002817 / 0.007986 (-0.005169) | 0.002904 / 0.004328 (-0.001425) | 0.048515 / 0.004250 (0.044264) | 0.037511 / 0.037052 (0.000459) | 0.244880 / 0.258489 (-0.013609) | 0.276118 / 0.293841 (-0.017723) | 0.022636 / 0.128546 (-0.105910) | 0.006694 / 0.075646 (-0.068953) | 0.201336 / 0.419271 (-0.217936) | 0.035228 / 0.043533 (-0.008305) | 0.242424 / 0.255139 (-0.012715) | 0.260178 / 0.283200 (-0.023022) | 0.017836 / 0.141683 (-0.123847) | 1.122296 / 1.452155 (-0.329859) | 1.189024 / 1.492716 (-0.303692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090051 / 0.018006 (0.072045) | 0.298562 / 0.000490 (0.298073) | 0.000216 / 0.000200 (0.000016) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018228 / 0.037411 (-0.019184) | 0.062379 / 0.014526 (0.047853) | 0.073482 / 0.176557 (-0.103075) | 0.120341 / 0.737135 (-0.616794) | 0.073868 / 0.296338 (-0.222470) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280195 / 0.215209 (0.064986) | 2.743333 / 2.077655 (0.665678) | 1.470078 / 1.504120 (-0.034042) | 1.335874 / 1.541195 (-0.205321) | 1.342961 / 
1.468490 (-0.125529) | 0.409203 / 4.584777 (-4.175574) | 2.392217 / 3.745712 (-1.353495) | 2.544161 / 5.269862 (-2.725701) | 1.544016 / 4.565676 (-3.021660) | 0.059485 / 0.424275 (-0.364790) | 0.004833 / 0.007607 (-0.002775) | 0.335114 / 0.226044 (0.109070) | 3.289009 / 2.268929 (1.020080) | 1.854666 / 55.444624 (-53.589959) | 1.566282 / 6.876477 (-5.310195) | 1.561287 / 2.142072 (-0.580786) | 0.484961 / 4.805227 (-4.320267) | 0.099651 / 6.500664 (-6.401013) | 0.041408 / 0.075469 (-0.034061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941743 / 1.841788 (-0.900044) | 11.165692 / 8.074308 (3.091383) | 10.236693 / 10.191392 (0.045301) | 0.129694 / 0.680424 (-0.550730) | 0.014879 / 0.534201 (-0.519322) | 0.275120 / 0.579283 (-0.304163) | 0.263822 / 0.434364 (-0.170542) | 0.306429 / 0.540337 (-0.233909) | 0.397611 / 1.386936 (-0.989325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004714 / 0.011353 (-0.006639) | 0.002430 / 0.011008 (-0.008578) | 0.047644 / 0.038508 (0.009136) | 0.049710 / 0.023109 (0.026601) | 0.271950 / 0.275898 (-0.003948) | 0.290996 / 0.323480 (-0.032483) | 0.003888 / 0.007986 (-0.004097) | 0.002367 / 0.004328 (-0.001962) | 0.047623 / 0.004250 (0.043372) | 0.039574 / 0.037052 (0.002522) | 0.274540 / 0.258489 (0.016051) | 0.298065 / 0.293841 (0.004224) | 0.024677 / 0.128546 (-0.103869) | 0.006844 / 0.075646 (-0.068802) | 0.053180 / 0.419271 (-0.366091) | 0.032391 / 0.043533 (-0.011141) | 0.273222 / 0.255139 (0.018083) | 0.290336 / 0.283200 (0.007136) | 0.017911 / 0.141683 (-0.123772) | 1.105879 / 1.452155 (-0.346276) | 1.176979 / 1.492716 (-0.315737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089563 / 0.018006 (0.071557) | 0.296392 / 0.000490 (0.295903) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021588 / 0.037411 (-0.015824) | 0.069951 / 0.014526 (0.055425) | 0.080397 / 0.176557 (-0.096160) | 0.118772 / 0.737135 (-0.618363) | 0.080356 / 0.296338 (-0.215983) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288492 / 0.215209 (0.073283) | 2.839553 / 2.077655 (0.761898) | 1.597504 / 1.504120 (0.093384) | 1.475001 / 1.541195 (-0.066193) | 1.481561 / 1.468490 (0.013071) | 0.411851 / 4.584777 (-4.172926) | 2.397322 / 3.745712 (-1.348390) | 2.444078 / 5.269862 (-2.825784) | 1.557106 / 4.565676 (-3.008571) | 0.047159 / 0.424275 (-0.377116) | 0.004842 / 0.007607 (-0.002765) | 0.346221 / 0.226044 (0.120177) | 3.387900 / 2.268929 (1.118972) | 1.962167 / 55.444624 (-53.482457) | 1.675017 / 6.876477 (-5.201460) | 1.788745 / 2.142072 (-0.353328) | 0.488063 / 4.805227 (-4.317164) | 0.098878 / 6.500664 (-6.401786) | 0.040369 / 0.075469 (-0.035100) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977999 / 1.841788 (-0.863789) | 11.671558 / 8.074308 (3.597250) | 10.327847 / 10.191392 (0.136455) | 0.129317 / 0.680424 (-0.551107) | 0.015600 / 0.534201 (-0.518601) | 0.267967 / 0.579283 (-0.311316) | 0.273811 / 0.434364 (-0.160553) | 0.301749 / 0.540337 (-0.238588) | 0.515493 / 1.386936 (-0.871443) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5394939b0b3d124674f938e1f1cd9e8de3cbdbf7 \"CML watermark\")\n",
"I added tests and docs @mariosasko @albertvillanova let le know what you think !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004867 / 0.011353 (-0.006486) | 0.002952 / 0.011008 (-0.008056) | 0.062008 / 0.038508 (0.023500) | 0.055279 / 0.023109 (0.032170) | 0.248160 / 0.275898 (-0.027738) | 0.276173 / 0.323480 (-0.047307) | 0.003945 / 0.007986 (-0.004041) | 0.002371 / 0.004328 (-0.001958) | 0.048385 / 0.004250 (0.044134) | 0.038997 / 0.037052 (0.001945) | 0.257465 / 0.258489 (-0.001024) | 0.286920 / 0.293841 (-0.006921) | 0.023031 / 0.128546 (-0.105515) | 0.007075 / 0.075646 (-0.068571) | 0.201897 / 0.419271 (-0.217375) | 0.035637 / 0.043533 (-0.007896) | 0.252050 / 0.255139 (-0.003089) | 0.272580 / 0.283200 (-0.010620) | 0.018578 / 0.141683 (-0.123105) | 1.129427 / 1.452155 (-0.322727) | 1.172182 / 1.492716 (-0.320534) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091806 / 0.018006 (0.073800) | 0.298632 / 0.000490 (0.298143) | 0.000202 / 0.000200 (0.000002) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019123 / 0.037411 (-0.018288) | 0.062603 / 0.014526 (0.048077) | 0.074352 / 0.176557 (-0.102205) | 0.120431 / 0.737135 (-0.616704) | 0.074622 / 0.296338 (-0.221717) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276019 / 0.215209 (0.060810) | 2.701610 / 2.077655 (0.623955) | 1.398388 / 1.504120 (-0.105732) | 1.270902 / 1.541195 (-0.270292) | 1.307992 / 
1.468490 (-0.160499) | 0.396350 / 4.584777 (-4.188427) | 2.351064 / 3.745712 (-1.394648) | 2.606229 / 5.269862 (-2.663632) | 1.591075 / 4.565676 (-2.974601) | 0.046429 / 0.424275 (-0.377846) | 0.004832 / 0.007607 (-0.002775) | 0.327887 / 0.226044 (0.101843) | 3.277847 / 2.268929 (1.008918) | 1.767210 / 55.444624 (-53.677414) | 1.483997 / 6.876477 (-5.392479) | 1.515689 / 2.142072 (-0.626383) | 0.471326 / 4.805227 (-4.333902) | 0.098821 / 6.500664 (-6.401843) | 0.041914 / 0.075469 (-0.033555) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956278 / 1.841788 (-0.885510) | 11.924373 / 8.074308 (3.850065) | 10.493926 / 10.191392 (0.302534) | 0.140214 / 0.680424 (-0.540210) | 0.013679 / 0.534201 (-0.520522) | 0.270304 / 0.579283 (-0.308979) | 0.266518 / 0.434364 (-0.167846) | 0.310113 / 0.540337 (-0.230224) | 0.399811 / 1.386936 (-0.987125) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004793 / 0.011353 (-0.006560) | 0.002879 / 0.011008 (-0.008130) | 0.048632 / 0.038508 (0.010124) | 0.051413 / 0.023109 (0.028304) | 0.272704 / 0.275898 (-0.003194) | 0.291541 / 0.323480 (-0.031939) | 0.003913 / 0.007986 (-0.004072) | 0.002387 / 0.004328 (-0.001941) | 0.049045 / 0.004250 (0.044795) | 0.040164 / 0.037052 (0.003112) | 0.273052 / 0.258489 (0.014563) | 0.300139 / 0.293841 (0.006298) | 0.024225 / 0.128546 (-0.104321) | 0.007060 / 0.075646 (-0.068587) | 0.054360 / 0.419271 (-0.364911) | 0.032882 / 0.043533 (-0.010650) | 0.270295 / 0.255139 (0.015157) | 0.312253 / 0.283200 (0.029054) | 0.017413 / 0.141683 (-0.124270) | 1.137306 / 1.452155 (-0.314849) | 1.203705 / 1.492716 (-0.289011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091083 / 0.018006 (0.073077) | 0.301607 / 0.000490 (0.301117) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021753 / 0.037411 (-0.015658) | 0.069693 / 0.014526 (0.055167) | 0.080481 / 0.176557 (-0.096075) | 0.118581 / 0.737135 (-0.618555) | 0.082231 / 0.296338 (-0.214108) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300014 / 0.215209 (0.084805) | 2.885934 / 2.077655 (0.808279) | 1.594120 / 1.504120 (0.090000) | 1.472312 / 1.541195 (-0.068883) | 1.491663 / 1.468490 (0.023173) | 0.412946 / 4.584777 (-4.171831) | 2.494168 / 3.745712 (-1.251544) | 2.527987 / 5.269862 (-2.741875) | 1.589187 / 4.565676 (-2.976490) | 0.046594 / 0.424275 (-0.377681) | 0.004810 / 0.007607 (-0.002797) | 0.345496 / 0.226044 (0.119452) | 3.428850 / 2.268929 (1.159921) | 1.952696 / 55.444624 (-53.491929) | 1.663285 / 6.876477 (-5.213191) | 1.822187 / 2.142072 (-0.319885) | 0.483798 / 4.805227 (-4.321430) | 0.101403 / 6.500664 (-6.399261) | 0.041773 / 0.075469 (-0.033696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974247 / 1.841788 (-0.867541) | 12.459980 / 8.074308 (4.385672) | 10.354792 / 10.191392 (0.163400) | 0.129083 / 0.680424 (-0.551341) | 0.015225 / 0.534201 (-0.518976) | 0.267673 / 0.579283 (-0.311610) | 0.281011 / 0.434364 (-0.153352) | 0.303054 / 0.540337 (-0.237283) | 0.405719 / 1.386936 (-0.981217) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33dc51fc1a8122b842bb7839ff0eda32f173c325 \"CML watermark\")\n",
"I switched to using `deepmind/code_contests` in examples in the docs to avoid having to pass trust_remote_code, and remove the DEFAULT naming stuff :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005169 / 0.011353 (-0.006184) | 0.003066 / 0.011008 (-0.007942) | 0.068884 / 0.038508 (0.030376) | 0.060345 / 0.023109 (0.037236) | 0.243050 / 0.275898 (-0.032848) | 0.265523 / 0.323480 (-0.057957) | 0.002918 / 0.007986 (-0.005067) | 0.002495 / 0.004328 (-0.001834) | 0.051538 / 0.004250 (0.047288) | 0.040010 / 0.037052 (0.002957) | 0.249603 / 0.258489 (-0.008886) | 0.287955 / 0.293841 (-0.005886) | 0.024003 / 0.128546 (-0.104543) | 0.007111 / 0.075646 (-0.068535) | 0.205041 / 0.419271 (-0.214231) | 0.036296 / 0.043533 (-0.007237) | 0.246135 / 0.255139 (-0.009004) | 0.268801 / 0.283200 (-0.014399) | 0.018451 / 0.141683 (-0.123232) | 1.130387 / 1.452155 (-0.321767) | 1.162041 / 1.492716 (-0.330675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096370 / 0.018006 (0.078364) | 0.309867 / 0.000490 (0.309377) | 0.000229 / 0.000200 (0.000029) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018688 / 0.037411 (-0.018723) | 0.062859 / 0.014526 (0.048333) | 0.076383 / 0.176557 (-0.100173) | 0.120385 / 0.737135 (-0.616750) | 0.080192 / 0.296338 (-0.216147) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282994 / 0.215209 (0.067785) | 2.742341 / 2.077655 (0.664686) | 1.432041 / 1.504120 (-0.072079) | 1.303282 / 1.541195 (-0.237913) | 1.347198 / 
1.468490 (-0.121292) | 0.399145 / 4.584777 (-4.185632) | 2.359766 / 3.745712 (-1.385947) | 2.753577 / 5.269862 (-2.516285) | 1.639953 / 4.565676 (-2.925724) | 0.047111 / 0.424275 (-0.377164) | 0.004946 / 0.007607 (-0.002661) | 0.338857 / 0.226044 (0.112813) | 3.328709 / 2.268929 (1.059781) | 1.794729 / 55.444624 (-53.649895) | 1.508514 / 6.876477 (-5.367963) | 1.550737 / 2.142072 (-0.591335) | 0.484227 / 4.805227 (-4.321000) | 0.101001 / 6.500664 (-6.399663) | 0.042792 / 0.075469 (-0.032677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956471 / 1.841788 (-0.885317) | 12.031362 / 8.074308 (3.957054) | 10.512914 / 10.191392 (0.321522) | 0.141841 / 0.680424 (-0.538583) | 0.014343 / 0.534201 (-0.519858) | 0.273916 / 0.579283 (-0.305367) | 0.266150 / 0.434364 (-0.168214) | 0.312020 / 0.540337 (-0.228317) | 0.410465 / 1.386936 (-0.976471) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004945 / 0.011353 (-0.006408) | 0.003288 / 0.011008 (-0.007720) | 0.048247 / 0.038508 (0.009739) | 0.057892 / 0.023109 (0.034783) | 0.269741 / 0.275898 (-0.006157) | 0.293728 / 0.323480 (-0.029752) | 0.004789 / 0.007986 (-0.003197) | 0.002477 / 0.004328 (-0.001852) | 0.047825 / 0.004250 (0.043575) | 0.040780 / 0.037052 (0.003727) | 0.273355 / 0.258489 (0.014865) | 0.300057 / 0.293841 (0.006216) | 0.024481 / 0.128546 (-0.104066) | 0.007285 / 0.075646 (-0.068361) | 0.053046 / 0.419271 (-0.366226) | 0.032342 / 0.043533 (-0.011190) | 0.272293 / 0.255139 (0.017154) | 0.290842 / 0.283200 (0.007642) | 0.017546 / 0.141683 (-0.124137) | 1.155816 / 1.452155 (-0.296339) | 1.195839 / 1.492716 (-0.296878) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094177 / 0.018006 (0.076170) | 0.305122 / 0.000490 (0.304632) | 0.000237 / 0.000200 (0.000037) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021817 / 0.037411 (-0.015595) | 0.070711 / 0.014526 (0.056185) | 0.084028 / 0.176557 (-0.092528) | 0.120160 / 0.737135 (-0.616975) | 0.083085 / 0.296338 (-0.213254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289127 / 0.215209 (0.073918) | 2.826365 / 2.077655 (0.748710) | 1.582910 / 1.504120 (0.078790) | 1.472796 / 1.541195 (-0.068399) | 1.497491 / 1.468490 (0.029000) | 0.412276 / 4.584777 (-4.172501) | 2.430692 / 3.745712 (-1.315020) | 2.556444 / 5.269862 (-2.713418) | 1.625782 / 4.565676 (-2.939895) | 0.047921 / 0.424275 (-0.376354) | 0.004809 / 0.007607 (-0.002798) | 0.345569 / 0.226044 (0.119524) | 3.417785 / 2.268929 (1.148856) | 1.959223 / 55.444624 (-53.485401) | 1.672765 / 6.876477 (-5.203712) | 1.852444 / 2.142072 (-0.289628) | 0.489225 / 4.805227 (-4.316002) | 0.100624 / 6.500664 (-6.400040) | 0.041242 / 0.075469 (-0.034227) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971130 / 1.841788 (-0.870658) | 12.652204 / 8.074308 (4.577896) | 10.661821 / 10.191392 (0.470429) | 0.147636 / 0.680424 (-0.532787) | 0.015738 / 0.534201 (-0.518463) | 0.272763 / 0.579283 (-0.306520) | 0.282623 / 0.434364 (-0.151741) | 0.341303 / 0.540337 (-0.199035) | 0.412149 / 1.386936 (-0.974787) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9499908c97ceef1792f69b71e93e36602880a4ae \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004589 / 0.011353 (-0.006764) | 0.002730 / 0.011008 (-0.008279) | 0.061862 / 0.038508 (0.023353) | 0.050945 / 0.023109 (0.027836) | 0.240776 / 0.275898 (-0.035122) | 0.266000 / 0.323480 (-0.057480) | 0.003823 / 0.007986 (-0.004162) | 0.002345 / 0.004328 (-0.001983) | 0.047821 / 0.004250 (0.043571) | 0.037813 / 0.037052 (0.000761) | 0.251075 / 0.258489 (-0.007415) | 0.279430 / 0.293841 (-0.014411) | 0.022957 / 0.128546 (-0.105590) | 0.007294 / 0.075646 (-0.068353) | 0.206092 / 0.419271 (-0.213180) | 0.035308 / 0.043533 (-0.008225) | 0.247197 / 0.255139 (-0.007942) | 0.264988 / 0.283200 (-0.018212) | 0.017588 / 0.141683 (-0.124095) | 1.093291 / 1.452155 (-0.358864) | 1.165477 / 1.492716 (-0.327240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.104057 / 0.018006 (0.086051) | 0.303424 / 0.000490 (0.302934) | 0.000223 / 0.000200 (0.000023) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019040 / 0.037411 (-0.018371) | 0.063161 / 0.014526 (0.048635) | 0.085333 / 0.176557 (-0.091224) | 0.155973 / 0.737135 (-0.581162) | 0.077528 / 0.296338 (-0.218810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276104 / 0.215209 (0.060895) | 2.738174 / 2.077655 (0.660519) | 1.479484 / 1.504120 (-0.024636) | 1.354094 / 1.541195 (-0.187100) | 1.385312 / 
1.468490 (-0.083178) | 0.401398 / 4.584777 (-4.183379) | 2.368503 / 3.745712 (-1.377209) | 2.586405 / 5.269862 (-2.683457) | 1.573978 / 4.565676 (-2.991699) | 0.046969 / 0.424275 (-0.377306) | 0.004874 / 0.007607 (-0.002733) | 0.334028 / 0.226044 (0.107984) | 3.269645 / 2.268929 (1.000717) | 1.834528 / 55.444624 (-53.610096) | 1.559883 / 6.876477 (-5.316594) | 1.581380 / 2.142072 (-0.560693) | 0.479580 / 4.805227 (-4.325647) | 0.099077 / 6.500664 (-6.401587) | 0.041166 / 0.075469 (-0.034303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.918810 / 1.841788 (-0.922978) | 11.505017 / 8.074308 (3.430709) | 10.331934 / 10.191392 (0.140542) | 0.128079 / 0.680424 (-0.552345) | 0.013716 / 0.534201 (-0.520485) | 0.271567 / 0.579283 (-0.307716) | 0.264846 / 0.434364 (-0.169518) | 0.305245 / 0.540337 (-0.235092) | 0.401391 / 1.386936 (-0.985546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004860 / 0.011353 (-0.006493) | 0.002854 / 0.011008 (-0.008155) | 0.048327 / 0.038508 (0.009819) | 0.051377 / 0.023109 (0.028268) | 0.264344 / 0.275898 (-0.011554) | 0.286800 / 0.323480 (-0.036680) | 0.003969 / 0.007986 (-0.004016) | 0.002415 / 0.004328 (-0.001914) | 0.048498 / 0.004250 (0.044247) | 0.040399 / 0.037052 (0.003347) | 0.267254 / 0.258489 (0.008765) | 0.292049 / 0.293841 (-0.001792) | 0.024730 / 0.128546 (-0.103817) | 0.007275 / 0.075646 (-0.068371) | 0.053725 / 0.419271 (-0.365546) | 0.033142 / 0.043533 (-0.010391) | 0.265418 / 0.255139 (0.010279) | 0.286242 / 0.283200 (0.003042) | 0.017824 / 0.141683 (-0.123859) | 1.135978 / 1.452155 (-0.316176) | 1.192506 / 1.492716 (-0.300210) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091907 / 0.018006 (0.073900) | 0.307152 / 0.000490 (0.306663) | 0.000223 / 0.000200 (0.000023) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021909 / 0.037411 (-0.015502) | 0.070676 / 0.014526 (0.056150) | 0.081651 / 0.176557 (-0.094906) | 0.120915 / 0.737135 (-0.616220) | 0.085882 / 0.296338 (-0.210456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288008 / 0.215209 (0.072799) | 2.861352 / 2.077655 (0.783697) | 1.539045 / 1.504120 (0.034925) | 1.412175 / 1.541195 (-0.129019) | 1.421236 / 1.468490 (-0.047254) | 0.404921 / 4.584777 (-4.179856) | 2.480211 / 3.745712 (-1.265501) | 2.473083 / 5.269862 (-2.796779) | 1.558894 / 4.565676 (-3.006783) | 0.046692 / 0.424275 (-0.377584) | 0.004802 / 0.007607 (-0.002805) | 0.346046 / 0.226044 (0.120001) | 3.464387 / 2.268929 (1.195459) | 1.937298 / 55.444624 (-53.507326) | 1.593701 / 6.876477 (-5.282776) | 1.730688 / 2.142072 (-0.411385) | 0.481069 / 4.805227 (-4.324158) | 0.098991 / 6.500664 (-6.401673) | 0.040491 / 0.075469 (-0.034978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967809 / 1.841788 (-0.873979) | 11.952335 / 8.074308 (3.878027) | 10.616711 / 10.191392 (0.425319) | 0.128938 / 0.680424 (-0.551486) | 0.015455 / 0.534201 (-0.518746) | 0.272100 / 0.579283 (-0.307183) | 0.278275 / 0.434364 (-0.156089) | 0.309711 / 0.540337 (-0.230627) | 0.411026 / 1.386936 (-0.975910) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#495bc04226a67983f523d12d42b680172f8d4893 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008470 / 0.011353 (-0.002883) | 0.003201 / 0.011008 (-0.007808) | 0.063193 / 0.038508 (0.024685) | 0.064174 / 0.023109 (0.041064) | 0.248316 / 0.275898 (-0.027582) | 0.281598 / 0.323480 (-0.041882) | 0.004076 / 0.007986 (-0.003909) | 0.002397 / 0.004328 (-0.001932) | 0.048834 / 0.004250 (0.044584) | 0.056517 / 0.037052 (0.019465) | 0.254164 / 0.258489 (-0.004326) | 0.289800 / 0.293841 (-0.004041) | 0.031092 / 0.128546 (-0.097454) | 0.010885 / 0.075646 (-0.064762) | 0.219198 / 0.419271 (-0.200073) | 0.040087 / 0.043533 (-0.003446) | 0.250900 / 0.255139 (-0.004239) | 0.267787 / 0.283200 (-0.015413) | 0.019666 / 0.141683 (-0.122017) | 1.114960 / 1.452155 (-0.337194) | 1.266675 / 1.492716 (-0.226041) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091429 / 0.018006 (0.073422) | 0.301804 / 0.000490 (0.301314) | 0.000212 / 0.000200 (0.000012) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021053 / 0.037411 (-0.016358) | 0.062407 / 0.014526 (0.047881) | 0.073166 / 0.176557 (-0.103391) | 0.119642 / 0.737135 (-0.617493) | 0.074771 / 0.296338 (-0.221567) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278582 / 0.215209 (0.063373) | 2.773023 / 2.077655 (0.695368) | 1.459977 / 1.504120 (-0.044143) | 1.330453 / 1.541195 (-0.210742) | 1.372797 / 
1.468490 (-0.095693) | 0.628845 / 4.584777 (-3.955932) | 3.428779 / 3.745712 (-0.316933) | 3.138967 / 5.269862 (-2.130895) | 2.126891 / 4.565676 (-2.438785) | 0.062340 / 0.424275 (-0.361935) | 0.004939 / 0.007607 (-0.002668) | 0.336058 / 0.226044 (0.110014) | 3.463741 / 2.268929 (1.194813) | 1.847504 / 55.444624 (-53.597120) | 1.984173 / 6.876477 (-4.892304) | 1.602962 / 2.142072 (-0.539110) | 0.637683 / 4.805227 (-4.167545) | 0.117898 / 6.500664 (-6.382766) | 0.043308 / 0.075469 (-0.032161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.087773 / 1.841788 (-0.754014) | 14.959526 / 8.074308 (6.885218) | 10.886003 / 10.191392 (0.694611) | 0.163385 / 0.680424 (-0.517039) | 0.016679 / 0.534201 (-0.517522) | 0.351913 / 0.579283 (-0.227370) | 0.359007 / 0.434364 (-0.075357) | 0.323824 / 0.540337 (-0.216513) | 0.549268 / 1.386936 (-0.837668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005265 / 0.011353 (-0.006088) | 0.003367 / 0.011008 (-0.007641) | 0.062741 / 0.038508 (0.024233) | 0.068463 / 0.023109 (0.045354) | 0.258497 / 0.275898 (-0.017401) | 0.355360 / 0.323480 (0.031880) | 0.003910 / 0.007986 (-0.004075) | 0.002399 / 0.004328 (-0.001929) | 0.055564 / 0.004250 (0.051313) | 0.039644 / 0.037052 (0.002591) | 0.258313 / 0.258489 (-0.000176) | 0.328927 / 0.293841 (0.035086) | 0.035634 / 0.128546 (-0.092912) | 0.010378 / 0.075646 (-0.065268) | 0.073109 / 0.419271 (-0.346163) | 0.039752 / 0.043533 (-0.003781) | 0.258237 / 0.255139 (0.003098) | 0.330329 / 0.283200 (0.047129) | 0.023924 / 0.141683 (-0.117759) | 1.198639 / 1.452155 (-0.253515) | 1.202307 / 1.492716 (-0.290409) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091297 / 0.018006 (0.073290) | 0.298729 / 0.000490 (0.298240) | 0.000210 / 0.000200 (0.000010) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022381 / 0.037411 (-0.015030) | 0.070226 / 0.014526 (0.055700) | 0.080549 / 0.176557 (-0.096007) | 0.119677 / 0.737135 (-0.617458) | 0.082612 / 0.296338 (-0.213727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289270 / 0.215209 (0.074061) | 2.853830 / 2.077655 (0.776175) | 1.528938 / 1.504120 (0.024818) | 1.398429 / 1.541195 (-0.142766) | 1.472465 / 1.468490 (0.003975) | 0.779015 / 4.584777 (-3.805762) | 3.287724 / 3.745712 (-0.457988) | 3.020908 / 5.269862 (-2.248953) | 1.926094 / 4.565676 (-2.639583) | 0.063163 / 0.424275 (-0.361112) | 0.005175 / 0.007607 (-0.002432) | 0.342884 / 0.226044 (0.116840) | 3.476837 / 2.268929 (1.207908) | 1.880683 / 55.444624 (-53.563942) | 1.613845 / 6.876477 (-5.262632) | 1.624734 / 2.142072 (-0.517338) | 0.626220 / 4.805227 (-4.179007) | 0.114976 / 6.500664 (-6.385689) | 0.040670 / 0.075469 (-0.034799) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.116815 / 1.841788 (-0.724973) | 15.388426 / 8.074308 (7.314118) | 10.825276 / 10.191392 (0.633884) | 0.172659 / 0.680424 (-0.507765) | 0.015468 / 0.534201 (-0.518733) | 0.285552 / 0.579283 (-0.293731) | 0.346886 / 0.434364 (-0.087478) | 0.348696 / 0.540337 (-0.191641) | 0.729335 / 1.386936 (-0.657601) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7bbf346dc268b8084dee406b2a6e2b96d44bc3b \"CML watermark\")\n"
] | 2023-11-16T12:12:54 | 2023-11-28T16:10:39 | 2023-11-28T16:03:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6429",
"html_url": "https://github.com/huggingface/datasets/pull/6429",
"diff_url": "https://github.com/huggingface/datasets/pull/6429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6429.patch",
"merged_at": "2023-11-28T16:03:43"
} | Draft about adding `trust_remote_code` to `load_dataset`.
```python
ds = load_dataset(..., trust_remote_code=True) # run remote code (current default)
```
It would default to `True` (the current behavior), and in the next major release it would prompt the user to check the code before running it (we'll communicate about this before doing it, of course).
```python
# in the future
ds = load_dataset(...) # prompt the user to check the code before running it (future default)
ds = load_dataset(..., trust_remote_code=True) # run remote code
ds = load_dataset(..., trust_remote_code=False) # disallow remote code
```
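For illustration, a minimal usage sketch of the opt-out path (the repository name below is hypothetical, and the assumption that a `ValueError` is raised when untrusted remote code is required may not match the final implementation):
```python
from datasets import load_dataset

# Hypothetical script-based dataset, for illustration only.
REPO_ID = "username/dataset_with_loading_script"

try:
    # Explicitly refuse to execute the dataset's loading script.
    ds = load_dataset(REPO_ID, trust_remote_code=False)
except ValueError:
    # Assumed behavior: loading fails when remote code is required but not trusted;
    # the exact exception type and message may differ.
    print(f"Review the loading script of {REPO_ID} on the Hub before opting in.")
    ds = load_dataset(REPO_ID, trust_remote_code=True)  # opt in after review
```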
Related to https://github.com/huggingface/datasets/issues/6400
Will do a separate PR to use the parquet export when possible | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6428/comments | https://api.github.com/repos/huggingface/datasets/issues/6428/events | https://github.com/huggingface/datasets/pull/6428 | 1,996,306,394 | PR_kwDODunzps5fmakS | 6,428 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6428). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004839 / 0.011353 (-0.006514) | 0.002928 / 0.011008 (-0.008080) | 0.061730 / 0.038508 (0.023221) | 0.030523 / 0.023109 (0.007414) | 0.252679 / 0.275898 (-0.023219) | 0.281597 / 0.323480 (-0.041883) | 0.003025 / 0.007986 (-0.004961) | 0.002374 / 0.004328 (-0.001955) | 0.048134 / 0.004250 (0.043884) | 0.045843 / 0.037052 (0.008791) | 0.256274 / 0.258489 (-0.002215) | 0.288704 / 0.293841 (-0.005137) | 0.023486 / 0.128546 (-0.105060) | 0.007186 / 0.075646 (-0.068461) | 0.202519 / 0.419271 (-0.216753) | 0.058192 / 0.043533 (0.014659) | 0.256448 / 0.255139 (0.001309) | 0.279417 / 0.283200 (-0.003783) | 0.019942 / 0.141683 (-0.121740) | 1.100954 / 1.452155 (-0.351201) | 1.168183 / 1.492716 (-0.324533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091314 / 0.018006 (0.073308) | 0.298614 / 0.000490 (0.298124) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018071 / 0.037411 (-0.019340) | 0.062265 / 0.014526 (0.047740) | 0.073228 / 0.176557 (-0.103328) | 0.119163 / 0.737135 (-0.617972) | 0.074717 / 0.296338 (-0.221622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273906 / 0.215209 (0.058697) | 2.683995 / 2.077655 (0.606340) | 1.418773 / 1.504120 (-0.085347) | 1.310473 / 1.541195 (-0.230722) | 1.303152 / 
1.468490 (-0.165339) | 0.390846 / 4.584777 (-4.193931) | 2.346407 / 3.745712 (-1.399305) | 2.582945 / 5.269862 (-2.686916) | 1.569549 / 4.565676 (-2.996128) | 0.044893 / 0.424275 (-0.379383) | 0.004754 / 0.007607 (-0.002853) | 0.323491 / 0.226044 (0.097447) | 3.229736 / 2.268929 (0.960808) | 1.783551 / 55.444624 (-53.661074) | 1.499685 / 6.876477 (-5.376792) | 1.515826 / 2.142072 (-0.626246) | 0.475768 / 4.805227 (-4.329460) | 0.099579 / 6.500664 (-6.401085) | 0.042709 / 0.075469 (-0.032760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.926120 / 1.841788 (-0.915667) | 11.597189 / 8.074308 (3.522881) | 10.327055 / 10.191392 (0.135663) | 0.127479 / 0.680424 (-0.552945) | 0.014844 / 0.534201 (-0.519357) | 0.261181 / 0.579283 (-0.318102) | 0.258407 / 0.434364 (-0.175957) | 0.303192 / 0.540337 (-0.237146) | 0.416665 / 1.386936 (-0.970271) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004759 / 0.011353 (-0.006594) | 0.002780 / 0.011008 (-0.008228) | 0.047991 / 0.038508 (0.009483) | 0.052263 / 0.023109 (0.029153) | 0.261228 / 0.275898 (-0.014670) | 0.287779 / 0.323480 (-0.035701) | 0.003961 / 0.007986 (-0.004024) | 0.002357 / 0.004328 (-0.001971) | 0.047755 / 0.004250 (0.043505) | 0.038066 / 0.037052 (0.001014) | 0.269502 / 0.258489 (0.011013) | 0.298348 / 0.293841 (0.004507) | 0.024398 / 0.128546 (-0.104149) | 0.007189 / 0.075646 (-0.068457) | 0.053356 / 0.419271 (-0.365915) | 0.032459 / 0.043533 (-0.011074) | 0.266389 / 0.255139 (0.011250) | 0.305367 / 0.283200 (0.022168) | 0.017629 / 0.141683 (-0.124054) | 1.145789 / 1.452155 (-0.306366) | 1.204778 / 1.492716 (-0.287938) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091347 / 0.018006 (0.073341) | 0.298671 / 0.000490 (0.298181) | 0.000229 / 0.000200 (0.000029) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021374 / 0.037411 (-0.016037) | 0.068869 / 0.014526 (0.054344) | 0.080443 / 0.176557 (-0.096113) | 0.118759 / 0.737135 (-0.618376) | 0.081646 / 0.296338 (-0.214692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295274 / 0.215209 (0.080065) | 2.889349 / 2.077655 (0.811695) | 1.561020 / 1.504120 (0.056900) | 1.425025 / 1.541195 (-0.116170) | 1.495446 / 1.468490 (0.026956) | 0.403825 / 4.584777 (-4.180952) | 2.404905 / 3.745712 (-1.340807) | 2.590104 / 5.269862 (-2.679758) | 1.570559 / 4.565676 (-2.995118) | 0.046342 / 0.424275 (-0.377933) | 0.004799 / 0.007607 (-0.002809) | 0.349981 / 0.226044 (0.123937) | 3.437341 / 2.268929 (1.168412) | 1.948155 / 55.444624 (-53.496469) | 1.637765 / 6.876477 (-5.238711) | 1.671521 / 2.142072 (-0.470551) | 0.479500 / 4.805227 (-4.325727) | 0.098305 / 6.500664 (-6.402359) | 0.040864 / 0.075469 (-0.034605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979986 / 1.841788 (-0.861801) | 12.169722 / 8.074308 (4.095413) | 11.297345 / 10.191392 (1.105953) | 0.129123 / 0.680424 (-0.551301) | 0.015389 / 0.534201 (-0.518812) | 0.270964 / 0.579283 (-0.308319) | 0.269590 / 0.434364 (-0.164774) | 0.310662 / 0.540337 (-0.229675) | 0.406272 / 1.386936 (-0.980664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#31873f1e9acbe013e6d396d1ed5492db8cd59dd3 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004620 / 0.011353 (-0.006733) | 0.002971 / 0.011008 (-0.008038) | 0.062864 / 0.038508 (0.024355) | 0.028743 / 0.023109 (0.005634) | 0.246729 / 0.275898 (-0.029169) | 0.271165 / 0.323480 (-0.052315) | 0.003930 / 0.007986 (-0.004056) | 0.002422 / 0.004328 (-0.001906) | 0.047430 / 0.004250 (0.043180) | 0.044895 / 0.037052 (0.007843) | 0.249128 / 0.258489 (-0.009361) | 0.283384 / 0.293841 (-0.010457) | 0.023288 / 0.128546 (-0.105259) | 0.007241 / 0.075646 (-0.068405) | 0.207551 / 0.419271 (-0.211720) | 0.055008 / 0.043533 (0.011475) | 0.252781 / 0.255139 (-0.002358) | 0.296924 / 0.283200 (0.013724) | 0.017860 / 0.141683 (-0.123822) | 1.094597 / 1.452155 (-0.357558) | 1.162314 / 1.492716 (-0.330402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091423 / 0.018006 (0.073417) | 0.302833 / 0.000490 (0.302343) | 0.000242 / 0.000200 (0.000042) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018143 / 0.037411 (-0.019268) | 0.066371 / 0.014526 (0.051845) | 0.072774 / 0.176557 (-0.103783) | 0.119062 / 0.737135 (-0.618073) | 0.102836 / 0.296338 (-0.193502) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280117 / 0.215209 (0.064908) | 2.757955 / 2.077655 (0.680301) | 1.494994 / 1.504120 (-0.009126) | 1.375325 / 1.541195 (-0.165870) | 1.384179 / 
1.468490 (-0.084311) | 0.399824 / 4.584777 (-4.184953) | 2.368575 / 3.745712 (-1.377137) | 2.574035 / 5.269862 (-2.695827) | 1.548738 / 4.565676 (-3.016939) | 0.045841 / 0.424275 (-0.378434) | 0.004799 / 0.007607 (-0.002808) | 0.331522 / 0.226044 (0.105478) | 3.324471 / 2.268929 (1.055543) | 1.838637 / 55.444624 (-53.605987) | 1.562854 / 6.876477 (-5.313623) | 1.581736 / 2.142072 (-0.560336) | 0.468832 / 4.805227 (-4.336396) | 0.099309 / 6.500664 (-6.401355) | 0.042078 / 0.075469 (-0.033391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.928468 / 1.841788 (-0.913320) | 11.331143 / 8.074308 (3.256835) | 10.296213 / 10.191392 (0.104821) | 0.138912 / 0.680424 (-0.541511) | 0.014044 / 0.534201 (-0.520157) | 0.267293 / 0.579283 (-0.311991) | 0.267267 / 0.434364 (-0.167097) | 0.306560 / 0.540337 (-0.233778) | 0.423926 / 1.386936 (-0.963010) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004842 / 0.011353 (-0.006511) | 0.002917 / 0.011008 (-0.008091) | 0.048263 / 0.038508 (0.009755) | 0.051453 / 0.023109 (0.028344) | 0.278330 / 0.275898 (0.002432) | 0.298569 / 0.323480 (-0.024911) | 0.003936 / 0.007986 (-0.004049) | 0.002479 / 0.004328 (-0.001850) | 0.048281 / 0.004250 (0.044031) | 0.038925 / 0.037052 (0.001872) | 0.285258 / 0.258489 (0.026769) | 0.313701 / 0.293841 (0.019860) | 0.024916 / 0.128546 (-0.103630) | 0.007142 / 0.075646 (-0.068504) | 0.053634 / 0.419271 (-0.365638) | 0.032842 / 0.043533 (-0.010690) | 0.279373 / 0.255139 (0.024234) | 0.295844 / 0.283200 (0.012644) | 0.018142 / 0.141683 (-0.123541) | 1.136960 / 1.452155 (-0.315195) | 1.184438 / 1.492716 (-0.308278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090271 / 0.018006 (0.072264) | 0.299940 / 0.000490 (0.299450) | 0.000234 / 0.000200 (0.000034) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021175 / 0.037411 (-0.016237) | 0.070924 / 0.014526 (0.056398) | 0.080584 / 0.176557 (-0.095972) | 0.119278 / 0.737135 (-0.617857) | 0.082361 / 0.296338 (-0.213977) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298312 / 0.215209 (0.083103) | 2.895361 / 2.077655 (0.817706) | 1.616120 / 1.504120 (0.112001) | 1.484444 / 1.541195 (-0.056750) | 1.541893 / 1.468490 (0.073403) | 0.409968 / 4.584777 (-4.174809) | 2.423639 / 3.745712 (-1.322073) | 2.585122 / 5.269862 (-2.684740) | 1.540343 / 4.565676 (-3.025333) | 0.046604 / 0.424275 (-0.377671) | 0.004742 / 0.007607 (-0.002865) | 0.341659 / 0.226044 (0.115614) | 3.409259 / 2.268929 (1.140330) | 2.007068 / 55.444624 (-53.437556) | 1.681348 / 6.876477 (-5.195129) | 1.719253 / 2.142072 (-0.422819) | 0.482301 / 4.805227 (-4.322926) | 0.099619 / 6.500664 (-6.401045) | 0.041247 / 0.075469 (-0.034222) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971783 / 1.841788 (-0.870004) | 12.208000 / 8.074308 (4.133692) | 10.948230 / 10.191392 (0.756838) | 0.131824 / 0.680424 (-0.548599) | 0.015696 / 0.534201 (-0.518505) | 0.272265 / 0.579283 (-0.307018) | 0.276093 / 0.434364 (-0.158270) | 0.305897 / 0.540337 (-0.234441) | 0.411632 / 1.386936 (-0.975304) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2bf75fe522c6fedd16d00b4a928f613dee11f73c \"CML watermark\")\n"
] | 2023-11-16T08:12:55 | 2023-11-16T08:19:39 | 2023-11-16T08:13:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6428",
"html_url": "https://github.com/huggingface/datasets/pull/6428",
"diff_url": "https://github.com/huggingface/datasets/pull/6428.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6428.patch",
"merged_at": "2023-11-16T08:13:28"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6428/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6427/comments | https://api.github.com/repos/huggingface/datasets/issues/6427/events | https://github.com/huggingface/datasets/pull/6427 | 1,996,248,605 | PR_kwDODunzps5fmN1_ | 6,427 | Release: 2.15.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004331 / 0.011353 (-0.007022) | 0.002573 / 0.011008 (-0.008435) | 0.061002 / 0.038508 (0.022494) | 0.029259 / 0.023109 (0.006149) | 0.242983 / 0.275898 (-0.032915) | 0.267629 / 0.323480 (-0.055851) | 0.003906 / 0.007986 (-0.004080) | 0.002383 / 0.004328 (-0.001946) | 0.047574 / 0.004250 (0.043323) | 0.042153 / 0.037052 (0.005101) | 0.245821 / 0.258489 (-0.012668) | 0.276479 / 0.293841 (-0.017362) | 0.022498 / 0.128546 (-0.106049) | 0.006775 / 0.075646 (-0.068871) | 0.201795 / 0.419271 (-0.217477) | 0.052443 / 0.043533 (0.008910) | 0.248320 / 0.255139 (-0.006819) | 0.261964 / 0.283200 (-0.021235) | 0.016764 / 0.141683 (-0.124919) | 1.118702 / 1.452155 (-0.333453) | 1.203079 / 1.492716 (-0.289638) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088808 / 0.018006 (0.070801) | 0.296526 / 0.000490 (0.296037) | 0.000203 / 0.000200 (0.000003) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018816 / 0.037411 (-0.018595) | 0.062295 / 0.014526 (0.047769) | 0.075228 / 0.176557 (-0.101329) | 0.119916 / 0.737135 (-0.617219) | 0.077206 / 0.296338 (-0.219132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276723 / 0.215209 (0.061514) | 2.711431 / 2.077655 (0.633776) | 1.425590 / 1.504120 (-0.078530) | 1.301383 / 1.541195 (-0.239812) | 1.316314 / 
1.468490 (-0.152176) | 0.402709 / 4.584777 (-4.182068) | 2.347229 / 3.745712 (-1.398483) | 2.596937 / 5.269862 (-2.672925) | 1.560658 / 4.565676 (-3.005018) | 0.046162 / 0.424275 (-0.378113) | 0.004760 / 0.007607 (-0.002848) | 0.330522 / 0.226044 (0.104478) | 3.244072 / 2.268929 (0.975143) | 1.747603 / 55.444624 (-53.697021) | 1.475534 / 6.876477 (-5.400943) | 1.485135 / 2.142072 (-0.656938) | 0.476794 / 4.805227 (-4.328433) | 0.098496 / 6.500664 (-6.402168) | 0.040740 / 0.075469 (-0.034729) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939020 / 1.841788 (-0.902768) | 11.235187 / 8.074308 (3.160878) | 10.194975 / 10.191392 (0.003583) | 0.126241 / 0.680424 (-0.554182) | 0.013990 / 0.534201 (-0.520211) | 0.269149 / 0.579283 (-0.310134) | 0.256950 / 0.434364 (-0.177414) | 0.301282 / 0.540337 (-0.239056) | 0.421490 / 1.386936 (-0.965446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004956 / 0.011353 (-0.006397) | 0.002478 / 0.011008 (-0.008530) | 0.047773 / 0.038508 (0.009265) | 0.050076 / 0.023109 (0.026967) | 0.261915 / 0.275898 (-0.013983) | 0.282553 / 0.323480 (-0.040927) | 0.003881 / 0.007986 (-0.004105) | 0.002329 / 0.004328 (-0.001999) | 0.048091 / 0.004250 (0.043841) | 0.038188 / 0.037052 (0.001135) | 0.265502 / 0.258489 (0.007013) | 0.292568 / 0.293841 (-0.001273) | 0.024172 / 0.128546 (-0.104374) | 0.006865 / 0.075646 (-0.068781) | 0.053199 / 0.419271 (-0.366072) | 0.032201 / 0.043533 (-0.011332) | 0.265774 / 0.255139 (0.010635) | 0.277954 / 0.283200 (-0.005245) | 0.017798 / 0.141683 (-0.123885) | 1.121503 / 1.452155 (-0.330652) | 1.176319 / 1.492716 (-0.316398) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087027 / 0.018006 (0.069020) | 0.296182 / 0.000490 (0.295693) | 0.000216 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020990 / 0.037411 (-0.016421) | 0.069693 / 0.014526 (0.055168) | 0.081098 / 0.176557 (-0.095459) | 0.117760 / 0.737135 (-0.619375) | 0.081493 / 0.296338 (-0.214845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295078 / 0.215209 (0.079869) | 2.876602 / 2.077655 (0.798947) | 1.558011 / 1.504120 (0.053891) | 1.426715 / 1.541195 (-0.114480) | 1.443785 / 1.468490 (-0.024705) | 0.400826 / 4.584777 (-4.183951) | 2.378903 / 3.745712 (-1.366810) | 2.473128 / 5.269862 (-2.796734) | 1.500785 / 4.565676 (-3.064891) | 0.045438 / 0.424275 (-0.378837) | 0.004953 / 0.007607 (-0.002654) | 0.348182 / 0.226044 (0.122137) | 3.427751 / 2.268929 (1.158822) | 1.925173 / 55.444624 (-53.519451) | 1.633354 / 6.876477 (-5.243123) | 1.651573 / 2.142072 (-0.490499) | 0.473260 / 4.805227 (-4.331968) | 0.097613 / 6.500664 (-6.403051) | 0.040196 / 0.075469 (-0.035273) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951780 / 1.841788 (-0.890008) | 11.709342 / 8.074308 (3.635034) | 10.571831 / 10.191392 (0.380439) | 0.134344 / 0.680424 (-0.546079) | 0.022116 / 0.534201 (-0.512084) | 0.269651 / 0.579283 (-0.309632) | 0.272310 / 0.434364 (-0.162054) | 0.306434 / 0.540337 (-0.233903) | 0.408320 / 1.386936 (-0.978616) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ea64b77079cf76675421917472c05d06ace63fc \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004402 / 0.011353 (-0.006951) | 0.002732 / 0.011008 (-0.008277) | 0.062799 / 0.038508 (0.024291) | 0.029155 / 0.023109 (0.006046) | 0.241925 / 0.275898 (-0.033973) | 0.275694 / 0.323480 (-0.047786) | 0.003989 / 0.007986 (-0.003997) | 0.002528 / 0.004328 (-0.001801) | 0.048410 / 0.004250 (0.044160) | 0.043729 / 0.037052 (0.006677) | 0.248843 / 0.258489 (-0.009646) | 0.282980 / 0.293841 (-0.010860) | 0.023828 / 0.128546 (-0.104718) | 0.006972 / 0.075646 (-0.068675) | 0.213222 / 0.419271 (-0.206049) | 0.054883 / 0.043533 (0.011350) | 0.251353 / 0.255139 (-0.003786) | 0.269818 / 0.283200 (-0.013381) | 0.016906 / 0.141683 (-0.124777) | 1.114109 / 1.452155 (-0.338045) | 1.162942 / 1.492716 (-0.329774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093724 / 0.018006 (0.075718) | 0.301989 / 0.000490 (0.301499) | 0.000213 / 0.000200 (0.000014) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018245 / 0.037411 (-0.019166) | 0.062237 / 0.014526 (0.047712) | 0.075644 / 0.176557 (-0.100913) | 0.119655 / 0.737135 (-0.617480) | 0.074525 / 0.296338 (-0.221814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274534 / 0.215209 (0.059324) | 2.683678 / 2.077655 (0.606024) | 1.453306 / 1.504120 (-0.050814) | 1.347630 / 1.541195 (-0.193564) | 1.352875 / 
1.468490 (-0.115615) | 0.398425 / 4.584777 (-4.186352) | 2.375738 / 3.745712 (-1.369974) | 2.591573 / 5.269862 (-2.678289) | 1.555527 / 4.565676 (-3.010150) | 0.045656 / 0.424275 (-0.378619) | 0.004898 / 0.007607 (-0.002709) | 0.330591 / 0.226044 (0.104547) | 3.247638 / 2.268929 (0.978710) | 1.816676 / 55.444624 (-53.627948) | 1.531754 / 6.876477 (-5.344723) | 1.543196 / 2.142072 (-0.598877) | 0.472489 / 4.805227 (-4.332739) | 0.099311 / 6.500664 (-6.401353) | 0.042139 / 0.075469 (-0.033330) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945472 / 1.841788 (-0.896316) | 11.476550 / 8.074308 (3.402242) | 10.281157 / 10.191392 (0.089765) | 0.141062 / 0.680424 (-0.539362) | 0.013634 / 0.534201 (-0.520567) | 0.268778 / 0.579283 (-0.310505) | 0.263542 / 0.434364 (-0.170822) | 0.307918 / 0.540337 (-0.232420) | 0.421231 / 1.386936 (-0.965705) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005090 / 0.011353 (-0.006263) | 0.003135 / 0.011008 (-0.007873) | 0.048058 / 0.038508 (0.009550) | 0.052898 / 0.023109 (0.029789) | 0.273233 / 0.275898 (-0.002665) | 0.299516 / 0.323480 (-0.023964) | 0.004126 / 0.007986 (-0.003860) | 0.002331 / 0.004328 (-0.001997) | 0.047627 / 0.004250 (0.043376) | 0.039076 / 0.037052 (0.002023) | 0.276625 / 0.258489 (0.018136) | 0.308180 / 0.293841 (0.014340) | 0.024929 / 0.128546 (-0.103618) | 0.007396 / 0.075646 (-0.068251) | 0.053408 / 0.419271 (-0.365863) | 0.032896 / 0.043533 (-0.010637) | 0.275412 / 0.255139 (0.020273) | 0.292014 / 0.283200 (0.008814) | 0.018336 / 0.141683 (-0.123347) | 1.123565 / 1.452155 (-0.328589) | 1.175382 / 1.492716 (-0.317334) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093799 / 0.018006 (0.075793) | 0.304219 / 0.000490 (0.303729) | 0.000231 / 0.000200 (0.000031) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021034 / 0.037411 (-0.016377) | 0.069961 / 0.014526 (0.055435) | 0.080311 / 0.176557 (-0.096246) | 0.118603 / 0.737135 (-0.618532) | 0.084003 / 0.296338 (-0.212335) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305610 / 0.215209 (0.090401) | 2.962027 / 2.077655 (0.884372) | 1.598604 / 1.504120 (0.094484) | 1.476227 / 1.541195 (-0.064967) | 1.528960 / 1.468490 (0.060470) | 0.404545 / 4.584777 (-4.180232) | 2.423147 / 3.745712 (-1.322565) | 2.516632 / 5.269862 (-2.753229) | 1.529000 / 4.565676 (-3.036677) | 0.045780 / 0.424275 (-0.378495) | 0.004784 / 0.007607 (-0.002823) | 0.358836 / 0.226044 (0.132792) | 3.508782 / 2.268929 (1.239853) | 1.954513 / 55.444624 (-53.490111) | 1.672824 / 6.876477 (-5.203653) | 1.683482 / 2.142072 (-0.458590) | 0.479014 / 4.805227 (-4.326213) | 0.098325 / 6.500664 (-6.402340) | 0.040934 / 0.075469 (-0.034536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974770 / 1.841788 (-0.867017) | 11.956137 / 8.074308 (3.881829) | 10.956458 / 10.191392 (0.765066) | 0.141800 / 0.680424 (-0.538624) | 0.015439 / 0.534201 (-0.518762) | 0.271783 / 0.579283 (-0.307500) | 0.278058 / 0.434364 (-0.156306) | 0.305823 / 0.540337 (-0.234514) | 0.415677 / 1.386936 (-0.971259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0caf91285116ec910f409e82cc6e1f4cff7496e3 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004483 / 0.011353 (-0.006870) | 0.002560 / 0.011008 (-0.008448) | 0.061428 / 0.038508 (0.022920) | 0.029460 / 0.023109 (0.006351) | 0.238971 / 0.275898 (-0.036927) | 0.271768 / 0.323480 (-0.051712) | 0.003970 / 0.007986 (-0.004016) | 0.002408 / 0.004328 (-0.001921) | 0.047755 / 0.004250 (0.043505) | 0.043358 / 0.037052 (0.006306) | 0.245543 / 0.258489 (-0.012946) | 0.278230 / 0.293841 (-0.015611) | 0.023819 / 0.128546 (-0.104727) | 0.006856 / 0.075646 (-0.068790) | 0.204603 / 0.419271 (-0.214668) | 0.054521 / 0.043533 (0.010989) | 0.246277 / 0.255139 (-0.008862) | 0.271230 / 0.283200 (-0.011969) | 0.017283 / 0.141683 (-0.124400) | 1.088955 / 1.452155 (-0.363200) | 1.245141 / 1.492716 (-0.247575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091534 / 0.018006 (0.073528) | 0.299517 / 0.000490 (0.299027) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018105 / 0.037411 (-0.019306) | 0.061860 / 0.014526 (0.047334) | 0.074494 / 0.176557 (-0.102063) | 0.120107 / 0.737135 (-0.617029) | 0.073406 / 0.296338 (-0.222932) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278140 / 0.215209 (0.062931) | 2.746208 / 2.077655 (0.668553) | 1.476264 / 1.504120 (-0.027856) | 1.356498 / 1.541195 (-0.184697) | 1.362998 / 
1.468490 (-0.105492) | 0.401884 / 4.584777 (-4.182893) | 2.409836 / 3.745712 (-1.335877) | 2.579087 / 5.269862 (-2.690775) | 1.545021 / 4.565676 (-3.020656) | 0.046001 / 0.424275 (-0.378274) | 0.004812 / 0.007607 (-0.002795) | 0.339767 / 0.226044 (0.113722) | 3.341599 / 2.268929 (1.072670) | 1.821498 / 55.444624 (-53.623127) | 1.559311 / 6.876477 (-5.317166) | 1.570368 / 2.142072 (-0.571704) | 0.472688 / 4.805227 (-4.332539) | 0.099549 / 6.500664 (-6.401115) | 0.041644 / 0.075469 (-0.033825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951988 / 1.841788 (-0.889799) | 11.371459 / 8.074308 (3.297150) | 10.229446 / 10.191392 (0.038054) | 0.128105 / 0.680424 (-0.552319) | 0.014418 / 0.534201 (-0.519783) | 0.268615 / 0.579283 (-0.310668) | 0.263956 / 0.434364 (-0.170407) | 0.302213 / 0.540337 (-0.238125) | 0.427224 / 1.386936 (-0.959712) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005150 / 0.011353 (-0.006203) | 0.002557 / 0.011008 (-0.008451) | 0.048092 / 0.038508 (0.009584) | 0.050522 / 0.023109 (0.027413) | 0.272195 / 0.275898 (-0.003703) | 0.294191 / 0.323480 (-0.029289) | 0.004098 / 0.007986 (-0.003887) | 0.002350 / 0.004328 (-0.001978) | 0.048682 / 0.004250 (0.044432) | 0.038381 / 0.037052 (0.001328) | 0.275530 / 0.258489 (0.017041) | 0.303991 / 0.293841 (0.010150) | 0.024734 / 0.128546 (-0.103812) | 0.006926 / 0.075646 (-0.068720) | 0.053683 / 0.419271 (-0.365588) | 0.032675 / 0.043533 (-0.010858) | 0.272816 / 0.255139 (0.017677) | 0.291754 / 0.283200 (0.008554) | 0.018290 / 0.141683 (-0.123392) | 1.127696 / 1.452155 (-0.324459) | 1.187080 / 1.492716 (-0.305636) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091224 / 0.018006 (0.073218) | 0.288838 / 0.000490 (0.288348) | 0.000226 / 0.000200 (0.000026) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021409 / 0.037411 (-0.016003) | 0.069846 / 0.014526 (0.055320) | 0.079962 / 0.176557 (-0.096594) | 0.118575 / 0.737135 (-0.618560) | 0.080223 / 0.296338 (-0.216115) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290835 / 0.215209 (0.075626) | 2.831787 / 2.077655 (0.754133) | 1.587728 / 1.504120 (0.083608) | 1.461939 / 1.541195 (-0.079256) | 1.495257 / 1.468490 (0.026767) | 0.397653 / 4.584777 (-4.187124) | 2.399903 / 3.745712 (-1.345809) | 2.527615 / 5.269862 (-2.742247) | 1.501555 / 4.565676 (-3.064121) | 0.045742 / 0.424275 (-0.378533) | 0.004797 / 0.007607 (-0.002811) | 0.339139 / 0.226044 (0.113094) | 3.358340 / 2.268929 (1.089412) | 1.968955 / 55.444624 (-53.475670) | 1.663598 / 6.876477 (-5.212879) | 1.673995 / 2.142072 (-0.468078) | 0.463444 / 4.805227 (-4.341783) | 0.098008 / 6.500664 (-6.402656) | 0.040836 / 0.075469 (-0.034633) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974033 / 1.841788 (-0.867755) | 11.863206 / 8.074308 (3.788897) | 10.892389 / 10.191392 (0.700997) | 0.128884 / 0.680424 (-0.551540) | 0.015319 / 0.534201 (-0.518882) | 0.268931 / 0.579283 (-0.310353) | 0.274148 / 0.434364 (-0.160216) | 0.305407 / 0.540337 (-0.234930) | 0.410574 / 1.386936 (-0.976362) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0caf91285116ec910f409e82cc6e1f4cff7496e3 \"CML watermark\")\n"
] | 2023-11-16T07:37:20 | 2023-11-16T08:12:12 | 2023-11-16T07:43:05 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6427",
"html_url": "https://github.com/huggingface/datasets/pull/6427",
"diff_url": "https://github.com/huggingface/datasets/pull/6427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6427.patch",
"merged_at": "2023-11-16T07:43:05"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6427/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6426/comments | https://api.github.com/repos/huggingface/datasets/issues/6426/events | https://github.com/huggingface/datasets/pull/6426 | 1,995,363,264 | PR_kwDODunzps5fjOEK | 6,426 | More robust temporary directory deletion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6426). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004750 / 0.011353 (-0.006603) | 0.002928 / 0.011008 (-0.008080) | 0.061962 / 0.038508 (0.023454) | 0.029878 / 0.023109 (0.006768) | 0.233380 / 0.275898 (-0.042518) | 0.262221 / 0.323480 (-0.061259) | 0.002982 / 0.007986 (-0.005004) | 0.003698 / 0.004328 (-0.000630) | 0.048565 / 0.004250 (0.044314) | 0.046107 / 0.037052 (0.009055) | 0.240090 / 0.258489 (-0.018399) | 0.267294 / 0.293841 (-0.026547) | 0.023335 / 0.128546 (-0.105211) | 0.007221 / 0.075646 (-0.068425) | 0.200903 / 0.419271 (-0.218369) | 0.059237 / 0.043533 (0.015705) | 0.234929 / 0.255139 (-0.020210) | 0.256326 / 0.283200 (-0.026874) | 0.018549 / 0.141683 (-0.123134) | 1.103519 / 1.452155 (-0.348635) | 1.156573 / 1.492716 (-0.336143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091205 / 0.018006 (0.073199) | 0.303533 / 0.000490 (0.303043) | 0.000204 / 0.000200 (0.000004) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018572 / 0.037411 (-0.018839) | 0.062323 / 0.014526 (0.047797) | 0.074528 / 0.176557 (-0.102029) | 0.120295 / 0.737135 (-0.616841) | 0.076786 / 0.296338 (-0.219552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278814 / 0.215209 (0.063605) | 2.745483 / 2.077655 (0.667829) | 1.486073 / 1.504120 (-0.018047) | 1.385334 / 1.541195 (-0.155861) | 1.386351 / 
1.468490 (-0.082139) | 0.395545 / 4.584777 (-4.189232) | 2.409468 / 3.745712 (-1.336244) | 2.670702 / 5.269862 (-2.599159) | 1.629245 / 4.565676 (-2.936432) | 0.045990 / 0.424275 (-0.378286) | 0.004782 / 0.007607 (-0.002825) | 0.332912 / 0.226044 (0.106867) | 3.249277 / 2.268929 (0.980349) | 1.888690 / 55.444624 (-53.555934) | 1.533462 / 6.876477 (-5.343015) | 1.576045 / 2.142072 (-0.566027) | 0.473090 / 4.805227 (-4.332138) | 0.099448 / 6.500664 (-6.401216) | 0.042613 / 0.075469 (-0.032857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.944229 / 1.841788 (-0.897559) | 12.103621 / 8.074308 (4.029313) | 10.643471 / 10.191392 (0.452079) | 0.143004 / 0.680424 (-0.537420) | 0.013872 / 0.534201 (-0.520329) | 0.272026 / 0.579283 (-0.307257) | 0.298701 / 0.434364 (-0.135663) | 0.310299 / 0.540337 (-0.230038) | 0.420934 / 1.386936 (-0.966002) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004904 / 0.011353 (-0.006449) | 0.003064 / 0.011008 (-0.007945) | 0.047982 / 0.038508 (0.009474) | 0.056354 / 0.023109 (0.033245) | 0.292893 / 0.275898 (0.016995) | 0.348744 / 0.323480 (0.025264) | 0.003988 / 0.007986 (-0.003997) | 0.002431 / 0.004328 (-0.001898) | 0.049108 / 0.004250 (0.044857) | 0.039055 / 0.037052 (0.002002) | 0.278129 / 0.258489 (0.019640) | 0.318547 / 0.293841 (0.024706) | 0.025040 / 0.128546 (-0.103507) | 0.007166 / 0.075646 (-0.068480) | 0.053967 / 0.419271 (-0.365305) | 0.033128 / 0.043533 (-0.010405) | 0.272849 / 0.255139 (0.017710) | 0.312143 / 0.283200 (0.028943) | 0.017942 / 0.141683 (-0.123741) | 1.192297 / 1.452155 (-0.259857) | 1.328102 / 1.492716 (-0.164615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090903 / 0.018006 (0.072896) | 0.301260 / 0.000490 (0.300770) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021112 / 0.037411 (-0.016300) | 0.070181 / 0.014526 (0.055656) | 0.082431 / 0.176557 (-0.094126) | 0.121973 / 0.737135 (-0.615163) | 0.083617 / 0.296338 (-0.212721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289587 / 0.215209 (0.074378) | 2.877895 / 2.077655 (0.800240) | 1.721417 / 1.504120 (0.217297) | 1.536023 / 1.541195 (-0.005171) | 1.550917 / 1.468490 (0.082427) | 0.402978 / 4.584777 (-4.181799) | 2.431767 / 3.745712 (-1.313946) | 2.544419 / 5.269862 (-2.725442) | 1.554562 / 4.565676 (-3.011115) | 0.046260 / 0.424275 (-0.378015) | 0.004923 / 0.007607 (-0.002684) | 0.341584 / 0.226044 (0.115540) | 3.362133 / 2.268929 (1.093205) | 1.928741 / 55.444624 (-53.515884) | 1.654798 / 6.876477 (-5.221679) | 1.715111 / 2.142072 (-0.426962) | 0.471029 / 4.805227 (-4.334198) | 0.098912 / 6.500664 (-6.401752) | 0.041018 / 0.075469 (-0.034451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992880 / 1.841788 (-0.848907) | 12.083890 / 8.074308 (4.009582) | 11.023833 / 10.191392 (0.832441) | 0.139217 / 0.680424 (-0.541207) | 0.015183 / 0.534201 (-0.519018) | 0.271637 / 0.579283 (-0.307646) | 0.278910 / 0.434364 (-0.155454) | 0.306891 / 0.540337 (-0.233447) | 0.424412 / 1.386936 (-0.962524) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d51f37eb9996d4c52250ee6e987ccce0d74f2f4 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004545 / 0.011353 (-0.006808) | 0.002955 / 0.011008 (-0.008054) | 0.062119 / 0.038508 (0.023611) | 0.029357 / 0.023109 (0.006248) | 0.240068 / 0.275898 (-0.035830) | 0.273376 / 0.323480 (-0.050104) | 0.003884 / 0.007986 (-0.004102) | 0.002390 / 0.004328 (-0.001938) | 0.048621 / 0.004250 (0.044371) | 0.043867 / 0.037052 (0.006815) | 0.247240 / 0.258489 (-0.011249) | 0.279187 / 0.293841 (-0.014654) | 0.023377 / 0.128546 (-0.105169) | 0.007261 / 0.075646 (-0.068385) | 0.201913 / 0.419271 (-0.217359) | 0.057063 / 0.043533 (0.013530) | 0.245698 / 0.255139 (-0.009441) | 0.265644 / 0.283200 (-0.017556) | 0.018077 / 0.141683 (-0.123606) | 1.133225 / 1.452155 (-0.318930) | 1.186380 / 1.492716 (-0.306336) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089639 / 0.018006 (0.071632) | 0.298918 / 0.000490 (0.298428) | 0.000198 / 0.000200 (-0.000002) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019037 / 0.037411 (-0.018374) | 0.062580 / 0.014526 (0.048055) | 0.072974 / 0.176557 (-0.103582) | 0.119909 / 0.737135 (-0.617226) | 0.075021 / 0.296338 (-0.221317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276561 / 0.215209 (0.061352) | 2.697281 / 2.077655 (0.619626) | 1.419772 / 1.504120 (-0.084348) | 1.302079 / 1.541195 (-0.239115) | 1.329143 / 
1.468490 (-0.139347) | 0.395528 / 4.584777 (-4.189249) | 2.365788 / 3.745712 (-1.379925) | 2.583802 / 5.269862 (-2.686059) | 1.561983 / 4.565676 (-3.003694) | 0.045269 / 0.424275 (-0.379006) | 0.004826 / 0.007607 (-0.002781) | 0.331041 / 0.226044 (0.104996) | 3.292523 / 2.268929 (1.023595) | 1.797865 / 55.444624 (-53.646759) | 1.509229 / 6.876477 (-5.367248) | 1.498884 / 2.142072 (-0.643188) | 0.458518 / 4.805227 (-4.346709) | 0.098076 / 6.500664 (-6.402588) | 0.042290 / 0.075469 (-0.033179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.922331 / 1.841788 (-0.919457) | 11.605041 / 8.074308 (3.530732) | 10.471664 / 10.191392 (0.280272) | 0.130325 / 0.680424 (-0.550098) | 0.014084 / 0.534201 (-0.520117) | 0.278877 / 0.579283 (-0.300406) | 0.263104 / 0.434364 (-0.171259) | 0.306723 / 0.540337 (-0.233615) | 0.416238 / 1.386936 (-0.970698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005094 / 0.011353 (-0.006259) | 0.002794 / 0.011008 (-0.008214) | 0.048189 / 0.038508 (0.009680) | 0.050409 / 0.023109 (0.027300) | 0.272618 / 0.275898 (-0.003280) | 0.293589 / 0.323480 (-0.029891) | 0.003995 / 0.007986 (-0.003991) | 0.002373 / 0.004328 (-0.001956) | 0.048269 / 0.004250 (0.044018) | 0.038751 / 0.037052 (0.001698) | 0.273495 / 0.258489 (0.015006) | 0.309244 / 0.293841 (0.015403) | 0.024681 / 0.128546 (-0.103866) | 0.007390 / 0.075646 (-0.068256) | 0.053844 / 0.419271 (-0.365427) | 0.032395 / 0.043533 (-0.011137) | 0.271963 / 0.255139 (0.016824) | 0.289557 / 0.283200 (0.006357) | 0.018659 / 0.141683 (-0.123024) | 1.154478 / 1.452155 (-0.297676) | 1.199772 / 1.492716 (-0.292944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089771 / 0.018006 (0.071764) | 0.299468 / 0.000490 (0.298978) | 0.000219 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021854 / 0.037411 (-0.015558) | 0.070280 / 0.014526 (0.055754) | 0.080956 / 0.176557 (-0.095600) | 0.119430 / 0.737135 (-0.617705) | 0.082778 / 0.296338 (-0.213561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304273 / 0.215209 (0.089064) | 2.968264 / 2.077655 (0.890609) | 1.592363 / 1.504120 (0.088243) | 1.460795 / 1.541195 (-0.080400) | 1.501545 / 1.468490 (0.033055) | 0.411001 / 4.584777 (-4.173776) | 2.464273 / 3.745712 (-1.281439) | 2.524585 / 5.269862 (-2.745277) | 1.537443 / 4.565676 (-3.028234) | 0.046163 / 0.424275 (-0.378112) | 0.004783 / 0.007607 (-0.002824) | 0.354251 / 0.226044 (0.128206) | 3.512087 / 2.268929 (1.243158) | 1.968156 / 55.444624 (-53.476468) | 1.664966 / 6.876477 (-5.211510) | 1.685013 / 2.142072 (-0.457060) | 0.485793 / 4.805227 (-4.319435) | 0.099789 / 6.500664 (-6.400875) | 0.040705 / 0.075469 (-0.034764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966570 / 1.841788 (-0.875218) | 12.023188 / 8.074308 (3.948880) | 11.122602 / 10.191392 (0.931210) | 0.141002 / 0.680424 (-0.539422) | 0.015955 / 0.534201 (-0.518246) | 0.270293 / 0.579283 (-0.308990) | 0.281839 / 0.434364 (-0.152525) | 0.307279 / 0.540337 (-0.233058) | 0.434687 / 1.386936 (-0.952249) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7eaad71464e85c7358eaa36494227a43257ffcd8 \"CML watermark\")\n",
"What would be the impact for non-windows users ?\r\n\r\nAlso I wonder if a gc.collect() after the `del` could help to remove the PermissionError ? Or register the dataset for deletion on copy/pickle maybe ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004973 / 0.011353 (-0.006380) | 0.002753 / 0.011008 (-0.008256) | 0.061489 / 0.038508 (0.022981) | 0.051122 / 0.023109 (0.028012) | 0.228783 / 0.275898 (-0.047115) | 0.256982 / 0.323480 (-0.066498) | 0.002873 / 0.007986 (-0.005112) | 0.003544 / 0.004328 (-0.000784) | 0.048721 / 0.004250 (0.044471) | 0.039137 / 0.037052 (0.002085) | 0.244988 / 0.258489 (-0.013501) | 0.275230 / 0.293841 (-0.018611) | 0.023034 / 0.128546 (-0.105513) | 0.006988 / 0.075646 (-0.068658) | 0.202780 / 0.419271 (-0.216492) | 0.035325 / 0.043533 (-0.008207) | 0.241722 / 0.255139 (-0.013417) | 0.259671 / 0.283200 (-0.023528) | 0.019875 / 0.141683 (-0.121808) | 1.098667 / 1.452155 (-0.353488) | 1.161444 / 1.492716 (-0.331272) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093591 / 0.018006 (0.075585) | 0.298703 / 0.000490 (0.298213) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018319 / 0.037411 (-0.019092) | 0.062993 / 0.014526 (0.048467) | 0.074313 / 0.176557 (-0.102244) | 0.123089 / 0.737135 (-0.614046) | 0.075177 / 0.296338 (-0.221162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.268584 / 0.215209 (0.053375) | 2.633116 / 2.077655 (0.555461) | 1.390743 / 1.504120 (-0.113377) | 1.277385 / 1.541195 (-0.263810) | 1.287934 / 
1.468490 (-0.180556) | 0.387934 / 4.584777 (-4.196843) | 2.345819 / 3.745712 (-1.399893) | 2.558169 / 5.269862 (-2.711693) | 1.569812 / 4.565676 (-2.995865) | 0.045297 / 0.424275 (-0.378978) | 0.005238 / 0.007607 (-0.002369) | 0.359704 / 0.226044 (0.133659) | 3.204688 / 2.268929 (0.935759) | 1.753321 / 55.444624 (-53.691303) | 1.492223 / 6.876477 (-5.384254) | 1.498207 / 2.142072 (-0.643865) | 0.459830 / 4.805227 (-4.345397) | 0.098194 / 6.500664 (-6.402470) | 0.042632 / 0.075469 (-0.032837) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963020 / 1.841788 (-0.878768) | 11.500470 / 8.074308 (3.426161) | 10.451882 / 10.191392 (0.260490) | 0.127706 / 0.680424 (-0.552718) | 0.014084 / 0.534201 (-0.520117) | 0.269728 / 0.579283 (-0.309555) | 0.260283 / 0.434364 (-0.174080) | 0.303717 / 0.540337 (-0.236620) | 0.397028 / 1.386936 (-0.989908) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004823 / 0.011353 (-0.006529) | 0.002751 / 0.011008 (-0.008257) | 0.048719 / 0.038508 (0.010211) | 0.051409 / 0.023109 (0.028300) | 0.267139 / 0.275898 (-0.008759) | 0.287659 / 0.323480 (-0.035821) | 0.003959 / 0.007986 (-0.004027) | 0.002376 / 0.004328 (-0.001953) | 0.047942 / 0.004250 (0.043692) | 0.039742 / 0.037052 (0.002690) | 0.268348 / 0.258489 (0.009859) | 0.297201 / 0.293841 (0.003360) | 0.024226 / 0.128546 (-0.104320) | 0.007103 / 0.075646 (-0.068544) | 0.053310 / 0.419271 (-0.365961) | 0.032716 / 0.043533 (-0.010816) | 0.269469 / 0.255139 (0.014330) | 0.287752 / 0.283200 (0.004553) | 0.018191 / 0.141683 (-0.123492) | 1.114086 / 1.452155 (-0.338069) | 1.188054 / 1.492716 (-0.304662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091072 / 0.018006 (0.073066) | 0.300367 / 0.000490 (0.299877) | 0.000218 / 0.000200 (0.000018) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020970 / 0.037411 (-0.016441) | 0.070356 / 0.014526 (0.055830) | 0.081339 / 0.176557 (-0.095218) | 0.120741 / 0.737135 (-0.616394) | 0.081677 / 0.296338 (-0.214662) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290405 / 0.215209 (0.075196) | 2.863877 / 2.077655 (0.786222) | 1.524603 / 1.504120 (0.020483) | 1.397917 / 1.541195 (-0.143278) | 1.402635 / 1.468490 (-0.065855) | 0.405525 / 4.584777 (-4.179252) | 2.432474 / 3.745712 (-1.313239) | 2.446277 / 5.269862 (-2.823585) | 1.550300 / 4.565676 (-3.015377) | 0.046545 / 0.424275 (-0.377730) | 0.004824 / 0.007607 (-0.002783) | 0.343578 / 0.226044 (0.117534) | 3.436850 / 2.268929 (1.167922) | 1.897200 / 55.444624 (-53.547425) | 1.625222 / 6.876477 (-5.251255) | 1.730488 / 2.142072 (-0.411585) | 0.482099 / 4.805227 (-4.323129) | 0.097828 / 6.500664 (-6.402836) | 0.040385 / 0.075469 (-0.035084) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950975 / 1.841788 (-0.890812) | 11.875024 / 8.074308 (3.800715) | 10.430301 / 10.191392 (0.238909) | 0.130546 / 0.680424 (-0.549878) | 0.015423 / 0.534201 (-0.518778) | 0.269592 / 0.579283 (-0.309691) | 0.282505 / 0.434364 (-0.151859) | 0.305567 / 0.540337 (-0.234771) | 0.522142 / 1.386936 (-0.864794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c166692aa955528180dd4d55474a984f6044896d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004983 / 0.011353 (-0.006369) | 0.003346 / 0.011008 (-0.007662) | 0.062233 / 0.038508 (0.023725) | 0.050246 / 0.023109 (0.027137) | 0.305738 / 0.275898 (0.029839) | 0.321863 / 0.323480 (-0.001617) | 0.003870 / 0.007986 (-0.004116) | 0.002610 / 0.004328 (-0.001718) | 0.047734 / 0.004250 (0.043483) | 0.037611 / 0.037052 (0.000559) | 0.299121 / 0.258489 (0.040632) | 0.327370 / 0.293841 (0.033529) | 0.027009 / 0.128546 (-0.101537) | 0.010816 / 0.075646 (-0.064830) | 0.204627 / 0.419271 (-0.214645) | 0.035708 / 0.043533 (-0.007825) | 0.291837 / 0.255139 (0.036698) | 0.313646 / 0.283200 (0.030447) | 0.017277 / 0.141683 (-0.124405) | 1.097907 / 1.452155 (-0.354248) | 1.163203 / 1.492716 (-0.329513) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091933 / 0.018006 (0.073926) | 0.298787 / 0.000490 (0.298297) | 0.000204 / 0.000200 (0.000004) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018349 / 0.037411 (-0.019062) | 0.061520 / 0.014526 (0.046994) | 0.073159 / 0.176557 (-0.103397) | 0.118657 / 0.737135 (-0.618478) | 0.073601 / 0.296338 (-0.222737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276297 / 0.215209 (0.061088) | 2.725668 / 2.077655 (0.648013) | 1.458079 / 1.504120 (-0.046041) | 1.331236 / 1.541195 (-0.209959) | 1.347919 / 
1.468490 (-0.120571) | 0.565954 / 4.584777 (-4.018823) | 2.380883 / 3.745712 (-1.364829) | 2.800533 / 5.269862 (-2.469329) | 1.740534 / 4.565676 (-2.825142) | 0.065617 / 0.424275 (-0.358658) | 0.004907 / 0.007607 (-0.002700) | 0.335973 / 0.226044 (0.109929) | 3.337405 / 2.268929 (1.068476) | 1.819852 / 55.444624 (-53.624772) | 1.542724 / 6.876477 (-5.333752) | 1.509508 / 2.142072 (-0.632565) | 0.648618 / 4.805227 (-4.156609) | 0.116812 / 6.500664 (-6.383852) | 0.041561 / 0.075469 (-0.033909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943488 / 1.841788 (-0.898299) | 11.184770 / 8.074308 (3.110462) | 10.406311 / 10.191392 (0.214919) | 0.129841 / 0.680424 (-0.550583) | 0.013736 / 0.534201 (-0.520465) | 0.287281 / 0.579283 (-0.292002) | 0.267403 / 0.434364 (-0.166961) | 0.325319 / 0.540337 (-0.215019) | 0.454207 / 1.386936 (-0.932729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005169 / 0.011353 (-0.006183) | 0.003155 / 0.011008 (-0.007854) | 0.048101 / 0.038508 (0.009593) | 0.048726 / 0.023109 (0.025617) | 0.275768 / 0.275898 (-0.000130) | 0.291209 / 0.323480 (-0.032271) | 0.003984 / 0.007986 (-0.004001) | 0.002586 / 0.004328 (-0.001742) | 0.047751 / 0.004250 (0.043500) | 0.040176 / 0.037052 (0.003124) | 0.279161 / 0.258489 (0.020672) | 0.297371 / 0.293841 (0.003530) | 0.028502 / 0.128546 (-0.100044) | 0.010103 / 0.075646 (-0.065544) | 0.056920 / 0.419271 (-0.362351) | 0.032174 / 0.043533 (-0.011359) | 0.271925 / 0.255139 (0.016786) | 0.289572 / 0.283200 (0.006372) | 0.017981 / 0.141683 (-0.123702) | 1.192972 / 1.452155 (-0.259183) | 1.223231 / 1.492716 (-0.269485) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091363 / 0.018006 (0.073356) | 0.298106 / 0.000490 (0.297616) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021509 / 0.037411 (-0.015902) | 0.068377 / 0.014526 (0.053851) | 0.079798 / 0.176557 (-0.096759) | 0.120546 / 0.737135 (-0.616589) | 0.080602 / 0.296338 (-0.215737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300809 / 0.215209 (0.085600) | 2.921144 / 2.077655 (0.843489) | 1.621096 / 1.504120 (0.116976) | 1.504265 / 1.541195 (-0.036930) | 1.508050 / 1.468490 (0.039560) | 0.554291 / 4.584777 (-4.030486) | 2.418798 / 3.745712 (-1.326914) | 2.768088 / 5.269862 (-2.501773) | 1.728267 / 4.565676 (-2.837410) | 0.062943 / 0.424275 (-0.361332) | 0.004891 / 0.007607 (-0.002716) | 0.350298 / 0.226044 (0.124254) | 3.442782 / 2.268929 (1.173853) | 1.960163 / 55.444624 (-53.484461) | 1.682000 / 6.876477 (-5.194477) | 1.680311 / 2.142072 (-0.461761) | 0.631201 / 4.805227 (-4.174026) | 0.115211 / 6.500664 (-6.385453) | 0.041279 / 0.075469 (-0.034190) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962478 / 1.841788 (-0.879310) | 11.671463 / 8.074308 (3.597155) | 10.640129 / 10.191392 (0.448737) | 0.130649 / 0.680424 (-0.549775) | 0.016169 / 0.534201 (-0.518032) | 0.286894 / 0.579283 (-0.292389) | 0.269319 / 0.434364 (-0.165045) | 0.324512 / 0.540337 (-0.215825) | 0.550874 / 1.386936 (-0.836062) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69f135121beb1616f1d7c7584b317d4e41e21275 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005078 / 0.011353 (-0.006275) | 0.003950 / 0.011008 (-0.007058) | 0.063345 / 0.038508 (0.024837) | 0.054486 / 0.023109 (0.031377) | 0.243213 / 0.275898 (-0.032685) | 0.264079 / 0.323480 (-0.059401) | 0.003922 / 0.007986 (-0.004064) | 0.002631 / 0.004328 (-0.001698) | 0.048660 / 0.004250 (0.044409) | 0.037205 / 0.037052 (0.000153) | 0.244577 / 0.258489 (-0.013912) | 0.276025 / 0.293841 (-0.017816) | 0.027134 / 0.128546 (-0.101412) | 0.010921 / 0.075646 (-0.064726) | 0.209792 / 0.419271 (-0.209479) | 0.035999 / 0.043533 (-0.007534) | 0.245671 / 0.255139 (-0.009468) | 0.262807 / 0.283200 (-0.020393) | 0.018173 / 0.141683 (-0.123510) | 1.084417 / 1.452155 (-0.367738) | 1.148284 / 1.492716 (-0.344432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093128 / 0.018006 (0.075122) | 0.301606 / 0.000490 (0.301117) | 0.000221 / 0.000200 (0.000021) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018718 / 0.037411 (-0.018693) | 0.060819 / 0.014526 (0.046293) | 0.073050 / 0.176557 (-0.103507) | 0.120043 / 0.737135 (-0.617092) | 0.075374 / 0.296338 (-0.220965) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291080 / 0.215209 (0.075871) | 2.808802 / 2.077655 (0.731148) | 1.485686 / 1.504120 (-0.018434) | 1.354356 / 1.541195 (-0.186839) | 1.347863 / 
1.468490 (-0.120627) | 0.571501 / 4.584777 (-4.013276) | 2.377960 / 3.745712 (-1.367752) | 2.768023 / 5.269862 (-2.501839) | 1.754360 / 4.565676 (-2.811316) | 0.063115 / 0.424275 (-0.361160) | 0.004941 / 0.007607 (-0.002666) | 0.338281 / 0.226044 (0.112237) | 3.340587 / 2.268929 (1.071658) | 1.849479 / 55.444624 (-53.595145) | 1.551846 / 6.876477 (-5.324631) | 1.539090 / 2.142072 (-0.602983) | 0.644522 / 4.805227 (-4.160705) | 0.117398 / 6.500664 (-6.383266) | 0.042239 / 0.075469 (-0.033230) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949496 / 1.841788 (-0.892291) | 11.548352 / 8.074308 (3.474044) | 10.478065 / 10.191392 (0.286673) | 0.129534 / 0.680424 (-0.550890) | 0.015378 / 0.534201 (-0.518822) | 0.287221 / 0.579283 (-0.292062) | 0.262944 / 0.434364 (-0.171419) | 0.321727 / 0.540337 (-0.218611) | 0.432354 / 1.386936 (-0.954582) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005256 / 0.011353 (-0.006097) | 0.003491 / 0.011008 (-0.007517) | 0.048647 / 0.038508 (0.010139) | 0.054011 / 0.023109 (0.030901) | 0.271786 / 0.275898 (-0.004112) | 0.291964 / 0.323480 (-0.031516) | 0.004035 / 0.007986 (-0.003950) | 0.002671 / 0.004328 (-0.001657) | 0.048108 / 0.004250 (0.043857) | 0.040421 / 0.037052 (0.003368) | 0.278594 / 0.258489 (0.020105) | 0.300707 / 0.293841 (0.006867) | 0.028924 / 0.128546 (-0.099623) | 0.010600 / 0.075646 (-0.065047) | 0.057649 / 0.419271 (-0.361623) | 0.034221 / 0.043533 (-0.009312) | 0.276692 / 0.255139 (0.021553) | 0.293545 / 0.283200 (0.010345) | 0.017908 / 0.141683 (-0.123775) | 1.135108 / 1.452155 (-0.317047) | 1.190823 / 1.492716 (-0.301893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095243 / 0.018006 (0.077237) | 0.301885 / 0.000490 (0.301396) | 0.000235 / 0.000200 (0.000035) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021561 / 0.037411 (-0.015850) | 0.069054 / 0.014526 (0.054529) | 0.080466 / 0.176557 (-0.096091) | 0.121323 / 0.737135 (-0.615812) | 0.081891 / 0.296338 (-0.214448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293957 / 0.215209 (0.078748) | 2.869035 / 2.077655 (0.791380) | 1.608837 / 1.504120 (0.104717) | 1.440594 / 1.541195 (-0.100601) | 1.464775 / 1.468490 (-0.003715) | 0.565663 / 4.584777 (-4.019114) | 2.439456 / 3.745712 (-1.306256) | 2.794775 / 5.269862 (-2.475087) | 1.750026 / 4.565676 (-2.815651) | 0.063291 / 0.424275 (-0.360984) | 0.004930 / 0.007607 (-0.002677) | 0.347169 / 0.226044 (0.121125) | 3.408260 / 2.268929 (1.139331) | 1.920933 / 55.444624 (-53.523691) | 1.648821 / 6.876477 (-5.227656) | 1.639022 / 2.142072 (-0.503051) | 0.642870 / 4.805227 (-4.162357) | 0.117077 / 6.500664 (-6.383587) | 0.040784 / 0.075469 (-0.034685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993501 / 1.841788 (-0.848287) | 12.012423 / 8.074308 (3.938115) | 10.740932 / 10.191392 (0.549540) | 0.132409 / 0.680424 (-0.548015) | 0.015294 / 0.534201 (-0.518907) | 0.287902 / 0.579283 (-0.291381) | 0.281350 / 0.434364 (-0.153014) | 0.329201 / 0.540337 (-0.211137) | 0.553199 / 1.386936 (-0.833737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ecd3a22c5dec2133491a320515e12956512439eb \"CML watermark\")\n"
] | 2023-11-15T19:06:42 | 2023-12-01T15:37:32 | 2023-12-01T15:31:19 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6426",
"html_url": "https://github.com/huggingface/datasets/pull/6426",
"diff_url": "https://github.com/huggingface/datasets/pull/6426.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6426.patch",
"merged_at": "2023-12-01T15:31:19"
} | While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler approach to the issue than https://github.com/huggingface/datasets/pull/2403 - it tries to delete the temporary cache directory, and if this fails, logs a warning message about using a `delete-temp-cache` CLI command to delete it manually. The problematic references are freed after the session exits, so the CLI command should then succeed.~~ This PR implements `Dataset.__setstate__` to register datasets with temporary cache files for deletion.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6426/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6425/comments | https://api.github.com/repos/huggingface/datasets/issues/6425/events | https://github.com/huggingface/datasets/pull/6425 | 1,995,269,382 | PR_kwDODunzps5fi5ye | 6,425 | Fix deprecation warning when building conda package | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004811 / 0.011353 (-0.006542) | 0.002478 / 0.011008 (-0.008530) | 0.062241 / 0.038508 (0.023733) | 0.031153 / 0.023109 (0.008044) | 0.248896 / 0.275898 (-0.027002) | 0.276860 / 0.323480 (-0.046620) | 0.002934 / 0.007986 (-0.005052) | 0.002428 / 0.004328 (-0.001901) | 0.048507 / 0.004250 (0.044257) | 0.044567 / 0.037052 (0.007515) | 0.253570 / 0.258489 (-0.004919) | 0.280762 / 0.293841 (-0.013079) | 0.023549 / 0.128546 (-0.104997) | 0.006985 / 0.075646 (-0.068661) | 0.206227 / 0.419271 (-0.213044) | 0.054027 / 0.043533 (0.010494) | 0.257655 / 0.255139 (0.002516) | 0.273498 / 0.283200 (-0.009702) | 0.018997 / 0.141683 (-0.122685) | 1.111732 / 1.452155 (-0.340422) | 1.162078 / 1.492716 (-0.330639) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091816 / 0.018006 (0.073810) | 0.299428 / 0.000490 (0.298938) | 0.000211 / 0.000200 (0.000012) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018503 / 0.037411 (-0.018908) | 0.062933 / 0.014526 (0.048407) | 0.076349 / 0.176557 (-0.100208) | 0.123291 / 0.737135 (-0.613844) | 0.077491 / 0.296338 (-0.218847) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280770 / 0.215209 (0.065561) | 2.762185 / 2.077655 (0.684530) | 1.429124 / 1.504120 (-0.074996) | 1.303162 / 1.541195 (-0.238033) | 1.307523 / 
1.468490 (-0.160967) | 0.405593 / 4.584777 (-4.179184) | 2.396992 / 3.745712 (-1.348721) | 2.550968 / 5.269862 (-2.718894) | 1.557358 / 4.565676 (-3.008318) | 0.046149 / 0.424275 (-0.378126) | 0.004808 / 0.007607 (-0.002799) | 0.341870 / 0.226044 (0.115825) | 3.362478 / 2.268929 (1.093550) | 1.786360 / 55.444624 (-53.658264) | 1.483419 / 6.876477 (-5.393058) | 1.493463 / 2.142072 (-0.648609) | 0.470605 / 4.805227 (-4.334623) | 0.098372 / 6.500664 (-6.402292) | 0.041722 / 0.075469 (-0.033748) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938148 / 1.841788 (-0.903640) | 11.219184 / 8.074308 (3.144876) | 10.454439 / 10.191392 (0.263047) | 0.139645 / 0.680424 (-0.540778) | 0.014453 / 0.534201 (-0.519748) | 0.268975 / 0.579283 (-0.310308) | 0.262060 / 0.434364 (-0.172304) | 0.313652 / 0.540337 (-0.226686) | 0.423992 / 1.386936 (-0.962944) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004829 / 0.011353 (-0.006524) | 0.002426 / 0.011008 (-0.008582) | 0.049064 / 0.038508 (0.010555) | 0.049728 / 0.023109 (0.026619) | 0.273263 / 0.275898 (-0.002635) | 0.295645 / 0.323480 (-0.027835) | 0.004156 / 0.007986 (-0.003830) | 0.002397 / 0.004328 (-0.001932) | 0.048902 / 0.004250 (0.044652) | 0.038414 / 0.037052 (0.001362) | 0.276176 / 0.258489 (0.017687) | 0.306844 / 0.293841 (0.013003) | 0.024546 / 0.128546 (-0.104000) | 0.006946 / 0.075646 (-0.068701) | 0.054024 / 0.419271 (-0.365247) | 0.032444 / 0.043533 (-0.011089) | 0.274125 / 0.255139 (0.018986) | 0.293226 / 0.283200 (0.010027) | 0.018003 / 0.141683 (-0.123680) | 1.130402 / 1.452155 (-0.321752) | 1.195969 / 1.492716 (-0.296748) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090043 / 0.018006 (0.072037) | 0.298699 / 0.000490 (0.298209) | 0.000214 / 0.000200 (0.000014) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021284 / 0.037411 (-0.016127) | 0.069954 / 0.014526 (0.055428) | 0.080445 / 0.176557 (-0.096111) | 0.119461 / 0.737135 (-0.617674) | 0.080632 / 0.296338 (-0.215706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302246 / 0.215209 (0.087037) | 2.991936 / 2.077655 (0.914281) | 1.662969 / 1.504120 (0.158850) | 1.533141 / 1.541195 (-0.008054) | 1.583183 / 1.468490 (0.114693) | 0.402864 / 4.584777 (-4.181913) | 2.424119 / 3.745712 (-1.321593) | 2.489558 / 5.269862 (-2.780303) | 1.502196 / 4.565676 (-3.063481) | 0.045980 / 0.424275 (-0.378295) | 0.004768 / 0.007607 (-0.002839) | 0.356089 / 0.226044 (0.130044) | 3.481333 / 2.268929 (1.212404) | 2.009713 / 55.444624 (-53.434912) | 1.730021 / 6.876477 (-5.146455) | 1.704656 / 2.142072 (-0.437416) | 0.470832 / 4.805227 (-4.334395) | 0.097473 / 6.500664 (-6.403191) | 0.040437 / 0.075469 (-0.035032) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981497 / 1.841788 (-0.860291) | 11.827242 / 8.074308 (3.752933) | 10.888324 / 10.191392 (0.696932) | 0.129249 / 0.680424 (-0.551174) | 0.015812 / 0.534201 (-0.518389) | 0.269657 / 0.579283 (-0.309626) | 0.275585 / 0.434364 (-0.158779) | 0.305698 / 0.540337 (-0.234639) | 0.411497 / 1.386936 (-0.975439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bcde318293af04fd5044b42ddfcb650f9b092d45 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6425). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005402 / 0.011353 (-0.005951) | 0.003955 / 0.011008 (-0.007053) | 0.064096 / 0.038508 (0.025588) | 0.062330 / 0.023109 (0.039221) | 0.254729 / 0.275898 (-0.021169) | 0.276259 / 0.323480 (-0.047221) | 0.003052 / 0.007986 (-0.004934) | 0.003474 / 0.004328 (-0.000854) | 0.048938 / 0.004250 (0.044687) | 0.038635 / 0.037052 (0.001583) | 0.267953 / 0.258489 (0.009464) | 0.293725 / 0.293841 (-0.000116) | 0.028266 / 0.128546 (-0.100280) | 0.011188 / 0.075646 (-0.064458) | 0.221204 / 0.419271 (-0.198067) | 0.036549 / 0.043533 (-0.006984) | 0.252484 / 0.255139 (-0.002655) | 0.273855 / 0.283200 (-0.009345) | 0.017975 / 0.141683 (-0.123708) | 1.112265 / 1.452155 (-0.339890) | 1.185647 / 1.492716 (-0.307069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096223 / 0.018006 (0.078217) | 0.305010 / 0.000490 (0.304520) | 0.000227 / 0.000200 (0.000027) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018924 / 0.037411 (-0.018488) | 0.061910 / 0.014526 (0.047384) | 0.073751 / 0.176557 (-0.102806) | 0.120956 / 0.737135 (-0.616179) | 0.075090 / 0.296338 (-0.221249) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293277 / 0.215209 (0.078068) | 2.867468 / 2.077655 (0.789813) | 1.518218 / 1.504120 (0.014098) | 1.393741 / 1.541195 (-0.147454) | 1.424979 / 
1.468490 (-0.043511) | 0.579766 / 4.584777 (-4.005011) | 2.434951 / 3.745712 (-1.310761) | 2.909924 / 5.269862 (-2.359937) | 1.838123 / 4.565676 (-2.727554) | 0.064260 / 0.424275 (-0.360015) | 0.005169 / 0.007607 (-0.002438) | 0.348228 / 0.226044 (0.122184) | 3.447558 / 2.268929 (1.178629) | 1.884988 / 55.444624 (-53.559636) | 1.570921 / 6.876477 (-5.305556) | 1.646341 / 2.142072 (-0.495732) | 0.660189 / 4.805227 (-4.145038) | 0.120026 / 6.500664 (-6.380638) | 0.043715 / 0.075469 (-0.031754) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953253 / 1.841788 (-0.888535) | 12.576112 / 8.074308 (4.501804) | 11.132637 / 10.191392 (0.941245) | 0.132870 / 0.680424 (-0.547553) | 0.014720 / 0.534201 (-0.519481) | 0.291866 / 0.579283 (-0.287417) | 0.265456 / 0.434364 (-0.168908) | 0.338629 / 0.540337 (-0.201709) | 0.456323 / 1.386936 (-0.930613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005644 / 0.011353 (-0.005709) | 0.003624 / 0.011008 (-0.007384) | 0.049043 / 0.038508 (0.010535) | 0.059572 / 0.023109 (0.036463) | 0.277159 / 0.275898 (0.001261) | 0.303933 / 0.323480 (-0.019547) | 0.004294 / 0.007986 (-0.003692) | 0.002744 / 0.004328 (-0.001584) | 0.048187 / 0.004250 (0.043937) | 0.043655 / 0.037052 (0.006603) | 0.282441 / 0.258489 (0.023952) | 0.317130 / 0.293841 (0.023289) | 0.030159 / 0.128546 (-0.098387) | 0.011300 / 0.075646 (-0.064346) | 0.057451 / 0.419271 (-0.361821) | 0.033666 / 0.043533 (-0.009866) | 0.274554 / 0.255139 (0.019415) | 0.292470 / 0.283200 (0.009270) | 0.018757 / 0.141683 (-0.122926) | 1.170094 / 1.452155 (-0.282060) | 1.244626 / 1.492716 (-0.248090) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094920 / 0.018006 (0.076914) | 0.304156 / 0.000490 (0.303666) | 0.000226 / 0.000200 (0.000026) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022297 / 0.037411 (-0.015115) | 0.068908 / 0.014526 (0.054383) | 0.081520 / 0.176557 (-0.095037) | 0.122422 / 0.737135 (-0.614714) | 0.082533 / 0.296338 (-0.213806) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296080 / 0.215209 (0.080871) | 2.883120 / 2.077655 (0.805465) | 1.607950 / 1.504120 (0.103830) | 1.496191 / 1.541195 (-0.045004) | 1.520549 / 1.468490 (0.052059) | 0.562081 / 4.584777 (-4.022696) | 2.453447 / 3.745712 (-1.292265) | 2.943676 / 5.269862 (-2.326186) | 1.820581 / 4.565676 (-2.745096) | 0.064518 / 0.424275 (-0.359757) | 0.005406 / 0.007607 (-0.002201) | 0.349022 / 0.226044 (0.122978) | 3.472117 / 2.268929 (1.203188) | 2.006928 / 55.444624 (-53.437696) | 1.704800 / 6.876477 (-5.171677) | 1.719025 / 2.142072 (-0.423048) | 0.643719 / 4.805227 (-4.161508) | 0.117723 / 6.500664 (-6.382941) | 0.043158 / 0.075469 (-0.032311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981229 / 1.841788 (-0.860559) | 12.637620 / 8.074308 (4.563312) | 10.848775 / 10.191392 (0.657383) | 0.143981 / 0.680424 (-0.536443) | 0.015950 / 0.534201 (-0.518251) | 0.287542 / 0.579283 (-0.291741) | 0.278989 / 0.434364 (-0.155375) | 0.331786 / 0.540337 (-0.208552) | 0.607238 / 1.386936 (-0.779698) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06fb2f9973962ee97d1af7888209819b8ba7de37 \"CML watermark\")\n"
] | 2023-11-15T18:00:11 | 2023-12-13T14:22:30 | 2023-12-13T14:16:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6425",
"html_url": "https://github.com/huggingface/datasets/pull/6425",
"diff_url": "https://github.com/huggingface/datasets/pull/6425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6425.patch",
"merged_at": "2023-12-13T14:16:00"
} | When building/releasing the conda package, we get this deprecation warning:
```
/usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead.
```
This PR fixes the deprecation warning by using `conda build` instead. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6425/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6424/comments | https://api.github.com/repos/huggingface/datasets/issues/6424/events | https://github.com/huggingface/datasets/pull/6424 | 1,995,224,516 | PR_kwDODunzps5fiwDC | 6,424 | [docs] troubleshooting guide | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6424). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005323 / 0.011353 (-0.006030) | 0.003560 / 0.011008 (-0.007448) | 0.062572 / 0.038508 (0.024064) | 0.049549 / 0.023109 (0.026440) | 0.236522 / 0.275898 (-0.039376) | 0.260601 / 0.323480 (-0.062879) | 0.002887 / 0.007986 (-0.005099) | 0.003225 / 0.004328 (-0.001103) | 0.048210 / 0.004250 (0.043960) | 0.038783 / 0.037052 (0.001731) | 0.242506 / 0.258489 (-0.015983) | 0.273906 / 0.293841 (-0.019935) | 0.027202 / 0.128546 (-0.101344) | 0.010577 / 0.075646 (-0.065069) | 0.211669 / 0.419271 (-0.207603) | 0.035727 / 0.043533 (-0.007806) | 0.242303 / 0.255139 (-0.012836) | 0.260468 / 0.283200 (-0.022732) | 0.020109 / 0.141683 (-0.121573) | 1.089603 / 1.452155 (-0.362552) | 1.149899 / 1.492716 (-0.342817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088768 / 0.018006 (0.070761) | 0.300300 / 0.000490 (0.299810) | 0.000212 / 0.000200 (0.000013) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018758 / 0.037411 (-0.018653) | 0.060097 / 0.014526 (0.045571) | 0.074060 / 0.176557 (-0.102496) | 0.119977 / 0.737135 (-0.617158) | 0.075298 / 0.296338 (-0.221040) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278640 / 0.215209 (0.063431) | 2.715574 / 2.077655 (0.637919) | 1.466644 / 1.504120 (-0.037476) | 1.344470 / 1.541195 (-0.196725) | 1.386984 / 
1.468490 (-0.081506) | 0.575796 / 4.584777 (-4.008981) | 2.392324 / 3.745712 (-1.353388) | 2.826284 / 5.269862 (-2.443578) | 1.758997 / 4.565676 (-2.806679) | 0.062474 / 0.424275 (-0.361801) | 0.004930 / 0.007607 (-0.002678) | 0.332595 / 0.226044 (0.106551) | 3.240076 / 2.268929 (0.971147) | 1.785283 / 55.444624 (-53.659341) | 1.527594 / 6.876477 (-5.348882) | 1.562840 / 2.142072 (-0.579233) | 0.655474 / 4.805227 (-4.149754) | 0.116682 / 6.500664 (-6.383983) | 0.042664 / 0.075469 (-0.032805) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.936306 / 1.841788 (-0.905481) | 11.561239 / 8.074308 (3.486931) | 10.341918 / 10.191392 (0.150526) | 0.140602 / 0.680424 (-0.539822) | 0.013857 / 0.534201 (-0.520344) | 0.294241 / 0.579283 (-0.285042) | 0.268359 / 0.434364 (-0.166005) | 0.326344 / 0.540337 (-0.213993) | 0.430936 / 1.386936 (-0.956000) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005197 / 0.011353 (-0.006156) | 0.003543 / 0.011008 (-0.007465) | 0.049051 / 0.038508 (0.010542) | 0.052742 / 0.023109 (0.029633) | 0.277032 / 0.275898 (0.001134) | 0.300799 / 0.323480 (-0.022681) | 0.003922 / 0.007986 (-0.004064) | 0.002573 / 0.004328 (-0.001755) | 0.047270 / 0.004250 (0.043019) | 0.039782 / 0.037052 (0.002730) | 0.282780 / 0.258489 (0.024291) | 0.308858 / 0.293841 (0.015017) | 0.028641 / 0.128546 (-0.099905) | 0.010516 / 0.075646 (-0.065131) | 0.056367 / 0.419271 (-0.362904) | 0.032346 / 0.043533 (-0.011186) | 0.277591 / 0.255139 (0.022452) | 0.298539 / 0.283200 (0.015339) | 0.018168 / 0.141683 (-0.123515) | 1.104331 / 1.452155 (-0.347823) | 1.187691 / 1.492716 (-0.305025) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089511 / 0.018006 (0.071505) | 0.301309 / 0.000490 (0.300820) | 0.000213 / 0.000200 (0.000013) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021466 / 0.037411 (-0.015945) | 0.069917 / 0.014526 (0.055391) | 0.081105 / 0.176557 (-0.095452) | 0.119619 / 0.737135 (-0.617516) | 0.083928 / 0.296338 (-0.212410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296471 / 0.215209 (0.081262) | 2.912139 / 2.077655 (0.834484) | 1.588861 / 1.504120 (0.084741) | 1.452148 / 1.541195 (-0.089047) | 1.475388 / 1.468490 (0.006898) | 0.555779 / 4.584777 (-4.028998) | 2.425599 / 3.745712 (-1.320113) | 2.792848 / 5.269862 (-2.477013) | 1.718757 / 4.565676 (-2.846919) | 0.077687 / 0.424275 (-0.346588) | 0.007522 / 0.007607 (-0.000085) | 0.348254 / 0.226044 (0.122210) | 3.439315 / 2.268929 (1.170386) | 1.925907 / 55.444624 (-53.518717) | 1.646163 / 6.876477 (-5.230314) | 1.662148 / 2.142072 (-0.479924) | 0.637277 / 4.805227 (-4.167950) | 0.116159 / 6.500664 (-6.384505) | 0.041518 / 0.075469 (-0.033952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966358 / 1.841788 (-0.875430) | 12.125201 / 8.074308 (4.050892) | 10.629939 / 10.191392 (0.438547) | 0.132439 / 0.680424 (-0.547984) | 0.015622 / 0.534201 (-0.518579) | 0.288824 / 0.579283 (-0.290459) | 0.277634 / 0.434364 (-0.156730) | 0.327200 / 0.540337 (-0.213138) | 0.549679 / 1.386936 (-0.837257) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0850f663f5498e0f296461e99a345dfd65e3358f \"CML watermark\")\n"
] | 2023-11-15T17:28:14 | 2023-11-30T17:29:55 | 2023-11-30T17:23:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6424",
"html_url": "https://github.com/huggingface/datasets/pull/6424",
"diff_url": "https://github.com/huggingface/datasets/pull/6424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6424.patch",
"merged_at": "2023-11-30T17:23:46"
} | Hi all! This is a PR adding a troubleshooting guide for the Datasets docs.
I went through the library's GitHub Issues and Forum questions and identified a few issues that come up often enough to be worth covering in the troubleshooting guide. These are:
- creating a dataset from a folder and not following the required format
- authentication issues when using `push_to_hub`
- `Too Many Requests` with `push_to_hub`
- Pickling issues when using `Dataset.from_generator()`
There's also a section on asking for help. Please let me know if there are other common issues or advice that we can include here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6424/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6424/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6423/comments | https://api.github.com/repos/huggingface/datasets/issues/6423/events | https://github.com/huggingface/datasets/pull/6423 | 1,994,946,847 | PR_kwDODunzps5fhzD6 | 6,423 | Fix conda release by adding pyarrow-hotfix dependency | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004476 / 0.011353 (-0.006877) | 0.002691 / 0.011008 (-0.008317) | 0.061400 / 0.038508 (0.022892) | 0.030096 / 0.023109 (0.006986) | 0.279868 / 0.275898 (0.003970) | 0.310320 / 0.323480 (-0.013159) | 0.003873 / 0.007986 (-0.004112) | 0.002394 / 0.004328 (-0.001935) | 0.048307 / 0.004250 (0.044056) | 0.043326 / 0.037052 (0.006273) | 0.288256 / 0.258489 (0.029767) | 0.311449 / 0.293841 (0.017609) | 0.022970 / 0.128546 (-0.105576) | 0.006714 / 0.075646 (-0.068932) | 0.201656 / 0.419271 (-0.217615) | 0.052811 / 0.043533 (0.009278) | 0.285123 / 0.255139 (0.029984) | 0.301495 / 0.283200 (0.018295) | 0.017531 / 0.141683 (-0.124152) | 1.097660 / 1.452155 (-0.354494) | 1.161986 / 1.492716 (-0.330731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089223 / 0.018006 (0.071217) | 0.297815 / 0.000490 (0.297326) | 0.000205 / 0.000200 (0.000005) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018679 / 0.037411 (-0.018732) | 0.062742 / 0.014526 (0.048216) | 0.072869 / 0.176557 (-0.103687) | 0.120730 / 0.737135 (-0.616406) | 0.074526 / 0.296338 (-0.221813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299977 / 0.215209 (0.084768) | 2.921029 / 2.077655 (0.843375) | 1.632283 / 1.504120 (0.128163) | 1.508008 / 1.541195 (-0.033187) | 1.513967 / 
1.468490 (0.045477) | 0.403056 / 4.584777 (-4.181721) | 2.340011 / 3.745712 (-1.405701) | 2.552319 / 5.269862 (-2.717543) | 1.549741 / 4.565676 (-3.015935) | 0.046303 / 0.424275 (-0.377972) | 0.004768 / 0.007607 (-0.002839) | 0.356921 / 0.226044 (0.130877) | 3.506410 / 2.268929 (1.237482) | 1.975394 / 55.444624 (-53.469230) | 1.688683 / 6.876477 (-5.187794) | 1.715502 / 2.142072 (-0.426571) | 0.471016 / 4.805227 (-4.334212) | 0.099552 / 6.500664 (-6.401112) | 0.042095 / 0.075469 (-0.033374) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955784 / 1.841788 (-0.886004) | 11.191802 / 8.074308 (3.117494) | 10.127818 / 10.191392 (-0.063574) | 0.141225 / 0.680424 (-0.539199) | 0.014486 / 0.534201 (-0.519715) | 0.267204 / 0.579283 (-0.312079) | 0.289108 / 0.434364 (-0.145256) | 0.309458 / 0.540337 (-0.230880) | 0.422802 / 1.386936 (-0.964134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004797 / 0.011353 (-0.006556) | 0.002907 / 0.011008 (-0.008101) | 0.047666 / 0.038508 (0.009158) | 0.051183 / 0.023109 (0.028074) | 0.266315 / 0.275898 (-0.009583) | 0.286429 / 0.323480 (-0.037051) | 0.003954 / 0.007986 (-0.004031) | 0.002041 / 0.004328 (-0.002288) | 0.047652 / 0.004250 (0.043401) | 0.038211 / 0.037052 (0.001158) | 0.272210 / 0.258489 (0.013721) | 0.299425 / 0.293841 (0.005584) | 0.024266 / 0.128546 (-0.104280) | 0.006747 / 0.075646 (-0.068900) | 0.052959 / 0.419271 (-0.366312) | 0.032094 / 0.043533 (-0.011439) | 0.265677 / 0.255139 (0.010538) | 0.285373 / 0.283200 (0.002174) | 0.017577 / 0.141683 (-0.124106) | 1.114514 / 1.452155 (-0.337640) | 1.212970 / 1.492716 (-0.279746) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088347 / 0.018006 (0.070341) | 0.296678 / 0.000490 (0.296188) | 0.000209 / 0.000200 (0.000009) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021159 / 0.037411 (-0.016253) | 0.069886 / 0.014526 (0.055360) | 0.079832 / 0.176557 (-0.096725) | 0.115512 / 0.737135 (-0.621623) | 0.081600 / 0.296338 (-0.214739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292659 / 0.215209 (0.077450) | 2.872556 / 2.077655 (0.794901) | 1.573017 / 1.504120 (0.068897) | 1.445122 / 1.541195 (-0.096072) | 1.485584 / 1.468490 (0.017094) | 0.388638 / 4.584777 (-4.196139) | 2.434847 / 3.745712 (-1.310865) | 2.518167 / 5.269862 (-2.751695) | 1.503000 / 4.565676 (-3.062676) | 0.045123 / 0.424275 (-0.379153) | 0.004778 / 0.007607 (-0.002829) | 0.347955 / 0.226044 (0.121910) | 3.384819 / 2.268929 (1.115891) | 1.920185 / 55.444624 (-53.524439) | 1.646910 / 6.876477 (-5.229567) | 1.638092 / 2.142072 (-0.503980) | 0.450535 / 4.805227 (-4.354692) | 0.095301 / 6.500664 (-6.405363) | 0.040275 / 0.075469 (-0.035194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956088 / 1.841788 (-0.885700) | 11.776642 / 8.074308 (3.702334) | 10.651063 / 10.191392 (0.459671) | 0.127079 / 0.680424 (-0.553345) | 0.015080 / 0.534201 (-0.519121) | 0.273737 / 0.579283 (-0.305546) | 0.271434 / 0.434364 (-0.162929) | 0.308448 / 0.540337 (-0.231889) | 0.412467 / 1.386936 (-0.974469) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#af014830363401a0166a2b8435ca2f863cb468d4 \"CML watermark\")\n",
"Once this PR is merged, we should upload the missing version to conda.\r\n\r\n@lhoestq you did this in the past. If you tell me your approach (I see a tag called `VERSION`...), I could do it myself.",
"Maybe open a PR against the 2.14 branch and update `release-conda.yml` like this ?\r\n\r\n```diff\r\n- on:\r\n- push:\r\n- tags:\r\n- - \"[0-9]+.[0-9]+.[0-9]+*\"\r\n+ on: push\r\n```\r\n\r\nand then set it back to normal after the release is done",
"After having cherry-picked the commit in this PR, I have released the conda package. See: \r\n- https://github.com/huggingface/datasets/actions/runs/6880182419/job/18713812449\r\n- https://anaconda.org/HuggingFace/datasets/files?version=2.14.7\r\n\r\nI am merging this PR.\r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004993 / 0.011353 (-0.006360) | 0.002964 / 0.011008 (-0.008044) | 0.062588 / 0.038508 (0.024080) | 0.030794 / 0.023109 (0.007685) | 0.234856 / 0.275898 (-0.041042) | 0.264807 / 0.323480 (-0.058673) | 0.003139 / 0.007986 (-0.004847) | 0.002498 / 0.004328 (-0.001831) | 0.048058 / 0.004250 (0.043807) | 0.048349 / 0.037052 (0.011296) | 0.238210 / 0.258489 (-0.020279) | 0.278144 / 0.293841 (-0.015697) | 0.023219 / 0.128546 (-0.105327) | 0.007296 / 0.075646 (-0.068351) | 0.203263 / 0.419271 (-0.216008) | 0.058844 / 0.043533 (0.015311) | 0.246330 / 0.255139 (-0.008809) | 0.264550 / 0.283200 (-0.018649) | 0.018580 / 0.141683 (-0.123103) | 1.084163 / 1.452155 (-0.367992) | 1.154891 / 1.492716 (-0.337825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092393 / 0.018006 (0.074387) | 0.300545 / 0.000490 (0.300055) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018648 / 0.037411 (-0.018763) | 0.063151 / 0.014526 (0.048625) | 0.074206 / 0.176557 (-0.102350) | 0.120929 / 0.737135 (-0.616207) | 0.075970 / 0.296338 (-0.220368) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278489 / 0.215209 (0.063279) | 2.664804 / 2.077655 (0.587150) | 1.433040 / 1.504120 (-0.071080) | 1.321416 / 1.541195 (-0.219779) | 1.320964 / 
1.468490 (-0.147526) | 0.401289 / 4.584777 (-4.183488) | 2.365310 / 3.745712 (-1.380402) | 2.635798 / 5.269862 (-2.634063) | 1.584384 / 4.565676 (-2.981293) | 0.045675 / 0.424275 (-0.378600) | 0.004854 / 0.007607 (-0.002753) | 0.337592 / 0.226044 (0.111548) | 3.330462 / 2.268929 (1.061534) | 1.794507 / 55.444624 (-53.650117) | 1.531284 / 6.876477 (-5.345193) | 1.507165 / 2.142072 (-0.634908) | 0.478622 / 4.805227 (-4.326606) | 0.099105 / 6.500664 (-6.401560) | 0.041575 / 0.075469 (-0.033894) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941790 / 1.841788 (-0.899997) | 11.609871 / 8.074308 (3.535563) | 10.770869 / 10.191392 (0.579477) | 0.138931 / 0.680424 (-0.541493) | 0.014406 / 0.534201 (-0.519795) | 0.269681 / 0.579283 (-0.309602) | 0.260556 / 0.434364 (-0.173808) | 0.308244 / 0.540337 (-0.232093) | 0.428867 / 1.386936 (-0.958069) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004803 / 0.011353 (-0.006550) | 0.003263 / 0.011008 (-0.007745) | 0.049143 / 0.038508 (0.010635) | 0.052033 / 0.023109 (0.028924) | 0.267815 / 0.275898 (-0.008083) | 0.288733 / 0.323480 (-0.034747) | 0.004159 / 0.007986 (-0.003826) | 0.002407 / 0.004328 (-0.001921) | 0.048978 / 0.004250 (0.044728) | 0.038994 / 0.037052 (0.001942) | 0.264028 / 0.258489 (0.005539) | 0.303930 / 0.293841 (0.010090) | 0.024283 / 0.128546 (-0.104263) | 0.007201 / 0.075646 (-0.068446) | 0.053810 / 0.419271 (-0.365461) | 0.032611 / 0.043533 (-0.010922) | 0.266730 / 0.255139 (0.011591) | 0.281564 / 0.283200 (-0.001635) | 0.018720 / 0.141683 (-0.122963) | 1.140676 / 1.452155 (-0.311479) | 1.206604 / 1.492716 (-0.286113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.109390 / 0.018006 (0.091384) | 0.313783 / 0.000490 (0.313294) | 0.000228 / 0.000200 (0.000028) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021228 / 0.037411 (-0.016183) | 0.070505 / 0.014526 (0.055979) | 0.081961 / 0.176557 (-0.094595) | 0.119943 / 0.737135 (-0.617193) | 0.083582 / 0.296338 (-0.212757) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295702 / 0.215209 (0.080493) | 2.886865 / 2.077655 (0.809210) | 1.583206 / 1.504120 (0.079086) | 1.451129 / 1.541195 (-0.090065) | 1.486253 / 1.468490 (0.017763) | 0.403207 / 4.584777 (-4.181570) | 2.408889 / 3.745712 (-1.336824) | 2.578480 / 5.269862 (-2.691381) | 1.533066 / 4.565676 (-3.032610) | 0.046075 / 0.424275 (-0.378200) | 0.004877 / 0.007607 (-0.002730) | 0.345995 / 0.226044 (0.119950) | 3.377039 / 2.268929 (1.108110) | 1.944614 / 55.444624 (-53.500010) | 1.677691 / 6.876477 (-5.198786) | 1.672828 / 2.142072 (-0.469244) | 0.468426 / 4.805227 (-4.336802) | 0.097290 / 6.500664 (-6.403374) | 0.040695 / 0.075469 (-0.034774) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965778 / 1.841788 (-0.876010) | 12.092639 / 8.074308 (4.018331) | 11.210968 / 10.191392 (1.019576) | 0.131212 / 0.680424 (-0.549212) | 0.015865 / 0.534201 (-0.518336) | 0.285702 / 0.579283 (-0.293581) | 0.278319 / 0.434364 (-0.156045) | 0.336063 / 0.540337 (-0.204275) | 0.426265 / 1.386936 (-0.960671) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d122b3ddc67705cc2b622bcbd79de9ff943a5742 \"CML watermark\")\n"
] | 2023-11-15T14:57:12 | 2023-11-15T17:15:33 | 2023-11-15T17:09:24 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6423",
"html_url": "https://github.com/huggingface/datasets/pull/6423",
"diff_url": "https://github.com/huggingface/datasets/pull/6423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6423.patch",
"merged_at": "2023-11-15T17:09:24"
} | Fix the conda release by adding the `pyarrow-hotfix` dependency (a short import check illustrating the fix follows this record).
Note that the conda release failed for the latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723
```
Traceback (most recent call last):
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/test_tmp/run_test.py", line 2, in <module>
import datasets
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 67, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import Features, Image, Value
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/__init__.py", line 18, in <module>
from .features import Array2D, Array3D, Array4D, Array5D, ClassLabel, Features, Sequence, Value
File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/features.py", line 34, in <module>
import pyarrow_hotfix # noqa: F401 # to fix vulnerability on pyarrow<14.0.1
^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'pyarrow_hotfix'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6423/timeline | null | null | true |
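A minimal sketch, assuming only that `pyarrow-hotfix` is installable in the conda environment, of how one might check that the failing import from the traceback above now succeeds; nothing here is taken from the repository's release tooling, and the printed versions are incidental.
```python
# Minimal check mirroring the import chain from the traceback above:
# pyarrow_hotfix must be importable for `import datasets` to succeed.
import pyarrow
import pyarrow_hotfix  # noqa: F401  # patches the deserialization vulnerability on pyarrow<14.0.1

import datasets  # previously raised ModuleNotFoundError in the conda test step

print(pyarrow.__version__, datasets.__version__)
```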
https://api.github.com/repos/huggingface/datasets/issues/6422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6422/comments | https://api.github.com/repos/huggingface/datasets/issues/6422/events | https://github.com/huggingface/datasets/issues/6422 | 1,994,579,267 | I_kwDODunzps524t1D | 6,422 | Allow to choose the `writer_batch_size` when using `save_to_disk` | {
"login": "NathanGodey",
"id": 38216711,
"node_id": "MDQ6VXNlcjM4MjE2NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/38216711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NathanGodey",
"html_url": "https://github.com/NathanGodey",
"followers_url": "https://api.github.com/users/NathanGodey/followers",
"following_url": "https://api.github.com/users/NathanGodey/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanGodey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NathanGodey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanGodey/subscriptions",
"organizations_url": "https://api.github.com/users/NathanGodey/orgs",
"repos_url": "https://api.github.com/users/NathanGodey/repos",
"events_url": "https://api.github.com/users/NathanGodey/events{/privacy}",
"received_events_url": "https://api.github.com/users/NathanGodey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```",
"Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/blob/2.14.5/src/datasets/arrow_dataset.py`, every function involved (`select`, `shard`, ...) has a default hardcoded batch size of 1000, as such:\r\n```python\r\ndef select(\r\n self,\r\n indices: Iterable,\r\n keep_in_memory: bool = False,\r\n indices_cache_file_name: Optional[str] = None,\r\n writer_batch_size: Optional[int] = 1000,\r\n new_fingerprint: Optional[str] = None,\r\n ) -> \"Dataset\":\r\n...\r\n```\r\nThen, `ArrowWriter` is instantiated with the specified `writer_batch_size`. In `ArrowWriter`, `writer_batch_size` is set to `datasets.config.DEFAULT_MAX_BATCH_SIZE` if it is `None`(https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L345C14-L345C31). However, in our case, it is already set to 1000 by \"parent\" methods, so it won't happen.\r\n\r\nNevertheless, due to this: \r\n```python\r\ndef _save_to_disk_single(job_id: int, shard: \"Dataset\", fpath: str, storage_options: Optional[dict]):\r\n batch_size = config.DEFAULT_MAX_BATCH_SIZE\r\n...\r\n```\r\nit seems to work. I will use it as such, but it should maybe be added to documentation? And maybe improved in next versions?"
] | 2023-11-15T11:18:34 | 2023-11-16T10:00:21 | null | NONE | null | null | null | ### Feature request
Add a batch size argument to `save_to_disk` that would be passed down to `shard` and the other methods involved.
### Motivation
The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, so the default value (1000) is used implicitly. This can saturate RAM when using many processes on long text sequences or other modalities, or under specific I/O configurations (a minimal sketch of the current workaround follows this record).
### Your contribution
I would be glad to submit a PR, as long as it does not require extensive refactoring of the tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6422/timeline | null | null | false |
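A minimal sketch of the workaround discussed in the comments of the record above: lowering the global Arrow write batch size via `datasets.config.DEFAULT_MAX_BATCH_SIZE` before calling `save_to_disk`. The dataset contents, the value 100, and `num_proc=4` are illustrative placeholders rather than recommendations.
```python
# Workaround sketch: shrink the global write batch size so that save_to_disk
# buffers fewer rows per Arrow write (the default is 1000).
import datasets
from datasets import Dataset

datasets.config.DEFAULT_MAX_BATCH_SIZE = 100  # illustrative value

ds = Dataset.from_dict({"text": ["a very long document"] * 10_000})
ds.save_to_disk("my_dataset", num_proc=4)  # num_proc shown only for context
```
If the requested `writer_batch_size` argument were added to `save_to_disk`, this global setting would no longer be needed for that call.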
https://api.github.com/repos/huggingface/datasets/issues/6421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6421/comments | https://api.github.com/repos/huggingface/datasets/issues/6421/events | https://github.com/huggingface/datasets/pull/6421 | 1,994,451,553 | PR_kwDODunzps5fgG1h | 6,421 | Add pyarrow-hotfix to release docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004755 / 0.011353 (-0.006598) | 0.002683 / 0.011008 (-0.008325) | 0.061701 / 0.038508 (0.023193) | 0.030123 / 0.023109 (0.007013) | 0.238186 / 0.275898 (-0.037712) | 0.266570 / 0.323480 (-0.056910) | 0.002898 / 0.007986 (-0.005088) | 0.002381 / 0.004328 (-0.001948) | 0.048033 / 0.004250 (0.043782) | 0.044529 / 0.037052 (0.007477) | 0.246728 / 0.258489 (-0.011761) | 0.302066 / 0.293841 (0.008225) | 0.024008 / 0.128546 (-0.104539) | 0.006626 / 0.075646 (-0.069020) | 0.202000 / 0.419271 (-0.217272) | 0.056492 / 0.043533 (0.012959) | 0.243417 / 0.255139 (-0.011722) | 0.263947 / 0.283200 (-0.019253) | 0.020481 / 0.141683 (-0.121202) | 1.130635 / 1.452155 (-0.321520) | 1.180570 / 1.492716 (-0.312146) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095541 / 0.018006 (0.077535) | 0.306152 / 0.000490 (0.305662) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018593 / 0.037411 (-0.018818) | 0.063029 / 0.014526 (0.048503) | 0.074312 / 0.176557 (-0.102245) | 0.119882 / 0.737135 (-0.617254) | 0.074066 / 0.296338 (-0.222273) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275409 / 0.215209 (0.060200) | 2.727061 / 2.077655 (0.649407) | 1.415632 / 1.504120 (-0.088488) | 1.294922 / 1.541195 (-0.246273) | 1.341636 / 
1.468490 (-0.126854) | 0.403250 / 4.584777 (-4.181527) | 2.384657 / 3.745712 (-1.361055) | 2.604131 / 5.269862 (-2.665731) | 1.558888 / 4.565676 (-3.006789) | 0.046008 / 0.424275 (-0.378267) | 0.004819 / 0.007607 (-0.002789) | 0.331046 / 0.226044 (0.105002) | 3.340950 / 2.268929 (1.072021) | 1.801077 / 55.444624 (-53.643548) | 1.479162 / 6.876477 (-5.397315) | 1.503713 / 2.142072 (-0.638359) | 0.474931 / 4.805227 (-4.330296) | 0.101869 / 6.500664 (-6.398795) | 0.041946 / 0.075469 (-0.033523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.955641 / 1.841788 (-0.886147) | 11.441032 / 8.074308 (3.366724) | 10.267731 / 10.191392 (0.076339) | 0.128735 / 0.680424 (-0.551689) | 0.013942 / 0.534201 (-0.520259) | 0.266620 / 0.579283 (-0.312663) | 0.262334 / 0.434364 (-0.172029) | 0.302713 / 0.540337 (-0.237624) | 0.430323 / 1.386936 (-0.956613) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004670 / 0.011353 (-0.006683) | 0.002671 / 0.011008 (-0.008338) | 0.048949 / 0.038508 (0.010441) | 0.052520 / 0.023109 (0.029411) | 0.272614 / 0.275898 (-0.003284) | 0.292618 / 0.323480 (-0.030862) | 0.004016 / 0.007986 (-0.003969) | 0.002430 / 0.004328 (-0.001899) | 0.048313 / 0.004250 (0.044063) | 0.038647 / 0.037052 (0.001595) | 0.279893 / 0.258489 (0.021404) | 0.305371 / 0.293841 (0.011530) | 0.023710 / 0.128546 (-0.104836) | 0.006999 / 0.075646 (-0.068648) | 0.053315 / 0.419271 (-0.365956) | 0.032417 / 0.043533 (-0.011115) | 0.272066 / 0.255139 (0.016927) | 0.291717 / 0.283200 (0.008518) | 0.018127 / 0.141683 (-0.123556) | 1.173611 / 1.452155 (-0.278544) | 1.183659 / 1.492716 (-0.309057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094831 / 0.018006 (0.076824) | 0.304911 / 0.000490 (0.304421) | 0.000225 / 0.000200 (0.000025) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020948 / 0.037411 (-0.016463) | 0.070255 / 0.014526 (0.055729) | 0.081371 / 0.176557 (-0.095186) | 0.118932 / 0.737135 (-0.618203) | 0.082207 / 0.296338 (-0.214132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294067 / 0.215209 (0.078858) | 2.856981 / 2.077655 (0.779326) | 1.598392 / 1.504120 (0.094273) | 1.479093 / 1.541195 (-0.062102) | 1.509495 / 1.468490 (0.041005) | 0.396303 / 4.584777 (-4.188473) | 2.429077 / 3.745712 (-1.316635) | 2.525037 / 5.269862 (-2.744824) | 1.503332 / 4.565676 (-3.062345) | 0.046191 / 0.424275 (-0.378084) | 0.004858 / 0.007607 (-0.002750) | 0.349528 / 0.226044 (0.123484) | 3.401451 / 2.268929 (1.132522) | 1.989613 / 55.444624 (-53.455012) | 1.664528 / 6.876477 (-5.211949) | 1.669076 / 2.142072 (-0.472997) | 0.467090 / 4.805227 (-4.338137) | 0.098137 / 6.500664 (-6.402527) | 0.040448 / 0.075469 (-0.035021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969578 / 1.841788 (-0.872210) | 12.064705 / 8.074308 (3.990396) | 10.991438 / 10.191392 (0.800046) | 0.130149 / 0.680424 (-0.550275) | 0.015357 / 0.534201 (-0.518844) | 0.266567 / 0.579283 (-0.312717) | 0.270619 / 0.434364 (-0.163744) | 0.305978 / 0.540337 (-0.234359) | 0.411164 / 1.386936 (-0.975772) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#86a2cf3174c55899535ee5f1707892a430ee53bc \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009810 / 0.011353 (-0.001543) | 0.005411 / 0.011008 (-0.005598) | 0.111670 / 0.038508 (0.073162) | 0.050288 / 0.023109 (0.027179) | 0.415625 / 0.275898 (0.139727) | 0.479382 / 0.323480 (0.155902) | 0.005104 / 0.007986 (-0.002882) | 0.007122 / 0.004328 (0.002793) | 0.079626 / 0.004250 (0.075375) | 0.079421 / 0.037052 (0.042369) | 0.406722 / 0.258489 (0.148233) | 0.461511 / 0.293841 (0.167670) | 0.053812 / 0.128546 (-0.074734) | 0.014315 / 0.075646 (-0.061331) | 0.389636 / 0.419271 (-0.029636) | 0.111859 / 0.043533 (0.068326) | 0.411703 / 0.255139 (0.156564) | 0.457072 / 0.283200 (0.173872) | 0.039807 / 0.141683 (-0.101876) | 1.744064 / 1.452155 (0.291909) | 1.968321 / 1.492716 (0.475604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.341839 / 0.018006 (0.323833) | 0.628083 / 0.000490 (0.627593) | 0.023787 / 0.000200 (0.023587) | 0.000601 / 0.000054 (0.000547) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034170 / 0.037411 (-0.003241) | 0.091159 / 0.014526 (0.076633) | 0.108993 / 0.176557 (-0.067563) | 0.186906 / 0.737135 (-0.550229) | 0.109753 / 0.296338 (-0.186586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684138 / 0.215209 (0.468929) | 6.634852 / 2.077655 (4.557198) | 3.102870 / 1.504120 (1.598750) | 2.831023 / 1.541195 (1.289828) | 2.831597 / 1.468490 
(1.363107) | 0.903584 / 4.584777 (-3.681193) | 5.503341 / 3.745712 (1.757629) | 4.970283 / 5.269862 (-0.299579) | 3.139413 / 4.565676 (-1.426264) | 0.109848 / 0.424275 (-0.314427) | 0.008501 / 0.007607 (0.000894) | 0.823815 / 0.226044 (0.597770) | 7.963355 / 2.268929 (5.694426) | 4.002010 / 55.444624 (-51.442614) | 3.229390 / 6.876477 (-3.647087) | 3.166413 / 2.142072 (1.024341) | 1.030313 / 4.805227 (-3.774914) | 0.219394 / 6.500664 (-6.281270) | 0.077760 / 0.075469 (0.002291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580309 / 1.841788 (-0.261479) | 24.279185 / 8.074308 (16.204877) | 22.305293 / 10.191392 (12.113901) | 0.235711 / 0.680424 (-0.444713) | 0.030342 / 0.534201 (-0.503859) | 0.498137 / 0.579283 (-0.081146) | 0.619173 / 0.434364 (0.184809) | 0.529904 / 0.540337 (-0.010434) | 0.822547 / 1.386936 (-0.564389) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009375 / 0.011353 (-0.001978) | 0.006009 / 0.011008 (-0.004999) | 0.074080 / 0.038508 (0.035572) | 0.089454 / 0.023109 (0.066345) | 0.473458 / 0.275898 (0.197560) | 0.462558 / 0.323480 (0.139078) | 0.006415 / 0.007986 (-0.001571) | 0.004777 / 0.004328 (0.000448) | 0.076563 / 0.004250 (0.072313) | 0.062793 / 0.037052 (0.025741) | 0.455860 / 0.258489 (0.197371) | 0.485281 / 0.293841 (0.191440) | 0.052966 / 0.128546 (-0.075580) | 0.021600 / 0.075646 (-0.054046) | 0.090407 / 0.419271 (-0.328864) | 0.063951 / 0.043533 (0.020418) | 0.487561 / 0.255139 (0.232422) | 0.479958 / 0.283200 (0.196758) | 0.039263 / 0.141683 (-0.102420) | 1.727215 / 1.452155 (0.275061) | 1.962039 / 1.492716 (0.469323) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296267 / 0.018006 (0.278261) | 0.604982 / 0.000490 (0.604493) | 0.007842 / 0.000200 (0.007642) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.097796 / 0.014526 (0.083270) | 0.126034 / 0.176557 (-0.050522) | 0.180873 / 0.737135 (-0.556262) | 0.125410 / 0.296338 (-0.170928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.608278 / 0.215209 (0.393069) | 6.154006 / 2.077655 (4.076351) | 2.822342 / 1.504120 (1.318222) | 2.568263 / 1.541195 (1.027068) | 2.518545 / 1.468490 (1.050055) | 0.863186 / 4.584777 (-3.721591) | 5.367969 / 3.745712 (1.622257) | 4.737691 / 5.269862 (-0.532170) | 2.917620 / 4.565676 (-1.648056) | 0.100731 / 0.424275 (-0.323544) | 0.008611 / 0.007607 (0.001004) | 0.735523 / 0.226044 (0.509479) | 7.552790 / 2.268929 (5.283862) | 3.821835 / 55.444624 (-51.622789) | 2.878259 / 6.876477 (-3.998217) | 2.957686 / 2.142072 (0.815613) | 0.964630 / 4.805227 (-3.840598) | 0.207098 / 6.500664 (-6.293566) | 0.084215 / 0.075469 (0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.711020 / 1.841788 (-0.130768) | 24.034122 / 8.074308 (15.959814) | 21.378504 / 10.191392 (11.187112) | 0.233433 / 0.680424 (-0.446990) | 0.037214 / 0.534201 (-0.496987) | 0.511952 / 0.579283 (-0.067332) | 0.591486 / 0.434364 (0.157123) | 0.606549 / 0.540337 (0.066211) | 0.833773 / 1.386936 (-0.553163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#671f9b32fc559a35996c1b9070fad1a2647a7fef \"CML watermark\")\n"
] | 2023-11-15T10:06:44 | 2023-11-15T13:49:55 | 2023-11-15T13:38:22 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6421",
"html_url": "https://github.com/huggingface/datasets/pull/6421",
"diff_url": "https://github.com/huggingface/datasets/pull/6421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6421.patch",
"merged_at": "2023-11-15T13:38:22"
} | Add `pyarrow-hotfix` to release docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6420/comments | https://api.github.com/repos/huggingface/datasets/issues/6420/events | https://github.com/huggingface/datasets/pull/6420 | 1,994,278,903 | PR_kwDODunzps5ffhdi | 6,420 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6420). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004536 / 0.011353 (-0.006816) | 0.002979 / 0.011008 (-0.008030) | 0.061984 / 0.038508 (0.023476) | 0.029382 / 0.023109 (0.006273) | 0.245237 / 0.275898 (-0.030661) | 0.270571 / 0.323480 (-0.052909) | 0.003956 / 0.007986 (-0.004029) | 0.002453 / 0.004328 (-0.001876) | 0.047967 / 0.004250 (0.043717) | 0.043695 / 0.037052 (0.006643) | 0.248457 / 0.258489 (-0.010032) | 0.283293 / 0.293841 (-0.010548) | 0.023603 / 0.128546 (-0.104943) | 0.007225 / 0.075646 (-0.068422) | 0.200533 / 0.419271 (-0.218739) | 0.055310 / 0.043533 (0.011777) | 0.245152 / 0.255139 (-0.009987) | 0.267187 / 0.283200 (-0.016012) | 0.018158 / 0.141683 (-0.123525) | 1.126079 / 1.452155 (-0.326075) | 1.185137 / 1.492716 (-0.307580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092436 / 0.018006 (0.074430) | 0.300132 / 0.000490 (0.299642) | 0.000206 / 0.000200 (0.000006) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018476 / 0.037411 (-0.018935) | 0.062827 / 0.014526 (0.048301) | 0.074605 / 0.176557 (-0.101952) | 0.119768 / 0.737135 (-0.617368) | 0.076044 / 0.296338 (-0.220294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279717 / 0.215209 (0.064508) | 2.752308 / 2.077655 (0.674654) | 1.434954 / 1.504120 (-0.069166) | 1.314700 / 1.541195 (-0.226495) | 1.347689 / 
1.468490 (-0.120802) | 0.400332 / 4.584777 (-4.184445) | 2.383024 / 3.745712 (-1.362689) | 2.583130 / 5.269862 (-2.686732) | 1.567670 / 4.565676 (-2.998007) | 0.045446 / 0.424275 (-0.378829) | 0.004813 / 0.007607 (-0.002794) | 0.336191 / 0.226044 (0.110147) | 3.319837 / 2.268929 (1.050909) | 1.816808 / 55.444624 (-53.627817) | 1.539052 / 6.876477 (-5.337424) | 1.550765 / 2.142072 (-0.591307) | 0.484253 / 4.805227 (-4.320974) | 0.100494 / 6.500664 (-6.400170) | 0.041614 / 0.075469 (-0.033855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.940857 / 1.841788 (-0.900931) | 11.784946 / 8.074308 (3.710638) | 10.397038 / 10.191392 (0.205646) | 0.141458 / 0.680424 (-0.538965) | 0.014193 / 0.534201 (-0.520008) | 0.268304 / 0.579283 (-0.310979) | 0.267059 / 0.434364 (-0.167305) | 0.309389 / 0.540337 (-0.230949) | 0.420628 / 1.386936 (-0.966308) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004776 / 0.011353 (-0.006577) | 0.002941 / 0.011008 (-0.008067) | 0.048659 / 0.038508 (0.010151) | 0.053334 / 0.023109 (0.030225) | 0.273342 / 0.275898 (-0.002556) | 0.302278 / 0.323480 (-0.021202) | 0.004001 / 0.007986 (-0.003984) | 0.002414 / 0.004328 (-0.001914) | 0.047504 / 0.004250 (0.043254) | 0.038581 / 0.037052 (0.001529) | 0.277768 / 0.258489 (0.019279) | 0.306772 / 0.293841 (0.012931) | 0.024146 / 0.128546 (-0.104400) | 0.007233 / 0.075646 (-0.068413) | 0.053308 / 0.419271 (-0.365964) | 0.032617 / 0.043533 (-0.010916) | 0.277390 / 0.255139 (0.022251) | 0.296015 / 0.283200 (0.012816) | 0.018733 / 0.141683 (-0.122950) | 1.124895 / 1.452155 (-0.327260) | 1.182579 / 1.492716 (-0.310137) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093375 / 0.018006 (0.075369) | 0.301555 / 0.000490 (0.301066) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021284 / 0.037411 (-0.016127) | 0.070158 / 0.014526 (0.055632) | 0.080187 / 0.176557 (-0.096370) | 0.119282 / 0.737135 (-0.617854) | 0.081672 / 0.296338 (-0.214666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.314396 / 0.215209 (0.099187) | 2.975114 / 2.077655 (0.897459) | 1.724658 / 1.504120 (0.220539) | 1.604464 / 1.541195 (0.063269) | 1.652736 / 1.468490 (0.184246) | 0.395064 / 4.584777 (-4.189713) | 2.412768 / 3.745712 (-1.332944) | 2.564427 / 5.269862 (-2.705435) | 1.507627 / 4.565676 (-3.058050) | 0.045463 / 0.424275 (-0.378812) | 0.004797 / 0.007607 (-0.002810) | 0.383115 / 0.226044 (0.157071) | 3.501976 / 2.268929 (1.233048) | 2.087512 / 55.444624 (-53.357113) | 1.793132 / 6.876477 (-5.083345) | 1.804178 / 2.142072 (-0.337895) | 0.468287 / 4.805227 (-4.336940) | 0.097247 / 6.500664 (-6.403417) | 0.041139 / 0.075469 (-0.034330) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976034 / 1.841788 (-0.865754) | 12.431248 / 8.074308 (4.356940) | 10.896064 / 10.191392 (0.704672) | 0.129137 / 0.680424 (-0.551287) | 0.015636 / 0.534201 (-0.518565) | 0.268219 / 0.579283 (-0.311064) | 0.278345 / 0.434364 (-0.156019) | 0.302696 / 0.540337 (-0.237642) | 0.408465 / 1.386936 (-0.978471) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51c53e94acd7a273c24899c045446df021314cd2 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003650) | 0.004614 / 0.011008 (-0.006394) | 0.101425 / 0.038508 (0.062917) | 0.040122 / 0.023109 (0.017013) | 0.398890 / 0.275898 (0.122992) | 0.424392 / 0.323480 (0.100912) | 0.005411 / 0.007986 (-0.002575) | 0.003747 / 0.004328 (-0.000582) | 0.080494 / 0.004250 (0.076243) | 0.059392 / 0.037052 (0.022340) | 0.398025 / 0.258489 (0.139536) | 0.454293 / 0.293841 (0.160452) | 0.043662 / 0.128546 (-0.084884) | 0.013726 / 0.075646 (-0.061920) | 0.352910 / 0.419271 (-0.066362) | 0.088572 / 0.043533 (0.045039) | 0.401677 / 0.255139 (0.146538) | 0.421774 / 0.283200 (0.138575) | 0.033377 / 0.141683 (-0.108305) | 1.728499 / 1.452155 (0.276344) | 1.821557 / 1.492716 (0.328841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230744 / 0.018006 (0.212738) | 0.496188 / 0.000490 (0.495698) | 0.010315 / 0.000200 (0.010115) | 0.000402 / 0.000054 (0.000348) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028859 / 0.037411 (-0.008552) | 0.089688 / 0.014526 (0.075163) | 0.111697 / 0.176557 (-0.064860) | 0.183238 / 0.737135 (-0.553898) | 0.112407 / 0.296338 (-0.183931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.558394 / 0.215209 (0.343185) | 5.643048 / 2.077655 (3.565393) | 2.454622 / 1.504120 (0.950502) | 2.183338 / 1.541195 (0.642143) | 2.324793 / 1.468490 
(0.856303) | 0.859482 / 4.584777 (-3.725295) | 4.959346 / 3.745712 (1.213634) | 4.599224 / 5.269862 (-0.670638) | 2.764382 / 4.565676 (-1.801295) | 0.089976 / 0.424275 (-0.334299) | 0.008144 / 0.007607 (0.000537) | 0.634675 / 0.226044 (0.408631) | 6.555693 / 2.268929 (4.286765) | 3.080252 / 55.444624 (-52.364373) | 2.442715 / 6.876477 (-4.433762) | 2.475126 / 2.142072 (0.333053) | 0.986459 / 4.805227 (-3.818768) | 0.193859 / 6.500664 (-6.306805) | 0.063652 / 0.075469 (-0.011817) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.545318 / 1.841788 (-0.296469) | 21.928751 / 8.074308 (13.854442) | 20.598229 / 10.191392 (10.406837) | 0.234046 / 0.680424 (-0.446377) | 0.025947 / 0.534201 (-0.508254) | 0.459773 / 0.579283 (-0.119510) | 0.598026 / 0.434364 (0.163662) | 0.555260 / 0.540337 (0.014922) | 0.782767 / 1.386936 (-0.604169) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009322 / 0.011353 (-0.002030) | 0.004650 / 0.011008 (-0.006358) | 0.079326 / 0.038508 (0.040818) | 0.079112 / 0.023109 (0.056003) | 0.428708 / 0.275898 (0.152810) | 0.481647 / 0.323480 (0.158168) | 0.006419 / 0.007986 (-0.001566) | 0.003878 / 0.004328 (-0.000450) | 0.079013 / 0.004250 (0.074762) | 0.058107 / 0.037052 (0.021055) | 0.436967 / 0.258489 (0.178478) | 0.501120 / 0.293841 (0.207279) | 0.052972 / 0.128546 (-0.075574) | 0.014414 / 0.075646 (-0.061232) | 0.098587 / 0.419271 (-0.320685) | 0.061626 / 0.043533 (0.018093) | 0.451623 / 0.255139 (0.196484) | 0.468893 / 0.283200 (0.185693) | 0.032479 / 0.141683 (-0.109203) | 1.911743 / 1.452155 (0.459588) | 1.969024 / 1.492716 (0.476308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232015 / 0.018006 (0.214009) | 0.508637 / 0.000490 (0.508147) | 0.005470 / 0.000200 (0.005270) | 0.000131 / 0.000054 (0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035345 / 0.037411 (-0.002066) | 0.106319 / 0.014526 (0.091794) | 0.117205 / 0.176557 (-0.059352) | 0.176527 / 0.737135 (-0.560608) | 0.121566 / 0.296338 (-0.174773) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584920 / 0.215209 (0.369711) | 5.745688 / 2.077655 (3.668034) | 2.519875 / 1.504120 (1.015755) | 2.197593 / 1.541195 (0.656398) | 2.296670 / 1.468490 (0.828180) | 0.831938 / 4.584777 (-3.752839) | 5.130594 / 3.745712 (1.384882) | 4.581385 / 5.269862 (-0.688476) | 2.829516 / 4.565676 (-1.736161) | 0.099015 / 0.424275 (-0.325260) | 0.011468 / 0.007607 (0.003861) | 0.702717 / 0.226044 (0.476672) | 6.856099 / 2.268929 (4.587170) | 3.372966 / 55.444624 (-52.071658) | 2.567664 / 6.876477 (-4.308812) | 2.699200 / 2.142072 (0.557127) | 0.992316 / 4.805227 (-3.812911) | 0.190463 / 6.500664 (-6.310201) | 0.063305 / 0.075469 (-0.012165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.591491 / 1.841788 (-0.250296) | 21.696492 / 8.074308 (13.622184) | 19.695404 / 10.191392 (9.504012) | 0.222853 / 0.680424 (-0.457571) | 0.032936 / 0.534201 (-0.501265) | 0.431209 / 0.579283 (-0.148074) | 0.543101 / 0.434364 (0.108737) | 0.543427 / 0.540337 (0.003089) | 0.742102 / 1.386936 (-0.644834) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#534a227179265df9093230885613c95390325705 \"CML watermark\")\n"
] | 2023-11-15T08:22:19 | 2023-11-15T08:33:36 | 2023-11-15T08:22:33 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6420",
"html_url": "https://github.com/huggingface/datasets/pull/6420",
"diff_url": "https://github.com/huggingface/datasets/pull/6420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6420.patch",
"merged_at": "2023-11-15T08:22:33"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6420/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6419/comments | https://api.github.com/repos/huggingface/datasets/issues/6419/events | https://github.com/huggingface/datasets/pull/6419 | 1,994,257,873 | PR_kwDODunzps5ffc7d | 6,419 | Release: 2.14.7 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004943 / 0.011353 (-0.006410) | 0.002900 / 0.011008 (-0.008109) | 0.061495 / 0.038508 (0.022987) | 0.053575 / 0.023109 (0.030466) | 0.249318 / 0.275898 (-0.026580) | 0.271773 / 0.323480 (-0.051706) | 0.003074 / 0.007986 (-0.004911) | 0.003738 / 0.004328 (-0.000590) | 0.047624 / 0.004250 (0.043373) | 0.045141 / 0.037052 (0.008089) | 0.255467 / 0.258489 (-0.003022) | 0.286577 / 0.293841 (-0.007264) | 0.023113 / 0.128546 (-0.105433) | 0.007189 / 0.075646 (-0.068458) | 0.204441 / 0.419271 (-0.214830) | 0.036829 / 0.043533 (-0.006704) | 0.252474 / 0.255139 (-0.002665) | 0.270960 / 0.283200 (-0.012239) | 0.019666 / 0.141683 (-0.122017) | 1.095139 / 1.452155 (-0.357015) | 1.158659 / 1.492716 (-0.334057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091046 / 0.018006 (0.073040) | 0.298346 / 0.000490 (0.297856) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018702 / 0.037411 (-0.018709) | 0.062213 / 0.014526 (0.047687) | 0.073364 / 0.176557 (-0.103193) | 0.119841 / 0.737135 (-0.617294) | 0.074070 / 0.296338 (-0.222268) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282388 / 0.215209 (0.067179) | 2.792029 / 2.077655 (0.714375) | 1.471483 / 1.504120 (-0.032637) | 1.386236 / 1.541195 (-0.154959) | 1.377489 / 
1.468490 (-0.091001) | 0.410335 / 4.584777 (-4.174442) | 2.424866 / 3.745712 (-1.320846) | 2.610609 / 5.269862 (-2.659253) | 1.574636 / 4.565676 (-2.991041) | 0.046716 / 0.424275 (-0.377559) | 0.004768 / 0.007607 (-0.002839) | 0.339831 / 0.226044 (0.113787) | 3.297579 / 2.268929 (1.028651) | 1.851410 / 55.444624 (-53.593214) | 1.550048 / 6.876477 (-5.326428) | 1.576647 / 2.142072 (-0.565425) | 0.482538 / 4.805227 (-4.322689) | 0.101381 / 6.500664 (-6.399283) | 0.042066 / 0.075469 (-0.033403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972664 / 1.841788 (-0.869123) | 11.580700 / 8.074308 (3.506392) | 10.586747 / 10.191392 (0.395355) | 0.127844 / 0.680424 (-0.552580) | 0.014270 / 0.534201 (-0.519931) | 0.269678 / 0.579283 (-0.309605) | 0.264022 / 0.434364 (-0.170342) | 0.309395 / 0.540337 (-0.230942) | 0.429228 / 1.386936 (-0.957708) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004815 / 0.011353 (-0.006538) | 0.002890 / 0.011008 (-0.008119) | 0.048039 / 0.038508 (0.009531) | 0.053029 / 0.023109 (0.029920) | 0.271346 / 0.275898 (-0.004552) | 0.294488 / 0.323480 (-0.028992) | 0.003983 / 0.007986 (-0.004003) | 0.002439 / 0.004328 (-0.001889) | 0.048250 / 0.004250 (0.044000) | 0.038855 / 0.037052 (0.001803) | 0.284723 / 0.258489 (0.026234) | 0.303604 / 0.293841 (0.009763) | 0.024384 / 0.128546 (-0.104163) | 0.007021 / 0.075646 (-0.068625) | 0.053850 / 0.419271 (-0.365422) | 0.032177 / 0.043533 (-0.011356) | 0.270039 / 0.255139 (0.014900) | 0.289669 / 0.283200 (0.006469) | 0.018840 / 0.141683 (-0.122842) | 1.122191 / 1.452155 (-0.329963) | 1.187083 / 1.492716 (-0.305634) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090609 / 0.018006 (0.072603) | 0.298915 / 0.000490 (0.298425) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020919 / 0.037411 (-0.016492) | 0.070474 / 0.014526 (0.055948) | 0.082421 / 0.176557 (-0.094135) | 0.126967 / 0.737135 (-0.610168) | 0.083447 / 0.296338 (-0.212892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300153 / 0.215209 (0.084944) | 2.958992 / 2.077655 (0.881337) | 1.631228 / 1.504120 (0.127108) | 1.497991 / 1.541195 (-0.043204) | 1.536963 / 1.468490 (0.068473) | 0.403047 / 4.584777 (-4.181730) | 2.448782 / 3.745712 (-1.296930) | 2.571954 / 5.269862 (-2.697908) | 1.556346 / 4.565676 (-3.009331) | 0.045992 / 0.424275 (-0.378283) | 0.004785 / 0.007607 (-0.002822) | 0.357448 / 0.226044 (0.131404) | 3.558808 / 2.268929 (1.289880) | 1.992624 / 55.444624 (-53.452001) | 1.695027 / 6.876477 (-5.181450) | 1.695183 / 2.142072 (-0.446889) | 0.477001 / 4.805227 (-4.328226) | 0.097485 / 6.500664 (-6.403179) | 0.040530 / 0.075469 (-0.034939) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976342 / 1.841788 (-0.865445) | 12.141698 / 8.074308 (4.067390) | 10.881101 / 10.191392 (0.689709) | 0.142443 / 0.680424 (-0.537981) | 0.015583 / 0.534201 (-0.518618) | 0.269727 / 0.579283 (-0.309556) | 0.275890 / 0.434364 (-0.158474) | 0.306351 / 0.540337 (-0.233987) | 0.412003 / 1.386936 (-0.974933) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c744261000fd684f54c54de8ac4f15a726092d7 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004946 / 0.011353 (-0.006407) | 0.002863 / 0.011008 (-0.008146) | 0.061888 / 0.038508 (0.023380) | 0.050664 / 0.023109 (0.027554) | 0.242635 / 0.275898 (-0.033263) | 0.271741 / 0.323480 (-0.051739) | 0.003023 / 0.007986 (-0.004963) | 0.003088 / 0.004328 (-0.001241) | 0.049286 / 0.004250 (0.045036) | 0.044699 / 0.037052 (0.007647) | 0.249581 / 0.258489 (-0.008908) | 0.285633 / 0.293841 (-0.008208) | 0.023048 / 0.128546 (-0.105499) | 0.007235 / 0.075646 (-0.068412) | 0.202989 / 0.419271 (-0.216282) | 0.036357 / 0.043533 (-0.007175) | 0.245980 / 0.255139 (-0.009159) | 0.277486 / 0.283200 (-0.005713) | 0.019215 / 0.141683 (-0.122468) | 1.096456 / 1.452155 (-0.355699) | 1.152196 / 1.492716 (-0.340520) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092026 / 0.018006 (0.074020) | 0.303038 / 0.000490 (0.302549) | 0.000209 / 0.000200 (0.000009) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018670 / 0.037411 (-0.018741) | 0.061972 / 0.014526 (0.047446) | 0.072963 / 0.176557 (-0.103594) | 0.119984 / 0.737135 (-0.617151) | 0.074074 / 0.296338 (-0.222265) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282444 / 0.215209 (0.067235) | 2.754571 / 2.077655 (0.676916) | 1.482635 / 1.504120 (-0.021485) | 1.352039 / 1.541195 (-0.189155) | 1.359333 / 
1.468490 (-0.109157) | 0.399690 / 4.584777 (-4.185087) | 2.364844 / 3.745712 (-1.380868) | 2.603942 / 5.269862 (-2.665919) | 1.569512 / 4.565676 (-2.996164) | 0.046074 / 0.424275 (-0.378201) | 0.004745 / 0.007607 (-0.002862) | 0.339066 / 0.226044 (0.113022) | 3.363456 / 2.268929 (1.094527) | 1.822213 / 55.444624 (-53.622411) | 1.536622 / 6.876477 (-5.339854) | 1.574772 / 2.142072 (-0.567300) | 0.474418 / 4.805227 (-4.330809) | 0.099572 / 6.500664 (-6.401092) | 0.041824 / 0.075469 (-0.033645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.956300 / 1.841788 (-0.885487) | 11.648886 / 8.074308 (3.574578) | 10.645700 / 10.191392 (0.454308) | 0.138924 / 0.680424 (-0.541499) | 0.013936 / 0.534201 (-0.520265) | 0.270319 / 0.579283 (-0.308964) | 0.269735 / 0.434364 (-0.164629) | 0.309699 / 0.540337 (-0.230639) | 0.429139 / 1.386936 (-0.957797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004838 / 0.011353 (-0.006515) | 0.002937 / 0.011008 (-0.008072) | 0.048094 / 0.038508 (0.009586) | 0.053131 / 0.023109 (0.030022) | 0.271893 / 0.275898 (-0.004005) | 0.291025 / 0.323480 (-0.032454) | 0.004058 / 0.007986 (-0.003928) | 0.002410 / 0.004328 (-0.001919) | 0.047939 / 0.004250 (0.043689) | 0.038996 / 0.037052 (0.001944) | 0.274983 / 0.258489 (0.016494) | 0.306175 / 0.293841 (0.012334) | 0.024388 / 0.128546 (-0.104159) | 0.007242 / 0.075646 (-0.068404) | 0.054011 / 0.419271 (-0.365261) | 0.032750 / 0.043533 (-0.010783) | 0.271147 / 0.255139 (0.016008) | 0.288163 / 0.283200 (0.004963) | 0.018383 / 0.141683 (-0.123299) | 1.116134 / 1.452155 (-0.336021) | 1.185964 / 1.492716 (-0.306752) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093289 / 0.018006 (0.075283) | 0.303058 / 0.000490 (0.302568) | 0.000241 / 0.000200 (0.000041) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021422 / 0.037411 (-0.015990) | 0.069974 / 0.014526 (0.055449) | 0.081164 / 0.176557 (-0.095392) | 0.119991 / 0.737135 (-0.617144) | 0.082154 / 0.296338 (-0.214184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292298 / 0.215209 (0.077089) | 2.851475 / 2.077655 (0.773821) | 1.558283 / 1.504120 (0.054163) | 1.432431 / 1.541195 (-0.108764) | 1.479282 / 1.468490 (0.010792) | 0.413124 / 4.584777 (-4.171653) | 2.473005 / 3.745712 (-1.272707) | 2.548779 / 5.269862 (-2.721082) | 1.520776 / 4.565676 (-3.044900) | 0.046476 / 0.424275 (-0.377799) | 0.004814 / 0.007607 (-0.002794) | 0.347036 / 0.226044 (0.120992) | 3.424928 / 2.268929 (1.155999) | 1.963274 / 55.444624 (-53.481351) | 1.653794 / 6.876477 (-5.222683) | 1.643874 / 2.142072 (-0.498198) | 0.469086 / 4.805227 (-4.336141) | 0.097417 / 6.500664 (-6.403247) | 0.040468 / 0.075469 (-0.035002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972783 / 1.841788 (-0.869005) | 12.122994 / 8.074308 (4.048686) | 10.876396 / 10.191392 (0.685004) | 0.130573 / 0.680424 (-0.549850) | 0.016693 / 0.534201 (-0.517508) | 0.270952 / 0.579283 (-0.308331) | 0.273834 / 0.434364 (-0.160530) | 0.305049 / 0.540337 (-0.235289) | 0.408776 / 1.386936 (-0.978160) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4216e5d57ea07e6b1ed73a3ec2cf845c6e59f70 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004606 / 0.011353 (-0.006747) | 0.002433 / 0.011008 (-0.008576) | 0.061985 / 0.038508 (0.023477) | 0.048853 / 0.023109 (0.025744) | 0.244506 / 0.275898 (-0.031392) | 0.270159 / 0.323480 (-0.053321) | 0.003962 / 0.007986 (-0.004024) | 0.002376 / 0.004328 (-0.001952) | 0.048067 / 0.004250 (0.043817) | 0.041864 / 0.037052 (0.004812) | 0.249743 / 0.258489 (-0.008746) | 0.287723 / 0.293841 (-0.006117) | 0.022954 / 0.128546 (-0.105593) | 0.006845 / 0.075646 (-0.068801) | 0.206313 / 0.419271 (-0.212959) | 0.035780 / 0.043533 (-0.007753) | 0.244286 / 0.255139 (-0.010853) | 0.270026 / 0.283200 (-0.013173) | 0.018177 / 0.141683 (-0.123506) | 1.083998 / 1.452155 (-0.368157) | 1.156086 / 1.492716 (-0.336630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093754 / 0.018006 (0.075748) | 0.302157 / 0.000490 (0.301667) | 0.000215 / 0.000200 (0.000015) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018745 / 0.037411 (-0.018666) | 0.061707 / 0.014526 (0.047181) | 0.074356 / 0.176557 (-0.102200) | 0.121643 / 0.737135 (-0.615492) | 0.075885 / 0.296338 (-0.220454) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289156 / 0.215209 (0.073947) | 2.881327 / 2.077655 (0.803672) | 1.483568 / 1.504120 (-0.020552) | 1.355933 / 1.541195 (-0.185262) | 1.389693 / 
1.468490 (-0.078797) | 0.402834 / 4.584777 (-4.181943) | 2.390634 / 3.745712 (-1.355078) | 2.596761 / 5.269862 (-2.673101) | 1.527602 / 4.565676 (-3.038074) | 0.046434 / 0.424275 (-0.377841) | 0.004783 / 0.007607 (-0.002824) | 0.341017 / 0.226044 (0.114972) | 3.429023 / 2.268929 (1.160095) | 1.832988 / 55.444624 (-53.611637) | 1.526510 / 6.876477 (-5.349967) | 1.539382 / 2.142072 (-0.602690) | 0.475734 / 4.805227 (-4.329493) | 0.098710 / 6.500664 (-6.401954) | 0.041136 / 0.075469 (-0.034333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.922023 / 1.841788 (-0.919765) | 11.428215 / 8.074308 (3.353907) | 10.356668 / 10.191392 (0.165276) | 0.139575 / 0.680424 (-0.540848) | 0.014541 / 0.534201 (-0.519660) | 0.271359 / 0.579283 (-0.307924) | 0.266701 / 0.434364 (-0.167663) | 0.309449 / 0.540337 (-0.230888) | 0.422047 / 1.386936 (-0.964889) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004892 / 0.011353 (-0.006461) | 0.002792 / 0.011008 (-0.008216) | 0.048027 / 0.038508 (0.009519) | 0.059256 / 0.023109 (0.036147) | 0.270150 / 0.275898 (-0.005748) | 0.294530 / 0.323480 (-0.028950) | 0.004162 / 0.007986 (-0.003823) | 0.002470 / 0.004328 (-0.001858) | 0.047993 / 0.004250 (0.043743) | 0.040380 / 0.037052 (0.003328) | 0.275247 / 0.258489 (0.016758) | 0.305684 / 0.293841 (0.011843) | 0.025072 / 0.128546 (-0.103474) | 0.007183 / 0.075646 (-0.068463) | 0.054875 / 0.419271 (-0.364397) | 0.033053 / 0.043533 (-0.010480) | 0.271281 / 0.255139 (0.016142) | 0.288057 / 0.283200 (0.004858) | 0.018692 / 0.141683 (-0.122991) | 1.125224 / 1.452155 (-0.326930) | 1.171083 / 1.492716 (-0.321633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103102 / 0.018006 (0.085096) | 0.309099 / 0.000490 (0.308609) | 0.000232 / 0.000200 (0.000032) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021532 / 0.037411 (-0.015879) | 0.069927 / 0.014526 (0.055401) | 0.080920 / 0.176557 (-0.095637) | 0.122214 / 0.737135 (-0.614921) | 0.082268 / 0.296338 (-0.214071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298121 / 0.215209 (0.082912) | 2.933000 / 2.077655 (0.855345) | 1.608782 / 1.504120 (0.104662) | 1.554083 / 1.541195 (0.012889) | 1.552700 / 1.468490 (0.084209) | 0.400576 / 4.584777 (-4.184201) | 2.412914 / 3.745712 (-1.332798) | 2.545706 / 5.269862 (-2.724155) | 1.548797 / 4.565676 (-3.016879) | 0.045553 / 0.424275 (-0.378722) | 0.004751 / 0.007607 (-0.002857) | 0.343002 / 0.226044 (0.116958) | 3.402866 / 2.268929 (1.133937) | 1.969910 / 55.444624 (-53.474715) | 1.686639 / 6.876477 (-5.189838) | 1.768474 / 2.142072 (-0.373599) | 0.471299 / 4.805227 (-4.333928) | 0.097696 / 6.500664 (-6.402968) | 0.041693 / 0.075469 (-0.033776) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971380 / 1.841788 (-0.870408) | 12.686033 / 8.074308 (4.611725) | 11.370946 / 10.191392 (1.179554) | 0.138377 / 0.680424 (-0.542047) | 0.015623 / 0.534201 (-0.518578) | 0.270935 / 0.579283 (-0.308348) | 0.276235 / 0.434364 (-0.158129) | 0.310196 / 0.540337 (-0.230141) | 0.416908 / 1.386936 (-0.970028) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf02cff8d70180a9e89328961ded9e3d8510fd22 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004581 / 0.011353 (-0.006772) | 0.002468 / 0.011008 (-0.008541) | 0.061420 / 0.038508 (0.022912) | 0.047685 / 0.023109 (0.024575) | 0.237756 / 0.275898 (-0.038142) | 0.267548 / 0.323480 (-0.055932) | 0.003899 / 0.007986 (-0.004086) | 0.002338 / 0.004328 (-0.001990) | 0.048794 / 0.004250 (0.044543) | 0.042485 / 0.037052 (0.005433) | 0.250165 / 0.258489 (-0.008324) | 0.278791 / 0.293841 (-0.015050) | 0.022371 / 0.128546 (-0.106175) | 0.006923 / 0.075646 (-0.068723) | 0.201401 / 0.419271 (-0.217870) | 0.035867 / 0.043533 (-0.007665) | 0.244628 / 0.255139 (-0.010511) | 0.271137 / 0.283200 (-0.012063) | 0.017257 / 0.141683 (-0.124426) | 1.097261 / 1.452155 (-0.354894) | 1.163314 / 1.492716 (-0.329402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089060 / 0.018006 (0.071054) | 0.297489 / 0.000490 (0.296999) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018583 / 0.037411 (-0.018828) | 0.061974 / 0.014526 (0.047449) | 0.073300 / 0.176557 (-0.103256) | 0.118871 / 0.737135 (-0.618264) | 0.075513 / 0.296338 (-0.220826) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285544 / 0.215209 (0.070335) | 2.799871 / 2.077655 (0.722216) | 1.479871 / 1.504120 (-0.024249) | 1.351128 / 1.541195 (-0.190067) | 1.377540 / 
1.468490 (-0.090950) | 0.393056 / 4.584777 (-4.191721) | 2.341791 / 3.745712 (-1.403921) | 2.546854 / 5.269862 (-2.723007) | 1.547368 / 4.565676 (-3.018309) | 0.046056 / 0.424275 (-0.378219) | 0.004765 / 0.007607 (-0.002842) | 0.336384 / 0.226044 (0.110339) | 3.283277 / 2.268929 (1.014348) | 1.784535 / 55.444624 (-53.660089) | 1.557809 / 6.876477 (-5.318667) | 1.581728 / 2.142072 (-0.560344) | 0.470527 / 4.805227 (-4.334700) | 0.098383 / 6.500664 (-6.402281) | 0.041563 / 0.075469 (-0.033906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946924 / 1.841788 (-0.894863) | 11.202775 / 8.074308 (3.128467) | 10.249760 / 10.191392 (0.058368) | 0.142337 / 0.680424 (-0.538087) | 0.013784 / 0.534201 (-0.520417) | 0.267237 / 0.579283 (-0.312046) | 0.264142 / 0.434364 (-0.170222) | 0.306343 / 0.540337 (-0.233994) | 0.423681 / 1.386936 (-0.963255) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004786 / 0.011353 (-0.006567) | 0.002398 / 0.011008 (-0.008610) | 0.047325 / 0.038508 (0.008817) | 0.050753 / 0.023109 (0.027644) | 0.271132 / 0.275898 (-0.004766) | 0.290854 / 0.323480 (-0.032626) | 0.003953 / 0.007986 (-0.004033) | 0.002238 / 0.004328 (-0.002090) | 0.047463 / 0.004250 (0.043213) | 0.038504 / 0.037052 (0.001451) | 0.273182 / 0.258489 (0.014693) | 0.303449 / 0.293841 (0.009608) | 0.024069 / 0.128546 (-0.104477) | 0.006712 / 0.075646 (-0.068934) | 0.053032 / 0.419271 (-0.366239) | 0.032221 / 0.043533 (-0.011312) | 0.271770 / 0.255139 (0.016631) | 0.287876 / 0.283200 (0.004677) | 0.018040 / 0.141683 (-0.123643) | 1.138749 / 1.452155 (-0.313405) | 1.192048 / 1.492716 (-0.300668) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089132 / 0.018006 (0.071126) | 0.298636 / 0.000490 (0.298146) | 0.000220 / 0.000200 (0.000020) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020808 / 0.037411 (-0.016603) | 0.069506 / 0.014526 (0.054980) | 0.079412 / 0.176557 (-0.097145) | 0.118188 / 0.737135 (-0.618947) | 0.083044 / 0.296338 (-0.213294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293502 / 0.215209 (0.078293) | 2.863692 / 2.077655 (0.786037) | 1.590877 / 1.504120 (0.086757) | 1.483634 / 1.541195 (-0.057561) | 1.502113 / 1.468490 (0.033623) | 0.402170 / 4.584777 (-4.182607) | 2.414188 / 3.745712 (-1.331524) | 2.500146 / 5.269862 (-2.769716) | 1.506977 / 4.565676 (-3.058699) | 0.045849 / 0.424275 (-0.378426) | 0.004755 / 0.007607 (-0.002852) | 0.343073 / 0.226044 (0.117029) | 3.354985 / 2.268929 (1.086056) | 1.952594 / 55.444624 (-53.492030) | 1.664084 / 6.876477 (-5.212392) | 1.664203 / 2.142072 (-0.477869) | 0.475858 / 4.805227 (-4.329370) | 0.097539 / 6.500664 (-6.403125) | 0.040201 / 0.075469 (-0.035268) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980051 / 1.841788 (-0.861736) | 11.615291 / 8.074308 (3.540983) | 10.492092 / 10.191392 (0.300700) | 0.130450 / 0.680424 (-0.549974) | 0.015883 / 0.534201 (-0.518318) | 0.267575 / 0.579283 (-0.311708) | 0.276981 / 0.434364 (-0.157383) | 0.310221 / 0.540337 (-0.230116) | 0.417143 / 1.386936 (-0.969793) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf02cff8d70180a9e89328961ded9e3d8510fd22 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004721 / 0.011353 (-0.006632) | 0.002931 / 0.011008 (-0.008077) | 0.061948 / 0.038508 (0.023440) | 0.051066 / 0.023109 (0.027957) | 0.245431 / 0.275898 (-0.030467) | 0.295627 / 0.323480 (-0.027852) | 0.003997 / 0.007986 (-0.003988) | 0.002408 / 0.004328 (-0.001920) | 0.048292 / 0.004250 (0.044041) | 0.044716 / 0.037052 (0.007664) | 0.255119 / 0.258489 (-0.003371) | 0.287384 / 0.293841 (-0.006457) | 0.022835 / 0.128546 (-0.105711) | 0.007162 / 0.075646 (-0.068484) | 0.201352 / 0.419271 (-0.217920) | 0.036626 / 0.043533 (-0.006906) | 0.249590 / 0.255139 (-0.005549) | 0.270822 / 0.283200 (-0.012378) | 0.018152 / 0.141683 (-0.123531) | 1.097046 / 1.452155 (-0.355109) | 1.160461 / 1.492716 (-0.332255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091712 / 0.018006 (0.073705) | 0.299121 / 0.000490 (0.298631) | 0.000244 / 0.000200 (0.000044) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018998 / 0.037411 (-0.018413) | 0.062811 / 0.014526 (0.048285) | 0.076348 / 0.176557 (-0.100209) | 0.123898 / 0.737135 (-0.613238) | 0.076249 / 0.296338 (-0.220090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282780 / 0.215209 (0.067571) | 2.739028 / 2.077655 (0.661373) | 1.472564 / 1.504120 (-0.031556) | 1.347343 / 1.541195 (-0.193852) | 1.387130 / 
1.468490 (-0.081360) | 0.403348 / 4.584777 (-4.181429) | 2.369924 / 3.745712 (-1.375788) | 2.612875 / 5.269862 (-2.656987) | 1.588079 / 4.565676 (-2.977598) | 0.045233 / 0.424275 (-0.379042) | 0.004767 / 0.007607 (-0.002840) | 0.336614 / 0.226044 (0.110570) | 3.300485 / 2.268929 (1.031556) | 1.834365 / 55.444624 (-53.610259) | 1.559799 / 6.876477 (-5.316677) | 1.601265 / 2.142072 (-0.540808) | 0.468158 / 4.805227 (-4.337069) | 0.099811 / 6.500664 (-6.400853) | 0.042688 / 0.075469 (-0.032782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.934097 / 1.841788 (-0.907691) | 11.687713 / 8.074308 (3.613405) | 10.412723 / 10.191392 (0.221331) | 0.139276 / 0.680424 (-0.541148) | 0.014042 / 0.534201 (-0.520159) | 0.270306 / 0.579283 (-0.308978) | 0.266609 / 0.434364 (-0.167755) | 0.314179 / 0.540337 (-0.226158) | 0.437744 / 1.386936 (-0.949192) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004893 / 0.011353 (-0.006460) | 0.002952 / 0.011008 (-0.008056) | 0.050441 / 0.038508 (0.011933) | 0.051838 / 0.023109 (0.028729) | 0.271163 / 0.275898 (-0.004735) | 0.293031 / 0.323480 (-0.030449) | 0.003976 / 0.007986 (-0.004010) | 0.002396 / 0.004328 (-0.001933) | 0.048103 / 0.004250 (0.043852) | 0.038732 / 0.037052 (0.001680) | 0.274276 / 0.258489 (0.015787) | 0.305112 / 0.293841 (0.011271) | 0.024112 / 0.128546 (-0.104434) | 0.007203 / 0.075646 (-0.068443) | 0.053502 / 0.419271 (-0.365770) | 0.032360 / 0.043533 (-0.011173) | 0.270154 / 0.255139 (0.015015) | 0.286689 / 0.283200 (0.003489) | 0.018285 / 0.141683 (-0.123397) | 1.141421 / 1.452155 (-0.310734) | 1.244062 / 1.492716 (-0.248654) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090960 / 0.018006 (0.072954) | 0.286134 / 0.000490 (0.285644) | 0.000207 / 0.000200 (0.000007) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020789 / 0.037411 (-0.016622) | 0.070850 / 0.014526 (0.056324) | 0.080750 / 0.176557 (-0.095807) | 0.120046 / 0.737135 (-0.617089) | 0.083630 / 0.296338 (-0.212708) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290654 / 0.215209 (0.075445) | 2.846669 / 2.077655 (0.769014) | 1.561752 / 1.504120 (0.057632) | 1.442968 / 1.541195 (-0.098227) | 1.503551 / 1.468490 (0.035061) | 0.399731 / 4.584777 (-4.185046) | 2.430099 / 3.745712 (-1.315613) | 2.556169 / 5.269862 (-2.713692) | 1.545591 / 4.565676 (-3.020085) | 0.045967 / 0.424275 (-0.378309) | 0.004851 / 0.007607 (-0.002756) | 0.340167 / 0.226044 (0.114122) | 3.392738 / 2.268929 (1.123809) | 1.943577 / 55.444624 (-53.501047) | 1.650057 / 6.876477 (-5.226420) | 1.686872 / 2.142072 (-0.455201) | 0.470305 / 4.805227 (-4.334923) | 0.097296 / 6.500664 (-6.403368) | 0.041399 / 0.075469 (-0.034070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985660 / 1.841788 (-0.856128) | 12.300826 / 8.074308 (4.226518) | 10.972591 / 10.191392 (0.781199) | 0.131512 / 0.680424 (-0.548912) | 0.015742 / 0.534201 (-0.518459) | 0.270630 / 0.579283 (-0.308653) | 0.276039 / 0.434364 (-0.158325) | 0.302288 / 0.540337 (-0.238050) | 0.409415 / 1.386936 (-0.977521) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf02cff8d70180a9e89328961ded9e3d8510fd22 \"CML watermark\")\n"
] | 2023-11-15T08:07:37 | 2023-11-15T17:35:30 | 2023-11-15T08:12:59 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6419",
"html_url": "https://github.com/huggingface/datasets/pull/6419",
"diff_url": "https://github.com/huggingface/datasets/pull/6419.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6419.patch",
"merged_at": "2023-11-15T08:12:59"
} | Release 2.14.7. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6419/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6418/comments | https://api.github.com/repos/huggingface/datasets/issues/6418/events | https://github.com/huggingface/datasets/pull/6418 | 1,993,224,629 | PR_kwDODunzps5fb7lu | 6,418 | Remove token value from warnings | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.002950 / 0.011008 (-0.008058) | 0.062316 / 0.038508 (0.023808) | 0.030068 / 0.023109 (0.006959) | 0.251998 / 0.275898 (-0.023900) | 0.274806 / 0.323480 (-0.048674) | 0.003067 / 0.007986 (-0.004919) | 0.003082 / 0.004328 (-0.001247) | 0.048503 / 0.004250 (0.044253) | 0.045167 / 0.037052 (0.008114) | 0.254277 / 0.258489 (-0.004212) | 0.290528 / 0.293841 (-0.003313) | 0.023666 / 0.128546 (-0.104880) | 0.007049 / 0.075646 (-0.068597) | 0.202367 / 0.419271 (-0.216905) | 0.056291 / 0.043533 (0.012758) | 0.251923 / 0.255139 (-0.003216) | 0.273595 / 0.283200 (-0.009605) | 0.019065 / 0.141683 (-0.122618) | 1.100832 / 1.452155 (-0.351322) | 1.266758 / 1.492716 (-0.225959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094311 / 0.018006 (0.076305) | 0.303199 / 0.000490 (0.302709) | 0.000238 / 0.000200 (0.000039) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019413 / 0.037411 (-0.017999) | 0.062618 / 0.014526 (0.048092) | 0.072850 / 0.176557 (-0.103707) | 0.119124 / 0.737135 (-0.618012) | 0.074044 / 0.296338 (-0.222294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273660 / 0.215209 (0.058451) | 2.682371 / 2.077655 (0.604716) | 1.426041 / 1.504120 (-0.078079) | 1.317186 / 1.541195 (-0.224009) | 1.332385 / 
1.468490 (-0.136106) | 0.394599 / 4.584777 (-4.190178) | 2.368167 / 3.745712 (-1.377545) | 2.683728 / 5.269862 (-2.586134) | 1.668348 / 4.565676 (-2.897329) | 0.046177 / 0.424275 (-0.378098) | 0.004833 / 0.007607 (-0.002774) | 0.331413 / 0.226044 (0.105369) | 3.278984 / 2.268929 (1.010055) | 1.797600 / 55.444624 (-53.647024) | 1.492202 / 6.876477 (-5.384274) | 1.536039 / 2.142072 (-0.606034) | 0.470601 / 4.805227 (-4.334626) | 0.100833 / 6.500664 (-6.399831) | 0.042787 / 0.075469 (-0.032682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959036 / 1.841788 (-0.882752) | 11.632956 / 8.074308 (3.558648) | 10.384574 / 10.191392 (0.193182) | 0.127477 / 0.680424 (-0.552946) | 0.014072 / 0.534201 (-0.520129) | 0.269534 / 0.579283 (-0.309749) | 0.259753 / 0.434364 (-0.174611) | 0.313450 / 0.540337 (-0.226888) | 0.431799 / 1.386936 (-0.955137) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004964 / 0.011353 (-0.006389) | 0.002906 / 0.011008 (-0.008102) | 0.048145 / 0.038508 (0.009637) | 0.056457 / 0.023109 (0.033348) | 0.274131 / 0.275898 (-0.001767) | 0.298534 / 0.323480 (-0.024946) | 0.004145 / 0.007986 (-0.003841) | 0.002415 / 0.004328 (-0.001913) | 0.048558 / 0.004250 (0.044308) | 0.039031 / 0.037052 (0.001978) | 0.278948 / 0.258489 (0.020459) | 0.312358 / 0.293841 (0.018517) | 0.024902 / 0.128546 (-0.103645) | 0.007286 / 0.075646 (-0.068360) | 0.053839 / 0.419271 (-0.365433) | 0.032510 / 0.043533 (-0.011023) | 0.272023 / 0.255139 (0.016884) | 0.293420 / 0.283200 (0.010221) | 0.018932 / 0.141683 (-0.122750) | 1.122792 / 1.452155 (-0.329362) | 1.167385 / 1.492716 (-0.325331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094574 / 0.018006 (0.076567) | 0.303810 / 0.000490 (0.303321) | 0.000227 / 0.000200 (0.000027) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021675 / 0.037411 (-0.015737) | 0.070289 / 0.014526 (0.055763) | 0.080345 / 0.176557 (-0.096211) | 0.120220 / 0.737135 (-0.616915) | 0.084080 / 0.296338 (-0.212259) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300134 / 0.215209 (0.084925) | 2.945831 / 2.077655 (0.868176) | 1.605303 / 1.504120 (0.101183) | 1.480135 / 1.541195 (-0.061059) | 1.526039 / 1.468490 (0.057549) | 0.398264 / 4.584777 (-4.186512) | 2.461391 / 3.745712 (-1.284321) | 2.559929 / 5.269862 (-2.709933) | 1.541391 / 4.565676 (-3.024286) | 0.045319 / 0.424275 (-0.378957) | 0.004834 / 0.007607 (-0.002773) | 0.352186 / 0.226044 (0.126141) | 3.500108 / 2.268929 (1.231180) | 1.966394 / 55.444624 (-53.478230) | 1.675500 / 6.876477 (-5.200977) | 1.683134 / 2.142072 (-0.458938) | 0.465085 / 4.805227 (-4.340142) | 0.097235 / 6.500664 (-6.403429) | 0.040764 / 0.075469 (-0.034705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982813 / 1.841788 (-0.858975) | 12.382529 / 8.074308 (4.308221) | 11.082660 / 10.191392 (0.891268) | 0.129113 / 0.680424 (-0.551310) | 0.015718 / 0.534201 (-0.518483) | 0.272776 / 0.579283 (-0.306507) | 0.275513 / 0.434364 (-0.158850) | 0.304933 / 0.540337 (-0.235404) | 0.414591 / 1.386936 (-0.972345) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8723b129a64928eba40baf70ffd462060ade9f97 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004400 / 0.011353 (-0.006953) | 0.002580 / 0.011008 (-0.008428) | 0.060975 / 0.038508 (0.022467) | 0.029337 / 0.023109 (0.006228) | 0.248643 / 0.275898 (-0.027255) | 0.274476 / 0.323480 (-0.049004) | 0.003925 / 0.007986 (-0.004061) | 0.002332 / 0.004328 (-0.001997) | 0.049501 / 0.004250 (0.045251) | 0.042730 / 0.037052 (0.005678) | 0.255823 / 0.258489 (-0.002666) | 0.281748 / 0.293841 (-0.012093) | 0.023118 / 0.128546 (-0.105428) | 0.006957 / 0.075646 (-0.068690) | 0.201630 / 0.419271 (-0.217641) | 0.054258 / 0.043533 (0.010725) | 0.252289 / 0.255139 (-0.002850) | 0.267561 / 0.283200 (-0.015639) | 0.016903 / 0.141683 (-0.124780) | 1.104322 / 1.452155 (-0.347833) | 1.160027 / 1.492716 (-0.332689) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096340 / 0.018006 (0.078333) | 0.305187 / 0.000490 (0.304697) | 0.000222 / 0.000200 (0.000022) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018733 / 0.037411 (-0.018678) | 0.062382 / 0.014526 (0.047856) | 0.072309 / 0.176557 (-0.104248) | 0.119772 / 0.737135 (-0.617364) | 0.074655 / 0.296338 (-0.221683) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286150 / 0.215209 (0.070941) | 2.770328 / 2.077655 (0.692673) | 1.494593 / 1.504120 (-0.009527) | 1.358611 / 1.541195 (-0.182583) | 1.396308 / 
1.468490 (-0.072182) | 0.394806 / 4.584777 (-4.189971) | 2.349100 / 3.745712 (-1.396613) | 2.600541 / 5.269862 (-2.669321) | 1.568975 / 4.565676 (-2.996701) | 0.046212 / 0.424275 (-0.378063) | 0.004821 / 0.007607 (-0.002786) | 0.332286 / 0.226044 (0.106242) | 3.302643 / 2.268929 (1.033714) | 1.838992 / 55.444624 (-53.605633) | 1.571919 / 6.876477 (-5.304557) | 1.574956 / 2.142072 (-0.567117) | 0.464156 / 4.805227 (-4.341071) | 0.097983 / 6.500664 (-6.402681) | 0.042243 / 0.075469 (-0.033226) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941675 / 1.841788 (-0.900113) | 11.450326 / 8.074308 (3.376017) | 10.169943 / 10.191392 (-0.021449) | 0.137879 / 0.680424 (-0.542545) | 0.013765 / 0.534201 (-0.520436) | 0.268633 / 0.579283 (-0.310650) | 0.265083 / 0.434364 (-0.169281) | 0.302099 / 0.540337 (-0.238238) | 0.423033 / 1.386936 (-0.963903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004998 / 0.011353 (-0.006355) | 0.003174 / 0.011008 (-0.007834) | 0.047924 / 0.038508 (0.009416) | 0.057598 / 0.023109 (0.034489) | 0.278823 / 0.275898 (0.002925) | 0.334349 / 0.323480 (0.010869) | 0.004053 / 0.007986 (-0.003932) | 0.002554 / 0.004328 (-0.001774) | 0.047797 / 0.004250 (0.043547) | 0.039802 / 0.037052 (0.002749) | 0.278295 / 0.258489 (0.019806) | 0.319597 / 0.293841 (0.025757) | 0.024802 / 0.128546 (-0.103744) | 0.007362 / 0.075646 (-0.068284) | 0.066983 / 0.419271 (-0.352288) | 0.032707 / 0.043533 (-0.010826) | 0.277350 / 0.255139 (0.022211) | 0.296829 / 0.283200 (0.013629) | 0.017902 / 0.141683 (-0.123781) | 1.129765 / 1.452155 (-0.322390) | 1.201940 / 1.492716 (-0.290777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095631 / 0.018006 (0.077625) | 0.296999 / 0.000490 (0.296510) | 0.000234 / 0.000200 (0.000034) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021547 / 0.037411 (-0.015865) | 0.070003 / 0.014526 (0.055477) | 0.083173 / 0.176557 (-0.093384) | 0.121676 / 0.737135 (-0.615459) | 0.082974 / 0.296338 (-0.213364) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298982 / 0.215209 (0.083773) | 2.918666 / 2.077655 (0.841011) | 1.582054 / 1.504120 (0.077934) | 1.463804 / 1.541195 (-0.077391) | 1.484384 / 1.468490 (0.015893) | 0.399443 / 4.584777 (-4.185334) | 2.393515 / 3.745712 (-1.352197) | 2.533004 / 5.269862 (-2.736858) | 1.490411 / 4.565676 (-3.075266) | 0.045274 / 0.424275 (-0.379002) | 0.004783 / 0.007607 (-0.002824) | 0.350510 / 0.226044 (0.124465) | 3.437927 / 2.268929 (1.168998) | 1.940115 / 55.444624 (-53.504509) | 1.662025 / 6.876477 (-5.214452) | 1.640621 / 2.142072 (-0.501452) | 0.464014 / 4.805227 (-4.341214) | 0.095506 / 6.500664 (-6.405158) | 0.040172 / 0.075469 (-0.035297) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975618 / 1.841788 (-0.866169) | 12.561067 / 8.074308 (4.486759) | 11.408037 / 10.191392 (1.216645) | 0.130699 / 0.680424 (-0.549725) | 0.016796 / 0.534201 (-0.517405) | 0.271130 / 0.579283 (-0.308153) | 0.283506 / 0.434364 (-0.150857) | 0.304482 / 0.540337 (-0.235856) | 0.413673 / 1.386936 (-0.973263) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#723038a73248dd12dc0673d2b341e9295c441ea3 \"CML watermark\")\n"
] | 2023-11-14T17:34:06 | 2023-11-14T22:26:04 | 2023-11-14T22:19:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6418",
"html_url": "https://github.com/huggingface/datasets/pull/6418",
"diff_url": "https://github.com/huggingface/datasets/pull/6418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6418.patch",
"merged_at": "2023-11-14T22:19:45"
} | Fix #6412 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6418/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6417/comments | https://api.github.com/repos/huggingface/datasets/issues/6417/events | https://github.com/huggingface/datasets/issues/6417 | 1,993,149,416 | I_kwDODunzps52zQvo | 6,417 | Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error | {
"login": "Davo00",
"id": 57496007,
"node_id": "MDQ6VXNlcjU3NDk2MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Davo00",
"html_url": "https://github.com/Davo00",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://api.github.com/users/Davo00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Davo00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Davo00/subscriptions",
"organizations_url": "https://api.github.com/users/Davo00/orgs",
"repos_url": "https://api.github.com/users/Davo00/repos",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"received_events_url": "https://api.github.com/users/Davo00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on base environment, although I am using a different one called layoutLM\r\n\r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.14.6\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.18\r\n> - Huggingface_hub version: 0.17.3\r\n> - PyArrow version: 14.0.1\r\n> - Pandas version: 2.1.3",
"Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.",
"> Hi! The latest (patch) release (published a few hours ago) includes a fix for this [PyArrow security issue](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). To install it, run `pip install -U datasets`.\r\n\r\nThanks for the info and the latest release, it seems this has also solved my issue. First run after the update worked and I am training right now :D\r\nWill close the Issu"
] | 2023-11-14T16:53:20 | 2023-11-16T20:23:41 | 2023-11-16T20:23:41 | NONE | null | null | null | ### Describe the bug
Arrow issues when running the example notebook locally on a Mac laptop with M1. Works on Google Colab.
**Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb
**Error**: `ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.`
**Caused by**:
```
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64')),
})
```
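For reference, a minimal self-contained sanity check (toy shapes and values — not taken from the notebook) that exercises the same `Array2D`/`Array3D` feature types together with the torch format; useful for confirming the `datasets`/`pyarrow` install after an upgrade:
```
# Hedged sanity check (toy shapes/values, not from the notebook): build a tiny
# dataset with Array2D/Array3D features and switch to the torch format.
import numpy as np
from datasets import Array2D, Array3D, Dataset, Features, Sequence, Value

toy_features = Features({
    "pixel_values": Array3D(dtype="float32", shape=(3, 4, 4)),
    "bbox": Array2D(dtype="int64", shape=(8, 4)),
    "input_ids": Sequence(Value("int64")),
})
ds = Dataset.from_dict(
    {
        "pixel_values": [np.zeros((3, 4, 4), dtype="float32")],
        "bbox": [np.zeros((8, 4), dtype="int64")],
        "input_ids": [[0, 1, 2]],
    },
    features=toy_features,
)
ds.set_format("torch")
print(ds[0]["pixel_values"].shape)  # expected: torch.Size([3, 4, 4]) on a healthy install
```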
### Steps to reproduce the bug
Run the provided notebook locally; if possible, also on an M1 Mac.
### Expected behavior
The cell where features are mapped to Array2D and Array3D should work without any issues.
### Environment info
Tried with Python 3.9 and 3.10 conda envs. Running Mac M1.
`pip show datasets`
> Name: datasets
Version: 2.14.6
Summary: HuggingFace community-driven open-source library of datasets
`pip list`
> Package Version
> ------------------------- ------------
> accelerate 0.24.1
> aiohttp 3.8.6
> aiosignal 1.3.1
> anyio 3.5.0
> appnope 0.1.2
> argon2-cffi 21.3.0
> argon2-cffi-bindings 21.2.0
> asttokens 2.0.5
> async-timeout 4.0.3
> attrs 23.1.0
> backcall 0.2.0
> beautifulsoup4 4.12.2
> bleach 4.1.0
> certifi 2023.7.22
> cffi 1.15.1
> charset-normalizer 3.3.2
> comm 0.1.2
> datasets 2.14.6
> debugpy 1.6.7
> decorator 5.1.1
> defusedxml 0.7.1
> dill 0.3.7
> entrypoints 0.4
> exceptiongroup 1.0.4
> executing 0.8.3
> fastjsonschema 2.16.2
> filelock 3.13.1
> frozenlist 1.4.0
> fsspec 2023.10.0
> huggingface-hub 0.17.3
> idna 3.4
> importlib-metadata 6.0.0
> IProgress 0.4
> ipykernel 6.25.0
> ipython 8.15.0
> ipython-genutils 0.2.0
> jedi 0.18.1
> Jinja2 3.1.2
> joblib 1.3.2
> jsonschema 4.19.2
> jsonschema-specifications 2023.7.1
> jupyter_client 7.4.9
> jupyter_core 5.5.0
> jupyter-server 1.23.4
> jupyterlab-pygments 0.1.2
> MarkupSafe 2.1.1
> matplotlib-inline 0.1.6
> mistune 2.0.4
> mpmath 1.3.0
> multidict 6.0.4
> multiprocess 0.70.15
> nbclassic 1.0.0
> nbclient 0.8.0
> nbconvert 7.10.0
> nbformat 5.9.2
> nest-asyncio 1.5.6
> networkx 3.2.1
> notebook 6.5.4
> notebook_shim 0.2.3
> numpy 1.26.1
> packaging 23.1
> pandas 2.1.3
> pandocfilters 1.5.0
> parso 0.8.3
> pexpect 4.8.0
> pickleshare 0.7.5
> Pillow 10.1.0
> pip 23.3
> platformdirs 3.10.0
> prometheus-client 0.14.1
> prompt-toolkit 3.0.36
> psutil 5.9.0
> ptyprocess 0.7.0
> pure-eval 0.2.2
> pyarrow 14.0.1
> pycparser 2.21
> Pygments 2.15.1
> python-dateutil 2.8.2
> pytz 2023.3.post1
> PyYAML 6.0.1
> pyzmq 23.2.0
> referencing 0.30.2
> regex 2023.10.3
> requests 2.31.0
> rpds-py 0.10.6
> safetensors 0.4.0
> scikit-learn 1.3.2
> scipy 1.11.3
> Send2Trash 1.8.2
> seqeval 1.2.2
> setuptools 68.0.0
> six 1.16.0
> sniffio 1.2.0
> soupsieve 2.5
> stack-data 0.2.0
> sympy 1.12
> terminado 0.17.1
> threadpoolctl 3.2.0
> tinycss2 1.2.1
> tokenizers 0.14.1
> torch 2.1.0
> tornado 6.3.3
> tqdm 4.66.1
> traitlets 5.7.1
> transformers 4.36.0.dev0
> typing_extensions 4.7.1
> tzdata 2023.3
> urllib3 2.0.7
> wcwidth 0.2.5
> webencodings 0.5.1
> websocket-client 0.58.0
> wheel 0.41.2
> xxhash 3.4.1
> yarl 1.9.2
> zipp 3.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6417/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6416/comments | https://api.github.com/repos/huggingface/datasets/issues/6416/events | https://github.com/huggingface/datasets/pull/6416 | 1,992,954,723 | PR_kwDODunzps5fbA4H | 6,416 | Rename audio_classificiation.py to audio_classification.py | {
"login": "carlthome",
"id": 1595907,
"node_id": "MDQ6VXNlcjE1OTU5MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1595907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carlthome",
"html_url": "https://github.com/carlthome",
"followers_url": "https://api.github.com/users/carlthome/followers",
"following_url": "https://api.github.com/users/carlthome/following{/other_user}",
"gists_url": "https://api.github.com/users/carlthome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carlthome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carlthome/subscriptions",
"organizations_url": "https://api.github.com/users/carlthome/orgs",
"repos_url": "https://api.github.com/users/carlthome/repos",
"events_url": "https://api.github.com/users/carlthome/events{/privacy}",
"received_events_url": "https://api.github.com/users/carlthome/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh good catch. Can you also rename it in `src/datasets/tasks/__init__.py` ?",
"Fixed! \r\n\r\n(I think, tough word to spell right TBH)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004737 / 0.011353 (-0.006616) | 0.002446 / 0.011008 (-0.008563) | 0.060928 / 0.038508 (0.022420) | 0.030479 / 0.023109 (0.007370) | 0.238385 / 0.275898 (-0.037513) | 0.265563 / 0.323480 (-0.057917) | 0.002910 / 0.007986 (-0.005076) | 0.002325 / 0.004328 (-0.002004) | 0.047817 / 0.004250 (0.043566) | 0.044243 / 0.037052 (0.007191) | 0.245190 / 0.258489 (-0.013299) | 0.275449 / 0.293841 (-0.018392) | 0.023384 / 0.128546 (-0.105162) | 0.006820 / 0.075646 (-0.068826) | 0.201488 / 0.419271 (-0.217783) | 0.057758 / 0.043533 (0.014225) | 0.245279 / 0.255139 (-0.009860) | 0.266094 / 0.283200 (-0.017106) | 0.019254 / 0.141683 (-0.122429) | 1.107497 / 1.452155 (-0.344658) | 1.161412 / 1.492716 (-0.331304) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094909 / 0.018006 (0.076903) | 0.305185 / 0.000490 (0.304695) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018352 / 0.037411 (-0.019059) | 0.062441 / 0.014526 (0.047915) | 0.072386 / 0.176557 (-0.104171) | 0.118836 / 0.737135 (-0.618299) | 0.074514 / 0.296338 (-0.221824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283632 / 0.215209 (0.068423) | 2.751845 / 2.077655 (0.674190) | 1.478620 / 1.504120 (-0.025499) | 1.357221 / 1.541195 (-0.183974) | 1.415297 / 
1.468490 (-0.053194) | 0.400093 / 4.584777 (-4.184684) | 2.404607 / 3.745712 (-1.341105) | 2.617572 / 5.269862 (-2.652289) | 1.587622 / 4.565676 (-2.978055) | 0.045997 / 0.424275 (-0.378278) | 0.004872 / 0.007607 (-0.002735) | 0.338901 / 0.226044 (0.112856) | 3.371362 / 2.268929 (1.102434) | 1.870469 / 55.444624 (-53.574155) | 1.561670 / 6.876477 (-5.314807) | 1.573186 / 2.142072 (-0.568886) | 0.478735 / 4.805227 (-4.326492) | 0.098743 / 6.500664 (-6.401921) | 0.041780 / 0.075469 (-0.033689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945422 / 1.841788 (-0.896366) | 11.563464 / 8.074308 (3.489156) | 10.368731 / 10.191392 (0.177339) | 0.129910 / 0.680424 (-0.550513) | 0.014014 / 0.534201 (-0.520187) | 0.269036 / 0.579283 (-0.310247) | 0.265516 / 0.434364 (-0.168848) | 0.311082 / 0.540337 (-0.229255) | 0.431510 / 1.386936 (-0.955426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006284) | 0.002989 / 0.011008 (-0.008019) | 0.048213 / 0.038508 (0.009705) | 0.056133 / 0.023109 (0.033024) | 0.283347 / 0.275898 (0.007449) | 0.307505 / 0.323480 (-0.015975) | 0.004041 / 0.007986 (-0.003944) | 0.002477 / 0.004328 (-0.001852) | 0.047771 / 0.004250 (0.043521) | 0.039361 / 0.037052 (0.002309) | 0.283764 / 0.258489 (0.025275) | 0.320644 / 0.293841 (0.026803) | 0.024972 / 0.128546 (-0.103575) | 0.007599 / 0.075646 (-0.068048) | 0.054732 / 0.419271 (-0.364539) | 0.032774 / 0.043533 (-0.010759) | 0.285594 / 0.255139 (0.030455) | 0.301500 / 0.283200 (0.018300) | 0.018181 / 0.141683 (-0.123501) | 1.126311 / 1.452155 (-0.325843) | 1.187147 / 1.492716 (-0.305569) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097397 / 0.018006 (0.079391) | 0.315112 / 0.000490 (0.314622) | 0.000224 / 0.000200 (0.000024) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021529 / 0.037411 (-0.015882) | 0.073208 / 0.014526 (0.058682) | 0.081683 / 0.176557 (-0.094874) | 0.120475 / 0.737135 (-0.616660) | 0.083265 / 0.296338 (-0.213073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289976 / 0.215209 (0.074767) | 2.839860 / 2.077655 (0.762205) | 1.592635 / 1.504120 (0.088515) | 1.466722 / 1.541195 (-0.074472) | 1.552850 / 1.468490 (0.084360) | 0.418693 / 4.584777 (-4.166084) | 2.526620 / 3.745712 (-1.219093) | 2.706182 / 5.269862 (-2.563680) | 1.618514 / 4.565676 (-2.947162) | 0.046303 / 0.424275 (-0.377972) | 0.004873 / 0.007607 (-0.002734) | 0.345146 / 0.226044 (0.119102) | 3.378448 / 2.268929 (1.109520) | 1.986393 / 55.444624 (-53.458231) | 1.681838 / 6.876477 (-5.194639) | 1.738093 / 2.142072 (-0.403980) | 0.484386 / 4.805227 (-4.320842) | 0.100693 / 6.500664 (-6.399971) | 0.043084 / 0.075469 (-0.032385) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976399 / 1.841788 (-0.865389) | 13.122968 / 8.074308 (5.048660) | 11.245031 / 10.191392 (1.053639) | 0.134433 / 0.680424 (-0.545991) | 0.017439 / 0.534201 (-0.516762) | 0.274083 / 0.579283 (-0.305200) | 0.287353 / 0.434364 (-0.147011) | 0.309231 / 0.540337 (-0.231106) | 0.418003 / 1.386936 (-0.968933) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#939f136f255eab68a5bf6441db2a395f8af78511 \"CML watermark\")\n"
] | 2023-11-14T15:15:29 | 2023-11-15T11:59:32 | 2023-11-15T11:53:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6416",
"html_url": "https://github.com/huggingface/datasets/pull/6416",
"diff_url": "https://github.com/huggingface/datasets/pull/6416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6416.patch",
"merged_at": "2023-11-15T11:53:20"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6416/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6415/comments | https://api.github.com/repos/huggingface/datasets/issues/6415/events | https://github.com/huggingface/datasets/pull/6415 | 1,992,917,248 | PR_kwDODunzps5fa4n7 | 6,415 | Fix multi gpu map example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004537 / 0.011353 (-0.006816) | 0.002844 / 0.011008 (-0.008164) | 0.062506 / 0.038508 (0.023998) | 0.029675 / 0.023109 (0.006566) | 0.238080 / 0.275898 (-0.037818) | 0.259858 / 0.323480 (-0.063622) | 0.004015 / 0.007986 (-0.003970) | 0.002432 / 0.004328 (-0.001897) | 0.049477 / 0.004250 (0.045227) | 0.045383 / 0.037052 (0.008331) | 0.241934 / 0.258489 (-0.016555) | 0.270759 / 0.293841 (-0.023082) | 0.023207 / 0.128546 (-0.105339) | 0.007107 / 0.075646 (-0.068539) | 0.207626 / 0.419271 (-0.211645) | 0.056706 / 0.043533 (0.013173) | 0.239713 / 0.255139 (-0.015426) | 0.256639 / 0.283200 (-0.026560) | 0.017514 / 0.141683 (-0.124169) | 1.105201 / 1.452155 (-0.346953) | 1.173087 / 1.492716 (-0.319629) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093391 / 0.018006 (0.075384) | 0.302673 / 0.000490 (0.302184) | 0.000218 / 0.000200 (0.000018) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019447 / 0.037411 (-0.017965) | 0.063349 / 0.014526 (0.048823) | 0.075600 / 0.176557 (-0.100957) | 0.121098 / 0.737135 (-0.616037) | 0.075028 / 0.296338 (-0.221311) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291479 / 0.215209 (0.076270) | 2.787231 / 2.077655 (0.709576) | 1.480205 / 1.504120 (-0.023915) | 1.417656 / 1.541195 (-0.123538) | 1.394529 / 
1.468490 (-0.073962) | 0.408843 / 4.584777 (-4.175934) | 2.398691 / 3.745712 (-1.347021) | 2.635457 / 5.269862 (-2.634404) | 1.591722 / 4.565676 (-2.973955) | 0.048445 / 0.424275 (-0.375830) | 0.004864 / 0.007607 (-0.002743) | 0.349014 / 0.226044 (0.122969) | 3.436962 / 2.268929 (1.168033) | 1.839266 / 55.444624 (-53.605359) | 1.535252 / 6.876477 (-5.341225) | 1.581048 / 2.142072 (-0.561025) | 0.491150 / 4.805227 (-4.314078) | 0.101279 / 6.500664 (-6.399385) | 0.041938 / 0.075469 (-0.033532) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946986 / 1.841788 (-0.894801) | 11.766196 / 8.074308 (3.691888) | 10.425615 / 10.191392 (0.234223) | 0.129957 / 0.680424 (-0.550467) | 0.014859 / 0.534201 (-0.519342) | 0.268046 / 0.579283 (-0.311237) | 0.263724 / 0.434364 (-0.170640) | 0.311028 / 0.540337 (-0.229309) | 0.434715 / 1.386936 (-0.952221) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004874 / 0.011353 (-0.006479) | 0.002942 / 0.011008 (-0.008067) | 0.048250 / 0.038508 (0.009742) | 0.053726 / 0.023109 (0.030617) | 0.268870 / 0.275898 (-0.007028) | 0.289152 / 0.323480 (-0.034328) | 0.003982 / 0.007986 (-0.004004) | 0.002488 / 0.004328 (-0.001840) | 0.047902 / 0.004250 (0.043652) | 0.038732 / 0.037052 (0.001680) | 0.271021 / 0.258489 (0.012532) | 0.299967 / 0.293841 (0.006126) | 0.024672 / 0.128546 (-0.103874) | 0.007311 / 0.075646 (-0.068336) | 0.053721 / 0.419271 (-0.365550) | 0.032407 / 0.043533 (-0.011126) | 0.266604 / 0.255139 (0.011465) | 0.286816 / 0.283200 (0.003617) | 0.018973 / 0.141683 (-0.122710) | 1.122460 / 1.452155 (-0.329695) | 1.177720 / 1.492716 (-0.314997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093968 / 0.018006 (0.075962) | 0.304010 / 0.000490 (0.303521) | 0.000228 / 0.000200 (0.000028) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021203 / 0.037411 (-0.016208) | 0.070318 / 0.014526 (0.055793) | 0.081688 / 0.176557 (-0.094869) | 0.120916 / 0.737135 (-0.616219) | 0.083452 / 0.296338 (-0.212886) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293961 / 0.215209 (0.078752) | 2.858514 / 2.077655 (0.780860) | 1.556169 / 1.504120 (0.052049) | 1.431523 / 1.541195 (-0.109671) | 1.478145 / 1.468490 (0.009654) | 0.408927 / 4.584777 (-4.175850) | 2.440630 / 3.745712 (-1.305082) | 2.586327 / 5.269862 (-2.683534) | 1.529495 / 4.565676 (-3.036182) | 0.047387 / 0.424275 (-0.376888) | 0.004817 / 0.007607 (-0.002790) | 0.345009 / 0.226044 (0.118965) | 3.386313 / 2.268929 (1.117384) | 1.922361 / 55.444624 (-53.522264) | 1.640814 / 6.876477 (-5.235663) | 1.657005 / 2.142072 (-0.485068) | 0.483844 / 4.805227 (-4.321383) | 0.099470 / 6.500664 (-6.401194) | 0.040735 / 0.075469 (-0.034734) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986311 / 1.841788 (-0.855476) | 12.327425 / 8.074308 (4.253117) | 10.995135 / 10.191392 (0.803743) | 0.146814 / 0.680424 (-0.533610) | 0.015820 / 0.534201 (-0.518381) | 0.272319 / 0.579283 (-0.306964) | 0.274858 / 0.434364 (-0.159506) | 0.305728 / 0.540337 (-0.234609) | 0.421400 / 1.386936 (-0.965536) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#611a03d70378d6e48a19fac89e7616cf556b920a \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007995 / 0.011353 (-0.003358) | 0.004596 / 0.011008 (-0.006412) | 0.099818 / 0.038508 (0.061310) | 0.053539 / 0.023109 (0.030429) | 0.367757 / 0.275898 (0.091859) | 0.409351 / 0.323480 (0.085871) | 0.007423 / 0.007986 (-0.000563) | 0.003770 / 0.004328 (-0.000558) | 0.075635 / 0.004250 (0.071385) | 0.078844 / 0.037052 (0.041791) | 0.374523 / 0.258489 (0.116034) | 0.423378 / 0.293841 (0.129537) | 0.038901 / 0.128546 (-0.089645) | 0.009985 / 0.075646 (-0.065661) | 0.342793 / 0.419271 (-0.076479) | 0.098045 / 0.043533 (0.054512) | 0.368077 / 0.255139 (0.112938) | 0.394251 / 0.283200 (0.111051) | 0.030624 / 0.141683 (-0.111059) | 1.782728 / 1.452155 (0.330574) | 1.867571 / 1.492716 (0.374855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265550 / 0.018006 (0.247544) | 0.504045 / 0.000490 (0.503555) | 0.016523 / 0.000200 (0.016323) | 0.000757 / 0.000054 (0.000702) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034239 / 0.037411 (-0.003172) | 0.099953 / 0.014526 (0.085427) | 0.113728 / 0.176557 (-0.062829) | 0.180113 / 0.737135 (-0.557023) | 0.114506 / 0.296338 (-0.181833) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507186 / 0.215209 (0.291977) | 5.033590 / 2.077655 (2.955935) | 2.480111 / 1.504120 (0.975991) | 2.258966 / 1.541195 (0.717771) | 2.316045 / 1.468490 
(0.847555) | 0.622482 / 4.584777 (-3.962295) | 4.400909 / 3.745712 (0.655197) | 4.012443 / 5.269862 (-1.257419) | 2.408294 / 4.565676 (-2.157383) | 0.067608 / 0.424275 (-0.356668) | 0.008638 / 0.007607 (0.001031) | 0.546558 / 0.226044 (0.320513) | 5.472973 / 2.268929 (3.204044) | 2.795147 / 55.444624 (-52.649477) | 2.371153 / 6.876477 (-4.505324) | 2.440883 / 2.142072 (0.298811) | 0.682380 / 4.805227 (-4.122847) | 0.156819 / 6.500664 (-6.343845) | 0.071969 / 0.075469 (-0.003500) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.500200 / 1.841788 (-0.341588) | 22.854103 / 8.074308 (14.779795) | 16.691945 / 10.191392 (6.500553) | 0.210945 / 0.680424 (-0.469479) | 0.023234 / 0.534201 (-0.510967) | 0.475641 / 0.579283 (-0.103642) | 0.491553 / 0.434364 (0.057189) | 0.549311 / 0.540337 (0.008974) | 0.858498 / 1.386936 (-0.528439) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009020 / 0.011353 (-0.002333) | 0.004768 / 0.011008 (-0.006240) | 0.082841 / 0.038508 (0.044333) | 0.095111 / 0.023109 (0.072002) | 0.486050 / 0.275898 (0.210151) | 0.527074 / 0.323480 (0.203594) | 0.006622 / 0.007986 (-0.001364) | 0.003961 / 0.004328 (-0.000367) | 0.083361 / 0.004250 (0.079111) | 0.068571 / 0.037052 (0.031518) | 0.494575 / 0.258489 (0.236086) | 0.545593 / 0.293841 (0.251752) | 0.047671 / 0.128546 (-0.080875) | 0.010715 / 0.075646 (-0.064932) | 0.096239 / 0.419271 (-0.323033) | 0.061556 / 0.043533 (0.018023) | 0.484301 / 0.255139 (0.229162) | 0.492189 / 0.283200 (0.208989) | 0.029374 / 0.141683 (-0.112309) | 1.911833 / 1.452155 (0.459678) | 2.005744 / 1.492716 (0.513028) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265402 / 0.018006 (0.247396) | 0.501034 / 0.000490 (0.500545) | 0.004039 / 0.000200 (0.003839) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.041005 / 0.037411 (0.003594) | 0.119204 / 0.014526 (0.104678) | 0.134583 / 0.176557 (-0.041973) | 0.195995 / 0.737135 (-0.541140) | 0.133125 / 0.296338 (-0.163214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503012 / 0.215209 (0.287803) | 5.021972 / 2.077655 (2.944318) | 2.912987 / 1.504120 (1.408867) | 2.707637 / 1.541195 (1.166442) | 2.824065 / 1.468490 (1.355575) | 0.664285 / 4.584777 (-3.920492) | 4.341905 / 3.745712 (0.596193) | 4.152839 / 5.269862 (-1.117022) | 2.438138 / 4.565676 (-2.127539) | 0.076169 / 0.424275 (-0.348106) | 0.010471 / 0.007607 (0.002864) | 0.680918 / 0.226044 (0.454874) | 6.424209 / 2.268929 (4.155281) | 3.285353 / 55.444624 (-52.159271) | 2.865458 / 6.876477 (-4.011019) | 2.946246 / 2.142072 (0.804173) | 0.700051 / 4.805227 (-4.105176) | 0.155299 / 6.500664 (-6.345365) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.749517 / 1.841788 (-0.092271) | 23.382582 / 8.074308 (15.308274) | 17.708718 / 10.191392 (7.517326) | 0.197042 / 0.680424 (-0.483382) | 0.023874 / 0.534201 (-0.510327) | 0.471631 / 0.579283 (-0.107652) | 0.512649 / 0.434364 (0.078285) | 0.614479 / 0.540337 (0.074142) | 0.771859 / 1.386936 (-0.615077) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f084b2d85f5004ed969d2387027093b2d765a4f \"CML watermark\")\n",
"Merging this one, but lmk if you have more comments for subsequent improvements @NielsRogge ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004874 / 0.011353 (-0.006479) | 0.002866 / 0.011008 (-0.008142) | 0.061761 / 0.038508 (0.023253) | 0.052185 / 0.023109 (0.029076) | 0.242264 / 0.275898 (-0.033634) | 0.267816 / 0.323480 (-0.055664) | 0.002844 / 0.007986 (-0.005142) | 0.002349 / 0.004328 (-0.001979) | 0.048393 / 0.004250 (0.044142) | 0.038590 / 0.037052 (0.001538) | 0.257483 / 0.258489 (-0.001006) | 0.279704 / 0.293841 (-0.014137) | 0.023125 / 0.128546 (-0.105421) | 0.007044 / 0.075646 (-0.068602) | 0.203606 / 0.419271 (-0.215665) | 0.035489 / 0.043533 (-0.008044) | 0.248419 / 0.255139 (-0.006719) | 0.266357 / 0.283200 (-0.016843) | 0.020178 / 0.141683 (-0.121505) | 1.163674 / 1.452155 (-0.288481) | 1.191340 / 1.492716 (-0.301376) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092972 / 0.018006 (0.074966) | 0.295260 / 0.000490 (0.294770) | 0.000214 / 0.000200 (0.000014) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018109 / 0.037411 (-0.019302) | 0.061743 / 0.014526 (0.047217) | 0.073965 / 0.176557 (-0.102592) | 0.119493 / 0.737135 (-0.617642) | 0.075646 / 0.296338 (-0.220692) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275700 / 0.215209 (0.060491) | 2.666846 / 2.077655 (0.589191) | 1.401452 / 1.504120 (-0.102668) | 1.276009 / 1.541195 (-0.265186) | 1.309914 / 
1.468490 (-0.158576) | 0.396411 / 4.584777 (-4.188365) | 2.347193 / 3.745712 (-1.398519) | 2.568006 / 5.269862 (-2.701856) | 1.564572 / 4.565676 (-3.001105) | 0.045450 / 0.424275 (-0.378825) | 0.004827 / 0.007607 (-0.002780) | 0.333092 / 0.226044 (0.107048) | 3.284295 / 2.268929 (1.015367) | 1.809928 / 55.444624 (-53.634696) | 1.486041 / 6.876477 (-5.390436) | 1.528198 / 2.142072 (-0.613875) | 0.470053 / 4.805227 (-4.335174) | 0.098559 / 6.500664 (-6.402105) | 0.041637 / 0.075469 (-0.033832) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948915 / 1.841788 (-0.892873) | 11.513211 / 8.074308 (3.438903) | 10.386419 / 10.191392 (0.195027) | 0.129513 / 0.680424 (-0.550910) | 0.021772 / 0.534201 (-0.512429) | 0.295627 / 0.579283 (-0.283656) | 0.261008 / 0.434364 (-0.173355) | 0.305869 / 0.540337 (-0.234469) | 0.399676 / 1.386936 (-0.987260) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004799 / 0.011353 (-0.006553) | 0.002764 / 0.011008 (-0.008244) | 0.048469 / 0.038508 (0.009961) | 0.051346 / 0.023109 (0.028236) | 0.274853 / 0.275898 (-0.001045) | 0.300770 / 0.323480 (-0.022710) | 0.003986 / 0.007986 (-0.003999) | 0.002376 / 0.004328 (-0.001952) | 0.048545 / 0.004250 (0.044294) | 0.039854 / 0.037052 (0.002801) | 0.280053 / 0.258489 (0.021564) | 0.312797 / 0.293841 (0.018957) | 0.024513 / 0.128546 (-0.104033) | 0.006971 / 0.075646 (-0.068675) | 0.053030 / 0.419271 (-0.366241) | 0.035580 / 0.043533 (-0.007953) | 0.276078 / 0.255139 (0.020939) | 0.299345 / 0.283200 (0.016145) | 0.020423 / 0.141683 (-0.121260) | 1.103053 / 1.452155 (-0.349102) | 1.179747 / 1.492716 (-0.312969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093042 / 0.018006 (0.075036) | 0.299421 / 0.000490 (0.298932) | 0.000232 / 0.000200 (0.000033) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021966 / 0.037411 (-0.015445) | 0.070978 / 0.014526 (0.056452) | 0.083841 / 0.176557 (-0.092715) | 0.121223 / 0.737135 (-0.615912) | 0.082829 / 0.296338 (-0.213510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289436 / 0.215209 (0.074227) | 2.838074 / 2.077655 (0.760419) | 1.597013 / 1.504120 (0.092893) | 1.476888 / 1.541195 (-0.064307) | 1.504582 / 1.468490 (0.036092) | 0.398050 / 4.584777 (-4.186727) | 2.434446 / 3.745712 (-1.311266) | 2.493545 / 5.269862 (-2.776316) | 1.584159 / 4.565676 (-2.981517) | 0.046461 / 0.424275 (-0.377814) | 0.004876 / 0.007607 (-0.002731) | 0.344166 / 0.226044 (0.118122) | 3.388530 / 2.268929 (1.119602) | 1.939585 / 55.444624 (-53.505039) | 1.672495 / 6.876477 (-5.203982) | 1.811825 / 2.142072 (-0.330247) | 0.470798 / 4.805227 (-4.334429) | 0.097522 / 6.500664 (-6.403142) | 0.040887 / 0.075469 (-0.034582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990081 / 1.841788 (-0.851707) | 12.619827 / 8.074308 (4.545519) | 10.748062 / 10.191392 (0.556670) | 0.130409 / 0.680424 (-0.550015) | 0.016624 / 0.534201 (-0.517577) | 0.272381 / 0.579283 (-0.306902) | 0.270597 / 0.434364 (-0.163767) | 0.306458 / 0.540337 (-0.233879) | 0.408700 / 1.386936 (-0.978236) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc44d2188a1baac50d28a6c8110d6e5497f409de \"CML watermark\")\n"
] | 2023-11-14T14:57:18 | 2023-11-22T15:48:27 | 2023-11-22T15:42:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6415",
"html_url": "https://github.com/huggingface/datasets/pull/6415",
"diff_url": "https://github.com/huggingface/datasets/pull/6415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6415.patch",
"merged_at": "2023-11-22T15:42:19"
} | - use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES`
- add `if __name__ == "__main__"` (see the sketch after this record)
fix https://github.com/huggingface/datasets/issues/6186 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6415/timeline | null | null | true |
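The record above (PR 6415) describes a two-part fix: pin each worker's GPU with `torch.cuda.set_device` instead of exporting `CUDA_VISIBLE_DEVICES`, and guard the multiprocessing entry point with `if __name__ == "__main__"`. A minimal sketch of that pattern, assuming a PyTorch install with at least one CUDA device — illustrative only, not the exact code changed in the PR:

```python
from multiprocessing import get_context

import torch


def process_shard(rank: int) -> int:
    # Bind this worker to a single GPU instead of mutating CUDA_VISIBLE_DEVICES.
    torch.cuda.set_device(rank % torch.cuda.device_count())
    # ... GPU work for this shard would go here ...
    return rank


if __name__ == "__main__":
    # The guard keeps spawned child processes from re-running the pool setup on import;
    # the "spawn" start method is the safe choice when CUDA is involved.
    with get_context("spawn").Pool(2) as pool:
        print(pool.map(process_shard, range(2)))
```

The next record (PR 6414) concerns FIPS compliance; the pattern named in its title is passing the `usedforsecurity` keyword available since Python 3.9. A one-line illustration of that keyword (not the PR's actual diff):

```python
import hashlib

# usedforsecurity=False marks the digest as non-cryptographic, which keeps
# md5/sha1 usable for fingerprinting on FIPS-enabled systems (Python 3.9+).
fingerprint = hashlib.md5(b"some cache key", usedforsecurity=False).hexdigest()
```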
https://api.github.com/repos/huggingface/datasets/issues/6414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6414/comments | https://api.github.com/repos/huggingface/datasets/issues/6414/events | https://github.com/huggingface/datasets/pull/6414 | 1,992,482,491 | PR_kwDODunzps5fZZ2l | 6,414 | Set `usedforsecurity=False` in hashlib methods (FIPS compliance) | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008434 / 0.011353 (-0.002919) | 0.006755 / 0.011008 (-0.004253) | 0.106169 / 0.038508 (0.067661) | 0.049329 / 0.023109 (0.026220) | 0.433610 / 0.275898 (0.157712) | 0.441993 / 0.323480 (0.118513) | 0.004703 / 0.007986 (-0.003282) | 0.006996 / 0.004328 (0.002667) | 0.080330 / 0.004250 (0.076080) | 0.066098 / 0.037052 (0.029045) | 0.435444 / 0.258489 (0.176955) | 0.490442 / 0.293841 (0.196601) | 0.047050 / 0.128546 (-0.081496) | 0.014520 / 0.075646 (-0.061127) | 0.339805 / 0.419271 (-0.079467) | 0.101161 / 0.043533 (0.057629) | 0.423236 / 0.255139 (0.168097) | 0.455627 / 0.283200 (0.172427) | 0.036218 / 0.141683 (-0.105465) | 1.766128 / 1.452155 (0.313973) | 1.923919 / 1.492716 (0.431203) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242939 / 0.018006 (0.224933) | 0.515582 / 0.000490 (0.515093) | 0.020271 / 0.000200 (0.020071) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030927 / 0.037411 (-0.006484) | 0.093951 / 0.014526 (0.079425) | 0.109028 / 0.176557 (-0.067529) | 0.174947 / 0.737135 (-0.562188) | 0.120538 / 0.296338 (-0.175800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.553884 / 0.215209 (0.338675) | 5.424566 / 2.077655 (3.346911) | 2.439420 / 1.504120 (0.935301) | 2.019324 / 1.541195 (0.478129) | 2.170781 / 1.468490 
(0.702290) | 0.924424 / 4.584777 (-3.660353) | 5.706029 / 3.745712 (1.960317) | 5.096911 / 5.269862 (-0.172951) | 3.168261 / 4.565676 (-1.397416) | 0.094336 / 0.424275 (-0.329940) | 0.015899 / 0.007607 (0.008292) | 0.709684 / 0.226044 (0.483639) | 7.476865 / 2.268929 (5.207936) | 3.350983 / 55.444624 (-52.093641) | 2.653419 / 6.876477 (-4.223058) | 2.802201 / 2.142072 (0.660129) | 1.081442 / 4.805227 (-3.723785) | 0.217025 / 6.500664 (-6.283639) | 0.077248 / 0.075469 (0.001779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598621 / 1.841788 (-0.243167) | 23.490338 / 8.074308 (15.416030) | 21.853488 / 10.191392 (11.662096) | 0.209625 / 0.680424 (-0.470799) | 0.028166 / 0.534201 (-0.506035) | 0.473883 / 0.579283 (-0.105400) | 0.584226 / 0.434364 (0.149862) | 0.538605 / 0.540337 (-0.001732) | 0.837060 / 1.386936 (-0.549876) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009029 / 0.011353 (-0.002324) | 0.004945 / 0.011008 (-0.006063) | 0.084539 / 0.038508 (0.046031) | 0.081014 / 0.023109 (0.057905) | 0.431291 / 0.275898 (0.155393) | 0.478913 / 0.323480 (0.155433) | 0.006107 / 0.007986 (-0.001879) | 0.003939 / 0.004328 (-0.000390) | 0.079932 / 0.004250 (0.075682) | 0.057936 / 0.037052 (0.020884) | 0.437295 / 0.258489 (0.178806) | 0.489790 / 0.293841 (0.195949) | 0.049544 / 0.128546 (-0.079003) | 0.013675 / 0.075646 (-0.061972) | 0.093143 / 0.419271 (-0.326128) | 0.064104 / 0.043533 (0.020571) | 0.444699 / 0.255139 (0.189560) | 0.443688 / 0.283200 (0.160489) | 0.034331 / 0.141683 (-0.107352) | 1.753014 / 1.452155 (0.300859) | 1.877274 / 1.492716 (0.384558) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250460 / 0.018006 (0.232454) | 0.527241 / 0.000490 (0.526752) | 0.007679 / 0.000200 (0.007479) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033269 / 0.037411 (-0.004142) | 0.111262 / 0.014526 (0.096736) | 0.133503 / 0.176557 (-0.043053) | 0.177998 / 0.737135 (-0.559137) | 0.117899 / 0.296338 (-0.178440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633588 / 0.215209 (0.418379) | 6.105283 / 2.077655 (4.027628) | 2.779309 / 1.504120 (1.275189) | 2.445788 / 1.541195 (0.904594) | 2.396443 / 1.468490 (0.927953) | 0.925928 / 4.584777 (-3.658849) | 5.266142 / 3.745712 (1.520430) | 4.868830 / 5.269862 (-0.401031) | 2.998768 / 4.565676 (-1.566909) | 0.103135 / 0.424275 (-0.321140) | 0.008059 / 0.007607 (0.000452) | 0.753159 / 0.226044 (0.527115) | 7.532170 / 2.268929 (5.263242) | 3.563941 / 55.444624 (-51.880683) | 2.829208 / 6.876477 (-4.047269) | 2.913954 / 2.142072 (0.771881) | 1.085843 / 4.805227 (-3.719384) | 0.214195 / 6.500664 (-6.286469) | 0.071509 / 0.075469 (-0.003960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544819 / 1.841788 (-0.296968) | 23.790149 / 8.074308 (15.715841) | 23.086019 / 10.191392 (12.894627) | 0.242695 / 0.680424 (-0.437729) | 0.041706 / 0.534201 (-0.492495) | 0.552402 / 0.579283 (-0.026881) | 0.652518 / 0.434364 (0.218154) | 0.581876 / 0.540337 (0.041539) | 0.795425 / 1.386936 (-0.591511) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#117fdfccc8523fe150521ad74e478459fe2f297c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004573 / 0.011353 (-0.006780) | 0.002965 / 0.011008 (-0.008043) | 0.061913 / 0.038508 (0.023405) | 0.029474 / 0.023109 (0.006365) | 0.258117 / 0.275898 (-0.017781) | 0.279854 / 0.323480 (-0.043626) | 0.003954 / 0.007986 (-0.004031) | 0.002479 / 0.004328 (-0.001850) | 0.048685 / 0.004250 (0.044434) | 0.044733 / 0.037052 (0.007681) | 0.256659 / 0.258489 (-0.001830) | 0.285235 / 0.293841 (-0.008606) | 0.023566 / 0.128546 (-0.104981) | 0.007291 / 0.075646 (-0.068355) | 0.202701 / 0.419271 (-0.216570) | 0.055706 / 0.043533 (0.012173) | 0.258790 / 0.255139 (0.003651) | 0.278675 / 0.283200 (-0.004525) | 0.018574 / 0.141683 (-0.123109) | 1.109359 / 1.452155 (-0.342796) | 1.184434 / 1.492716 (-0.308282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095048 / 0.018006 (0.077042) | 0.305027 / 0.000490 (0.304537) | 0.000310 / 0.000200 (0.000110) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018183 / 0.037411 (-0.019228) | 0.066130 / 0.014526 (0.051604) | 0.073948 / 0.176557 (-0.102608) | 0.120458 / 0.737135 (-0.616678) | 0.075995 / 0.296338 (-0.220343) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279419 / 0.215209 (0.064210) | 2.728591 / 2.077655 (0.650936) | 1.439016 / 1.504120 (-0.065104) | 1.325798 / 1.541195 (-0.215397) | 1.352050 / 
1.468490 (-0.116440) | 0.395041 / 4.584777 (-4.189736) | 2.377651 / 3.745712 (-1.368061) | 2.618473 / 5.269862 (-2.651389) | 1.587580 / 4.565676 (-2.978096) | 0.045910 / 0.424275 (-0.378365) | 0.004843 / 0.007607 (-0.002764) | 0.335491 / 0.226044 (0.109447) | 3.378441 / 2.268929 (1.109512) | 1.827757 / 55.444624 (-53.616868) | 1.502360 / 6.876477 (-5.374117) | 1.508460 / 2.142072 (-0.633612) | 0.471309 / 4.805227 (-4.333918) | 0.098934 / 6.500664 (-6.401730) | 0.041705 / 0.075469 (-0.033764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945067 / 1.841788 (-0.896720) | 11.548209 / 8.074308 (3.473900) | 10.422628 / 10.191392 (0.231236) | 0.141494 / 0.680424 (-0.538929) | 0.014345 / 0.534201 (-0.519856) | 0.267750 / 0.579283 (-0.311533) | 0.261488 / 0.434364 (-0.172876) | 0.307192 / 0.540337 (-0.233145) | 0.427926 / 1.386936 (-0.959010) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004831 / 0.011353 (-0.006522) | 0.002876 / 0.011008 (-0.008132) | 0.048629 / 0.038508 (0.010121) | 0.055090 / 0.023109 (0.031981) | 0.271381 / 0.275898 (-0.004517) | 0.292350 / 0.323480 (-0.031130) | 0.004001 / 0.007986 (-0.003985) | 0.002389 / 0.004328 (-0.001939) | 0.047527 / 0.004250 (0.043277) | 0.038065 / 0.037052 (0.001012) | 0.277387 / 0.258489 (0.018898) | 0.307209 / 0.293841 (0.013368) | 0.025136 / 0.128546 (-0.103411) | 0.007309 / 0.075646 (-0.068338) | 0.054483 / 0.419271 (-0.364789) | 0.032807 / 0.043533 (-0.010726) | 0.274364 / 0.255139 (0.019225) | 0.290280 / 0.283200 (0.007080) | 0.017855 / 0.141683 (-0.123828) | 1.185912 / 1.452155 (-0.266243) | 1.228141 / 1.492716 (-0.264576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094787 / 0.018006 (0.076781) | 0.314191 / 0.000490 (0.313701) | 0.000217 / 0.000200 (0.000017) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020920 / 0.037411 (-0.016491) | 0.070446 / 0.014526 (0.055920) | 0.081371 / 0.176557 (-0.095186) | 0.119127 / 0.737135 (-0.618009) | 0.085658 / 0.296338 (-0.210680) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290601 / 0.215209 (0.075392) | 2.874091 / 2.077655 (0.796436) | 1.598934 / 1.504120 (0.094814) | 1.464329 / 1.541195 (-0.076866) | 1.504943 / 1.468490 (0.036453) | 0.410457 / 4.584777 (-4.174320) | 2.428706 / 3.745712 (-1.317006) | 2.596510 / 5.269862 (-2.673352) | 1.547084 / 4.565676 (-3.018592) | 0.047546 / 0.424275 (-0.376729) | 0.004740 / 0.007607 (-0.002867) | 0.351168 / 0.226044 (0.125123) | 3.424554 / 2.268929 (1.155626) | 1.969792 / 55.444624 (-53.474832) | 1.676731 / 6.876477 (-5.199745) | 1.668769 / 2.142072 (-0.473304) | 0.482486 / 4.805227 (-4.322741) | 0.100018 / 6.500664 (-6.400646) | 0.040956 / 0.075469 (-0.034513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966306 / 1.841788 (-0.875482) | 12.158909 / 8.074308 (4.084601) | 10.926447 / 10.191392 (0.735055) | 0.130359 / 0.680424 (-0.550065) | 0.016162 / 0.534201 (-0.518039) | 0.269977 / 0.579283 (-0.309306) | 0.283366 / 0.434364 (-0.150997) | 0.304517 / 0.540337 (-0.235821) | 0.410398 / 1.386936 (-0.976539) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d5d6e57913465c22bb8074b0c0f968252cb12b \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004686 / 0.011353 (-0.006667) | 0.002764 / 0.011008 (-0.008244) | 0.061411 / 0.038508 (0.022902) | 0.030450 / 0.023109 (0.007341) | 0.247648 / 0.275898 (-0.028250) | 0.278033 / 0.323480 (-0.045447) | 0.002903 / 0.007986 (-0.005082) | 0.002350 / 0.004328 (-0.001979) | 0.047514 / 0.004250 (0.043264) | 0.044446 / 0.037052 (0.007393) | 0.256170 / 0.258489 (-0.002319) | 0.285977 / 0.293841 (-0.007864) | 0.023407 / 0.128546 (-0.105139) | 0.007223 / 0.075646 (-0.068423) | 0.201274 / 0.419271 (-0.217997) | 0.054022 / 0.043533 (0.010489) | 0.253841 / 0.255139 (-0.001298) | 0.278219 / 0.283200 (-0.004980) | 0.017796 / 0.141683 (-0.123886) | 1.105950 / 1.452155 (-0.346205) | 1.182021 / 1.492716 (-0.310695) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089584 / 0.018006 (0.071578) | 0.299338 / 0.000490 (0.298849) | 0.000202 / 0.000200 (0.000003) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018974 / 0.037411 (-0.018437) | 0.062352 / 0.014526 (0.047826) | 0.073667 / 0.176557 (-0.102889) | 0.119225 / 0.737135 (-0.617911) | 0.075393 / 0.296338 (-0.220945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282749 / 0.215209 (0.067540) | 2.795822 / 2.077655 (0.718167) | 1.492946 / 1.504120 (-0.011174) | 1.382340 / 1.541195 (-0.158855) | 1.377281 / 
1.468490 (-0.091209) | 0.397361 / 4.584777 (-4.187415) | 2.379416 / 3.745712 (-1.366296) | 2.552967 / 5.269862 (-2.716895) | 1.546347 / 4.565676 (-3.019330) | 0.045851 / 0.424275 (-0.378424) | 0.004830 / 0.007607 (-0.002777) | 0.351194 / 0.226044 (0.125150) | 3.407406 / 2.268929 (1.138478) | 1.852983 / 55.444624 (-53.591641) | 1.536381 / 6.876477 (-5.340095) | 1.542786 / 2.142072 (-0.599287) | 0.471960 / 4.805227 (-4.333267) | 0.098336 / 6.500664 (-6.402328) | 0.041569 / 0.075469 (-0.033900) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.912718 / 1.841788 (-0.929070) | 11.339404 / 8.074308 (3.265095) | 10.480593 / 10.191392 (0.289201) | 0.139508 / 0.680424 (-0.540916) | 0.014210 / 0.534201 (-0.519991) | 0.268152 / 0.579283 (-0.311131) | 0.260503 / 0.434364 (-0.173860) | 0.304735 / 0.540337 (-0.235602) | 0.422155 / 1.386936 (-0.964781) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004714 / 0.011353 (-0.006638) | 0.002638 / 0.011008 (-0.008370) | 0.047967 / 0.038508 (0.009459) | 0.050758 / 0.023109 (0.027649) | 0.265619 / 0.275898 (-0.010279) | 0.286920 / 0.323480 (-0.036560) | 0.003936 / 0.007986 (-0.004050) | 0.002351 / 0.004328 (-0.001977) | 0.047642 / 0.004250 (0.043392) | 0.038412 / 0.037052 (0.001360) | 0.269561 / 0.258489 (0.011072) | 0.302057 / 0.293841 (0.008216) | 0.023893 / 0.128546 (-0.104653) | 0.006793 / 0.075646 (-0.068854) | 0.053091 / 0.419271 (-0.366180) | 0.032228 / 0.043533 (-0.011305) | 0.267110 / 0.255139 (0.011971) | 0.287211 / 0.283200 (0.004011) | 0.017945 / 0.141683 (-0.123738) | 1.191770 / 1.452155 (-0.260384) | 1.269644 / 1.492716 (-0.223072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088067 / 0.018006 (0.070061) | 0.298383 / 0.000490 (0.297893) | 0.000202 / 0.000200 (0.000002) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020685 / 0.037411 (-0.016726) | 0.069883 / 0.014526 (0.055357) | 0.080107 / 0.176557 (-0.096450) | 0.119311 / 0.737135 (-0.617825) | 0.080791 / 0.296338 (-0.215548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295781 / 0.215209 (0.080572) | 2.905536 / 2.077655 (0.827881) | 1.579184 / 1.504120 (0.075064) | 1.475937 / 1.541195 (-0.065258) | 1.533708 / 1.468490 (0.065218) | 0.409851 / 4.584777 (-4.174926) | 2.443217 / 3.745712 (-1.302496) | 2.543980 / 5.269862 (-2.725882) | 1.512187 / 4.565676 (-3.053489) | 0.046390 / 0.424275 (-0.377885) | 0.004762 / 0.007607 (-0.002845) | 0.345066 / 0.226044 (0.119021) | 3.485133 / 2.268929 (1.216204) | 1.954690 / 55.444624 (-53.489934) | 1.671104 / 6.876477 (-5.205372) | 1.655330 / 2.142072 (-0.486743) | 0.487910 / 4.805227 (-4.317317) | 0.097707 / 6.500664 (-6.402957) | 0.040379 / 0.075469 (-0.035090) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981620 / 1.841788 (-0.860168) | 11.806530 / 8.074308 (3.732222) | 10.868275 / 10.191392 (0.676883) | 0.141230 / 0.680424 (-0.539194) | 0.015785 / 0.534201 (-0.518416) | 0.271416 / 0.579283 (-0.307867) | 0.276048 / 0.434364 (-0.158316) | 0.310988 / 0.540337 (-0.229349) | 0.410078 / 1.386936 (-0.976858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ec565740dee10c466ade16f81dee2783e442ba55 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004803 / 0.011353 (-0.006550) | 0.002961 / 0.011008 (-0.008047) | 0.061431 / 0.038508 (0.022923) | 0.030189 / 0.023109 (0.007080) | 0.255755 / 0.275898 (-0.020143) | 0.277841 / 0.323480 (-0.045639) | 0.003083 / 0.007986 (-0.004902) | 0.002432 / 0.004328 (-0.001896) | 0.047674 / 0.004250 (0.043424) | 0.045066 / 0.037052 (0.008014) | 0.268701 / 0.258489 (0.010211) | 0.286673 / 0.293841 (-0.007168) | 0.023663 / 0.128546 (-0.104883) | 0.007148 / 0.075646 (-0.068499) | 0.201962 / 0.419271 (-0.217310) | 0.054953 / 0.043533 (0.011420) | 0.257155 / 0.255139 (0.002016) | 0.277769 / 0.283200 (-0.005431) | 0.017803 / 0.141683 (-0.123880) | 1.100270 / 1.452155 (-0.351884) | 1.146975 / 1.492716 (-0.345741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092776 / 0.018006 (0.074770) | 0.303786 / 0.000490 (0.303296) | 0.000237 / 0.000200 (0.000037) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019647 / 0.037411 (-0.017765) | 0.063211 / 0.014526 (0.048686) | 0.076684 / 0.176557 (-0.099873) | 0.121952 / 0.737135 (-0.615184) | 0.077202 / 0.296338 (-0.219137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282087 / 0.215209 (0.066878) | 2.789204 / 2.077655 (0.711550) | 1.510376 / 1.504120 (0.006256) | 1.384241 / 1.541195 (-0.156954) | 1.414949 / 
1.468490 (-0.053541) | 0.402206 / 4.584777 (-4.182570) | 2.377601 / 3.745712 (-1.368111) | 2.585354 / 5.269862 (-2.684508) | 1.592937 / 4.565676 (-2.972740) | 0.045217 / 0.424275 (-0.379058) | 0.004772 / 0.007607 (-0.002835) | 0.339584 / 0.226044 (0.113539) | 3.373184 / 2.268929 (1.104256) | 1.855196 / 55.444624 (-53.589428) | 1.599559 / 6.876477 (-5.276918) | 1.604421 / 2.142072 (-0.537651) | 0.467754 / 4.805227 (-4.337474) | 0.098244 / 6.500664 (-6.402420) | 0.042631 / 0.075469 (-0.032838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947680 / 1.841788 (-0.894108) | 11.539875 / 8.074308 (3.465567) | 10.340830 / 10.191392 (0.149438) | 0.145591 / 0.680424 (-0.534833) | 0.014367 / 0.534201 (-0.519834) | 0.270506 / 0.579283 (-0.308777) | 0.268825 / 0.434364 (-0.165539) | 0.308372 / 0.540337 (-0.231966) | 0.425039 / 1.386936 (-0.961897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004813 / 0.011353 (-0.006540) | 0.002931 / 0.011008 (-0.008078) | 0.047997 / 0.038508 (0.009489) | 0.050753 / 0.023109 (0.027644) | 0.272704 / 0.275898 (-0.003194) | 0.294045 / 0.323480 (-0.029435) | 0.004059 / 0.007986 (-0.003927) | 0.002491 / 0.004328 (-0.001838) | 0.047621 / 0.004250 (0.043371) | 0.038824 / 0.037052 (0.001772) | 0.275322 / 0.258489 (0.016833) | 0.306447 / 0.293841 (0.012606) | 0.024402 / 0.128546 (-0.104145) | 0.007252 / 0.075646 (-0.068394) | 0.053346 / 0.419271 (-0.365925) | 0.032224 / 0.043533 (-0.011309) | 0.271468 / 0.255139 (0.016329) | 0.289429 / 0.283200 (0.006229) | 0.018285 / 0.141683 (-0.123398) | 1.116743 / 1.452155 (-0.335412) | 1.182724 / 1.492716 (-0.309993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091899 / 0.018006 (0.073893) | 0.299161 / 0.000490 (0.298671) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021823 / 0.037411 (-0.015588) | 0.071227 / 0.014526 (0.056701) | 0.080503 / 0.176557 (-0.096053) | 0.120243 / 0.737135 (-0.616892) | 0.082328 / 0.296338 (-0.214010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.324951 / 0.215209 (0.109742) | 2.842358 / 2.077655 (0.764703) | 1.602317 / 1.504120 (0.098197) | 1.481103 / 1.541195 (-0.060091) | 1.497557 / 1.468490 (0.029067) | 0.406523 / 4.584777 (-4.178254) | 2.402743 / 3.745712 (-1.342970) | 2.545435 / 5.269862 (-2.724427) | 1.534071 / 4.565676 (-3.031605) | 0.046914 / 0.424275 (-0.377361) | 0.004728 / 0.007607 (-0.002879) | 0.341544 / 0.226044 (0.115499) | 3.412017 / 2.268929 (1.143089) | 1.937442 / 55.444624 (-53.507182) | 1.668774 / 6.876477 (-5.207703) | 1.668908 / 2.142072 (-0.473165) | 0.477398 / 4.805227 (-4.327829) | 0.098531 / 6.500664 (-6.402133) | 0.041077 / 0.075469 (-0.034392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983888 / 1.841788 (-0.857900) | 12.072703 / 8.074308 (3.998395) | 11.028622 / 10.191392 (0.837230) | 0.148097 / 0.680424 (-0.532327) | 0.015869 / 0.534201 (-0.518332) | 0.267609 / 0.579283 (-0.311674) | 0.272345 / 0.434364 (-0.162019) | 0.303840 / 0.540337 (-0.236497) | 0.409199 / 1.386936 (-0.977737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1487df064580bd23458234fab2e85876d9364e03 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005016 / 0.011353 (-0.006337) | 0.002931 / 0.011008 (-0.008077) | 0.062142 / 0.038508 (0.023634) | 0.030758 / 0.023109 (0.007648) | 0.251689 / 0.275898 (-0.024209) | 0.272114 / 0.323480 (-0.051366) | 0.004102 / 0.007986 (-0.003884) | 0.002500 / 0.004328 (-0.001828) | 0.049187 / 0.004250 (0.044937) | 0.047150 / 0.037052 (0.010098) | 0.256497 / 0.258489 (-0.001992) | 0.288069 / 0.293841 (-0.005772) | 0.023915 / 0.128546 (-0.104632) | 0.007204 / 0.075646 (-0.068442) | 0.204257 / 0.419271 (-0.215015) | 0.063879 / 0.043533 (0.020346) | 0.253008 / 0.255139 (-0.002131) | 0.266554 / 0.283200 (-0.016645) | 0.018929 / 0.141683 (-0.122754) | 1.140547 / 1.452155 (-0.311608) | 1.197049 / 1.492716 (-0.295668) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094111 / 0.018006 (0.076105) | 0.301618 / 0.000490 (0.301128) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018614 / 0.037411 (-0.018797) | 0.062426 / 0.014526 (0.047900) | 0.073079 / 0.176557 (-0.103477) | 0.120313 / 0.737135 (-0.616823) | 0.076445 / 0.296338 (-0.219894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285151 / 0.215209 (0.069942) | 2.754272 / 2.077655 (0.676617) | 1.485254 / 1.504120 (-0.018866) | 1.368412 / 1.541195 (-0.172783) | 1.402819 / 
1.468490 (-0.065671) | 0.396561 / 4.584777 (-4.188216) | 2.375708 / 3.745712 (-1.370004) | 2.656088 / 5.269862 (-2.613773) | 1.588676 / 4.565676 (-2.977001) | 0.048662 / 0.424275 (-0.375613) | 0.004963 / 0.007607 (-0.002644) | 0.339747 / 0.226044 (0.113702) | 3.315841 / 2.268929 (1.046912) | 1.841439 / 55.444624 (-53.603186) | 1.547803 / 6.876477 (-5.328674) | 1.601872 / 2.142072 (-0.540200) | 0.468637 / 4.805227 (-4.336591) | 0.099423 / 6.500664 (-6.401241) | 0.041926 / 0.075469 (-0.033543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933058 / 1.841788 (-0.908730) | 11.680870 / 8.074308 (3.606561) | 10.239009 / 10.191392 (0.047617) | 0.129974 / 0.680424 (-0.550450) | 0.014081 / 0.534201 (-0.520120) | 0.273076 / 0.579283 (-0.306207) | 0.261914 / 0.434364 (-0.172450) | 0.305982 / 0.540337 (-0.234356) | 0.430623 / 1.386936 (-0.956313) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004969 / 0.011353 (-0.006384) | 0.003084 / 0.011008 (-0.007924) | 0.048686 / 0.038508 (0.010178) | 0.057234 / 0.023109 (0.034125) | 0.295408 / 0.275898 (0.019510) | 0.323774 / 0.323480 (0.000294) | 0.004014 / 0.007986 (-0.003972) | 0.002423 / 0.004328 (-0.001905) | 0.048000 / 0.004250 (0.043749) | 0.039872 / 0.037052 (0.002820) | 0.294717 / 0.258489 (0.036228) | 0.331149 / 0.293841 (0.037309) | 0.027884 / 0.128546 (-0.100662) | 0.007155 / 0.075646 (-0.068491) | 0.053812 / 0.419271 (-0.365460) | 0.032483 / 0.043533 (-0.011050) | 0.293402 / 0.255139 (0.038263) | 0.312553 / 0.283200 (0.029354) | 0.017848 / 0.141683 (-0.123835) | 1.125600 / 1.452155 (-0.326554) | 1.189469 / 1.492716 (-0.303248) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096198 / 0.018006 (0.078191) | 0.305096 / 0.000490 (0.304607) | 0.000229 / 0.000200 (0.000029) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021992 / 0.037411 (-0.015419) | 0.072082 / 0.014526 (0.057556) | 0.082704 / 0.176557 (-0.093853) | 0.124512 / 0.737135 (-0.612624) | 0.084541 / 0.296338 (-0.211797) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296440 / 0.215209 (0.081231) | 2.923392 / 2.077655 (0.845738) | 1.599057 / 1.504120 (0.094937) | 1.480473 / 1.541195 (-0.060722) | 1.551837 / 1.468490 (0.083347) | 0.418618 / 4.584777 (-4.166159) | 2.472727 / 3.745712 (-1.272985) | 2.796141 / 5.269862 (-2.473721) | 1.629139 / 4.565676 (-2.936538) | 0.047703 / 0.424275 (-0.376572) | 0.004971 / 0.007607 (-0.002636) | 0.354453 / 0.226044 (0.128408) | 3.514861 / 2.268929 (1.245932) | 1.993597 / 55.444624 (-53.451028) | 1.694386 / 6.876477 (-5.182090) | 1.748562 / 2.142072 (-0.393510) | 0.487158 / 4.805227 (-4.318070) | 0.102021 / 6.500664 (-6.398643) | 0.042648 / 0.075469 (-0.032821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974950 / 1.841788 (-0.866837) | 13.391204 / 8.074308 (5.316896) | 11.474696 / 10.191392 (1.283304) | 0.142618 / 0.680424 (-0.537806) | 0.016163 / 0.534201 (-0.518038) | 0.271453 / 0.579283 (-0.307830) | 0.287049 / 0.434364 (-0.147315) | 0.309069 / 0.540337 (-0.231268) | 0.417117 / 1.386936 (-0.969819) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35a3422cfcebfef5b09ae70c22843ffadaf44c46 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004974 / 0.011353 (-0.006379) | 0.002950 / 0.011008 (-0.008058) | 0.061856 / 0.038508 (0.023348) | 0.030539 / 0.023109 (0.007429) | 0.250105 / 0.275898 (-0.025793) | 0.276687 / 0.323480 (-0.046793) | 0.003077 / 0.007986 (-0.004908) | 0.002412 / 0.004328 (-0.001916) | 0.048336 / 0.004250 (0.044086) | 0.045849 / 0.037052 (0.008797) | 0.251757 / 0.258489 (-0.006732) | 0.284914 / 0.293841 (-0.008927) | 0.024033 / 0.128546 (-0.104513) | 0.007343 / 0.075646 (-0.068303) | 0.202867 / 0.419271 (-0.216405) | 0.061294 / 0.043533 (0.017762) | 0.263590 / 0.255139 (0.008451) | 0.272744 / 0.283200 (-0.010455) | 0.019613 / 0.141683 (-0.122070) | 1.104263 / 1.452155 (-0.347892) | 1.164128 / 1.492716 (-0.328588) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094261 / 0.018006 (0.076255) | 0.303340 / 0.000490 (0.302850) | 0.000215 / 0.000200 (0.000015) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.062727 / 0.014526 (0.048201) | 0.074955 / 0.176557 (-0.101602) | 0.124810 / 0.737135 (-0.612326) | 0.074335 / 0.296338 (-0.222004) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279368 / 0.215209 (0.064159) | 2.721641 / 2.077655 (0.643986) | 1.510773 / 1.504120 (0.006653) | 1.364349 / 1.541195 (-0.176845) | 1.386044 / 
1.468490 (-0.082446) | 0.403051 / 4.584777 (-4.181726) | 2.416525 / 3.745712 (-1.329187) | 2.623198 / 5.269862 (-2.646663) | 1.560869 / 4.565676 (-3.004808) | 0.046613 / 0.424275 (-0.377662) | 0.004861 / 0.007607 (-0.002746) | 0.337875 / 0.226044 (0.111830) | 3.289956 / 2.268929 (1.021028) | 1.851707 / 55.444624 (-53.592917) | 1.571092 / 6.876477 (-5.305385) | 1.600328 / 2.142072 (-0.541745) | 0.480766 / 4.805227 (-4.324461) | 0.099138 / 6.500664 (-6.401526) | 0.041691 / 0.075469 (-0.033779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941162 / 1.841788 (-0.900626) | 11.745335 / 8.074308 (3.671027) | 10.645509 / 10.191392 (0.454117) | 0.132506 / 0.680424 (-0.547918) | 0.015192 / 0.534201 (-0.519009) | 0.272483 / 0.579283 (-0.306800) | 0.270269 / 0.434364 (-0.164094) | 0.309580 / 0.540337 (-0.230758) | 0.431513 / 1.386936 (-0.955423) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003069 / 0.011008 (-0.007939) | 0.048605 / 0.038508 (0.010097) | 0.059557 / 0.023109 (0.036448) | 0.275092 / 0.275898 (-0.000806) | 0.298910 / 0.323480 (-0.024570) | 0.004198 / 0.007986 (-0.003788) | 0.002499 / 0.004328 (-0.001830) | 0.048248 / 0.004250 (0.043997) | 0.040302 / 0.037052 (0.003249) | 0.279539 / 0.258489 (0.021050) | 0.312500 / 0.293841 (0.018659) | 0.025407 / 0.128546 (-0.103140) | 0.007364 / 0.075646 (-0.068282) | 0.053086 / 0.419271 (-0.366186) | 0.033291 / 0.043533 (-0.010242) | 0.276521 / 0.255139 (0.021382) | 0.292943 / 0.283200 (0.009743) | 0.019416 / 0.141683 (-0.122267) | 1.151734 / 1.452155 (-0.300421) | 1.205021 / 1.492716 (-0.287695) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094112 / 0.018006 (0.076106) | 0.309534 / 0.000490 (0.309044) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021539 / 0.037411 (-0.015872) | 0.070325 / 0.014526 (0.055799) | 0.080468 / 0.176557 (-0.096089) | 0.121095 / 0.737135 (-0.616040) | 0.082008 / 0.296338 (-0.214331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302591 / 0.215209 (0.087382) | 2.943475 / 2.077655 (0.865820) | 1.597970 / 1.504120 (0.093850) | 1.468774 / 1.541195 (-0.072421) | 1.504812 / 1.468490 (0.036322) | 0.413715 / 4.584777 (-4.171062) | 2.418319 / 3.745712 (-1.327393) | 2.616656 / 5.269862 (-2.653206) | 1.558165 / 4.565676 (-3.007512) | 0.047169 / 0.424275 (-0.377106) | 0.004761 / 0.007607 (-0.002846) | 0.347225 / 0.226044 (0.121180) | 3.479624 / 2.268929 (1.210696) | 1.961253 / 55.444624 (-53.483371) | 1.673532 / 6.876477 (-5.202944) | 1.698900 / 2.142072 (-0.443172) | 0.488373 / 4.805227 (-4.316855) | 0.098322 / 6.500664 (-6.402342) | 0.040832 / 0.075469 (-0.034637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009133 / 1.841788 (-0.832655) | 13.373258 / 8.074308 (5.298949) | 11.327360 / 10.191392 (1.135968) | 0.135778 / 0.680424 (-0.544646) | 0.015813 / 0.534201 (-0.518388) | 0.275404 / 0.579283 (-0.303879) | 0.282564 / 0.434364 (-0.151799) | 0.311830 / 0.540337 (-0.228507) | 0.419008 / 1.386936 (-0.967928) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4592709e5399f91b5b392f4fd73687985365c909 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004899 / 0.011353 (-0.006454) | 0.002780 / 0.011008 (-0.008229) | 0.061997 / 0.038508 (0.023489) | 0.029909 / 0.023109 (0.006800) | 0.233445 / 0.275898 (-0.042453) | 0.254128 / 0.323480 (-0.069351) | 0.002927 / 0.007986 (-0.005058) | 0.002396 / 0.004328 (-0.001932) | 0.048118 / 0.004250 (0.043868) | 0.044520 / 0.037052 (0.007468) | 0.237594 / 0.258489 (-0.020895) | 0.268407 / 0.293841 (-0.025434) | 0.023517 / 0.128546 (-0.105029) | 0.007035 / 0.075646 (-0.068612) | 0.202803 / 0.419271 (-0.216469) | 0.057692 / 0.043533 (0.014159) | 0.237058 / 0.255139 (-0.018081) | 0.252966 / 0.283200 (-0.030233) | 0.017934 / 0.141683 (-0.123748) | 1.096406 / 1.452155 (-0.355749) | 1.153509 / 1.492716 (-0.339207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091812 / 0.018006 (0.073806) | 0.298410 / 0.000490 (0.297920) | 0.000228 / 0.000200 (0.000028) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018333 / 0.037411 (-0.019078) | 0.062685 / 0.014526 (0.048159) | 0.073295 / 0.176557 (-0.103261) | 0.119234 / 0.737135 (-0.617901) | 0.074603 / 0.296338 (-0.221736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279078 / 0.215209 (0.063869) | 2.768535 / 2.077655 (0.690880) | 1.457049 / 1.504120 (-0.047071) | 1.326870 / 1.541195 (-0.214325) | 1.349657 / 
1.468490 (-0.118833) | 0.405003 / 4.584777 (-4.179774) | 2.428726 / 3.745712 (-1.316986) | 2.595776 / 5.269862 (-2.674086) | 1.557879 / 4.565676 (-3.007797) | 0.045985 / 0.424275 (-0.378291) | 0.004854 / 0.007607 (-0.002753) | 0.336437 / 0.226044 (0.110392) | 3.317330 / 2.268929 (1.048401) | 1.784525 / 55.444624 (-53.660100) | 1.500295 / 6.876477 (-5.376182) | 1.529869 / 2.142072 (-0.612203) | 0.473426 / 4.805227 (-4.331801) | 0.099609 / 6.500664 (-6.401055) | 0.042054 / 0.075469 (-0.033415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937154 / 1.841788 (-0.904633) | 11.482383 / 8.074308 (3.408075) | 10.468769 / 10.191392 (0.277377) | 0.132724 / 0.680424 (-0.547700) | 0.015242 / 0.534201 (-0.518959) | 0.281124 / 0.579283 (-0.298159) | 0.268603 / 0.434364 (-0.165761) | 0.311410 / 0.540337 (-0.228928) | 0.431817 / 1.386936 (-0.955119) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004695 / 0.011353 (-0.006658) | 0.002873 / 0.011008 (-0.008135) | 0.048133 / 0.038508 (0.009625) | 0.052505 / 0.023109 (0.029396) | 0.271679 / 0.275898 (-0.004219) | 0.292530 / 0.323480 (-0.030950) | 0.003844 / 0.007986 (-0.004142) | 0.002417 / 0.004328 (-0.001912) | 0.048619 / 0.004250 (0.044369) | 0.039152 / 0.037052 (0.002100) | 0.276575 / 0.258489 (0.018086) | 0.307836 / 0.293841 (0.013995) | 0.023877 / 0.128546 (-0.104669) | 0.006897 / 0.075646 (-0.068749) | 0.053241 / 0.419271 (-0.366031) | 0.032487 / 0.043533 (-0.011046) | 0.274205 / 0.255139 (0.019066) | 0.289701 / 0.283200 (0.006502) | 0.018250 / 0.141683 (-0.123432) | 1.137902 / 1.452155 (-0.314253) | 1.202043 / 1.492716 (-0.290673) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091453 / 0.018006 (0.073446) | 0.297032 / 0.000490 (0.296543) | 0.000224 / 0.000200 (0.000024) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021062 / 0.037411 (-0.016349) | 0.069848 / 0.014526 (0.055322) | 0.084337 / 0.176557 (-0.092219) | 0.119951 / 0.737135 (-0.617184) | 0.082805 / 0.296338 (-0.213533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297056 / 0.215209 (0.081846) | 2.890110 / 2.077655 (0.812456) | 1.609918 / 1.504120 (0.105798) | 1.491184 / 1.541195 (-0.050011) | 1.529433 / 1.468490 (0.060943) | 0.396081 / 4.584777 (-4.188696) | 2.408310 / 3.745712 (-1.337402) | 2.567905 / 5.269862 (-2.701957) | 1.514465 / 4.565676 (-3.051212) | 0.045329 / 0.424275 (-0.378946) | 0.004738 / 0.007607 (-0.002869) | 0.344373 / 0.226044 (0.118328) | 3.428333 / 2.268929 (1.159404) | 1.981401 / 55.444624 (-53.463223) | 1.688007 / 6.876477 (-5.188470) | 1.685542 / 2.142072 (-0.456531) | 0.478045 / 4.805227 (-4.327182) | 0.096664 / 6.500664 (-6.404001) | 0.040335 / 0.075469 (-0.035135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972912 / 1.841788 (-0.868876) | 12.055045 / 8.074308 (3.980737) | 10.821073 / 10.191392 (0.629681) | 0.139177 / 0.680424 (-0.541247) | 0.015046 / 0.534201 (-0.519155) | 0.275670 / 0.579283 (-0.303613) | 0.280366 / 0.434364 (-0.153998) | 0.315781 / 0.540337 (-0.224556) | 0.424536 / 1.386936 (-0.962400) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0684b471d6ca8a235162f5575f624b6eda7956c5 \"CML watermark\")\n",
"I'm finally merging as `transformers`/`tokenizers` dependency pins have been removed + `huggingface_hub 0.19.4` has fixed the deps incompatibility issue. All good now :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004435 / 0.011353 (-0.006918) | 0.002924 / 0.011008 (-0.008084) | 0.062159 / 0.038508 (0.023651) | 0.029639 / 0.023109 (0.006529) | 0.237470 / 0.275898 (-0.038428) | 0.269641 / 0.323480 (-0.053839) | 0.004124 / 0.007986 (-0.003862) | 0.002528 / 0.004328 (-0.001800) | 0.048114 / 0.004250 (0.043864) | 0.046055 / 0.037052 (0.009002) | 0.245844 / 0.258489 (-0.012645) | 0.278085 / 0.293841 (-0.015756) | 0.023152 / 0.128546 (-0.105394) | 0.007194 / 0.075646 (-0.068452) | 0.206493 / 0.419271 (-0.212778) | 0.055687 / 0.043533 (0.012155) | 0.243301 / 0.255139 (-0.011838) | 0.267645 / 0.283200 (-0.015555) | 0.017413 / 0.141683 (-0.124270) | 1.113071 / 1.452155 (-0.339083) | 1.201436 / 1.492716 (-0.291280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092576 / 0.018006 (0.074570) | 0.303516 / 0.000490 (0.303027) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019108 / 0.037411 (-0.018303) | 0.062326 / 0.014526 (0.047800) | 0.073711 / 0.176557 (-0.102846) | 0.120414 / 0.737135 (-0.616721) | 0.075837 / 0.296338 (-0.220501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278267 / 0.215209 (0.063058) | 2.766231 / 2.077655 (0.688576) | 1.455613 / 1.504120 (-0.048507) | 1.337128 / 1.541195 (-0.204066) | 1.357659 / 
1.468490 (-0.110831) | 0.404549 / 4.584777 (-4.180228) | 2.409084 / 3.745712 (-1.336628) | 2.645000 / 5.269862 (-2.624861) | 1.600475 / 4.565676 (-2.965201) | 0.046680 / 0.424275 (-0.377595) | 0.004887 / 0.007607 (-0.002720) | 0.340338 / 0.226044 (0.114294) | 3.332647 / 2.268929 (1.063719) | 1.852529 / 55.444624 (-53.592096) | 1.532442 / 6.876477 (-5.344035) | 1.550383 / 2.142072 (-0.591689) | 0.482702 / 4.805227 (-4.322525) | 0.101067 / 6.500664 (-6.399597) | 0.042132 / 0.075469 (-0.033337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945481 / 1.841788 (-0.896307) | 11.886240 / 8.074308 (3.811932) | 10.484620 / 10.191392 (0.293228) | 0.130906 / 0.680424 (-0.549518) | 0.014880 / 0.534201 (-0.519321) | 0.268836 / 0.579283 (-0.310447) | 0.268112 / 0.434364 (-0.166251) | 0.304300 / 0.540337 (-0.236038) | 0.440262 / 1.386936 (-0.946674) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005028 / 0.011353 (-0.006325) | 0.002937 / 0.011008 (-0.008071) | 0.049038 / 0.038508 (0.010530) | 0.057763 / 0.023109 (0.034653) | 0.273196 / 0.275898 (-0.002702) | 0.295519 / 0.323480 (-0.027961) | 0.004102 / 0.007986 (-0.003883) | 0.002487 / 0.004328 (-0.001841) | 0.049148 / 0.004250 (0.044898) | 0.040303 / 0.037052 (0.003251) | 0.279187 / 0.258489 (0.020698) | 0.311086 / 0.293841 (0.017245) | 0.024961 / 0.128546 (-0.103585) | 0.007264 / 0.075646 (-0.068382) | 0.055711 / 0.419271 (-0.363561) | 0.032355 / 0.043533 (-0.011178) | 0.274304 / 0.255139 (0.019165) | 0.290953 / 0.283200 (0.007753) | 0.018358 / 0.141683 (-0.123325) | 1.115984 / 1.452155 (-0.336170) | 1.190409 / 1.492716 (-0.302308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095765 / 0.018006 (0.077759) | 0.287947 / 0.000490 (0.287457) | 0.000242 / 0.000200 (0.000042) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022165 / 0.037411 (-0.015246) | 0.070465 / 0.014526 (0.055940) | 0.082078 / 0.176557 (-0.094479) | 0.120209 / 0.737135 (-0.616926) | 0.084573 / 0.296338 (-0.211765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298492 / 0.215209 (0.083283) | 2.924981 / 2.077655 (0.847327) | 1.597326 / 1.504120 (0.093206) | 1.459132 / 1.541195 (-0.082062) | 1.511471 / 1.468490 (0.042981) | 0.406671 / 4.584777 (-4.178106) | 2.443154 / 3.745712 (-1.302558) | 2.591131 / 5.269862 (-2.678731) | 1.549931 / 4.565676 (-3.015745) | 0.047042 / 0.424275 (-0.377234) | 0.004891 / 0.007607 (-0.002716) | 0.346274 / 0.226044 (0.120230) | 3.456050 / 2.268929 (1.187121) | 1.959328 / 55.444624 (-53.485296) | 1.647631 / 6.876477 (-5.228845) | 1.692024 / 2.142072 (-0.450049) | 0.478307 / 4.805227 (-4.326920) | 0.098738 / 6.500664 (-6.401926) | 0.041743 / 0.075469 (-0.033726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984619 / 1.841788 (-0.857168) | 12.403984 / 8.074308 (4.329676) | 10.974347 / 10.191392 (0.782955) | 0.132893 / 0.680424 (-0.547530) | 0.015504 / 0.534201 (-0.518697) | 0.275354 / 0.579283 (-0.303929) | 0.283312 / 0.434364 (-0.151052) | 0.313661 / 0.540337 (-0.226677) | 0.419065 / 1.386936 (-0.967871) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c65315e4a8308f04fcb025039afe2a2e43b5684e \"CML watermark\")\n"
] | 2023-11-14T10:47:09 | 2023-11-17T14:23:20 | 2023-11-17T14:17:00 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6414",
"html_url": "https://github.com/huggingface/datasets/pull/6414",
"diff_url": "https://github.com/huggingface/datasets/pull/6414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6414.patch",
"merged_at": "2023-11-17T14:17:00"
} | Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782.
**TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets`, so it's fine. From Python 3.9 we can set `usedforsecurity=False` in any `hashlib` method, which is mandatory for companies that forbid the use of `hashlib` for security purposes. This PR fixes that.
**Note:** before merging this we need to release a new tokenizers version that would allow the newest `huggingface_hub` version (see https://github.com/huggingface/tokenizers/pull/1385). Otherwise it might create friction for users who want to install `datasets` + `tokenizers` at the same time. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6414/timeline | null | null | true |
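The PR body above mentions passing `usedforsecurity=False` to `hashlib` from Python 3.9 onward. A minimal sketch of what such non-security hashing looks like (the helper name and fallback behaviour are assumptions, not the PR's actual implementation):

```python
import hashlib
import sys

def insecure_md5(data: bytes = b""):
    # Python 3.9+ constructors accept `usedforsecurity`, which marks the digest
    # as not being used for security purposes (relevant on FIPS-restricted systems).
    if sys.version_info >= (3, 9):
        return hashlib.md5(data, usedforsecurity=False)
    # Older interpreters do not expose the flag; fall back to the plain call.
    return hashlib.md5(data)

print(insecure_md5(b"cached file contents").hexdigest())
```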
https://api.github.com/repos/huggingface/datasets/issues/6412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6412/comments | https://api.github.com/repos/huggingface/datasets/issues/6412/events | https://github.com/huggingface/datasets/issues/6412 | 1,992,401,594 | I_kwDODunzps52waK6 | 6,412 | User token is printed out! | {
"login": "mohsen-goodarzi",
"id": 25702692,
"node_id": "MDQ6VXNlcjI1NzAyNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/25702692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohsen-goodarzi",
"html_url": "https://github.com/mohsen-goodarzi",
"followers_url": "https://api.github.com/users/mohsen-goodarzi/followers",
"following_url": "https://api.github.com/users/mohsen-goodarzi/following{/other_user}",
"gists_url": "https://api.github.com/users/mohsen-goodarzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohsen-goodarzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohsen-goodarzi/subscriptions",
"organizations_url": "https://api.github.com/users/mohsen-goodarzi/orgs",
"repos_url": "https://api.github.com/users/mohsen-goodarzi/repos",
"events_url": "https://api.github.com/users/mohsen-goodarzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohsen-goodarzi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning."
] | 2023-11-14T10:01:34 | 2023-11-14T22:19:46 | 2023-11-14T22:19:46 | NONE | null | null | null | This line prints the user token on the command line! Is it safe?
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6412/timeline | null | completed | false |
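Issue 6412 above is about a deprecation warning that interpolates the raw token into its message. A minimal, hypothetical sketch of the redaction idea behind the linked fix (the helper name and wording are assumptions, not the library's actual code):

```python
import warnings

def warn_use_auth_token_deprecated(use_auth_token) -> None:
    # Only reveal whether a token was passed, never the secret itself.
    shown = "<redacted>" if isinstance(use_auth_token, str) else repr(use_auth_token)
    warnings.warn(
        f"'use_auth_token={shown}' is deprecated; please use 'token' instead.",
        FutureWarning,
    )

warn_use_auth_token_deprecated("hf_example_secret")  # message shows '<redacted>'
```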
https://api.github.com/repos/huggingface/datasets/issues/6411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6411/comments | https://api.github.com/repos/huggingface/datasets/issues/6411/events | https://github.com/huggingface/datasets/pull/6411 | 1,992,386,630 | PR_kwDODunzps5fZE9F | 6,411 | Fix dependency conflict within CI build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-11-14T09:52:51 | 2023-11-14T10:05:59 | 2023-11-14T10:05:35 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6411",
"html_url": "https://github.com/huggingface/datasets/pull/6411",
"diff_url": "https://github.com/huggingface/datasets/pull/6411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6411.patch",
"merged_at": "2023-11-14T10:05:34"
} | Manually fix the dependency conflict on the `typing-extensions` version, caused by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`).
This is a temporary hot fix for our CI documentation build until we stop using `apache-beam`.
Fix #6406. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6410/comments | https://api.github.com/repos/huggingface/datasets/issues/6410/events | https://github.com/huggingface/datasets/issues/6410 | 1,992,100,209 | I_kwDODunzps52vQlx | 6,410 | Datasets does not load HuggingFace Repository properly | {
"login": "MikeDoes",
"id": 40600201,
"node_id": "MDQ6VXNlcjQwNjAwMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/40600201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeDoes",
"html_url": "https://github.com/MikeDoes",
"followers_url": "https://api.github.com/users/MikeDoes/followers",
"following_url": "https://api.github.com/users/MikeDoes/following{/other_user}",
"gists_url": "https://api.github.com/users/MikeDoes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MikeDoes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MikeDoes/subscriptions",
"organizations_url": "https://api.github.com/users/MikeDoes/orgs",
"repos_url": "https://api.github.com/users/MikeDoes/repos",
"events_url": "https://api.github.com/users/MikeDoes/events{/privacy}",
"received_events_url": "https://api.github.com/users/MikeDoes/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should differentiate these extensions because it's safe to assume that loading them together will lead to an error. WDYT @lhoestq? ",
"Raising an error if there is a mix of json and jsonl in the builder makes sense yea"
] | 2023-11-14T06:50:49 | 2023-11-16T06:54:36 | null | NONE | null | null | null | ### Describe the bug
Dear Datasets team,
We have just published a dataset on Hugging Face:
https://huggingface.co./ai4privacy
However, when trying to read it using the Datasets library we get an error. As I understand it, jsonl files are compatible, so could you please clarify how we can solve the issue? Please let us know; we would be more than happy to adapt the structure of the repository or the metadata so it works more smoothly:
```python
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
```
```
Downloading readme: 100%
11.8k/11.8k [00:00<00:00, 512kB/s]
Downloading data files: 100%
1/1 [00:11<00:00, 11.16s/it]
Downloading data: 100%
64.3M/64.3M [00:02<00:00, 32.9MB/s]
Downloading data: 100%
113M/113M [00:03<00:00, 35.0MB/s]
Downloading data: 100%
97.7M/97.7M [00:02<00:00, 46.1MB/s]
Downloading data: 100%
90.8M/90.8M [00:02<00:00, 44.9MB/s]
Downloading data: 100%
7.63k/7.63k [00:00<00:00, 41.0kB/s]
Downloading data: 100%
1.03k/1.03k [00:00<00:00, 9.44kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 29.26it/s]
Generating train split:
209261/0 [00:05<00:00, 41201.25 examples/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1939 )
-> 1940 writer.write_table(table)
1941 num_examples_progress_update += len(table)
8 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
571 pa_table = pa_table.combine_chunks()
--> 572 pa_table = table_cast(pa_table, self._schema)
573 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2327 if table.schema != schema:
-> 2328 return cast_table_to_schema(table, schema)
2329 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2285 if sorted(table.column_names) != sorted(features):
-> 2286 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2287 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
JOBTYPE: int64
PHONEIMEI: int64
ACCOUNTNAME: int64
VEHICLEVIN: int64
GENDER: int64
CURRENCYCODE: int64
CREDITCARDISSUER: int64
JOBTITLE: int64
SEX: int64
CURRENCYSYMBOL: int64
IP: int64
EYECOLOR: int64
MASKEDNUMBER: int64
SECONDARYADDRESS: int64
JOBAREA: int64
ACCOUNTNUMBER: int64
language: string
BITCOINADDRESS: int64
MAC: int64
SSN: int64
EMAIL: int64
ETHEREUMADDRESS: int64
DOB: int64
VEHICLEVRM: int64
IPV6: int64
AMOUNT: int64
URL: int64
PHONENUMBER: int64
PIN: int64
TIME: int64
CREDITCARDNUMBER: int64
FIRSTNAME: int64
IBAN: int64
BIC: int64
COUNTY: int64
STATE: int64
LASTNAME: int64
ZIPCODE: int64
HEIGHT: int64
ORDINALDIRECTION: int64
MIDDLENAME: int64
STREET: int64
USERNAME: int64
CURRENCY: int64
PREFIX: int64
USERAGENT: int64
CURRENCYNAME: int64
LITECOINADDRESS: int64
CREDITCARDCVV: int64
AGE: int64
CITY: int64
PASSWORD: int64
BUILDINGNUMBER: int64
IPV4: int64
NEARBYGPSCOORDINATE: int64
DATE: int64
COMPANYNAME: int64
to
{'masked_text': Value(dtype='string', id=None), 'unmasked_text': Value(dtype='string', id=None), 'privacy_mask': Value(dtype='string', id=None), 'span_labels': Value(dtype='string', id=None), 'bio_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'tokenised_text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-2-f1c6811e9c83>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("ai4privacy/pii-masking-200k")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2151
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1959
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
Thank you and have a great day ahead
### Steps to reproduce the bug
Open Google Colab Notebook:
Run command:
!pip3 install datasets
Run code:
from datasets import load_dataset
dataset = load_dataset("ai4privacy/pii-masking-200k")
### Expected behavior
Download the dataset successfully from HuggingFace to the notebook so that we can start working with it
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6410/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6410/timeline | null | null | false |
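For the mixed `json`/`jsonl` repository discussed in issue 6410 above, the maintainer's workaround is to request only the `jsonl` shards. A minimal sketch of that call; the commented alternative with an explicit split mapping uses assumed file patterns:

```python
from datasets import load_dataset

# Workaround from the issue thread: only the *.jsonl shards are loaded,
# so stray .json files cannot break schema inference.
dataset = load_dataset("ai4privacy/pii-masking-200k", data_files=["*.jsonl"])

# Alternative with an explicit split mapping (patterns assumed for illustration):
# dataset = load_dataset(
#     "ai4privacy/pii-masking-200k",
#     data_files={"train": "train*.jsonl", "validation": "validation*.jsonl"},
# )
```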
https://api.github.com/repos/huggingface/datasets/issues/6409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6409/comments | https://api.github.com/repos/huggingface/datasets/issues/6409/events | https://github.com/huggingface/datasets/issues/6409 | 1,991,960,865 | I_kwDODunzps52uukh | 6,409 | using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception | {
"login": "neiblegy",
"id": 16574677,
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neiblegy",
"html_url": "https://github.com/neiblegy",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-11-14T04:21:01 | 2023-11-22T16:42:09 | 2023-11-22T16:42:09 | NONE | null | null | null | ### Describe the bug
I'm using `datasets.download.download_manager.DownloadManager` to download files like "file:///a/b/c.txt", and I call `disable_progress_bar()` to disable the progress bar. An exception is raised as follows:
`AttributeError: 'function' object has no attribute 'close'
Exception ignored in: <function TqdmCallback.__del__ at 0x7fa8683d84c0>
Traceback (most recent call last):
File "/home/protoss.gao/.local/lib/python3.9/site-packages/fsspec/callbacks.py", line 233, in __del__
self.tqdm.close()`
I checked your source code: in datasets/utils/file_utils.py:348 you define `TqdmCallback`, derived from `fsspec.callbacks.TqdmCallback`,
but in the newest fsspec code ([https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py](https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py)), line 146, `_DEFAULT_CALLBACK` takes effect in this case, while line 234 calls the `close()` function, which `_DEFAULT_CALLBACK` does not have.
So I think the `TqdmCallback` class in datasets/utils/file_utils.py may need to override the `__del__` function, or this bug should be reported to fsspec.
### Steps to reproduce the bug
As described in the bug description above.
### Expected behavior
no exception
### Environment info
datasets: 2.14.4
python: 3.9
platform: x86_64 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6409/timeline | null | completed | false |
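Issue 6409 above suggests overriding `__del__` in the `TqdmCallback` subclass so that a missing `close()` does not raise on teardown. A minimal sketch of that idea, assuming `fsspec.callbacks.TqdmCallback` as the base class (this is an illustration, not the fix that shipped):

```python
import fsspec.callbacks

class SafeTqdmCallback(fsspec.callbacks.TqdmCallback):
    def __del__(self):
        # When progress bars are disabled, `self.tqdm` may not be a tqdm
        # instance with a `.close()` method, so guard before calling it.
        bar = getattr(self, "tqdm", None)
        if hasattr(bar, "close"):
            bar.close()
```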
https://api.github.com/repos/huggingface/datasets/issues/6408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6408/comments | https://api.github.com/repos/huggingface/datasets/issues/6408/events | https://github.com/huggingface/datasets/issues/6408 | 1,991,902,972 | I_kwDODunzps52ugb8 | 6,408 | `IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns` | {
"login": "shmily326",
"id": 24571857,
"node_id": "MDQ6VXNlcjI0NTcxODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24571857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shmily326",
"html_url": "https://github.com/shmily326",
"followers_url": "https://api.github.com/users/shmily326/followers",
"following_url": "https://api.github.com/users/shmily326/following{/other_user}",
"gists_url": "https://api.github.com/users/shmily326/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shmily326/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shmily326/subscriptions",
"organizations_url": "https://api.github.com/users/shmily326/orgs",
"repos_url": "https://api.github.com/users/shmily326/repos",
"events_url": "https://api.github.com/users/shmily326/events{/privacy}",
"received_events_url": "https://api.github.com/users/shmily326/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-11-14T03:12:08 | 2023-11-16T06:24:10 | null | NONE | null | null | null | ### Describe the bug
IterableDataset loses (does not keep) columns when the map function adds columns whose names are listed in remove_columns;
Dataset keeps them.
May be related to the code below:
https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756
### Steps to reproduce the bug
```python
dataset: IterableDataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': 'xxx, 'history': xxx}
```
```python
# when load_dataset with streaming=False, the column_names are kept:
dataset: Dataset = load_dataset("Anthropic/hh-rlhf", streaming=False, split="train")
column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected']
# map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx}
dataset = dataset.map(map_fn, batched=True, remove_columns=column_names)
next(iter(dataset))
# output
# {'prompt': 'xxx, 'history': xxx, "chosen": xxx, "rejected": xxx}
```
### Expected behavior
IterableDataset should keep columns when the map function adds columns whose names are in remove_columns, matching the Dataset behavior.
### Environment info
datasets==2.14.6 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6408/timeline | null | null | false |
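A practical way to sidestep the streaming discrepancy reported in issue 6408 above is to re-emit the original columns from the map function and not list them in `remove_columns` at all. A minimal sketch under that assumption (the `map_fn` body is hypothetical):

```python
from datasets import load_dataset

dataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train")

def map_fn(batch):
    # Hypothetical transformation: add new columns and re-emit the originals.
    return {
        "prompt": batch["chosen"],               # placeholder derivation
        "history": [[] for _ in batch["chosen"]],
        "chosen": batch["chosen"],
        "rejected": batch["rejected"],
    }

# Since map_fn re-emits "chosen"/"rejected", skipping `remove_columns` keeps
# them in both streaming and non-streaming mode.
dataset = dataset.map(map_fn, batched=True)
print(next(iter(dataset)).keys())  # chosen, rejected, prompt, history
```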
https://api.github.com/repos/huggingface/datasets/issues/6407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6407/comments | https://api.github.com/repos/huggingface/datasets/issues/6407/events | https://github.com/huggingface/datasets/issues/6407 | 1,991,514,079 | I_kwDODunzps52tBff | 6,407 | Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object" | {
"login": "eawer",
"id": 1741779,
"node_id": "MDQ6VXNlcjE3NDE3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eawer",
"html_url": "https://github.com/eawer",
"followers_url": "https://api.github.com/users/eawer/followers",
"following_url": "https://api.github.com/users/eawer/following{/other_user}",
"gists_url": "https://api.github.com/users/eawer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eawer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eawer/subscriptions",
"organizations_url": "https://api.github.com/users/eawer/orgs",
"repos_url": "https://api.github.com/users/eawer/repos",
"events_url": "https://api.github.com/users/eawer/events{/privacy}",
"received_events_url": "https://api.github.com/users/eawer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-11-13T21:27:43 | 2023-11-13T21:27:43 | null | NONE | null | null | null | ### Describe the bug
I'm trying to read a parquet file from a private S3 bucket using the `load_dataset` function, but I receive a `TypeError: cannot pickle '_contextvars.Context' object` error.
I'm working on a machine with a `~/.aws/credentials` file. I can't share the credentials or the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs.
### Steps to reproduce the bug
```python
import s3fs
from datasets import load_dataset
from aiobotocore.session import get_session
DATA_PATH = "s3://bucket_name/path/validation.parquet"
fs = s3fs.S3FileSystem(session=get_session())
```
`fs.stat` returns the data, so we can say that fs is working and we have all permissions
```python
fs.stat(DATA_PATH)
# Returns:
# {'ETag': '"123123a-19"',
# 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()),
# 'size': 312237170,
# 'name': 'bucket_name/path/validation.parquet',
# 'type': 'file',
# 'StorageClass': 'STANDARD',
# 'VersionId': 'Abc.HtmsC9h.as',
# 'ContentType': 'binary/octet-stream'}
```
```python
fs.storage_options
# Returns:
# {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>}
```
```python
ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
```
<details>
<summary>Returns the following error (expandable)</summary>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[88], line 1
----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options)
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1025 split_dict = SplitDict(dataset_name=self.dataset_name)
1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
-> 1027 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
1029 # Checksums verification
1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)
32 if not self.config.data_files:
33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}")
---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)
35 if isinstance(data_files, (str, list, tuple)):
36 files = data_files
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls)
549 def download_and_extract(self, url_or_urls):
550 """Download and extract given `url_or_urls`.
551
552 Is roughly equivalent to:
(...)
563 extracted_path(s): `str`, extracted paths of given URL(s).
564 """
--> 565 return self.extract(self.download(url_or_urls))
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls)
401 def download(self, url_or_urls):
402 """Download given URL(s).
403
404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior.
(...)
418 ```
419 """
--> 420 download_config = self.download_config.copy()
421 download_config.extract_compressed_file = False
422 if download_config.download_desc is None:
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0)
93 def copy(self) -> "DownloadConfig":
---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy)
204 append = y.append
205 for a in x:
--> 206 append(deepcopy(a, memo))
207 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo)
237 def _deepcopy_method(x, memo): # Copy instance methods
--> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo))
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
[... skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)]
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
263 if deep and args:
264 args = (deepcopy(arg, memo) for arg in args)
--> 265 y = func(*args)
266 if deep:
267 memo[id(x)] = y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0)
262 deep = memo is not None
263 if deep and args:
--> 264 args = (deepcopy(arg, memo) for arg in args)
265 y = func(*args)
266 if deep:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
269 if state is not None:
270 if deep:
--> 271 state = deepcopy(state, memo)
272 if hasattr(y, '__setstate__'):
273 y.__setstate__(state)
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0)
210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):
--> 211 y = [deepcopy(a, memo) for a in x]
212 # We're not going to put the tuple in the memo, but it's still important we
213 # check for it, in case the tuple contains recursive mutable structures.
214 try:
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy)
229 memo[id(x)] = y
230 for key, value in x.items():
--> 231 y[deepcopy(key, memo)] = deepcopy(value, memo)
232 return y
File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_contextvars.Context' object
```
</details>
### Expected behavior
If I choose to load the file from the public bucket with `anon=True` passed, everything works, so I expected loading from the private bucket to work as well.
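For reference, a minimal sketch of what such a load typically looks like with `storage_options` (the bucket path and the environment-variable credential source below are hypothetical placeholders, not taken from the actual setup):
```python
# Sketch only: hypothetical bucket path and credential source.
import os
from datasets import load_dataset

# For the public bucket, {"anon": True} is enough; for the private one the
# credentials are passed through s3fs-style storage options instead.
storage_options = {
    "key": os.environ["AWS_ACCESS_KEY_ID"],
    "secret": os.environ["AWS_SECRET_ACCESS_KEY"],
}

ds = load_dataset(
    "csv",
    data_files="s3://my-private-bucket/data.csv",  # placeholder path
    storage_options=storage_options,
)
```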
### Environment info
- `datasets` version: 2.14.6
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.19.1
- PyArrow version: 14.0.1
- Pandas version: 1.5.3
- s3fs version: 2023.10.0
- fsspec version: 2023.10.0
- aiobotocore version: 2.7.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6407/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6406/comments | https://api.github.com/repos/huggingface/datasets/issues/6406/events | https://github.com/huggingface/datasets/issues/6406 | 1,990,469,045 | I_kwDODunzps52pCW1 | 6,406 | CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 2023-11-13T11:36:10 | 2023-11-14T10:05:36 | 2023-11-14T10:05:36 | MEMBER | null | null | null | Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
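The failing job stops with the `ImportError` reproduced below. One way to check whether the doc-build environment simply resolves an outdated `typing_extensions` (to the best of my knowledge, `TypeAliasType` only appeared around `typing_extensions` 4.6) is a quick diagnostic along these lines:
```python
# Diagnostic sketch (assumption: the failure comes from an old typing_extensions
# in the doc-build environment; TypeAliasType is a relatively recent addition).
from importlib.metadata import version

import typing_extensions

print(version("typing_extensions"))
print(hasattr(typing_extensions, "TypeAliasType"))  # False on older releases
```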
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6406/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6405/comments | https://api.github.com/repos/huggingface/datasets/issues/6405/events | https://github.com/huggingface/datasets/issues/6405 | 1,990,358,743 | I_kwDODunzps52onbX | 6,405 | ConfigNamesError on a simple CSV file | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)",
"Feel free to close the issue",
"Oh, OK! Thanks. So, there was no reason to open an issue"
] | 2023-11-13T10:28:29 | 2023-11-13T20:01:24 | 2023-11-13T20:01:24 | CONTRIBUTOR | null | null | null | See https://huggingface.co./datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, token=hf_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module
dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data
dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict
yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list
return cls.from_dict(from_yaml_inner(yaml_data))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict
obj = generate_from_dict(dic)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict
return class_type(**{k: v for k, v in obj.items() if k in field_names})
TypeError: __init__() missing 1 required positional argument: 'dtype'
```
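For context, every `Value` feature in `datasets` needs an explicit `dtype`, which is exactly what the traceback above is complaining about; a minimal sketch of a well-formed features definition (the column names are illustrative, not taken from the actual CSV) looks like this:
```python
# Illustrative only: hypothetical column names.
from datasets import Features, Value

features = Features(
    {
        "question": Value("string"),
        "answer": Value("string"),
    }
)
# The matching `dataset_info.features` entries in the README YAML carry the
# same information, i.e. each entry needs both a `name` and a `dtype`.
print(features)
```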
This is the CSV file: https://huggingface.co./datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6405/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6404/comments | https://api.github.com/repos/huggingface/datasets/issues/6404/events | https://github.com/huggingface/datasets/pull/6404 | 1,990,211,901 | PR_kwDODunzps5fRrJ- | 6,404 | Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005378) | 0.003707 / 0.011008 (-0.007301) | 0.079908 / 0.038508 (0.041399) | 0.036891 / 0.023109 (0.013781) | 0.390355 / 0.275898 (0.114457) | 0.424439 / 0.323480 (0.100960) | 0.004936 / 0.007986 (-0.003050) | 0.002886 / 0.004328 (-0.001442) | 0.062793 / 0.004250 (0.058542) | 0.054192 / 0.037052 (0.017139) | 0.394697 / 0.258489 (0.136208) | 0.437775 / 0.293841 (0.143934) | 0.027596 / 0.128546 (-0.100950) | 0.008006 / 0.075646 (-0.067640) | 0.262515 / 0.419271 (-0.156757) | 0.071014 / 0.043533 (0.027481) | 0.392964 / 0.255139 (0.137825) | 0.417449 / 0.283200 (0.134249) | 0.021819 / 0.141683 (-0.119864) | 1.458083 / 1.452155 (0.005929) | 1.489042 / 1.492716 (-0.003674) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230303 / 0.018006 (0.212297) | 0.439361 / 0.000490 (0.438871) | 0.010615 / 0.000200 (0.010415) | 0.000303 / 0.000054 (0.000249) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026600 / 0.037411 (-0.010811) | 0.078605 / 0.014526 (0.064079) | 0.088552 / 0.176557 (-0.088005) | 0.149429 / 0.737135 (-0.587706) | 0.087921 / 0.296338 (-0.208417) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422063 / 0.215209 (0.206854) | 4.201333 / 2.077655 (2.123678) | 1.982284 / 1.504120 (0.478164) | 1.779625 / 1.541195 (0.238431) | 1.872454 / 1.468490 
(0.403964) | 0.502713 / 4.584777 (-4.082063) | 3.103372 / 3.745712 (-0.642340) | 3.030516 / 5.269862 (-2.239346) | 1.909123 / 4.565676 (-2.656554) | 0.057134 / 0.424275 (-0.367141) | 0.006405 / 0.007607 (-0.001202) | 0.494452 / 0.226044 (0.268408) | 4.839345 / 2.268929 (2.570417) | 2.424721 / 55.444624 (-53.019904) | 2.028618 / 6.876477 (-4.847859) | 2.082528 / 2.142072 (-0.059545) | 0.587396 / 4.805227 (-4.217831) | 0.125013 / 6.500664 (-6.375651) | 0.061369 / 0.075469 (-0.014100) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235799 / 1.841788 (-0.605989) | 17.919977 / 8.074308 (9.845669) | 13.868524 / 10.191392 (3.677132) | 0.146058 / 0.680424 (-0.534366) | 0.016826 / 0.534201 (-0.517375) | 0.337512 / 0.579283 (-0.241771) | 0.390263 / 0.434364 (-0.044101) | 0.385336 / 0.540337 (-0.155001) | 0.566004 / 1.386936 (-0.820932) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006537 / 0.011353 (-0.004816) | 0.003787 / 0.011008 (-0.007221) | 0.062568 / 0.038508 (0.024060) | 0.066672 / 0.023109 (0.043563) | 0.420447 / 0.275898 (0.144549) | 0.457260 / 0.323480 (0.133780) | 0.005005 / 0.007986 (-0.002981) | 0.003037 / 0.004328 (-0.001291) | 0.062095 / 0.004250 (0.057844) | 0.049619 / 0.037052 (0.012567) | 0.429935 / 0.258489 (0.171446) | 0.471566 / 0.293841 (0.177725) | 0.029688 / 0.128546 (-0.098859) | 0.008028 / 0.075646 (-0.067619) | 0.067915 / 0.419271 (-0.351356) | 0.042066 / 0.043533 (-0.001467) | 0.419275 / 0.255139 (0.164136) | 0.444819 / 0.283200 (0.161619) | 0.020100 / 0.141683 (-0.121583) | 1.439057 / 1.452155 (-0.013098) | 1.495657 / 1.492716 (0.002940) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211148 / 0.018006 (0.193142) | 0.423777 / 0.000490 (0.423288) | 0.005892 / 0.000200 (0.005693) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026469 / 0.037411 (-0.010942) | 0.081438 / 0.014526 (0.066912) | 0.092007 / 0.176557 (-0.084550) | 0.143433 / 0.737135 (-0.593703) | 0.093039 / 0.296338 (-0.203300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410468 / 0.215209 (0.195259) | 4.083783 / 2.077655 (2.006128) | 2.234501 / 1.504120 (0.730381) | 2.122323 / 1.541195 (0.581128) | 2.255036 / 1.468490 (0.786546) | 0.497712 / 4.584777 (-4.087065) | 3.231187 / 3.745712 (-0.514525) | 3.005399 / 5.269862 (-2.264463) | 1.909516 / 4.565676 (-2.656161) | 0.057529 / 0.424275 (-0.366746) | 0.006475 / 0.007607 (-0.001132) | 0.477282 / 0.226044 (0.251238) | 4.799566 / 2.268929 (2.530637) | 2.497070 / 55.444624 (-52.947554) | 2.206359 / 6.876477 (-4.670118) | 2.281614 / 2.142072 (0.139541) | 0.581710 / 4.805227 (-4.223518) | 0.121572 / 6.500664 (-6.379092) | 0.058774 / 0.075469 (-0.016695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301880 / 1.841788 (-0.539908) | 18.287330 / 8.074308 (10.213021) | 14.939642 / 10.191392 (4.748250) | 0.153941 / 0.680424 (-0.526483) | 0.018345 / 0.534201 (-0.515856) | 0.335986 / 0.579283 (-0.243297) | 0.384264 / 0.434364 (-0.050099) | 0.393115 / 0.540337 (-0.147223) | 0.573343 / 1.386936 (-0.813594) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d54b6459f4ed0b2519ddec605dd71956d2d1d3e4 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004805 / 0.011353 (-0.006548) | 0.003261 / 0.011008 (-0.007747) | 0.061585 / 0.038508 (0.023077) | 0.030236 / 0.023109 (0.007127) | 0.234767 / 0.275898 (-0.041131) | 0.260478 / 0.323480 (-0.063002) | 0.004121 / 0.007986 (-0.003865) | 0.002525 / 0.004328 (-0.001803) | 0.048213 / 0.004250 (0.043962) | 0.045229 / 0.037052 (0.008176) | 0.245143 / 0.258489 (-0.013346) | 0.271818 / 0.293841 (-0.022023) | 0.023594 / 0.128546 (-0.104952) | 0.007335 / 0.075646 (-0.068311) | 0.206246 / 0.419271 (-0.213026) | 0.060783 / 0.043533 (0.017250) | 0.238588 / 0.255139 (-0.016551) | 0.274985 / 0.283200 (-0.008214) | 0.018342 / 0.141683 (-0.123341) | 1.135445 / 1.452155 (-0.316710) | 1.184836 / 1.492716 (-0.307881) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095603 / 0.018006 (0.077597) | 0.290340 / 0.000490 (0.289850) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018804 / 0.037411 (-0.018607) | 0.062525 / 0.014526 (0.047999) | 0.074797 / 0.176557 (-0.101760) | 0.120360 / 0.737135 (-0.616775) | 0.076182 / 0.296338 (-0.220156) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.274981 / 0.215209 (0.059772) | 2.684931 / 2.077655 (0.607276) | 1.453845 / 1.504120 (-0.050275) | 1.348361 / 1.541195 (-0.192834) | 1.402820 / 
1.468490 (-0.065670) | 0.396311 / 4.584777 (-4.188466) | 2.396314 / 3.745712 (-1.349398) | 2.744379 / 5.269862 (-2.525482) | 1.615268 / 4.565676 (-2.950409) | 0.045920 / 0.424275 (-0.378355) | 0.004844 / 0.007607 (-0.002763) | 0.331132 / 0.226044 (0.105087) | 3.325484 / 2.268929 (1.056556) | 1.845734 / 55.444624 (-53.598890) | 1.537268 / 6.876477 (-5.339209) | 1.565155 / 2.142072 (-0.576918) | 0.480032 / 4.805227 (-4.325195) | 0.099917 / 6.500664 (-6.400747) | 0.042276 / 0.075469 (-0.033193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973128 / 1.841788 (-0.868660) | 12.643790 / 8.074308 (4.569482) | 10.319586 / 10.191392 (0.128194) | 0.131733 / 0.680424 (-0.548691) | 0.014849 / 0.534201 (-0.519352) | 0.270960 / 0.579283 (-0.308323) | 0.265409 / 0.434364 (-0.168955) | 0.309073 / 0.540337 (-0.231264) | 0.466204 / 1.386936 (-0.920732) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005067 / 0.011353 (-0.006286) | 0.003344 / 0.011008 (-0.007665) | 0.047917 / 0.038508 (0.009409) | 0.059556 / 0.023109 (0.036447) | 0.275777 / 0.275898 (-0.000121) | 0.299703 / 0.323480 (-0.023777) | 0.004185 / 0.007986 (-0.003801) | 0.002602 / 0.004328 (-0.001726) | 0.048723 / 0.004250 (0.044472) | 0.040686 / 0.037052 (0.003634) | 0.281078 / 0.258489 (0.022589) | 0.314725 / 0.293841 (0.020885) | 0.024645 / 0.128546 (-0.103901) | 0.007465 / 0.075646 (-0.068182) | 0.053827 / 0.419271 (-0.365445) | 0.033395 / 0.043533 (-0.010138) | 0.273675 / 0.255139 (0.018536) | 0.291261 / 0.283200 (0.008062) | 0.019733 / 0.141683 (-0.121950) | 1.134084 / 1.452155 (-0.318071) | 1.189186 / 1.492716 (-0.303531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.114960 / 0.018006 (0.096954) | 0.308800 / 0.000490 (0.308311) | 0.000237 / 0.000200 (0.000037) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021633 / 0.037411 (-0.015778) | 0.073192 / 0.014526 (0.058666) | 0.081598 / 0.176557 (-0.094959) | 0.123085 / 0.737135 (-0.614050) | 0.088677 / 0.296338 (-0.207661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300865 / 0.215209 (0.085656) | 2.956847 / 2.077655 (0.879192) | 1.613890 / 1.504120 (0.109770) | 1.494074 / 1.541195 (-0.047121) | 1.550345 / 1.468490 (0.081855) | 0.408880 / 4.584777 (-4.175897) | 2.422848 / 3.745712 (-1.322865) | 2.690623 / 5.269862 (-2.579239) | 1.546922 / 4.565676 (-3.018755) | 0.047192 / 0.424275 (-0.377083) | 0.004882 / 0.007607 (-0.002725) | 0.360625 / 0.226044 (0.134580) | 3.512678 / 2.268929 (1.243749) | 1.978633 / 55.444624 (-53.465992) | 1.686927 / 6.876477 (-5.189549) | 1.748387 / 2.142072 (-0.393685) | 0.480780 / 4.805227 (-4.324447) | 0.099163 / 6.500664 (-6.401501) | 0.041194 / 0.075469 (-0.034275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989087 / 1.841788 (-0.852700) | 12.341951 / 8.074308 (4.267643) | 11.109329 / 10.191392 (0.917936) | 0.143329 / 0.680424 (-0.537095) | 0.015565 / 0.534201 (-0.518636) | 0.269532 / 0.579283 (-0.309751) | 0.274899 / 0.434364 (-0.159465) | 0.309308 / 0.540337 (-0.231030) | 0.439651 / 1.386936 (-0.947285) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04a3f006a1a88c894ea10610d66dfddd73ad1490 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007880 / 0.011353 (-0.003473) | 0.004386 / 0.011008 (-0.006622) | 0.099067 / 0.038508 (0.060559) | 0.048036 / 0.023109 (0.024927) | 0.368349 / 0.275898 (0.092451) | 0.400052 / 0.323480 (0.076572) | 0.004493 / 0.007986 (-0.003493) | 0.003732 / 0.004328 (-0.000597) | 0.076153 / 0.004250 (0.071902) | 0.071024 / 0.037052 (0.033972) | 0.379771 / 0.258489 (0.121282) | 0.425005 / 0.293841 (0.131164) | 0.036092 / 0.128546 (-0.092454) | 0.009825 / 0.075646 (-0.065822) | 0.340217 / 0.419271 (-0.079055) | 0.089571 / 0.043533 (0.046038) | 0.371426 / 0.255139 (0.116287) | 0.397864 / 0.283200 (0.114664) | 0.029440 / 0.141683 (-0.112243) | 1.778100 / 1.452155 (0.325945) | 1.857202 / 1.492716 (0.364486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254022 / 0.018006 (0.236015) | 0.549844 / 0.000490 (0.549354) | 0.012824 / 0.000200 (0.012624) | 0.000378 / 0.000054 (0.000324) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032334 / 0.037411 (-0.005077) | 0.096101 / 0.014526 (0.081576) | 0.117825 / 0.176557 (-0.058731) | 0.179277 / 0.737135 (-0.557858) | 0.112614 / 0.296338 (-0.183724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455051 / 0.215209 (0.239842) | 4.537086 / 2.077655 (2.459431) | 2.198662 / 1.504120 (0.694542) | 1.982772 / 1.541195 (0.441578) | 2.058673 / 1.468490 
(0.590182) | 0.569268 / 4.584777 (-4.015509) | 4.095000 / 3.745712 (0.349288) | 3.891680 / 5.269862 (-1.378182) | 2.345129 / 4.565676 (-2.220548) | 0.066974 / 0.424275 (-0.357301) | 0.008557 / 0.007607 (0.000950) | 0.545290 / 0.226044 (0.319245) | 5.453377 / 2.268929 (3.184448) | 2.858688 / 55.444624 (-52.585936) | 2.502367 / 6.876477 (-4.374109) | 2.515658 / 2.142072 (0.373586) | 0.681423 / 4.805227 (-4.123804) | 0.155975 / 6.500664 (-6.344689) | 0.070872 / 0.075469 (-0.004597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.474674 / 1.841788 (-0.367114) | 21.653619 / 8.074308 (13.579311) | 16.277111 / 10.191392 (6.085719) | 0.166445 / 0.680424 (-0.513979) | 0.021676 / 0.534201 (-0.512525) | 0.466949 / 0.579283 (-0.112334) | 0.500953 / 0.434364 (0.066589) | 0.540413 / 0.540337 (0.000076) | 0.792989 / 1.386936 (-0.593947) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004468 / 0.011008 (-0.006540) | 0.075573 / 0.038508 (0.037065) | 0.081174 / 0.023109 (0.058064) | 0.440741 / 0.275898 (0.164843) | 0.489493 / 0.323480 (0.166013) | 0.006180 / 0.007986 (-0.001805) | 0.003693 / 0.004328 (-0.000636) | 0.074692 / 0.004250 (0.070441) | 0.061732 / 0.037052 (0.024680) | 0.460391 / 0.258489 (0.201902) | 0.505575 / 0.293841 (0.211734) | 0.037692 / 0.128546 (-0.090854) | 0.009870 / 0.075646 (-0.065776) | 0.083830 / 0.419271 (-0.335442) | 0.056255 / 0.043533 (0.012723) | 0.439330 / 0.255139 (0.184191) | 0.475598 / 0.283200 (0.192399) | 0.026626 / 0.141683 (-0.115056) | 1.794410 / 1.452155 (0.342255) | 1.882510 / 1.492716 (0.389794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236194 / 0.018006 (0.218187) | 0.486109 / 0.000490 (0.485619) | 0.006652 / 0.000200 (0.006453) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037277 / 0.037411 (-0.000134) | 0.108904 / 0.014526 (0.094378) | 0.122699 / 0.176557 (-0.053857) | 0.182388 / 0.737135 (-0.554747) | 0.122826 / 0.296338 (-0.173512) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485989 / 0.215209 (0.270780) | 4.913263 / 2.077655 (2.835609) | 2.571618 / 1.504120 (1.067498) | 2.401248 / 1.541195 (0.860054) | 2.501117 / 1.468490 (1.032627) | 0.570989 / 4.584777 (-4.013788) | 4.107420 / 3.745712 (0.361708) | 3.814977 / 5.269862 (-1.454885) | 2.282539 / 4.565676 (-2.283138) | 0.067765 / 0.424275 (-0.356511) | 0.008561 / 0.007607 (0.000954) | 0.584515 / 0.226044 (0.358471) | 5.817821 / 2.268929 (3.548893) | 3.211202 / 55.444624 (-52.233422) | 2.764480 / 6.876477 (-4.111996) | 2.807301 / 2.142072 (0.665229) | 0.676882 / 4.805227 (-4.128346) | 0.150124 / 6.500664 (-6.350540) | 0.067205 / 0.075469 (-0.008265) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594945 / 1.841788 (-0.246843) | 22.533511 / 8.074308 (14.459203) | 17.099693 / 10.191392 (6.908301) | 0.195954 / 0.680424 (-0.484470) | 0.023968 / 0.534201 (-0.510233) | 0.471337 / 0.579283 (-0.107946) | 0.491017 / 0.434364 (0.056653) | 0.561342 / 0.540337 (0.021004) | 0.797116 / 1.386936 (-0.589820) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98871b9ba46e89e75e9d0dddc49f4241373c575d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006235 / 0.011353 (-0.005118) | 0.003688 / 0.011008 (-0.007321) | 0.080801 / 0.038508 (0.042293) | 0.036243 / 0.023109 (0.013134) | 0.312173 / 0.275898 (0.036275) | 0.346239 / 0.323480 (0.022759) | 0.003429 / 0.007986 (-0.004556) | 0.003806 / 0.004328 (-0.000523) | 0.063236 / 0.004250 (0.058986) | 0.053229 / 0.037052 (0.016177) | 0.315184 / 0.258489 (0.056695) | 0.360124 / 0.293841 (0.066283) | 0.027447 / 0.128546 (-0.101099) | 0.008029 / 0.075646 (-0.067618) | 0.262766 / 0.419271 (-0.156505) | 0.068421 / 0.043533 (0.024888) | 0.309028 / 0.255139 (0.053889) | 0.345859 / 0.283200 (0.062659) | 0.021388 / 0.141683 (-0.120295) | 1.452807 / 1.452155 (0.000652) | 1.502803 / 1.492716 (0.010087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211297 / 0.018006 (0.193291) | 0.423364 / 0.000490 (0.422874) | 0.004574 / 0.000200 (0.004374) | 0.000272 / 0.000054 (0.000218) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023805 / 0.037411 (-0.013606) | 0.072309 / 0.014526 (0.057783) | 0.083274 / 0.176557 (-0.093283) | 0.143594 / 0.737135 (-0.593541) | 0.083777 / 0.296338 (-0.212561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415691 / 0.215209 (0.200482) | 4.128621 / 2.077655 (2.050967) | 1.931128 / 1.504120 (0.427008) | 1.737486 / 1.541195 (0.196292) | 1.806314 / 1.468490 
(0.337823) | 0.501405 / 4.584777 (-4.083372) | 3.082042 / 3.745712 (-0.663670) | 2.980224 / 5.269862 (-2.289637) | 1.879780 / 4.565676 (-2.685897) | 0.057546 / 0.424275 (-0.366729) | 0.006422 / 0.007607 (-0.001186) | 0.479813 / 0.226044 (0.253768) | 4.854497 / 2.268929 (2.585568) | 2.529674 / 55.444624 (-52.914950) | 2.283041 / 6.876477 (-4.593436) | 2.377173 / 2.142072 (0.235101) | 0.589654 / 4.805227 (-4.215573) | 0.126190 / 6.500664 (-6.374474) | 0.062391 / 0.075469 (-0.013079) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232023 / 1.841788 (-0.609764) | 17.576621 / 8.074308 (9.502313) | 13.437075 / 10.191392 (3.245683) | 0.143367 / 0.680424 (-0.537057) | 0.016638 / 0.534201 (-0.517563) | 0.332806 / 0.579283 (-0.246477) | 0.356029 / 0.434364 (-0.078335) | 0.385610 / 0.540337 (-0.154727) | 0.563268 / 1.386936 (-0.823668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003692 / 0.011008 (-0.007317) | 0.062075 / 0.038508 (0.023567) | 0.062104 / 0.023109 (0.038995) | 0.407478 / 0.275898 (0.131580) | 0.434982 / 0.323480 (0.111502) | 0.004889 / 0.007986 (-0.003097) | 0.002915 / 0.004328 (-0.001413) | 0.061426 / 0.004250 (0.057176) | 0.048027 / 0.037052 (0.010974) | 0.410504 / 0.258489 (0.152015) | 0.435383 / 0.293841 (0.141542) | 0.029419 / 0.128546 (-0.099127) | 0.008275 / 0.075646 (-0.067371) | 0.067796 / 0.419271 (-0.351476) | 0.041696 / 0.043533 (-0.001837) | 0.398882 / 0.255139 (0.143743) | 0.419480 / 0.283200 (0.136281) | 0.021519 / 0.141683 (-0.120164) | 1.436961 / 1.452155 (-0.015194) | 1.507961 / 1.492716 (0.015245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223190 / 0.018006 (0.205184) | 0.416281 / 0.000490 (0.415791) | 0.003370 / 0.000200 (0.003170) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025923 / 0.037411 (-0.011488) | 0.079989 / 0.014526 (0.065463) | 0.091289 / 0.176557 (-0.085268) | 0.141212 / 0.737135 (-0.595923) | 0.091717 / 0.296338 (-0.204622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434640 / 0.215209 (0.219431) | 4.326154 / 2.077655 (2.248500) | 2.364845 / 1.504120 (0.860725) | 2.194040 / 1.541195 (0.652846) | 2.276665 / 1.468490 (0.808175) | 0.501879 / 4.584777 (-4.082898) | 3.073307 / 3.745712 (-0.672405) | 2.893823 / 5.269862 (-2.376039) | 1.820594 / 4.565676 (-2.745083) | 0.057595 / 0.424275 (-0.366680) | 0.006516 / 0.007607 (-0.001091) | 0.513633 / 0.226044 (0.287589) | 5.104799 / 2.268929 (2.835870) | 2.845025 / 55.444624 (-52.599599) | 2.513852 / 6.876477 (-4.362624) | 2.561044 / 2.142072 (0.418972) | 0.582711 / 4.805227 (-4.222516) | 0.120631 / 6.500664 (-6.380034) | 0.056738 / 0.075469 (-0.018731) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303370 / 1.841788 (-0.538418) | 18.023568 / 8.074308 (9.949259) | 14.637973 / 10.191392 (4.446581) | 0.145182 / 0.680424 (-0.535241) | 0.018061 / 0.534201 (-0.516140) | 0.333219 / 0.579283 (-0.246065) | 0.373184 / 0.434364 (-0.061180) | 0.388176 / 0.540337 (-0.152161) | 0.564752 / 1.386936 (-0.822184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aecdc94580d105d4b70c94e8e238ce097f97af90 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004122) | 0.003727 / 0.011008 (-0.007281) | 0.078893 / 0.038508 (0.040385) | 0.042600 / 0.023109 (0.019491) | 0.301905 / 0.275898 (0.026007) | 0.328478 / 0.323480 (0.004998) | 0.003960 / 0.007986 (-0.004026) | 0.004530 / 0.004328 (0.000201) | 0.059446 / 0.004250 (0.055196) | 0.061241 / 0.037052 (0.024189) | 0.301878 / 0.258489 (0.043389) | 0.340935 / 0.293841 (0.047095) | 0.030559 / 0.128546 (-0.097988) | 0.008016 / 0.075646 (-0.067630) | 0.305174 / 0.419271 (-0.114097) | 0.080374 / 0.043533 (0.036842) | 0.307162 / 0.255139 (0.052023) | 0.342459 / 0.283200 (0.059259) | 0.025881 / 0.141683 (-0.115801) | 1.443311 / 1.452155 (-0.008844) | 1.631060 / 1.492716 (0.138344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242676 / 0.018006 (0.224670) | 0.463941 / 0.000490 (0.463451) | 0.007762 / 0.000200 (0.007562) | 0.000582 / 0.000054 (0.000527) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027334 / 0.037411 (-0.010077) | 0.078910 / 0.014526 (0.064384) | 0.091399 / 0.176557 (-0.085157) | 0.143318 / 0.737135 (-0.593818) | 0.089761 / 0.296338 (-0.206577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463002 / 0.215209 (0.247793) | 4.627235 / 2.077655 (2.549580) | 2.256699 / 1.504120 (0.752579) | 2.057615 / 1.541195 (0.516421) | 2.126424 / 1.468490 
(0.657934) | 0.571969 / 4.584777 (-4.012808) | 4.130260 / 3.745712 (0.384548) | 3.833521 / 5.269862 (-1.436341) | 2.320141 / 4.565676 (-2.245535) | 0.067587 / 0.424275 (-0.356688) | 0.008452 / 0.007607 (0.000845) | 0.546478 / 0.226044 (0.320433) | 5.070678 / 2.268929 (2.801750) | 2.325387 / 55.444624 (-53.119237) | 2.044041 / 6.876477 (-4.832435) | 2.019714 / 2.142072 (-0.122358) | 0.563589 / 4.805227 (-4.241639) | 0.135269 / 6.500664 (-6.365395) | 0.058208 / 0.075469 (-0.017261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283156 / 1.841788 (-0.558631) | 18.617776 / 8.074308 (10.543468) | 13.360700 / 10.191392 (3.169308) | 0.160001 / 0.680424 (-0.520423) | 0.021538 / 0.534201 (-0.512663) | 0.384169 / 0.579283 (-0.195114) | 0.407517 / 0.434364 (-0.026847) | 0.427295 / 0.540337 (-0.113042) | 0.655288 / 1.386936 (-0.731648) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006854 / 0.011353 (-0.004499) | 0.003442 / 0.011008 (-0.007566) | 0.060622 / 0.038508 (0.022114) | 0.074649 / 0.023109 (0.051540) | 0.341733 / 0.275898 (0.065835) | 0.360096 / 0.323480 (0.036616) | 0.006235 / 0.007986 (-0.001751) | 0.003447 / 0.004328 (-0.000882) | 0.057301 / 0.004250 (0.053051) | 0.059022 / 0.037052 (0.021970) | 0.369523 / 0.258489 (0.111034) | 0.386280 / 0.293841 (0.092439) | 0.034319 / 0.128546 (-0.094228) | 0.008291 / 0.075646 (-0.067355) | 0.070403 / 0.419271 (-0.348868) | 0.050433 / 0.043533 (0.006901) | 0.347262 / 0.255139 (0.092123) | 0.380543 / 0.283200 (0.097343) | 0.024492 / 0.141683 (-0.117191) | 1.446721 / 1.452155 (-0.005433) | 1.541614 / 1.492716 (0.048898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226148 / 0.018006 (0.208142) | 0.442150 / 0.000490 (0.441660) | 0.004997 / 0.000200 (0.004797) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032866 / 0.037411 (-0.004546) | 0.088097 / 0.014526 (0.073571) | 0.102178 / 0.176557 (-0.074379) | 0.151129 / 0.737135 (-0.586006) | 0.103953 / 0.296338 (-0.192386) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.376701 / 0.215209 (0.161492) | 3.886997 / 2.077655 (1.809342) | 2.027143 / 1.504120 (0.523023) | 1.808647 / 1.541195 (0.267453) | 1.867664 / 1.468490 (0.399173) | 0.459487 / 4.584777 (-4.125290) | 3.640801 / 3.745712 (-0.104911) | 3.242512 / 5.269862 (-2.027350) | 1.889174 / 4.565676 (-2.676503) | 0.052415 / 0.424275 (-0.371860) | 0.007479 / 0.007607 (-0.000128) | 0.457706 / 0.226044 (0.231662) | 4.815041 / 2.268929 (2.546112) | 2.542470 / 55.444624 (-52.902154) | 2.137084 / 6.876477 (-4.739392) | 2.122867 / 2.142072 (-0.019205) | 0.553756 / 4.805227 (-4.251471) | 0.118902 / 6.500664 (-6.381763) | 0.058149 / 0.075469 (-0.017320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272615 / 1.841788 (-0.569173) | 19.455709 / 8.074308 (11.381401) | 14.111693 / 10.191392 (3.920301) | 0.165741 / 0.680424 (-0.514683) | 0.023680 / 0.534201 (-0.510521) | 0.431458 / 0.579283 (-0.147825) | 0.433612 / 0.434364 (-0.000752) | 0.465615 / 0.540337 (-0.074722) | 0.678177 / 1.386936 (-0.708759) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#998623fa51991320740b945d0853ee20807304d7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004870 / 0.011353 (-0.006483) | 0.002834 / 0.011008 (-0.008175) | 0.061359 / 0.038508 (0.022851) | 0.031286 / 0.023109 (0.008177) | 0.236701 / 0.275898 (-0.039197) | 0.258139 / 0.323480 (-0.065341) | 0.002943 / 0.007986 (-0.005043) | 0.002989 / 0.004328 (-0.001339) | 0.048046 / 0.004250 (0.043796) | 0.044927 / 0.037052 (0.007874) | 0.241339 / 0.258489 (-0.017151) | 0.273912 / 0.293841 (-0.019929) | 0.023427 / 0.128546 (-0.105119) | 0.007251 / 0.075646 (-0.068395) | 0.202730 / 0.419271 (-0.216542) | 0.056223 / 0.043533 (0.012691) | 0.239908 / 0.255139 (-0.015231) | 0.254723 / 0.283200 (-0.028476) | 0.018223 / 0.141683 (-0.123460) | 1.119691 / 1.452155 (-0.332464) | 1.163802 / 1.492716 (-0.328915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091303 / 0.018006 (0.073297) | 0.302097 / 0.000490 (0.301607) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018201 / 0.037411 (-0.019210) | 0.062092 / 0.014526 (0.047566) | 0.074806 / 0.176557 (-0.101751) | 0.119625 / 0.737135 (-0.617510) | 0.074680 / 0.296338 (-0.221659) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281140 / 0.215209 (0.065931) | 2.752094 / 2.077655 (0.674439) | 1.436813 / 1.504120 (-0.067307) | 1.312947 / 1.541195 (-0.228247) | 1.331022 / 
1.468490 (-0.137468) | 0.396579 / 4.584777 (-4.188198) | 2.406181 / 3.745712 (-1.339531) | 2.597180 / 5.269862 (-2.672682) | 1.565879 / 4.565676 (-2.999798) | 0.046330 / 0.424275 (-0.377945) | 0.004776 / 0.007607 (-0.002831) | 0.339681 / 0.226044 (0.113637) | 3.279533 / 2.268929 (1.010605) | 1.793352 / 55.444624 (-53.651272) | 1.493910 / 6.876477 (-5.382567) | 1.514494 / 2.142072 (-0.627579) | 0.467955 / 4.805227 (-4.337272) | 0.097764 / 6.500664 (-6.402900) | 0.041659 / 0.075469 (-0.033810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943204 / 1.841788 (-0.898583) | 11.350848 / 8.074308 (3.276540) | 10.169944 / 10.191392 (-0.021448) | 0.130882 / 0.680424 (-0.549542) | 0.013804 / 0.534201 (-0.520397) | 0.269107 / 0.579283 (-0.310177) | 0.261685 / 0.434364 (-0.172679) | 0.305610 / 0.540337 (-0.234727) | 0.430586 / 1.386936 (-0.956350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004835 / 0.011353 (-0.006518) | 0.002530 / 0.011008 (-0.008479) | 0.047383 / 0.038508 (0.008875) | 0.052559 / 0.023109 (0.029450) | 0.265015 / 0.275898 (-0.010883) | 0.286955 / 0.323480 (-0.036525) | 0.003931 / 0.007986 (-0.004054) | 0.002038 / 0.004328 (-0.002290) | 0.047458 / 0.004250 (0.043207) | 0.038257 / 0.037052 (0.001205) | 0.270569 / 0.258489 (0.012080) | 0.298968 / 0.293841 (0.005127) | 0.024615 / 0.128546 (-0.103932) | 0.006969 / 0.075646 (-0.068677) | 0.052361 / 0.419271 (-0.366911) | 0.032701 / 0.043533 (-0.010832) | 0.269126 / 0.255139 (0.013987) | 0.285934 / 0.283200 (0.002735) | 0.018121 / 0.141683 (-0.123562) | 1.129796 / 1.452155 (-0.322359) | 1.272831 / 1.492716 (-0.219885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092058 / 0.018006 (0.074051) | 0.303544 / 0.000490 (0.303054) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020983 / 0.037411 (-0.016428) | 0.069798 / 0.014526 (0.055272) | 0.081410 / 0.176557 (-0.095146) | 0.120403 / 0.737135 (-0.616732) | 0.082813 / 0.296338 (-0.213525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295943 / 0.215209 (0.080734) | 2.895761 / 2.077655 (0.818106) | 1.583534 / 1.504120 (0.079414) | 1.458397 / 1.541195 (-0.082798) | 1.492113 / 1.468490 (0.023623) | 0.402364 / 4.584777 (-4.182413) | 2.469777 / 3.745712 (-1.275935) | 2.565262 / 5.269862 (-2.704599) | 1.525914 / 4.565676 (-3.039763) | 0.047168 / 0.424275 (-0.377107) | 0.004800 / 0.007607 (-0.002808) | 0.348356 / 0.226044 (0.122311) | 3.463184 / 2.268929 (1.194255) | 1.930240 / 55.444624 (-53.514385) | 1.644312 / 6.876477 (-5.232165) | 1.625477 / 2.142072 (-0.516596) | 0.480781 / 4.805227 (-4.324446) | 0.098431 / 6.500664 (-6.402233) | 0.041071 / 0.075469 (-0.034398) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973633 / 1.841788 (-0.868154) | 11.952261 / 8.074308 (3.877953) | 11.038222 / 10.191392 (0.846830) | 0.142755 / 0.680424 (-0.537669) | 0.015389 / 0.534201 (-0.518812) | 0.274144 / 0.579283 (-0.305139) | 0.282319 / 0.434364 (-0.152045) | 0.314330 / 0.540337 (-0.226007) | 0.435315 / 1.386936 (-0.951621) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05200c0a4f8f02c3890ab79a10b44ab0bcf11629 \"CML watermark\")\n",
"The red CI job is unrelated to this PR. It appeared 5 days ago. See:\r\n- https://github.com/huggingface/datasets/pull/6390#pullrequestreview-1721070927\r\n- https://github.com/huggingface/datasets/issues/6406",
"Let's do a new release once this is merged ? cc @mariosasko as well let us know if the fix sounds good to you",
"@lhoestq Yes, this sounds good to me!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004932 / 0.011353 (-0.006421) | 0.002956 / 0.011008 (-0.008052) | 0.061999 / 0.038508 (0.023491) | 0.030174 / 0.023109 (0.007065) | 0.241483 / 0.275898 (-0.034415) | 0.261578 / 0.323480 (-0.061902) | 0.002881 / 0.007986 (-0.005105) | 0.002451 / 0.004328 (-0.001878) | 0.048176 / 0.004250 (0.043925) | 0.045028 / 0.037052 (0.007976) | 0.244304 / 0.258489 (-0.014185) | 0.275834 / 0.293841 (-0.018007) | 0.023312 / 0.128546 (-0.105234) | 0.007361 / 0.075646 (-0.068286) | 0.204433 / 0.419271 (-0.214838) | 0.054561 / 0.043533 (0.011028) | 0.236902 / 0.255139 (-0.018237) | 0.269358 / 0.283200 (-0.013842) | 0.017736 / 0.141683 (-0.123947) | 1.112444 / 1.452155 (-0.339711) | 1.170260 / 1.492716 (-0.322456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093081 / 0.018006 (0.075074) | 0.311470 / 0.000490 (0.310981) | 0.000212 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018654 / 0.037411 (-0.018757) | 0.063239 / 0.014526 (0.048714) | 0.073759 / 0.176557 (-0.102798) | 0.120279 / 0.737135 (-0.616857) | 0.076214 / 0.296338 (-0.220124) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287219 / 0.215209 (0.072010) | 2.765378 / 2.077655 (0.687723) | 1.459733 / 1.504120 (-0.044387) | 1.325999 / 1.541195 (-0.215196) | 1.349957 / 
1.468490 (-0.118533) | 0.413093 / 4.584777 (-4.171684) | 2.394758 / 3.745712 (-1.350954) | 2.633916 / 5.269862 (-2.635945) | 1.621629 / 4.565676 (-2.944047) | 0.046839 / 0.424275 (-0.377436) | 0.004786 / 0.007607 (-0.002822) | 0.336261 / 0.226044 (0.110217) | 3.348196 / 2.268929 (1.079267) | 1.853050 / 55.444624 (-53.591574) | 1.543926 / 6.876477 (-5.332551) | 1.573675 / 2.142072 (-0.568398) | 0.484088 / 4.805227 (-4.321139) | 0.100820 / 6.500664 (-6.399845) | 0.042194 / 0.075469 (-0.033275) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945186 / 1.841788 (-0.896601) | 11.859855 / 8.074308 (3.785547) | 10.459883 / 10.191392 (0.268491) | 0.142024 / 0.680424 (-0.538400) | 0.013882 / 0.534201 (-0.520319) | 0.269584 / 0.579283 (-0.309699) | 0.264353 / 0.434364 (-0.170011) | 0.307988 / 0.540337 (-0.232349) | 0.423655 / 1.386936 (-0.963281) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004891 / 0.011353 (-0.006461) | 0.003087 / 0.011008 (-0.007921) | 0.048206 / 0.038508 (0.009697) | 0.058570 / 0.023109 (0.035461) | 0.268552 / 0.275898 (-0.007346) | 0.287839 / 0.323480 (-0.035641) | 0.004044 / 0.007986 (-0.003942) | 0.002388 / 0.004328 (-0.001940) | 0.048186 / 0.004250 (0.043935) | 0.038719 / 0.037052 (0.001667) | 0.271940 / 0.258489 (0.013451) | 0.299716 / 0.293841 (0.005875) | 0.027166 / 0.128546 (-0.101380) | 0.007388 / 0.075646 (-0.068258) | 0.053885 / 0.419271 (-0.365387) | 0.032804 / 0.043533 (-0.010729) | 0.271664 / 0.255139 (0.016525) | 0.284613 / 0.283200 (0.001414) | 0.018488 / 0.141683 (-0.123195) | 1.125854 / 1.452155 (-0.326301) | 1.195896 / 1.492716 (-0.296820) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092438 / 0.018006 (0.074431) | 0.315265 / 0.000490 (0.314775) | 0.000228 / 0.000200 (0.000028) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021373 / 0.037411 (-0.016038) | 0.070611 / 0.014526 (0.056085) | 0.080391 / 0.176557 (-0.096165) | 0.118749 / 0.737135 (-0.618386) | 0.082340 / 0.296338 (-0.213999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295583 / 0.215209 (0.080374) | 2.882152 / 2.077655 (0.804497) | 1.565088 / 1.504120 (0.060968) | 1.451954 / 1.541195 (-0.089241) | 1.505783 / 1.468490 (0.037293) | 0.404699 / 4.584777 (-4.180078) | 2.451703 / 3.745712 (-1.294009) | 2.596301 / 5.269862 (-2.673560) | 1.547014 / 4.565676 (-3.018662) | 0.047750 / 0.424275 (-0.376525) | 0.004850 / 0.007607 (-0.002757) | 0.346893 / 0.226044 (0.120849) | 3.383355 / 2.268929 (1.114426) | 1.943933 / 55.444624 (-53.500692) | 1.657513 / 6.876477 (-5.218964) | 1.687166 / 2.142072 (-0.454906) | 0.478543 / 4.805227 (-4.326685) | 0.097804 / 6.500664 (-6.402860) | 0.041392 / 0.075469 (-0.034078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983894 / 1.841788 (-0.857893) | 12.446443 / 8.074308 (4.372135) | 10.973461 / 10.191392 (0.782069) | 0.131630 / 0.680424 (-0.548794) | 0.017196 / 0.534201 (-0.517005) | 0.270873 / 0.579283 (-0.308411) | 0.284379 / 0.434364 (-0.149985) | 0.306103 / 0.540337 (-0.234234) | 0.413762 / 1.386936 (-0.973174) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#980ad4c6e6e33f0129db8745e84de8c298741aa2 \"CML watermark\")\n",
"Note I had to add `pa.ExtensionType.__reduce__` because this is used by `copy.deepcopy` when using `.with_format`. See error below.\r\n\r\nThis method was added in pyarrow-13.0.0: https://github.com/apache/arrow/pull/36170\r\n- We need to re-implement it as long we support lower pyarrow versions\r\n\r\nErrors: https://github.com/huggingface/datasets/actions/runs/6861278161/job/18656665772\r\n```\r\n ____________________________ test_dataset_map[True] ____________________________\r\n[gw1] linux -- Python 3.8.18 /opt/hostedtoolcache/Python/3.8.18/x64/bin/python\r\n\r\n> ???\r\nE KeyError: 'extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>'\r\n\r\npyarrow/types.pxi:3155: KeyError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nwith_none = True\r\n\r\n @pytest.mark.parametrize(\"with_none\", [False, True])\r\n def test_dataset_map(with_none):\r\n ds = datasets.Dataset.from_dict({\"path\": [\"path1\", \"path2\"]})\r\n \r\n def process_data(batch):\r\n batch = {\r\n \"image\": [\r\n np.array(\r\n [\r\n [[1, 2, 3], [4, 5, 6], [7, 8, 9]],\r\n [[10, 20, 30], [40, 50, 60], [70, 80, 90]],\r\n [[100, 200, 300], [400, 500, 600], [700, 800, 900]],\r\n ]\r\n )\r\n for _ in batch[\"path\"]\r\n ]\r\n }\r\n if with_none:\r\n batch[\"image\"][0] = None\r\n return batch\r\n \r\n features = datasets.Features({\"image\": Array3D(dtype=\"int32\", shape=(3, 3, 3))})\r\n processed_ds = ds.map(process_data, batched=True, remove_columns=ds.column_names, features=features)\r\n assert processed_ds.shape == (2, 1)\r\n> with processed_ds.with_format(\"numpy\") as pds:\r\n\r\ntests/features/test_array_xd.py:459: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:2669: in with_format\r\n dataset = copy.deepcopy(self)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:270: in _reconstruct\r\n state = deepcopy(state, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:146: in deepcopy\r\n y = copier(x, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:230: in _deepcopy_dict\r\n y[deepcopy(key, memo)] = deepcopy(value, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:153: in deepcopy\r\n y = copier(memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/table.py:188: in __deepcopy__\r\n return _deepcopy(self, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/table.py:86: in _deepcopy\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in _reconstruct\r\n y = func(*args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:263: in <genexpr>\r\n args = (deepcopy(arg, memo) for arg in args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:146: in deepcopy\r\n y = copier(x, memo)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:205: in _deepcopy_list\r\n append(deepcopy(a, memo))\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in 
_reconstruct\r\n y = func(*args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:263: in <genexpr>\r\n args = (deepcopy(arg, memo) for arg in args)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:172: in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/copy.py:264: in _reconstruct\r\n y = func(*args)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> ???\r\nE ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\n\r\npyarrow/types.pxi:3157: ValueError\r\n```\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_class_encode_column_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_dummy_dataset_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_conversion_in_memory - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_conversion_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_options_in_memory - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_tf_dataset_options_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_csv_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_sql_on_disk - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[True] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[False] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/test_arrow_dataset.py::test_map_cases[mix] - ValueError: No type alias for extension<datasets.features.features.array2dextensiontype<array2dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::ArrayXDDynamicTest::test_map_dataset - ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::test_dataset_map[False] - ValueError: No type alias for extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\nFAILED tests/features/test_array_xd.py::test_dataset_map[True] - ValueError: No type alias for 
extension<datasets.features.features.array3dextensiontype<array3dextensiontype>>\r\n===== 15 failed,\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007338 / 0.011353 (-0.004015) | 0.004308 / 0.011008 (-0.006700) | 0.088788 / 0.038508 (0.050280) | 0.039369 / 0.023109 (0.016260) | 0.334527 / 0.275898 (0.058629) | 0.373748 / 0.323480 (0.050268) | 0.005550 / 0.007986 (-0.002435) | 0.003606 / 0.004328 (-0.000723) | 0.072238 / 0.004250 (0.067988) | 0.061271 / 0.037052 (0.024218) | 0.336333 / 0.258489 (0.077844) | 0.398256 / 0.293841 (0.104415) | 0.041941 / 0.128546 (-0.086605) | 0.013372 / 0.075646 (-0.062274) | 0.336221 / 0.419271 (-0.083050) | 0.083013 / 0.043533 (0.039480) | 0.334743 / 0.255139 (0.079604) | 0.362572 / 0.283200 (0.079373) | 0.031161 / 0.141683 (-0.110521) | 1.563441 / 1.452155 (0.111287) | 1.704059 / 1.492716 (0.211343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252978 / 0.018006 (0.234972) | 0.506348 / 0.000490 (0.505859) | 0.011679 / 0.000200 (0.011479) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026257 / 0.037411 (-0.011154) | 0.085936 / 0.014526 (0.071410) | 0.098542 / 0.176557 (-0.078015) | 0.154507 / 0.737135 (-0.582628) | 0.111493 / 0.296338 (-0.184845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575941 / 0.215209 (0.360732) | 5.590230 / 2.077655 (3.512576) | 2.463330 / 1.504120 (0.959211) | 2.125760 / 1.541195 (0.584565) | 2.095933 / 1.468490 
(0.627443) | 0.844768 / 4.584777 (-3.740009) | 4.768995 / 3.745712 (1.023282) | 4.670484 / 5.269862 (-0.599377) | 2.630386 / 4.565676 (-1.935290) | 0.085996 / 0.424275 (-0.338279) | 0.007900 / 0.007607 (0.000293) | 0.685463 / 0.226044 (0.459419) | 6.699310 / 2.268929 (4.430381) | 3.132542 / 55.444624 (-52.312083) | 2.527963 / 6.876477 (-4.348513) | 2.381835 / 2.142072 (0.239763) | 0.909668 / 4.805227 (-3.895559) | 0.209979 / 6.500664 (-6.290685) | 0.079222 / 0.075469 (0.003753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444895 / 1.841788 (-0.396892) | 20.388140 / 8.074308 (12.313832) | 19.354148 / 10.191392 (9.162756) | 0.222433 / 0.680424 (-0.457991) | 0.029710 / 0.534201 (-0.504491) | 0.427153 / 0.579283 (-0.152130) | 0.537500 / 0.434364 (0.103136) | 0.506917 / 0.540337 (-0.033421) | 0.726088 / 1.386936 (-0.660848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004320 / 0.011008 (-0.006688) | 0.072721 / 0.038508 (0.034212) | 0.068204 / 0.023109 (0.045095) | 0.392087 / 0.275898 (0.116189) | 0.431638 / 0.323480 (0.108158) | 0.005419 / 0.007986 (-0.002566) | 0.004305 / 0.004328 (-0.000023) | 0.069042 / 0.004250 (0.064791) | 0.051555 / 0.037052 (0.014503) | 0.412141 / 0.258489 (0.153651) | 0.438802 / 0.293841 (0.144961) | 0.043631 / 0.128546 (-0.084915) | 0.014169 / 0.075646 (-0.061478) | 0.079571 / 0.419271 (-0.339701) | 0.056707 / 0.043533 (0.013174) | 0.413698 / 0.255139 (0.158559) | 0.414127 / 0.283200 (0.130928) | 0.031380 / 0.141683 (-0.110303) | 1.677157 / 1.452155 (0.225003) | 1.755155 / 1.492716 (0.262439) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257236 / 0.018006 (0.239230) | 0.521347 / 0.000490 (0.520858) | 0.006282 / 0.000200 (0.006082) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028433 / 0.037411 (-0.008978) | 0.087698 / 0.014526 (0.073172) | 0.108840 / 0.176557 (-0.067716) | 0.157432 / 0.737135 (-0.579704) | 0.103144 / 0.296338 (-0.193195) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.598745 / 0.215209 (0.383536) | 5.981460 / 2.077655 (3.903805) | 2.556931 / 1.504120 (1.052811) | 2.179915 / 1.541195 (0.638720) | 2.240841 / 1.468490 (0.772351) | 0.811501 / 4.584777 (-3.773276) | 4.718282 / 3.745712 (0.972570) | 4.365738 / 5.269862 (-0.904124) | 2.669798 / 4.565676 (-1.895878) | 0.099135 / 0.424275 (-0.325140) | 0.007369 / 0.007607 (-0.000238) | 0.669491 / 0.226044 (0.443447) | 6.700389 / 2.268929 (4.431461) | 3.155328 / 55.444624 (-52.289296) | 2.563375 / 6.876477 (-4.313102) | 2.545191 / 2.142072 (0.403119) | 0.961359 / 4.805227 (-3.843868) | 0.189391 / 6.500664 (-6.311273) | 0.061597 / 0.075469 (-0.013873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.564008 / 1.841788 (-0.277780) | 21.401307 / 8.074308 (13.326999) | 20.693441 / 10.191392 (10.502049) | 0.229340 / 0.680424 (-0.451084) | 0.033637 / 0.534201 (-0.500564) | 0.429394 / 0.579283 (-0.149889) | 0.557202 / 0.434364 (0.122838) | 0.510284 / 0.540337 (-0.030054) | 0.725661 / 1.386936 (-0.661276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45abe297c178b829afcee853f9958b0c5a6469aa \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004820 / 0.011353 (-0.006533) | 0.003152 / 0.011008 (-0.007856) | 0.061842 / 0.038508 (0.023334) | 0.030127 / 0.023109 (0.007018) | 0.257409 / 0.275898 (-0.018489) | 0.269382 / 0.323480 (-0.054097) | 0.004288 / 0.007986 (-0.003698) | 0.002500 / 0.004328 (-0.001829) | 0.048520 / 0.004250 (0.044270) | 0.046815 / 0.037052 (0.009763) | 0.245858 / 0.258489 (-0.012631) | 0.289636 / 0.293841 (-0.004205) | 0.023983 / 0.128546 (-0.104563) | 0.007336 / 0.075646 (-0.068310) | 0.202347 / 0.419271 (-0.216924) | 0.057737 / 0.043533 (0.014204) | 0.245922 / 0.255139 (-0.009217) | 0.268788 / 0.283200 (-0.014412) | 0.017819 / 0.141683 (-0.123864) | 1.149889 / 1.452155 (-0.302265) | 1.227192 / 1.492716 (-0.265524) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092234 / 0.018006 (0.074228) | 0.310259 / 0.000490 (0.309769) | 0.000223 / 0.000200 (0.000023) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019059 / 0.037411 (-0.018352) | 0.064904 / 0.014526 (0.050378) | 0.073531 / 0.176557 (-0.103026) | 0.120879 / 0.737135 (-0.616257) | 0.075410 / 0.296338 (-0.220929) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275364 / 0.215209 (0.060155) | 2.724379 / 2.077655 (0.646725) | 1.447617 / 1.504120 (-0.056503) | 1.366794 / 1.541195 (-0.174401) | 1.345849 / 
1.468490 (-0.122641) | 0.411205 / 4.584777 (-4.173572) | 2.412712 / 3.745712 (-1.333000) | 2.612469 / 5.269862 (-2.657393) | 1.552113 / 4.565676 (-3.013564) | 0.045783 / 0.424275 (-0.378492) | 0.004782 / 0.007607 (-0.002825) | 0.339218 / 0.226044 (0.113174) | 3.359540 / 2.268929 (1.090612) | 1.821369 / 55.444624 (-53.623256) | 1.540742 / 6.876477 (-5.335734) | 1.531845 / 2.142072 (-0.610227) | 0.462009 / 4.805227 (-4.343218) | 0.097794 / 6.500664 (-6.402870) | 0.041222 / 0.075469 (-0.034247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938319 / 1.841788 (-0.903469) | 11.712003 / 8.074308 (3.637695) | 10.325317 / 10.191392 (0.133925) | 0.126812 / 0.680424 (-0.553612) | 0.013734 / 0.534201 (-0.520467) | 0.279509 / 0.579283 (-0.299774) | 0.269265 / 0.434364 (-0.165099) | 0.322033 / 0.540337 (-0.218304) | 0.441610 / 1.386936 (-0.945326) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004882 / 0.011353 (-0.006471) | 0.002984 / 0.011008 (-0.008024) | 0.048318 / 0.038508 (0.009810) | 0.054642 / 0.023109 (0.031533) | 0.268599 / 0.275898 (-0.007299) | 0.292916 / 0.323480 (-0.030564) | 0.004108 / 0.007986 (-0.003878) | 0.002500 / 0.004328 (-0.001829) | 0.048452 / 0.004250 (0.044202) | 0.038835 / 0.037052 (0.001782) | 0.275410 / 0.258489 (0.016921) | 0.307284 / 0.293841 (0.013443) | 0.024720 / 0.128546 (-0.103826) | 0.007274 / 0.075646 (-0.068372) | 0.054419 / 0.419271 (-0.364853) | 0.032815 / 0.043533 (-0.010718) | 0.273660 / 0.255139 (0.018521) | 0.289183 / 0.283200 (0.005984) | 0.017746 / 0.141683 (-0.123937) | 1.153876 / 1.452155 (-0.298278) | 1.212778 / 1.492716 (-0.279938) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095286 / 0.018006 (0.077280) | 0.305185 / 0.000490 (0.304696) | 0.000230 / 0.000200 (0.000030) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021556 / 0.037411 (-0.015855) | 0.071029 / 0.014526 (0.056503) | 0.081914 / 0.176557 (-0.094643) | 0.120553 / 0.737135 (-0.616582) | 0.086696 / 0.296338 (-0.209642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289750 / 0.215209 (0.074541) | 2.794247 / 2.077655 (0.716592) | 1.577105 / 1.504120 (0.072985) | 1.457706 / 1.541195 (-0.083489) | 1.500481 / 1.468490 (0.031991) | 0.403834 / 4.584777 (-4.180943) | 2.466810 / 3.745712 (-1.278902) | 2.701008 / 5.269862 (-2.568854) | 1.634821 / 4.565676 (-2.930856) | 0.046954 / 0.424275 (-0.377322) | 0.004811 / 0.007607 (-0.002796) | 0.347622 / 0.226044 (0.121578) | 3.407125 / 2.268929 (1.138197) | 1.987121 / 55.444624 (-53.457504) | 1.689978 / 6.876477 (-5.186499) | 1.731801 / 2.142072 (-0.410271) | 0.478926 / 4.805227 (-4.326301) | 0.100730 / 6.500664 (-6.399934) | 0.043078 / 0.075469 (-0.032391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963575 / 1.841788 (-0.878212) | 12.675331 / 8.074308 (4.601023) | 11.167584 / 10.191392 (0.976192) | 0.131199 / 0.680424 (-0.549225) | 0.016030 / 0.534201 (-0.518171) | 0.277783 / 0.579283 (-0.301500) | 0.278693 / 0.434364 (-0.155671) | 0.315141 / 0.540337 (-0.225196) | 0.429104 / 1.386936 (-0.957832) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#825c1d25835b64fc3533a63d60bd237f4465f15e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004807 / 0.011353 (-0.006546) | 0.002925 / 0.011008 (-0.008083) | 0.062560 / 0.038508 (0.024052) | 0.029926 / 0.023109 (0.006817) | 0.264708 / 0.275898 (-0.011190) | 0.273464 / 0.323480 (-0.050016) | 0.003197 / 0.007986 (-0.004788) | 0.002544 / 0.004328 (-0.001784) | 0.048230 / 0.004250 (0.043980) | 0.046552 / 0.037052 (0.009500) | 0.249553 / 0.258489 (-0.008936) | 0.282078 / 0.293841 (-0.011762) | 0.023201 / 0.128546 (-0.105346) | 0.007306 / 0.075646 (-0.068340) | 0.241361 / 0.419271 (-0.177910) | 0.058286 / 0.043533 (0.014753) | 0.245854 / 0.255139 (-0.009285) | 0.266053 / 0.283200 (-0.017146) | 0.020294 / 0.141683 (-0.121388) | 1.102215 / 1.452155 (-0.349939) | 1.170733 / 1.492716 (-0.321984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094647 / 0.018006 (0.076641) | 0.303819 / 0.000490 (0.303329) | 0.000250 / 0.000200 (0.000050) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019036 / 0.037411 (-0.018375) | 0.064729 / 0.014526 (0.050203) | 0.074143 / 0.176557 (-0.102414) | 0.120082 / 0.737135 (-0.617054) | 0.076835 / 0.296338 (-0.219503) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283786 / 0.215209 (0.068577) | 2.751446 / 2.077655 (0.673791) | 1.473789 / 1.504120 (-0.030331) | 1.336968 / 1.541195 (-0.204226) | 1.384148 / 
1.468490 (-0.084342) | 0.397452 / 4.584777 (-4.187325) | 2.388042 / 3.745712 (-1.357670) | 2.661291 / 5.269862 (-2.608571) | 1.595454 / 4.565676 (-2.970223) | 0.045919 / 0.424275 (-0.378356) | 0.004879 / 0.007607 (-0.002728) | 0.337862 / 0.226044 (0.111818) | 3.355665 / 2.268929 (1.086737) | 1.875261 / 55.444624 (-53.569363) | 1.540874 / 6.876477 (-5.335603) | 1.653632 / 2.142072 (-0.488440) | 0.473090 / 4.805227 (-4.332138) | 0.100151 / 6.500664 (-6.400513) | 0.042357 / 0.075469 (-0.033112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959550 / 1.841788 (-0.882238) | 12.307145 / 8.074308 (4.232837) | 10.719321 / 10.191392 (0.527929) | 0.128376 / 0.680424 (-0.552048) | 0.014406 / 0.534201 (-0.519795) | 0.295208 / 0.579283 (-0.284075) | 0.268891 / 0.434364 (-0.165473) | 0.305446 / 0.540337 (-0.234892) | 0.429591 / 1.386936 (-0.957345) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005189 / 0.011353 (-0.006164) | 0.003082 / 0.011008 (-0.007926) | 0.048956 / 0.038508 (0.010448) | 0.063403 / 0.023109 (0.040294) | 0.272858 / 0.275898 (-0.003040) | 0.295207 / 0.323480 (-0.028273) | 0.004253 / 0.007986 (-0.003733) | 0.002552 / 0.004328 (-0.001776) | 0.048042 / 0.004250 (0.043792) | 0.040429 / 0.037052 (0.003377) | 0.269614 / 0.258489 (0.011125) | 0.307205 / 0.293841 (0.013364) | 0.027912 / 0.128546 (-0.100634) | 0.007621 / 0.075646 (-0.068026) | 0.054020 / 0.419271 (-0.365251) | 0.036958 / 0.043533 (-0.006574) | 0.272457 / 0.255139 (0.017318) | 0.287966 / 0.283200 (0.004766) | 0.019542 / 0.141683 (-0.122141) | 1.116742 / 1.452155 (-0.335413) | 1.194739 / 1.492716 (-0.297977) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093532 / 0.018006 (0.075526) | 0.303262 / 0.000490 (0.302773) | 0.000217 / 0.000200 (0.000017) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021984 / 0.037411 (-0.015428) | 0.075024 / 0.014526 (0.060498) | 0.080959 / 0.176557 (-0.095598) | 0.121780 / 0.737135 (-0.615356) | 0.082817 / 0.296338 (-0.213522) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292766 / 0.215209 (0.077557) | 2.857457 / 2.077655 (0.779802) | 1.621860 / 1.504120 (0.117740) | 1.473783 / 1.541195 (-0.067412) | 1.535211 / 1.468490 (0.066721) | 0.402212 / 4.584777 (-4.182565) | 2.467143 / 3.745712 (-1.278569) | 2.618162 / 5.269862 (-2.651700) | 1.568682 / 4.565676 (-2.996994) | 0.047123 / 0.424275 (-0.377152) | 0.004780 / 0.007607 (-0.002827) | 0.346959 / 0.226044 (0.120914) | 3.395196 / 2.268929 (1.126268) | 1.957835 / 55.444624 (-53.486789) | 1.674287 / 6.876477 (-5.202190) | 1.715879 / 2.142072 (-0.426193) | 0.479481 / 4.805227 (-4.325746) | 0.100043 / 6.500664 (-6.400621) | 0.041289 / 0.075469 (-0.034180) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965418 / 1.841788 (-0.876370) | 12.703830 / 8.074308 (4.629522) | 11.301401 / 10.191392 (1.110009) | 0.131429 / 0.680424 (-0.548995) | 0.016597 / 0.534201 (-0.517604) | 0.273290 / 0.579283 (-0.305993) | 0.285400 / 0.434364 (-0.148964) | 0.307327 / 0.540337 (-0.233011) | 0.434186 / 1.386936 (-0.952750) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c096bd288d07ed86f340ae090e5d4d9c5351f76f \"CML watermark\")\n"
] | 2023-11-13T09:15:39 | 2023-11-14T10:29:48 | 2023-11-14T10:23:29 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6404",
"html_url": "https://github.com/huggingface/datasets/pull/6404",
"diff_url": "https://github.com/huggingface/datasets/pull/6404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6404.patch",
"merged_at": "2023-11-14T10:23:29"
} | Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm).
Fix #6396. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6404/timeline | null | null | true |
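For context on the row above, a minimal sketch of how an environment can be checked for the patched PyArrow release is shown below; the upgrade command and the `packaging` helper are illustrative assumptions; only the 14.0.1 threshold comes from the PR description.

```python
# Sketch: confirm the installed pyarrow already contains the CVE-2023-47248 fix
# (14.0.1 or later, per the PR body above). Upgrade first if needed, e.g.:
#   pip install --upgrade "pyarrow>=14.0.1" datasets
from packaging import version
import pyarrow

assert version.parse(pyarrow.__version__) >= version.parse("14.0.1"), (
    f"pyarrow {pyarrow.__version__} predates the CVE-2023-47248 fix"
)
print(f"pyarrow {pyarrow.__version__} looks OK")
```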
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"login": "nabilaannisa",
"id": 15389235,
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nabilaannisa",
"html_url": "https://github.com/nabilaannisa",
"followers_url": "https://api.github.com/users/nabilaannisa/followers",
"following_url": "https://api.github.com/users/nabilaannisa/following{/other_user}",
"gists_url": "https://api.github.com/users/nabilaannisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nabilaannisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nabilaannisa/subscriptions",
"organizations_url": "https://api.github.com/users/nabilaannisa/orgs",
"repos_url": "https://api.github.com/users/nabilaannisa/repos",
"events_url": "https://api.github.com/users/nabilaannisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/nabilaannisa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.",
"okay, it works! thank you so much! π "
] | 2023-11-13T08:14:43 | 2023-11-16T05:04:22 | 2023-11-16T05:04:21 | NONE | null | null | null | ### Describe the bug
I'm trying to run the full Colab demo notebook for zero-shot distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation, but I get the following error when importing datasets on Google Colab (Python version 3.10.12):
![image](https://github.com/huggingface/datasets/assets/15389235/6f7758a2-681d-4436-87d0-5e557838e368)
I found the same problem, which was solved in #3326, but it still seems to fail on Google Colab. I can't try it locally in a Jupyter notebook because my laptop doesn't meet the resource requirements.
Can anyone please help me solve this problem? Thank you.
### Steps to reproduce the bug
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>()
----> 1 from datasets import load_dataset
2
3 # Print all the available datasets
4 from huggingface_hub import list_datasets
5 print([dataset.id for dataset in list_datasets()])
6 frames
[/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated)
59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it
60 # from the wrapped function when updating __dict__
---> 61 wrapper.__wrapped__ = wrapped
62 # Return the wrapper so this can be used as a decorator via partial()
63 return wrapper
AttributeError: readonly attribute
```
### Expected behavior
Runs successfully on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6403/timeline | null | completed | false |
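Following the fix suggested in the comments of the issue above (an outdated `datasets` installation in the Colab runtime), a minimal notebook sketch could look like this; the cells assume a Colab/Jupyter environment and a runtime restart after the upgrade.

```python
# In a Colab/Jupyter cell: inspect the environment, then update `datasets`.
!datasets-cli env
!pip install -U datasets

# After restarting the runtime, the import from the report should work again:
from datasets import load_dataset
print(load_dataset.__module__)  # sanity check that the fresh installation is used
```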
https://api.github.com/repos/huggingface/datasets/issues/6402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6402/comments | https://api.github.com/repos/huggingface/datasets/issues/6402/events | https://github.com/huggingface/datasets/pull/6402 | 1,989,094,542 | PR_kwDODunzps5fOBdK | 6,402 | Update torch_formatter.py | {
"login": "VarunNSrivastava",
"id": 32204417,
"node_id": "MDQ6VXNlcjMyMjA0NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32204417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VarunNSrivastava",
"html_url": "https://github.com/VarunNSrivastava",
"followers_url": "https://api.github.com/users/VarunNSrivastava/followers",
"following_url": "https://api.github.com/users/VarunNSrivastava/following{/other_user}",
"gists_url": "https://api.github.com/users/VarunNSrivastava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VarunNSrivastava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VarunNSrivastava/subscriptions",
"organizations_url": "https://api.github.com/users/VarunNSrivastava/orgs",
"repos_url": "https://api.github.com/users/VarunNSrivastava/repos",
"events_url": "https://api.github.com/users/VarunNSrivastava/events{/privacy}",
"received_events_url": "https://api.github.com/users/VarunNSrivastava/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 2023-11-11T19:40:41 | 2023-11-11T19:41:53 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6402",
"html_url": "https://github.com/huggingface/datasets/pull/6402",
"diff_url": "https://github.com/huggingface/datasets/pull/6402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6402.patch",
"merged_at": null
} | Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6402/timeline | null | null | true |
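To illustrate the layout change described in the pull-request row above (channel-last to channel-first image tensors), a small standalone sketch is given below; the shapes are made up for the example and the snippet is not taken from `torch_formatter.py` itself.

```python
import torch

# An (H, W, C) uint8 image, the layout PIL/NumPy-style decoding typically yields.
img_hwc = torch.randint(0, 256, (224, 224, 3), dtype=torch.uint8)

# Channel-first layout expected by most PyTorch vision models: (C, H, W).
img_chw = img_hwc.permute(2, 0, 1).contiguous()
print(img_chw.shape)  # torch.Size([3, 224, 224])
```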
https://api.github.com/repos/huggingface/datasets/issues/6401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6401/comments | https://api.github.com/repos/huggingface/datasets/issues/6401/events | https://github.com/huggingface/datasets/issues/6401 | 1,988,710,061 | I_kwDODunzps52iU6t | 6,401 | dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working | {
"login": "userbox020",
"id": 47074021,
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userbox020",
"html_url": "https://github.com/userbox020",
"followers_url": "https://api.github.com/users/userbox020/followers",
"following_url": "https://api.github.com/users/userbox020/following{/other_user}",
"gists_url": "https://api.github.com/users/userbox020/gists{/gist_id}",
"starred_url": "https://api.github.com/users/userbox020/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/userbox020/subscriptions",
"organizations_url": "https://api.github.com/users/userbox020/orgs",
"repos_url": "https://api.github.com/users/userbox020/repos",
"events_url": "https://api.github.com/users/userbox020/events{/privacy}",
"received_events_url": "https://api.github.com/users/userbox020/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems like it's a problem with the dataset, since in the [README](https://huggingface.co./datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ ",
"@VarunNSrivastava thanks brother, working beautiful now\r\n\r\n```\r\nC:\\_Work\\_datasets>py dataset.py\r\nDownloading data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<?, ?it/s]\r\nExtracting data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 599.90it/s]\r\nGenerating train split: 314294 examples [00:00, 1293222.03 examples/s]\r\nGenerating validation split: 120 examples [00:00, 59053.91 examples/s]\r\nGenerating test split: 34922 examples [00:00, 1343275.84 examples/s]\r\n```"
] | 2023-11-11T04:09:07 | 2023-11-20T17:45:20 | 2023-11-20T17:45:20 | NONE | null | null | null | ### Describe the bug
```
(datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py
Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s]
Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s]
Downloading data: 100%|█████████████████████████████████| 6.35k/6.35k [00:00<00:00, 20.7kB/s]
Downloading data: 100%|█████████████████████████████████| 7.29M/7.29M [00:01<00:00, 3.99MB/s]
Downloading data files: 100%|██████████████████████████████████| 3/3 [00:21<00:00, 7.14s/it]
Extracting data files: 100%|█████████████████████████████████| 3/3 [00:00<00:00, 1624.23it/s]
Generating train split: 100%|█████████████| 314294/314294 [00:00<00:00, 668186.58 examples/s]
Generating validation split: 120 examples [00:00, 100422.28 examples/s]
Generating test split: 100%|████████████████| 34922/34922 [00:00<00:00, 754683.41 examples/s]
Traceback (most recent call last):
File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module>
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset
builder_instance.download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits
raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits)))
datasets.utils.info_utils.UnexpectedSplits: {'validation'}
```
### Steps to reproduce the bug
Name:
`dataset.py`
Code:
```
from datasets import load_dataset
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text")
```
### Expected behavior
Run without errors
### Environment info
```
name: datasets
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- ncurses=6.4=h6a678d5_0
- openssl=3.0.12=h7f8727e_0
- python=3.10.13=h955ad1f_0
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py310h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- tk=8.6.12=h1ccaba5_0
- wheel=0.41.2=py310h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- aiohttp==3.8.6
- aiosignal==1.3.1
- async-timeout==4.0.3
- attrs==23.1.0
- certifi==2023.7.22
- charset-normalizer==3.3.2
- click==8.1.7
- datasets==2.14.6
- dill==0.3.7
- filelock==3.13.1
- frozenlist==1.4.0
- fsspec==2023.10.0
- huggingface-hub==0.19.0
- idna==3.4
- multidict==6.0.4
- multiprocess==0.70.15
- numpy==1.26.1
- openai==0.27.8
- packaging==23.2
- pandas==2.1.3
- pip==23.3.1
- platformdirs==4.0.0
- pyarrow==14.0.1
- python-dateutil==2.8.2
- pytz==2023.3.post1
- pyyaml==6.0.1
- requests==2.31.0
- six==1.16.0
- tomli==2.0.1
- tqdm==4.66.1
- typer==0.9.0
- typing-extensions==4.8.0
- tzdata==2023.3
- urllib3==2.0.7
- xxhash==3.4.1
- yarl==1.9.2
prefix: /home/mruserbox/miniconda3/envs/datasets
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6401/timeline | null | completed | false |
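For the `UnexpectedSplits` error in the row above, the thread's fix is to load a local copy without the stale README metadata; an alternative sketch, assuming a `datasets` release that accepts the `verification_mode` argument, would skip the split verification instead (this workaround is an assumption, not something confirmed in the thread).

```python
from datasets import load_dataset

# Skip the split-consistency check that raises UnexpectedSplits when the
# README's split metadata is out of sync with the actual data files.
dataset = load_dataset(
    "Hyperspace-Technologies/scp-wiki-text",
    verification_mode="no_checks",
)
print(dataset)
```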