The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed.

Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type timestamp[s] to null
Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in cast_array_to_feature
    arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in <listcomp>
    arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
    return array_cast(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
    raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type timestamp[s] to null

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
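The root failure, `Couldn't cast array of type timestamp[s] to null`, is the classic symptom of schema inference on a sparse column: a shard in which the column is null in every row gets typed `null`, while another shard holds real timestamps. Below is a minimal sketch of how such a mismatch arises, assuming that inference path (the column name is illustrative):

```python
import pyarrow as pa

# One shard where the column is always null: pyarrow infers type `null`.
shard_a = pa.table({"closed_at": pa.array([None, None])})
# Another shard with real values, declared as timestamp[s].
shard_b = pa.table({"closed_at": pa.array([1_719_303_222], type=pa.timestamp("s"))})

print(shard_a.schema)  # closed_at: null
print(shard_b.schema)  # closed_at: timestamp[s]

# When datasets casts shard_b to the schema inferred from shard_a,
# array_cast raises: TypeError: Couldn't cast array of type timestamp[s] to null
```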
Column schema of the preview (each data row below is a single pipe-separated record in this column order):

| Column | Type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | null |
| comments | sequence |
| created_at | int64 |
| updated_at | int64 |
| closed_at | int64 |
| author_association | string |
| active_lock_reason | null |
| draft | float64 |
| pull_request | dict |
| body | string |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | null |
| state_reason | string |
| is_pull_request | bool |
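Note that `created_at`, `updated_at`, and `closed_at` are int64 epoch milliseconds rather than native timestamps; for instance the `closed_at` value 1,719,303,222,000 in the second row matches its `merged_at` string `2024-06-25T08:13:42`. A small decoding helper, assuming the values are UTC:

```python
from datetime import datetime, timezone

def ms_to_datetime(ms: int) -> datetime:
    """Decode an int64 epoch-milliseconds field (assumed UTC)."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(ms_to_datetime(1_719_303_222_000))  # 2024-06-25 08:13:42+00:00
```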
https://api.github.com/repos/huggingface/datasets/issues/6999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6999/comments | https://api.github.com/repos/huggingface/datasets/issues/6999/events | https://github.com/huggingface/datasets/pull/6999 | 2,372,124,589 | PR_kwDODunzps5zd-ak | 6,999 | Remove tasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6999). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,719,306,376,000 | 1,719,306,376,000 | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6999",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6999"
} | Remove tasks, as part of the 3.0 release. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6999/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6998/comments | https://api.github.com/repos/huggingface/datasets/issues/6998/events | https://github.com/huggingface/datasets/pull/6998 | 2,371,973,926 | PR_kwDODunzps5zddYG | 6,998 | Fix tests using hf-internal-testing/librispeech_asr_dummy | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6998). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005396 / 0.011353 (-0.005957) | 0.003974 / 0.011008 (-0.007034) | 0.063490 / 0.038508 (0.024982) | 0.030299 / 0.023109 (0.007189) | 0.244489 / 0.275898 (-0.031409) | 0.274116 / 0.323480 (-0.049364) | 0.003187 / 0.007986 (-0.004798) | 0.003433 / 0.004328 (-0.000896) | 0.049313 / 0.004250 (0.045062) | 0.043677 / 0.037052 (0.006624) | 0.260198 / 0.258489 (0.001709) | 0.283558 / 0.293841 (-0.010283) | 0.029728 / 0.128546 (-0.098819) | 0.011950 / 0.075646 (-0.063696) | 0.204371 / 0.419271 (-0.214901) | 0.035712 / 0.043533 (-0.007821) | 0.252715 / 0.255139 (-0.002424) | 0.268906 / 0.283200 (-0.014293) | 0.021153 / 0.141683 (-0.120529) | 1.125599 / 1.452155 (-0.326556) | 1.163122 / 1.492716 (-0.329594) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095089 / 0.018006 (0.077083) | 0.298576 / 0.000490 (0.298086) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018567 / 0.037411 (-0.018844) | 0.062337 / 0.014526 (0.047811) | 0.074231 / 0.176557 (-0.102326) | 0.120960 / 0.737135 (-0.616175) | 0.076124 / 0.296338 (-0.220215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286936 / 0.215209 (0.071727) | 2.816656 / 2.077655 (0.739001) | 1.486772 / 1.504120 (-0.017348) | 1.373289 / 1.541195 (-0.167905) | 1.392739 / 
1.468490 (-0.075752) | 0.708091 / 4.584777 (-3.876686) | 2.410034 / 3.745712 (-1.335678) | 2.912701 / 5.269862 (-2.357161) | 1.850924 / 4.565676 (-2.714752) | 0.078896 / 0.424275 (-0.345380) | 0.005116 / 0.007607 (-0.002491) | 0.332275 / 0.226044 (0.106231) | 3.306562 / 2.268929 (1.037633) | 1.853051 / 55.444624 (-53.591573) | 1.556210 / 6.876477 (-5.320267) | 1.558892 / 2.142072 (-0.583181) | 0.789917 / 4.805227 (-4.015310) | 0.133683 / 6.500664 (-6.366981) | 0.042566 / 0.075469 (-0.032904) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957050 / 1.841788 (-0.884738) | 11.401462 / 8.074308 (3.327154) | 9.782988 / 10.191392 (-0.408404) | 0.142127 / 0.680424 (-0.538296) | 0.014730 / 0.534201 (-0.519471) | 0.302647 / 0.579283 (-0.276636) | 0.264654 / 0.434364 (-0.169710) | 0.341340 / 0.540337 (-0.198998) | 0.425808 / 1.386936 (-0.961128) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005679 / 0.011353 (-0.005674) | 0.003513 / 0.011008 (-0.007495) | 0.050135 / 0.038508 (0.011627) | 0.031614 / 0.023109 (0.008505) | 0.260064 / 0.275898 (-0.015834) | 0.285816 / 0.323480 (-0.037664) | 0.004428 / 0.007986 (-0.003558) | 0.002816 / 0.004328 (-0.001512) | 0.048441 / 0.004250 (0.044191) | 0.039622 / 0.037052 (0.002570) | 0.274940 / 0.258489 (0.016451) | 0.311837 / 0.293841 (0.017996) | 0.031439 / 0.128546 (-0.097107) | 0.012056 / 0.075646 (-0.063590) | 0.060109 / 0.419271 (-0.359163) | 0.033123 / 0.043533 (-0.010409) | 0.261563 / 0.255139 (0.006424) | 0.282640 / 0.283200 (-0.000560) | 0.017168 / 0.141683 (-0.124515) | 1.127859 / 1.452155 (-0.324295) | 1.182414 / 1.492716 (-0.310303) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095517 / 0.018006 (0.077510) | 0.300578 / 0.000490 (0.300088) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022192 / 0.037411 (-0.015220) | 0.076617 / 0.014526 (0.062091) | 0.087405 / 0.176557 (-0.089151) | 0.127011 / 0.737135 (-0.610124) | 0.088706 / 0.296338 (-0.207632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294260 / 0.215209 (0.079051) | 2.872879 / 2.077655 (0.795224) | 1.531374 / 1.504120 (0.027254) | 1.399232 / 1.541195 (-0.141962) | 1.400708 / 1.468490 (-0.067782) | 0.714003 / 4.584777 (-3.870773) | 0.943144 / 3.745712 (-2.802568) | 2.833396 / 5.269862 (-2.436466) | 1.890570 / 4.565676 (-2.675106) | 0.077664 / 0.424275 (-0.346611) | 0.005651 / 0.007607 (-0.001956) | 0.349476 / 0.226044 (0.123431) | 3.405768 / 2.268929 (1.136840) | 1.869739 / 55.444624 (-53.574885) | 1.575293 / 6.876477 (-5.301184) | 1.692981 / 2.142072 (-0.449092) | 0.795363 / 4.805227 (-4.009865) | 0.131532 / 6.500664 (-6.369132) | 0.041183 / 0.075469 (-0.034286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000821 / 1.841788 (-0.840967) | 11.987795 / 8.074308 (3.913487) | 10.147652 / 10.191392 (-0.043740) | 0.141314 / 0.680424 (-0.539110) | 0.015506 / 0.534201 (-0.518695) | 0.305090 / 0.579283 (-0.274193) | 0.123403 / 0.434364 (-0.310960) | 0.346507 / 0.540337 (-0.193831) | 0.471318 / 1.386936 (-0.915618) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#186b560eb2393c7d1913f4b3e76e9e04a081e09b \"CML watermark\")\n"
] | 1,719,302,384,000 | 1,719,303,758,000 | 1,719,303,222,000 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6998.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6998",
"merged_at": "2024-06-25T08:13:42",
"patch_url": "https://github.com/huggingface/datasets/pull/6998.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6998"
} | Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet.
Fix #6997. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6998/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6997/comments | https://api.github.com/repos/huggingface/datasets/issues/6997/events | https://github.com/huggingface/datasets/issues/6997 | 2,371,966,127 | I_kwDODunzps6NYVSv | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 1,719,302,144,000 | 1,719,303,223,000 | 1,719,303,223,000 | MEMBER | null | null | null | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'other'
Full diff:
[
'clean',
- 'other',
]
FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None
```
Note that the repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a (see the sketch after this row) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6997/timeline | null | completed | false |
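The failing assertions in the CI log above call `datasets`' config-inspection helpers. A sketch of the two calls involved, shown only to illustrate the API (the expected-vs-actual values are exactly what the log reports, and the top-level import path is assumed from recent `datasets` releases):

```python
from datasets import get_dataset_config_names, get_dataset_default_config_name

repo = "hf-internal-testing/librispeech_asr_dummy"

# The tests expected ['clean', 'other'] and a default config of None;
# per the log above, the returned values changed after the Parquet conversion.
print(get_dataset_config_names(repo))
print(get_dataset_default_config_name(repo))
```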
https://api.github.com/repos/huggingface/datasets/issues/6996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6996/comments | https://api.github.com/repos/huggingface/datasets/issues/6996/events | https://github.com/huggingface/datasets/pull/6996 | 2,371,841,671 | PR_kwDODunzps5zdAg0 | 6,996 | Remove deprecated code | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6996). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,719,298,480,000 | 1,719,306,481,000 | null | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6996.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6996",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6996.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6996"
} | Remove deprecated code, as part of the 3.0 release. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6996/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6995/comments | https://api.github.com/repos/huggingface/datasets/issues/6995/events | https://github.com/huggingface/datasets/issues/6995 | 2,370,713,475 | I_kwDODunzps6NTjeD | 6,995 | ImportError when importing datasets.load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/124846947?v=4",
"events_url": "https://api.github.com/users/Leo-Lsc/events{/privacy}",
"followers_url": "https://api.github.com/users/Leo-Lsc/followers",
"following_url": "https://api.github.com/users/Leo-Lsc/following{/other_user}",
"gists_url": "https://api.github.com/users/Leo-Lsc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Leo-Lsc",
"id": 124846947,
"login": "Leo-Lsc",
"node_id": "U_kgDOB3EDYw",
"organizations_url": "https://api.github.com/users/Leo-Lsc/orgs",
"received_events_url": "https://api.github.com/users/Leo-Lsc/received_events",
"repos_url": "https://api.github.com/users/Leo-Lsc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Leo-Lsc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Leo-Lsc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Leo-Lsc"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not still implemented. You need to update it:\r\n```\r\npip install -U huggingface-hub\r\n```\r\n\r\nNote that `CommitInfo` was implemented in huggingface-hub 0.10.0 and datasets requires \"huggingface-hub>=0.21.2\"",
"The version of my huggingface-hub is 0.23.4.",
"The error message says there is no CommitInfo in your installed huggingface-hub library:\r\n```\r\nImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\\Anaconda3\\envs\\CS224S\\Lib\\site-packages\\huggingface_hub_init_.py)\r\n```\r\n\r\nAnd this is implemented since version 0.10.0:\r\n- https://github.com/huggingface/huggingface_hub/pull/1066"
] | 1,719,248,842,000 | 1,719,299,845,000 | 1,719,295,897,000 | NONE | null | null | null | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. from datasets import load_dataset
### Expected behavior
```
ImportError                               Traceback (most recent call last)
Cell In[7], line 1
----> 1 from datasets import load_dataset
      3 train_set = load_dataset("mispeech/speechocean762", split="train")
      4 test_set = load_dataset("mispeech/speechocean762", split="test")

File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:17
      1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 __version__ = "2.20.1.dev0"
---> 17 from .arrow_dataset import Dataset
     18 from .arrow_reader import ReadInstruction
     19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder

File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63
     61 import pyarrow.compute as pc
     62 from fsspec.core import url_to_fs
---> 63 from huggingface_hub import (
     64     CommitInfo,
     65     CommitOperationAdd,
    ...
     70 )
     71 from huggingface_hub.hf_api import RepoFile
     72 from multiprocess import Pool

ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
```
(output truncated)
### Environment info
```
Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub
$ datasets-cli env
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module>
  File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module>
    from .arrow_dataset import Dataset
  File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module>
    from huggingface_hub import (
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
```
(CS224S) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6995/timeline | null | completed | false |
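Following the comments above (`CommitInfo` exists since `huggingface-hub` 0.10.0, and `datasets` requires `huggingface-hub>=0.21.2`), a quick sanity check of the environment looks like:

```python
import huggingface_hub

print(huggingface_hub.__version__)  # should be >= 0.21.2 for current datasets

# Reproduces the reporter's failure on installs older than 0.10.0:
from huggingface_hub import CommitInfo  # noqa: F401
```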
https://api.github.com/repos/huggingface/datasets/issues/6994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6994/comments | https://api.github.com/repos/huggingface/datasets/issues/6994/events | https://github.com/huggingface/datasets/pull/6994 | 2,370,491,689 | PR_kwDODunzps5zYYXr | 6,994 | Fix incorrect rank value in data splitting (#6990) | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yzhangcs",
"id": 18402347,
"login": "yzhangcs",
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yzhangcs"
} | [] | open | false | null | [] | null | [
"Sure~",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6994). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005538 / 0.011353 (-0.005815) | 0.003997 / 0.011008 (-0.007011) | 0.063444 / 0.038508 (0.024935) | 0.032552 / 0.023109 (0.009442) | 0.266574 / 0.275898 (-0.009324) | 0.282841 / 0.323480 (-0.040639) | 0.004279 / 0.007986 (-0.003706) | 0.002788 / 0.004328 (-0.001540) | 0.049226 / 0.004250 (0.044976) | 0.044688 / 0.037052 (0.007636) | 0.275464 / 0.258489 (0.016975) | 0.305278 / 0.293841 (0.011437) | 0.030097 / 0.128546 (-0.098450) | 0.012237 / 0.075646 (-0.063410) | 0.205526 / 0.419271 (-0.213745) | 0.036145 / 0.043533 (-0.007388) | 0.267395 / 0.255139 (0.012256) | 0.289149 / 0.283200 (0.005949) | 0.019044 / 0.141683 (-0.122639) | 1.162294 / 1.452155 (-0.289861) | 1.183642 / 1.492716 (-0.309074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.139125 / 0.018006 (0.121119) | 0.301743 / 0.000490 (0.301253) | 0.000260 / 0.000200 (0.000061) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019494 / 0.037411 (-0.017917) | 0.063078 / 0.014526 (0.048552) | 0.076989 / 0.176557 (-0.099567) | 0.121363 / 0.737135 (-0.615773) | 0.080040 / 0.296338 (-0.216298) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284401 / 0.215209 (0.069192) | 2.805397 / 2.077655 (0.727742) | 1.555609 / 1.504120 (0.051489) | 1.405662 / 1.541195 (-0.135533) | 1.459492 / 
1.468490 (-0.008999) | 0.718376 / 4.584777 (-3.866401) | 2.395918 / 3.745712 (-1.349794) | 2.976753 / 5.269862 (-2.293108) | 1.883938 / 4.565676 (-2.681738) | 0.078867 / 0.424275 (-0.345408) | 0.005207 / 0.007607 (-0.002400) | 0.335178 / 0.226044 (0.109133) | 3.313414 / 2.268929 (1.044485) | 1.856929 / 55.444624 (-53.587696) | 1.565319 / 6.876477 (-5.311158) | 1.592723 / 2.142072 (-0.549350) | 0.793621 / 4.805227 (-4.011606) | 0.134208 / 6.500664 (-6.366456) | 0.042853 / 0.075469 (-0.032616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981553 / 1.841788 (-0.860235) | 11.810438 / 8.074308 (3.736130) | 9.529874 / 10.191392 (-0.661518) | 0.142216 / 0.680424 (-0.538207) | 0.014303 / 0.534201 (-0.519898) | 0.304600 / 0.579283 (-0.274684) | 0.261869 / 0.434364 (-0.172495) | 0.347301 / 0.540337 (-0.193036) | 0.437395 / 1.386936 (-0.949541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005881 / 0.011353 (-0.005472) | 0.004039 / 0.011008 (-0.006969) | 0.050241 / 0.038508 (0.011733) | 0.032670 / 0.023109 (0.009561) | 0.264940 / 0.275898 (-0.010959) | 0.287105 / 0.323480 (-0.036374) | 0.004844 / 0.007986 (-0.003142) | 0.002867 / 0.004328 (-0.001462) | 0.048083 / 0.004250 (0.043833) | 0.040965 / 0.037052 (0.003913) | 0.274390 / 0.258489 (0.015901) | 0.312107 / 0.293841 (0.018266) | 0.031714 / 0.128546 (-0.096832) | 0.012603 / 0.075646 (-0.063043) | 0.060698 / 0.419271 (-0.358573) | 0.033130 / 0.043533 (-0.010402) | 0.264444 / 0.255139 (0.009305) | 0.282797 / 0.283200 (-0.000403) | 0.027872 / 0.141683 (-0.113811) | 1.139026 / 1.452155 (-0.313129) | 1.181431 / 1.492716 (-0.311285) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097314 / 0.018006 (0.079308) | 0.301326 / 0.000490 (0.300836) | 0.000215 / 0.000200 (0.000015) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023394 / 0.037411 (-0.014018) | 0.076270 / 0.014526 (0.061744) | 0.089065 / 0.176557 (-0.087491) | 0.129996 / 0.737135 (-0.607139) | 0.089642 / 0.296338 (-0.206697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295390 / 0.215209 (0.080181) | 2.877849 / 2.077655 (0.800194) | 1.537129 / 1.504120 (0.033009) | 1.409441 / 1.541195 (-0.131754) | 1.432468 / 1.468490 (-0.036023) | 0.718054 / 4.584777 (-3.866722) | 0.930872 / 3.745712 (-2.814841) | 2.841028 / 5.269862 (-2.428834) | 1.921990 / 4.565676 (-2.643686) | 0.077638 / 0.424275 (-0.346637) | 0.005494 / 0.007607 (-0.002113) | 0.336331 / 0.226044 (0.110287) | 3.330490 / 2.268929 (1.061561) | 1.887994 / 55.444624 (-53.556630) | 1.593332 / 6.876477 (-5.283144) | 1.726956 / 2.142072 (-0.415116) | 0.783612 / 4.805227 (-4.021615) | 0.129926 / 6.500664 (-6.370738) | 0.040792 / 0.075469 (-0.034677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980274 / 1.841788 (-0.861514) | 12.193871 / 8.074308 (4.119563) | 10.348934 / 10.191392 (0.157542) | 0.141584 / 0.680424 (-0.538840) | 0.015737 / 0.534201 (-0.518464) | 0.300725 / 0.579283 (-0.278558) | 0.127190 / 0.434364 (-0.307174) | 0.341142 / 0.540337 (-0.199196) | 0.459523 / 1.386936 (-0.927413) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#637246baf96f07b19b193ed101f34b65cb35cffb \"CML watermark\")\n"
] | 1,719,241,667,000 | 1,719,241,667,000 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6994.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6994",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6994.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6994"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6994/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6993/comments | https://api.github.com/repos/huggingface/datasets/issues/6993/events | https://github.com/huggingface/datasets/pull/6993 | 2,370,444,104 | PR_kwDODunzps5zYN7P | 6,993 | less script docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6993). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,719,240,328,000 | 1,719,240,470,000 | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6993.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6993",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6993.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6993"
} | + mark as legacy in some parts of the docs since we'll not build new features for script datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6993/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6992/comments | https://api.github.com/repos/huggingface/datasets/issues/6992/events | https://github.com/huggingface/datasets/issues/6992 | 2,367,890,622 | I_kwDODunzps6NIyS- | 6,992 | Dataset with streaming doesn't work with proxy | {
"avatar_url": "https://avatars.githubusercontent.com/u/57779173?v=4",
"events_url": "https://api.github.com/users/YHL04/events{/privacy}",
"followers_url": "https://api.github.com/users/YHL04/followers",
"following_url": "https://api.github.com/users/YHL04/following{/other_user}",
"gists_url": "https://api.github.com/users/YHL04/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YHL04",
"id": 57779173,
"login": "YHL04",
"node_id": "MDQ6VXNlcjU3Nzc5MTcz",
"organizations_url": "https://api.github.com/users/YHL04/orgs",
"received_events_url": "https://api.github.com/users/YHL04/received_events",
"repos_url": "https://api.github.com/users/YHL04/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YHL04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YHL04/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YHL04"
} | [] | open | false | null | [] | null | [
"Hi ! can you try updating `datasets` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U datasets huggingface_hub\r\n```"
] | 1,719,072,728,000 | 1,719,072,728,000 | null | NONE | null | null | null | ### Describe the bug
I'm currently trying to stream the data with `datasets` since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, a supercomputer that connects to the internet through a proxy. I assume this has to do with the network configuration. I've already set both HTTP_PROXY and HTTPS_PROXY. streaming = False works fine.
### Steps to reproduce the bug
Use `load_dataset` with `streaming=True` on AIMOS (a sketch follows).
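A minimal sketch of the reported setup; the dataset id and proxy URL are placeholders, since neither is named in the report:

```python
import os
from datasets import load_dataset

# Placeholders: AIMOS routes outbound traffic through a site proxy.
os.environ["HTTP_PROXY"] = "http://proxy.example.edu:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.edu:8080"

ds = load_dataset("some-org/some-large-dataset", split="train", streaming=True)
print(next(iter(ds)))  # hangs indefinitely behind the proxy; streaming=False works
```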
### Expected behavior
Does not hang indefinitely; loads batches so the training run can start.
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
_pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
abseil-cpp 20220623.0 h9888cd1_6 conda-forge
absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge
aiohttp 3.8.6 py311hf118e41_0
aiosignal 1.2.0 pyhd3eb1b0_0
archspec 0.2.3 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge
async-timeout 4.0.2 py311h6ffa863_0
attrs 23.1.0 py311h6ffa863_0
av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
aws-c-auth 0.6.24 hb81f6d7_5 conda-forge
aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge
aws-c-common 0.8.11 h4194056_0 conda-forge
aws-c-compression 0.2.16 ha19333d_3 conda-forge
aws-c-event-stream 0.2.18 h12a9399_6 conda-forge
aws-c-http 0.7.4 ha2cde00_2 conda-forge
aws-c-io 0.13.17 h9189062_2 conda-forge
aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge
aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge
aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge
aws-checksums 0.1.14 ha19333d_3 conda-forge
aws-crt-cpp 0.19.7 hd018011_7 conda-forge
aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge
blas 1.0 openblas
blinker 1.8.2 pyhd8ed1ab_0 conda-forge
boltons 23.0.0 py311h6ffa863_0
boost-cpp 1.82.0 h25e6d66_2
bottleneck 1.3.5 py311h34f6284_0
brotli 1.0.9 hf118e41_7
brotli-bin 1.0.9 hf118e41_7
brotli-python 1.0.9 py311h4a02239_7
bzip2 1.0.8 h7b6447c_0
c-ares 1.19.1 hf118e41_0
ca-certificates 2024.6.2 h0f6029e_0 conda-forge
cachetools 5.3.3 pyhd8ed1ab_0 conda-forge
certifi 2024.6.2 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311hf118e41_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 unix_pyh707e725_0 conda-forge
conda 24.5.0 py311h1af927a_0 conda-forge
conda-content-trust 0.2.0 py311h6ffa863_0
conda-libmamba-solver 23.11.1 py311h6ffa863_0
conda-package-handling 2.2.0 py311h6ffa863_0
conda-package-streaming 0.9.0 py311h6ffa863_0
contourpy 1.0.5 py311h25e6d66_0
cryptography 41.0.3 py311hb0e80e7_0
cudatoolkit 11.8.0 hedcfb66_13 conda-forge
cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
cycler 0.11.0 pyhd3eb1b0_0
datasets 2.12.0 py311h6ffa863_0
dill 0.3.6 py311h6ffa863_0
distro 1.9.0 pyhd8ed1ab_0 conda-forge
ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
filelock 3.9.0 py311h6ffa863_0
fmt 9.1.0 h25e6d66_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 hd23a775_0
frozendict 2.4.4 py311hb02d432_0 conda-forge
frozenlist 1.4.0 py311hf118e41_0
fsspec 2023.9.2 py311h6ffa863_0
gflags 2.2.2 he6710b0_0
giflib 5.2.1 hf118e41_3
glog 0.6.0 hbe088e0_0 conda-forge
gmp 6.3.0 h46f38da_0 conda-forge
gmpy2 2.1.5 py311h2758da7_1 conda-forge
google-auth 2.30.0 pyhff2d567_0 conda-forge
google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge
grpc-cpp 1.51.1 h8ba971d_1 conda-forge
grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
huggingface_hub 0.17.3 py311h6ffa863_0
icu 73.1 h4a02239_0
idna 3.4 py311h6ffa863_0
importlib-metadata 6.0.0 py311h6ffa863_0
jinja2 3.1.4 pyhd8ed1ab_0 conda-forge
jpeg 9e hf118e41_1
jsonpatch 1.32 pyhd3eb1b0_0
jsonpointer 2.1 pyhd3eb1b0_0
kiwisolver 1.4.4 py311h4a02239_0
krb5 1.20.1 hc019ccd_1
lame 3.100 hb283c62_1003 conda-forge
lcms2 2.12 h2045e0b_0
ld_impl_linux-ppc64le 2.38 hec883e6_1
lerc 3.0 h29c3540_0
leveldb 1.23 h24532b4_1 conda-forge
libabseil 20220623.0 cxx17_h9235812_6 conda-forge
libarchive 3.6.2 hd8ab008_2
libarrow 11.0.0 h837770b_5_cpu conda-forge
libboost 1.82.0 haf51a6a_2
libbrotlicommon 1.0.9 hf118e41_7
libbrotlidec 1.0.9 hf118e41_7
libbrotlienc 1.0.9 hf118e41_7
libcrc32c 1.1.2 h3b9df90_0 conda-forge
libcurl 8.4.0 h4d62439_0
libdeflate 1.17 hf118e41_1
libedit 3.1.20221030 hf118e41_0
libev 4.33 h140841e_1
libevent 2.1.10 h19c23f1_4 conda-forge
libexpat 2.6.2 h46f38da_0 conda-forge
libffi 3.4.4 h4a02239_0
libgcc-ng 13.2.0 h31e42bb_10 conda-forge
libgfortran-ng 11.2.0 hb3889a9_1
libgfortran5 11.2.0 h1234567_1
libgomp 13.2.0 h31e42bb_10 conda-forge
libgoogle-cloud 2.7.0 h11140b6_1 conda-forge
libgrpc 1.51.1 h4d29a31_1 conda-forge
libmamba 1.5.3 h7c6fafd_0
libmambapy 1.5.3 py311h828bf7b_0
libnghttp2 1.57.0 h44e5816_0
libnsl 2.0.1 ha17a0cc_0 conda-forge
libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
libopus 1.3.1 h4e0d66e_1 conda-forge
libpng 1.6.39 hf118e41_0
libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
libsolv 0.7.24 h0f529ac_0
libsqlite 3.45.3 hd4bbf49_0 conda-forge
libssh2 1.10.0 h50fa78f_2
libstdcxx-ng 13.2.0 h262982c_10 conda-forge
libthrift 0.18.0 h82f1162_0 conda-forge
libtiff 4.5.1 h4a02239_0
libutf8proc 2.8.0 hb283c62_0 conda-forge
libuuid 2.38.1 h4194056_0 conda-forge
libvpx 1.13.1 h46f38da_0 conda-forge
libwebp 1.3.2 h0f96ee2_0
libwebp-base 1.3.2 hf118e41_0
libxcrypt 4.4.36 ha17a0cc_1 conda-forge
libxml2 2.10.4 h18e3229_1
libzlib 1.2.13 h1f2b957_6 conda-forge
llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
lmdb 0.9.31 ha17a0cc_1 conda-forge
lz4-c 1.9.4 h4a02239_0
markdown 3.4.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.1.5 py311h32d8acf_0 conda-forge
matplotlib 3.8.0 py311h6ffa863_0
matplotlib-base 3.8.0 py311h52e1fcc_0
menuinst 2.1.1 py311h1af927a_0 conda-forge
mpc 1.3.1 heaf1863_0 conda-forge
mpfr 4.2.1 haad2271_1 conda-forge
mpmath 1.3.0 pyhd8ed1ab_0 conda-forge
multidict 6.0.2 py311hf118e41_0
multiprocess 0.70.14 py311h6ffa863_0
munkres 1.1.4 py_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
ncurses 6.4 h4a02239_0
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 2.8.8 pyhd8ed1ab_0 conda-forge
nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0
numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
numexpr 2.8.7 py311hc46fc55_0
numpy 1.24.3 py311h148a09e_0
numpy-base 1.24.3 py311h06b82f6_0
oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge
openjpeg 2.4.0 hfe35807_0
openssl 3.3.1 h1f2b957_0 conda-forge
orc 1.8.2 h341c9a4_2 conda-forge
packaging 23.1 py311h6ffa863_0
pandas 2.1.1 py311h52e1fcc_0
pcre2 10.42 h280155c_0
pillow 10.0.1 py311he33076b_0
pip 23.3 py311h6ffa863_0
platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge
pluggy 1.0.0 py311h6ffa863_1
pooch 1.8.2 pyhd8ed1ab_0 conda-forge
protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
psutil 5.9.8 py311hd26027c_0 conda-forge
pyarrow 11.0.0 py311h04a18d5_1
pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge
pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge
pybind11-abi 4 hd3eb1b0_1
pycosat 0.6.6 py311hf118e41_0
pycparser 2.21 pyhd3eb1b0_0
pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge
pyopenssl 23.2.0 py311h6ffa863_0
pyparsing 3.0.9 py311h6ffa863_0
pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 py311h6ffa863_0
python 3.11.8 h3332dee_0_cpython conda-forge
python-dateutil 2.8.2 pyhd3eb1b0_0
python-tzdata 2023.3 pyhd3eb1b0_0
python-xxhash 2.0.2 py311hf118e41_1
python_abi 3.11 4_cp311 conda-forge
pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0
pytz 2023.3.post1 py311h6ffa863_0
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pyyaml 6.0.1 py311hf118e41_0
re2 2023.02.01 h883269e_0 conda-forge
readline 8.2 hf118e41_0
regex 2023.10.3 py311hf118e41_0
reproc 14.2.4 h29c3540_1
reproc-cpp 14.2.4 h29c3540_1
requests 2.31.0 py311h6ffa863_0
requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge
responses 0.13.3 pyhd3eb1b0_0
rsa 4.9 pyhd8ed1ab_0 conda-forge
ruamel.yaml 0.17.21 py311hf118e41_0
s2n 1.3.37 h5e47323_0 conda-forge
safetensors 0.4.0 py311hda16d9e_0
scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
setuptools 68.0.0 py311h6ffa863_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.1.9 h29c3540_0
sqlite 3.41.2 hf118e41_0
sympy 1.12.1 pypyh2585a3b_103 conda-forge
tabulate 0.8.10 pyhd8ed1ab_0 conda-forge
tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge
tk 8.6.13 hd4bbf49_0 conda-forge
tokenizers 0.13.3 py311h3d4f45a_0
torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge
torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge
torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tornado 6.3.3 py311hf118e41_0
tqdm 4.65.0 py311h7837921_0
transformers 4.32.1 py311h6ffa863_0
truststore 0.8.0 py311h6ffa863_0
typing-extensions 4.7.1 py311h6ffa863_0
typing_extensions 4.7.1 py311h6ffa863_0
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h04d1e81_0
urllib3 1.26.18 py311h6ffa863_0
utf8proc 2.6.1 h140841e_0
werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge
wheel 0.41.2 py311h6ffa863_0
xxhash 0.8.0 h140841e_3
xz 5.4.2 hf118e41_0
yaml 0.2.5 h7b6447c_0
yaml-cpp 0.8.0 h4a02239_0
yarl 1.8.1 py311hf118e41_0
zipp 3.11.0 py311h6ffa863_0
zlib 1.2.13 h1f2b957_6 conda-forge
zstandard 0.19.0 py311hf118e41_0
zstd 1.5.5 h57e4825_0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6992/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6991/comments | https://api.github.com/repos/huggingface/datasets/issues/6991/events | https://github.com/huggingface/datasets/pull/6991 | 2,367,711,094 | PR_kwDODunzps5zPoQs | 6,991 | Unblock NumPy 2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeilGirdhar",
"id": 730137,
"login": "NeilGirdhar",
"node_id": "MDQ6VXNlcjczMDEzNw==",
"organizations_url": "https://api.github.com/users/NeilGirdhar/orgs",
"received_events_url": "https://api.github.com/users/NeilGirdhar/received_events",
"repos_url": "https://api.github.com/users/NeilGirdhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeilGirdhar"
} | [] | open | false | null | [] | null | [] | 1,719,047,993,000 | 1,719,047,993,000 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6991.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6991",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6991.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6991"
} | Fixes https://github.com/huggingface/datasets/issues/6980 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6991/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6990/comments | https://api.github.com/repos/huggingface/datasets/issues/6990/events | https://github.com/huggingface/datasets/issues/6990 | 2,366,660,785 | I_kwDODunzps6NEGCx | 6,990 | Problematic rank after calling `split_dataset_by_node` twice | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yzhangcs",
"id": 18402347,
"login": "yzhangcs",
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yzhangcs"
} | [] | open | false | null | [] | null | [
"ah yes good catch ! feel free to open a PR with your suggested fix"
] | 1,718,979,926,000 | 1,719,221,137,000 | null | NONE | null | null | null | ### Describe the bug
I'm trying to split an `IterableDataset` with `split_dataset_by_node`.
But when splitting an already split dataset, the resulting `rank` is greater than the `world_size`.
### Steps to reproduce the bug
Here is the minimal code for reproduction:
```py
>>> from datasets import load_dataset
>>> from datasets.distributed import split_dataset_by_node
>>> dataset = load_dataset('fla-hub/slimpajama-test', split='train', streaming=True)
>>> dataset = split_dataset_by_node(dataset, 1, 32)
>>> dataset._distributed
DistributedConfig(rank=1, world_size=32)
>>> dataset = split_dataset_by_node(dataset, 1, 15)
>>> dataset._distributed
DistributedConfig(rank=481, world_size=480)
```
As you can see, the resulting rank 481 is greater than the world size 480, which is problematic.
### Expected behavior
I think this error comes from these lines @lhoestq
https://github.com/huggingface/datasets/blob/a6ccf944e42c1a84de81bf326accab9999b86c90/src/datasets/iterable_dataset.py#L2943-L2944
We may need to compute the rank first, before the world size is scaled. Then the above code gives
```py
>>> dataset._distributed
DistributedConfig(rank=16, world_size=480)
```
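For reference, here is a minimal sketch of that reordering (assuming the variable names at the linked lines; it illustrates the suggestion above, not a merged patch):
```py
# current order: the world size is scaled first, so the rank is combined
# against the already-scaled value (1 * 480 + 1 = 481 in the example above)
# world_size = world_size * self._distributed.world_size
# rank = self._distributed.rank * world_size + rank

# suggested order: combine the rank first, then scale the world size
# (1 * 15 + 1 = 16 and 15 * 32 = 480 in the example above)
rank = self._distributed.rank * world_size + rank
world_size = world_size * self._distributed.world_size
```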
### Environment info
datasets==2.20.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6990/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6989/comments | https://api.github.com/repos/huggingface/datasets/issues/6989/events | https://github.com/huggingface/datasets/issues/6989 | 2,365,556,449 | I_kwDODunzps6M_4bh | 6,989 | cache in nfs error | {
"avatar_url": "https://avatars.githubusercontent.com/u/66729924?v=4",
"events_url": "https://api.github.com/users/simplew2011/events{/privacy}",
"followers_url": "https://api.github.com/users/simplew2011/followers",
"following_url": "https://api.github.com/users/simplew2011/following{/other_user}",
"gists_url": "https://api.github.com/users/simplew2011/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simplew2011",
"id": 66729924,
"login": "simplew2011",
"node_id": "MDQ6VXNlcjY2NzI5OTI0",
"organizations_url": "https://api.github.com/users/simplew2011/orgs",
"received_events_url": "https://api.github.com/users/simplew2011/received_events",
"repos_url": "https://api.github.com/users/simplew2011/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simplew2011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simplew2011/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simplew2011"
} | [] | open | false | null | [] | null | [] | 1,718,935,762,000 | 1,718,935,975,000 | null | NONE | null | null | null | ### Describe the bug
- When reading a dataset, a cache is generated in the `~/.cache/huggingface/datasets` directory
- When using `.map` and `.filter` operations, a runtime cache is generated in the `/tmp/hf_datasets-*` directory
- By default, the path from `tempfile.tempdir` is used
- If I change this path to an NFS disk, an error is reported, but the program continues to run
- https://github.com/huggingface/datasets/blob/main/src/datasets/config.py#L257
```
Traceback (most recent call last):
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server
server.serve_forever()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever
sys.exit(0)
SystemExit: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers
finalizer()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs000000038330a012000030b4'
Traceback (most recent call last):
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 616, in _run_server
server.serve_forever()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/managers.py", line 182, in serve_forever
sys.exit(0)
SystemExit: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 300, in _run_finalizers
finalizer()
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/site-packages/multiprocess/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/home/wzp/miniconda3/envs/dask/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs0000000400064d4a000030e5'
```
### Steps to reproduce the bug
```
import os
import time
import tempfile
from datasets import load_dataset
def add_column(sample):
# print(type(sample))
# time.sleep(0.1)
sample['__ds__stats__'] = {'data': 123}
return sample
def filt_column(sample):
# print(type(sample))
if len(sample['content']) > 10:
return True
else:
return False
if __name__ == '__main__':
input_dir = '/mnt/temp/CN/small' # some json dataset
dataset = load_dataset('json', data_dir=input_dir)
temp_dir = '/media/release/release/temp/temp' # a nfs folder
os.makedirs(temp_dir, exist_ok=True)
    # change the huggingface-datasets runtime cache to NFS (default is /tmp)
tempfile.tempdir = temp_dir
aa = dataset.map(add_column, num_proc=64)
aa = aa.filter(filt_column, num_proc=64)
print(aa)
```
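As an interim workaround, here is a minimal sketch that keeps the runtime cache on local disk instead (the path is a hypothetical non-NFS location; the `.nfs*` "Device or resource busy" errors come from deleting temporary files that still have open handles on NFS):
```python
import os
import tempfile

# point the hf_datasets-* runtime cache at a local disk instead of NFS,
# so cleanup at shutdown does not trip over NFS silly-renamed .nfs* files
local_tmp = "/local_disk/hf_tmp"  # hypothetical non-NFS path
os.makedirs(local_tmp, exist_ok=True)
tempfile.tempdir = local_tmp
```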
### Expected behavior
No errors should occur.
### Environment info
datasets==2.18.0
ubuntu 20.04 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6989/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6988/comments | https://api.github.com/repos/huggingface/datasets/issues/6988/events | https://github.com/huggingface/datasets/pull/6988 | 2,364,129,918 | PR_kwDODunzps5zDpXX | 6,988 | [`feat`] Move dataset card creation to method for easier overriding | {
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tomaarsen",
"id": 37621491,
"login": "tomaarsen",
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tomaarsen"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6988). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"`Dataset` objects are not made to be subclassed, so I don't think going in that direction is a good idea. In particular there is absolutely no test to make sure it works well, and nothing in the internal has been made to anticipate this use case.\r\n\r\nI'd suggest to use a separate function to push changes to the Dataset card, and call it after `push_to_hub()`. This way people can also use a similar logic with other tools that `datasets`. You can also use composition instead of subclassing.",
"Would you consider an alternative where a Dataset instance carries a dataset card template which can be updated?\n\nI don't want to burden my users with having to call another method after `push_to_hub` themselves. If you're not a fan of the template approach above either, then I'll likely subclass `push_to_hub` to once again download the just-uploaded-but-empty dataset card, update it, and reupload it. It'll just be a bit more requests than necessary, but not a big deal overall.\n\n- Tom Aarsen ",
"Actually I find the idea of overriding `_create_dataset_card` better than implementing a templating logic. My main concern is that if we go in that direction we better make sure that subclasses of `Dataset` are working well. \r\n\r\nWell if it's been working fine on your side why not, but make sure you test correctly features that could not work because of subclassing (e.g. I'm pretty sure `map()` won't return your subclass of `Dataset`). Or at least the ones that matter for your lib.\r\n\r\nIf it sounds good to you I'm fine with merging your addition to let you override the dataset card.",
"> e.g. I'm pretty sure map() won't return your subclass of Dataset\r\n\r\nI understand that there's limitations such as this one. The subclass doesn't have to be robust - I'd just like some simple automatic dataset card generation options directly after generating the dataset. This can be removed if the user does additional steps before pushing the model, e.g. mapping, filtering, saving to disk and uploading the loaded dataset, etc.\r\n\r\n> If it sounds good to you I'm fine with merging your addition to let you override the dataset card.\r\n\r\nThat would be quite useful for me! I appreciate it.\r\n\r\nI'm not very sure what the test failures are caused by, I believe the only change in behaviour is that\r\n```python\r\n DatasetInfosDict({config_name: info_to_dump}).to_dataset_card_data(dataset_card_data)\r\n MetadataConfigs({config_name: metadata_config_to_dump}).to_dataset_card_data(dataset_card_data)\r\n```\r\nare not called when `dataset_card` was already defined. Unless these have side-effects other than updating `dataset_card_data`, it shouldn't be any different than `main`.\r\n\r\n- Tom Aarsen",
"Let's try to have this PR merged then !\r\n\r\nIMO your current implementation can be improved since you path both the dataset card data and the dataset card itself, which is redundant. Also I anticipate the failures in the CI to come from your default implementation which doesn't correspond to what it was doing before\r\n\r\n> Unless these have side-effects other than updating dataset_card_data, it shouldn't be any different than main.\r\n\r\nIndeed the dataset_card_data is the value from attribute of the dataset_card from a few lines before your changes, so yes it modifies the dataset_card object too."
] | 1,718,880,477,000 | 1,718,985,898,000 | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6988.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6988",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6988.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6988"
} | Hello!
## Pull Request overview
* Move dataset card creation to method for easier overriding
## Details
It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the dataset card generation. This is because during `push_to_hub`, the dataset card is created in 3 lines of code in a much larger method. To automatically generate a dataset card, I need to either:
1. Subclass `Dataset`/`DatasetDict`, copy the entire `push_to_hub` method to override the ~3 lines used to generate the dataset card. This is not viable as the method is likely to change over time.
2. Use `push_to_hub` normally, then separately download the pushed (but empty) dataset card, update it, and reupload the modified dataset. This works fine, but prevents me from being able to return a `Dataset` to my users which will automatically use a nice dataset card.
So, in this PR I'm proposing to move the dataset generation into another method so that it can be overridden more easily. For example, imagine the following use case:
````python
import json
from typing import Any, Dict, Optional
from datasets import Dataset, load_dataset
from datasets.info import DatasetInfosDict, DatasetInfo
from datasets.utils.metadata import MetadataConfigs
from huggingface_hub import DatasetCardData, DatasetCard
TEMPLATE = r"""---
{dataset_card_data}
---
# Dataset Card for {source_dataset_name} with mined hard negatives
This dataset is a collection of {column_one}-{column_two}-negative triplets from the {source_dataset_name} dataset. See [{source_dataset_name}](https://huggingface.co/datasets/{source_dataset_id}) for additional information. This dataset can be used directly with Sentence Transformers to train embedding models.
## Mining Parameters
The negative samples have been mined using the following parameters:
- `range_min`: {range_min}, i.e. we skip the {range_min} most similar samples
- `range_max`: {range_max}, i.e. we only look at the top {range_max} most similar samples
- `margin`: {margin}, i.e. we require negative similarity + margin < positive similarity, so negative samples can't be more similar than the known true answer
- `sampling_strategy`: {sampling_strategy}, i.e. whether to randomly sample from the candidate negatives or take the "top" negatives
- `num_negatives`: {num_negatives}, i.e. we mine {num_negatives} negatives per question-answer pair
## Dataset Format
- Columns: {column_one}, {column_two}, negative
- Column types: str, str, str
- Example:
```python
{example}
```
"""
class HNMDataset(Dataset):
@classmethod
def from_dict(cls, *args, mining_kwargs: Dict[str, Any], **kwargs) -> "HNMDataset":
dataset = super().from_dict(*args, **kwargs)
dataset.mining_kwargs = mining_kwargs
return dataset
def _create_dataset_card(
self,
dataset_card_data: DatasetCardData,
dataset_card: Optional[DatasetCard],
config_name: str,
info_to_dump: DatasetInfo,
metadata_config_to_dump: MetadataConfigs,
) -> DatasetCard:
if dataset_card:
return dataset_card
DatasetInfosDict({config_name: info_to_dump}).to_dataset_card_data(dataset_card_data)
MetadataConfigs({config_name: metadata_config_to_dump}).to_dataset_card_data(dataset_card_data)
dataset_card_data.tags = ["sentence-transformers"]
dataset_name = self.mining_kwargs["source_dataset"].info.dataset_name
# Very messy, just as an example:
dataset_id = list(self.mining_kwargs["source_dataset"].info.download_checksums.keys())[0].removeprefix("hf://datasets/").split("@")[0]
content = TEMPLATE.format(**{
"dataset_card_data": str(dataset_card_data),
"source_dataset_name": dataset_name,
"source_dataset_id": dataset_id,
"range_min": self.mining_kwargs["range_min"],
"range_max": self.mining_kwargs["range_max"],
"margin": self.mining_kwargs["margin"],
"sampling_strategy": self.mining_kwargs["sampling_strategy"],
"num_negatives": self.mining_kwargs["num_negatives"],
"column_one": self.column_names[0],
"column_two": self.column_names[1],
"example": json.dumps(self[0], indent=4),
})
return DatasetCard(content)
source_dataset = load_dataset("sentence-transformers/gooaq", split="train[:100]")
dataset = HNMDataset.from_dict({
"query": source_dataset["question"],
"answer": source_dataset["answer"],
# "negative": ... <- In my case, this column would be 'mined' automatically with these parameters
}, mining_kwargs={
"range_min": 10,
"range_max": 20,
"max_score": 0.9,
"margin": 0.1,
"sampling_strategy": "random",
"num_negatives": 3,
"source_dataset": source_dataset,
})
dataset.push_to_hub("tomaarsen/mining_demo", private=True)
````
In this script, I've created a subclass which stores some additional information about how the dataset was generated. It's a bit hacky (e.g. setting a `mining_kwargs` parameter in `from_dict` that wasn't created in `__init__`, but that's just a consequence of how the `from_...` methods don't accept kwargs), but it allows me to create a "hard negatives mining" function that returns a dataset which people can use locally like normal, but if they choose to upload it, then it'll automatically include some information, e.g.: https://huggingface.co/datasets/tomaarsen/mining_demo
This allows others to actually find this dataset (e.g. via the `sentence-transformers` tag) and get an idea of the quality, source, etc. by looking at the model card.
## Note
I'm not fixed on this solution whatsoever: I am also completely fine with other solutions, e.g. a `dataset.set_dataset_card_creator` method that allows me to provide a function without even having to subclass anything. I'm open to all ideas :)
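To make that alternative concrete, here is a minimal sketch of what such an API could look like from the user side (nothing below exists in `datasets` today; the method name and callback signature are pure assumptions):
```python
from datasets import Dataset
from huggingface_hub import DatasetCard, DatasetCardData

def my_card_creator(card_data: DatasetCardData) -> DatasetCard:
    # card_data would arrive pre-filled with the usual size/config metadata
    card_data.tags = (card_data.tags or []) + ["sentence-transformers"]
    return DatasetCard(f"---\n{card_data}\n---\n# My mined dataset\n")

dataset = Dataset.from_dict({"query": ["q"], "answer": ["a"]})
dataset.set_dataset_card_creator(my_card_creator)  # hypothetical method
dataset.push_to_hub("user/mining_demo", private=True)
```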
cc @albertvillanova @lhoestq
cc @LysandreJik
- Tom Aarsen | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6988/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6987/comments | https://api.github.com/repos/huggingface/datasets/issues/6987/events | https://github.com/huggingface/datasets/pull/6987 | 2,363,728,190 | PR_kwDODunzps5zCRH6 | 6,987 | Remove beam | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6987). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,718,868,434,000 | 1,718,870,181,000 | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6987",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6987"
} | Remove beam, as part of the 3.0 release. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6987/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6986/comments | https://api.github.com/repos/huggingface/datasets/issues/6986/events | https://github.com/huggingface/datasets/pull/6986 | 2,362,584,179 | PR_kwDODunzps5y-Zi0 | 6,986 | Add large_list type support in string_to_arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4",
"events_url": "https://api.github.com/users/arthasking123/events{/privacy}",
"followers_url": "https://api.github.com/users/arthasking123/followers",
"following_url": "https://api.github.com/users/arthasking123/following{/other_user}",
"gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arthasking123",
"id": 16257131,
"login": "arthasking123",
"node_id": "MDQ6VXNlcjE2MjU3MTMx",
"organizations_url": "https://api.github.com/users/arthasking123/orgs",
"received_events_url": "https://api.github.com/users/arthasking123/received_events",
"repos_url": "https://api.github.com/users/arthasking123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arthasking123"
} | [] | open | false | null | [] | null | [
"@albertvillanova @KennethEnevoldsen"
] | 1,718,808,865,000 | 1,718,863,078,000 | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6986",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6986"
} | Add `large_list` type support in `string_to_arrow()` and `_arrow_to_datasets_dtype()` in `features.py`.
Fix #6984
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6986/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6985/comments | https://api.github.com/repos/huggingface/datasets/issues/6985/events | https://github.com/huggingface/datasets/issues/6985 | 2,362,378,276 | I_kwDODunzps6Mzwgk | 6,985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/26666267?v=4",
"events_url": "https://api.github.com/users/firmai/events{/privacy}",
"followers_url": "https://api.github.com/users/firmai/followers",
"following_url": "https://api.github.com/users/firmai/following{/other_user}",
"gists_url": "https://api.github.com/users/firmai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/firmai",
"id": 26666267,
"login": "firmai",
"node_id": "MDQ6VXNlcjI2NjY2MjY3",
"organizations_url": "https://api.github.com/users/firmai/orgs",
"received_events_url": "https://api.github.com/users/firmai/received_events",
"repos_url": "https://api.github.com/users/firmai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/firmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/firmai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/firmai"
} | [] | closed | false | null | [] | null | [
"Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r\n```\r\n\r\nTherefore it must be caused by some problem with your pyarrow installation. I would recommend you uninstall and install pyarrow again.\r\n\r\nI also see that it seems you use conda to install pyarrow. Please note that pyarrow offers 3 different packages in conda-forge: https://arrow.apache.org/docs/python/install.html#using-conda\r\n```\r\nconda install -c conda-forge pyarrow\r\n```\r\n> While the pyarrow [conda-forge](https://conda-forge.org/) package is the right choice for most users, both a minimal and maximal variant of the package exist, either of which may be better for your use case. See [Differences between conda-forge packages](https://arrow.apache.org/docs/python/install.html#python-conda-differences).\r\n\r\nPlease, make sure you install the right one: I guess it is either `pyarrow` (or `pyarrow-all`).",
"I have same issue, please downgrade pyarrow==15.0.2, it seem datasets library need to be fix",
"It is not a problem with the `datasets` library: we support latest version of `pyarrow` and our Continuous Integration tests are using pyarrow 16.1.0 without any problem.\r\n\r\nThe error reported here is raised when importing pyarrow.parquet:\r\n```\r\n---> 29 import pyarrow.parquet as pq\r\n```\r\n```\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20\r\n 1 # Licensed to the Apache Software Foundation (ASF) under one\r\n 2 # or more contributor license agreements. See the NOTICE file\r\n 3 # distributed with this work for additional information\r\n (...)\r\n 17 \r\n 18 # flake8: noqa\r\n---> 20 from .core import *\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33\r\n 30 import pyarrow as pa\r\n 32 try:\r\n---> 33 import pyarrow._parquet as _parquet\r\n 34 except ImportError as exc:\r\n 35 raise ImportError(\r\n 36 \"The pyarrow installation is not built with support \"\r\n 37 f\"for the Parquet file format ({str(exc)})\"\r\n 38 ) from None\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet()\r\n\r\nAttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'\r\n```\r\n\r\nThis can only be explained if pyarrow was not properly installed. \r\n\r\nIf the user just installed `pyarrow-core` from conda-forge, then its parquet subpackage is not installed and cannot be imported. You can check pyarrow docs:\r\n- Differences between conda-forge packages: https://arrow.apache.org/docs/python/install.html#python-conda-differences\r\n> The `pyarrow-core` package includes the following functionality:\r\n> ...\r\n> The `pyarrow` package adds the following:\r\n> ...\r\n> Parquet (i.e., `pyarrow.parquet`)"
] | 1,718,803,348,000 | 1,719,314,918,000 | 1,719,294,051,000 | NONE | null | null | null | ### Describe the bug
I have been struggling with this for two days; any help would be appreciated. I am on Python 3.10.
```
from setfit import SetFitModel
from huggingface_hub import login
access_token_read = "cccxxxccc"
# Authenticate with the Hugging Face Hub
login(token=access_token_read)
# Load the models from the Hugging Face Hub
trainer_relv = SetFitModel.from_pretrained("snowdere/trainer_relevance")
trainer_trust = SetFitModel.from_pretrained("snowdere/trainer_trust")
trainer_sent = SetFitModel.from_pretrained("snowdere/trainer_sent")
trainer_topic = SetFitModel.from_pretrained("snowdere/trainer_topic")
```
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 from setfit import SetFitModel
2 from huggingface_hub import login
4 access_token_read = "ccsddsds"
File /opt/conda/lib/python3.10/site-packages/setfit/__init__.py:7
4 import os
5 import warnings
----> 7 from .data import get_templated_dataset, sample_dataset
8 from .model_card import SetFitModelCardData
9 from .modeling import SetFitHead, SetFitModel
File /opt/conda/lib/python3.10/site-packages/setfit/data.py:5
3 import pandas as pd
4 import torch
----> 5 from datasets import Dataset, DatasetDict, load_dataset
6 from torch.utils.data import Dataset as TorchDataset
8 from . import logging
File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18
1 # ruff: noqa
2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
3 #
(...)
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
16 __version__ = "2.19.0"
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:76
73 from tqdm.contrib.concurrent import thread_map
75 from . import config
---> 76 from .arrow_reader import ArrowReader
77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
78 from .data_files import sanitize_patterns
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_reader.py:29
26 from typing import TYPE_CHECKING, List, Optional, Union
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
32 from .download.download_config import DownloadConfig
File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/__init__.py:20
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
(...)
17
18 # flake8: noqa
---> 20 from .core import *
File /opt/conda/lib/python3.10/site-packages/pyarrow/parquet/core.py:33
30 import pyarrow as pa
32 try:
---> 33 import pyarrow._parquet as _parquet
34 except ImportError as exc:
35 raise ImportError(
36 "The pyarrow installation is not built with support "
37 f"for the Parquet file format ({str(exc)})"
38 ) from None
File /opt/conda/lib/python3.10/site-packages/pyarrow/_parquet.pyx:1, in init pyarrow._parquet()
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
```
setfit: 1.0.3
transformers: 4.41.2
lingua-language-detector: 2.0.2
polars: 0.20.31
lightning: None
google-cloud-bigquery: 3.24.0
shapely: 2.0.4
pyarrow: 16.0.0
### Steps to reproduce the bug
I have tried all version combinations of `datasets` and `pyarrow`; they have all raised the same error since a few days ago. This happens across multiple scripts I have.
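A quick way to check whether the pyarrow installation itself is broken, independently of `setfit`/`datasets` (a minimal sketch):
```python
import pyarrow as pa

print(pa.__version__)

# the traceback above fails inside this import, so a broken or mismatched
# pyarrow build should fail here too, with no other library involved
import pyarrow.parquet as pq

print(pq.read_table)  # reachable only if the parquet extension loaded
```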
### Expected behavior
It should just run normally.
### Environment info
3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6985/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6984/comments | https://api.github.com/repos/huggingface/datasets/issues/6984/events | https://github.com/huggingface/datasets/issues/6984 | 2,362,143,554 | I_kwDODunzps6My3NC | 6,984 | Convert polars DataFrame back to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to `Sequence` IMO (maybe with a parameter `large=True` ?)"
] | 1,718,797,128,000 | 1,718,797,128,000 | null | NONE | null | null | null | ### Feature request
This returns an error:
```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```
ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.
### Motivation
When a dataset contains the `Sequence` data type, `to_polars` converts it to the Arrow type `large_list`. However, the reverse (from `large_list` back to `Sequence`) does not work.
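Until such support exists, one possible workaround is to downcast the `large_list` columns to `list` at the Arrow level before handing the table back to `datasets`. A minimal sketch, reusing `dsdf` from the snippet above and assuming the data fits in memory:
```python
import pyarrow as pa
from datasets import Dataset

table = dsdf.to_polars().to_arrow()
# replace each large_list<T> field with list<T> so datasets can map it to Sequence
schema = pa.schema(
    [
        pa.field(f.name, pa.list_(f.type.value_type)) if pa.types.is_large_list(f.type) else f
        for f in table.schema
    ]
)
ds = Dataset(table.cast(schema))  # a pyarrow Table is accepted and wrapped in-memory
```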
### Your contribution
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6984/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6983/comments | https://api.github.com/repos/huggingface/datasets/issues/6983/events | https://github.com/huggingface/datasets/pull/6983 | 2,361,806,201 | PR_kwDODunzps5y7tK7 | 6,983 | Remove metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6983). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,718,788,135,000 | 1,718,864,842,000 | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6983.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6983",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6983.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6983"
} | Remove all metrics, as part of the 3.0 release.
Note that they have been deprecated since version 2.5.0.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6983/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6982/comments | https://api.github.com/repos/huggingface/datasets/issues/6982/events | https://github.com/huggingface/datasets/issues/6982 | 2,361,661,469 | I_kwDODunzps6MxBgd | 6,982 | cannot split dataset when using load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17721894?v=4",
"events_url": "https://api.github.com/users/cybest0608/events{/privacy}",
"followers_url": "https://api.github.com/users/cybest0608/followers",
"following_url": "https://api.github.com/users/cybest0608/following{/other_user}",
"gists_url": "https://api.github.com/users/cybest0608/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cybest0608",
"id": 17721894,
"login": "cybest0608",
"node_id": "MDQ6VXNlcjE3NzIxODk0",
"organizations_url": "https://api.github.com/users/cybest0608/orgs",
"received_events_url": "https://api.github.com/users/cybest0608/received_events",
"repos_url": "https://api.github.com/users/cybest0608/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cybest0608/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cybest0608/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cybest0608"
} | [] | open | false | null | [] | null | [
"it seems the bug will happened in all windows system, I tried it in windows8.1, 10, 11 and all of them failed. But it won't happened in the Linux(Ubuntu and Centos7) and Mac (both my virtual and physical machine). I still don't know what the problem is. May be related to the path? I cannot run the split file in my windows server which created in Linux (even I replace the path in the arrow document)....work for it for a week but still cannot fix it .....upset"
] | 1,718,784,436,000 | 1,718,866,533,000 | null | NONE | null | null | null | ### Describe the bug
When I use the `load_dataset` method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the Arrow document.
This bug happens on my server and my laptop, as in #6906, but it doesn't happen in Google Colab. I have worked on it for days; even when I load the dataset from a local path, it can generate the train split and the validation split, but the bug happens again on the test split.
### Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric, Audio

common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True)
```
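Since "corresponds to no data" means the Arrow reader found no cached files for the requested split, one sanity check is to force a cache rebuild (a sketch of a diagnostic step, not a confirmed fix):
```python
from datasets import load_dataset

common_voice_train = load_dataset(
    "mozilla-foundation/common_voice_7_0",
    "ja",
    split="train",
    token=selftoken,  # same placeholder token as above
    trust_remote_code=True,
    download_mode="force_redownload",  # discard cached Arrow files and rebuild
)
```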
### Expected behavior
```
{
"name": "ValueError",
"message": "Instruction \"train\" corresponds to no data!",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 3
1 from datasets import load_dataset, load_metric, Audio
----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True)
4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True)
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2622 # Build dataset for splits
2623 keep_in_memory = (
2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2625 )
-> 2626 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2627 # Rename and cast features to match task schema
2628 if task is not None:
2629 # To avoid issuing the same warning twice
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1265 # Create a dataset for each of the given splits
-> 1266 datasets = map_nested(
1267 partial(
1268 self._build_single_dataset,
1269 run_post_process=run_post_process,
1270 verification_mode=verification_mode,
1271 in_memory=in_memory,
1272 ),
1273 split,
1274 map_tuple=True,
1275 disable_tqdm=True,
1276 )
1277 if isinstance(datasets, dict):
1278 datasets = DatasetDict(datasets)
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
482 if batched:
483 data_struct = [data_struct]
--> 484 mapped = function(data_struct)
485 if batched:
486 mapped = mapped[0]
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory)
1293 split = Split(split)
1295 # Build base dataset
-> 1296 ds = self._as_dataset(
1297 split=split,
1298 in_memory=in_memory,
1299 )
1300 if run_post_process:
1301 for resource_file_name in self._post_processing_resources(split).values():
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory)
1368 if self._check_legacy_cache():
1369 dataset_name = self.name
-> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read(
1371 name=dataset_name,
1372 instructions=split,
1373 split_infos=self.info.splits.values(),
1374 in_memory=in_memory,
1375 )
1376 fingerprint = self._get_dataset_fingerprint(split)
1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory)
254 msg = f'Instruction \"{instructions}\" corresponds to no data!'
255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!'
--> 256 raise ValueError(msg)
257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
ValueError: Instruction \"train\" corresponds to no data!"
}
```
### Environment info
Environment:
Python 3.9
Windows 11 Pro
VS Code + Jupyter | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6982/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6981/comments | https://api.github.com/repos/huggingface/datasets/issues/6981/events | https://github.com/huggingface/datasets/pull/6981 | 2,361,520,022 | PR_kwDODunzps5y6tnN | 6,981 | Update docs on trust_remote_code defaults to False | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6981). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005578 / 0.011353 (-0.005775) | 0.003946 / 0.011008 (-0.007062) | 0.063317 / 0.038508 (0.024808) | 0.031878 / 0.023109 (0.008769) | 0.312571 / 0.275898 (0.036673) | 0.281415 / 0.323480 (-0.042065) | 0.004139 / 0.007986 (-0.003846) | 0.002730 / 0.004328 (-0.001598) | 0.049539 / 0.004250 (0.045289) | 0.045056 / 0.037052 (0.008003) | 0.263820 / 0.258489 (0.005330) | 0.297817 / 0.293841 (0.003976) | 0.029490 / 0.128546 (-0.099056) | 0.012467 / 0.075646 (-0.063179) | 0.204607 / 0.419271 (-0.214664) | 0.036305 / 0.043533 (-0.007228) | 0.244102 / 0.255139 (-0.011037) | 0.267855 / 0.283200 (-0.015345) | 0.019794 / 0.141683 (-0.121889) | 1.130784 / 1.452155 (-0.321371) | 1.172507 / 1.492716 (-0.320209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092430 / 0.018006 (0.074424) | 0.296460 / 0.000490 (0.295970) | 0.000210 / 0.000200 (0.000010) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019467 / 0.037411 (-0.017944) | 0.062850 / 0.014526 (0.048324) | 0.074067 / 0.176557 (-0.102490) | 0.123280 / 0.737135 (-0.613856) | 0.077036 / 0.296338 (-0.219302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282687 / 0.215209 (0.067478) | 2.786715 / 2.077655 (0.709060) | 1.492028 / 1.504120 (-0.012092) | 1.373603 / 1.541195 (-0.167592) | 1.405004 / 
1.468490 (-0.063486) | 0.714408 / 4.584777 (-3.870369) | 2.376785 / 3.745712 (-1.368927) | 2.916150 / 5.269862 (-2.353712) | 1.921184 / 4.565676 (-2.644493) | 0.078354 / 0.424275 (-0.345921) | 0.005236 / 0.007607 (-0.002371) | 0.334647 / 0.226044 (0.108603) | 3.262069 / 2.268929 (0.993140) | 1.858300 / 55.444624 (-53.586324) | 1.572968 / 6.876477 (-5.303509) | 1.659145 / 2.142072 (-0.482927) | 0.779546 / 4.805227 (-4.025681) | 0.132623 / 6.500664 (-6.368041) | 0.042423 / 0.075469 (-0.033046) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985516 / 1.841788 (-0.856271) | 12.001321 / 8.074308 (3.927013) | 9.927011 / 10.191392 (-0.264381) | 0.142645 / 0.680424 (-0.537779) | 0.013808 / 0.534201 (-0.520393) | 0.303422 / 0.579283 (-0.275861) | 0.262666 / 0.434364 (-0.171698) | 0.339369 / 0.540337 (-0.200969) | 0.431028 / 1.386936 (-0.955908) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003971 / 0.011008 (-0.007037) | 0.050746 / 0.038508 (0.012238) | 0.031554 / 0.023109 (0.008445) | 0.277678 / 0.275898 (0.001780) | 0.300776 / 0.323480 (-0.022704) | 0.004428 / 0.007986 (-0.003558) | 0.002773 / 0.004328 (-0.001555) | 0.049882 / 0.004250 (0.045632) | 0.039833 / 0.037052 (0.002780) | 0.289143 / 0.258489 (0.030654) | 0.321425 / 0.293841 (0.027584) | 0.031701 / 0.128546 (-0.096845) | 0.012687 / 0.075646 (-0.062960) | 0.060650 / 0.419271 (-0.358621) | 0.033318 / 0.043533 (-0.010215) | 0.277019 / 0.255139 (0.021880) | 0.292345 / 0.283200 (0.009145) | 0.018520 / 0.141683 (-0.123163) | 1.143933 / 1.452155 (-0.308222) | 1.183913 / 1.492716 (-0.308803) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094467 / 0.018006 (0.076461) | 0.298822 / 0.000490 (0.298332) | 0.000201 / 0.000200 (0.000001) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022811 / 0.037411 (-0.014601) | 0.078084 / 0.014526 (0.063558) | 0.089079 / 0.176557 (-0.087477) | 0.130229 / 0.737135 (-0.606906) | 0.090851 / 0.296338 (-0.205487) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294981 / 0.215209 (0.079772) | 2.908294 / 2.077655 (0.830639) | 1.591281 / 1.504120 (0.087161) | 1.446032 / 1.541195 (-0.095162) | 1.469441 / 1.468490 (0.000951) | 0.726477 / 4.584777 (-3.858300) | 0.983086 / 3.745712 (-2.762626) | 2.892715 / 5.269862 (-2.377147) | 1.974092 / 4.565676 (-2.591584) | 0.079500 / 0.424275 (-0.344775) | 0.005497 / 0.007607 (-0.002110) | 0.342220 / 0.226044 (0.116176) | 3.414508 / 2.268929 (1.145579) | 1.941550 / 55.444624 (-53.503074) | 1.645268 / 6.876477 (-5.231209) | 1.805909 / 2.142072 (-0.336163) | 0.814483 / 4.805227 (-3.990744) | 0.135867 / 6.500664 (-6.364797) | 0.041718 / 0.075469 (-0.033751) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.999751 / 1.841788 (-0.842036) | 12.488263 / 8.074308 (4.413954) | 10.867040 / 10.191392 (0.675648) | 0.143999 / 0.680424 (-0.536425) | 0.015496 / 0.534201 (-0.518705) | 0.302170 / 0.579283 (-0.277113) | 0.123753 / 0.434364 (-0.310611) | 0.340424 / 0.540337 (-0.199913) | 0.458339 / 1.386936 (-0.928597) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6ccf944e42c1a84de81bf326accab9999b86c90 \"CML watermark\")\n"
] | 1,718,781,141,000 | 1,718,807,579,000 | 1,718,807,197,000 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6981.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6981",
"merged_at": "2024-06-19T14:26:37",
"patch_url": "https://github.com/huggingface/datasets/pull/6981.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6981"
} | Update docs on trust_remote_code defaults to False.
The docs needed to be updated due to this PR:
- #6954 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6981/timeline | null | null | true |
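For the `trust_remote_code` change documented in the PR above, here is a minimal sketch of the new default behavior; the repository id below is a hypothetical placeholder, not a real dataset:

```python
from datasets import load_dataset

# Since the change referenced above, trust_remote_code defaults to False,
# so a dataset backed by a loading script will not load without explicit opt-in.
ds = load_dataset(
    "some-org/script-based-dataset",  # hypothetical repo id (assumption)
    trust_remote_code=True,           # opt in to running the repo's loading script
)
```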
https://api.github.com/repos/huggingface/datasets/issues/6980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6980/comments | https://api.github.com/repos/huggingface/datasets/issues/6980/events | https://github.com/huggingface/datasets/issues/6980 | 2,360,909,930 | I_kwDODunzps6MuKBq | 6,980 | Support NumPy 2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/730137?v=4",
"events_url": "https://api.github.com/users/NeilGirdhar/events{/privacy}",
"followers_url": "https://api.github.com/users/NeilGirdhar/followers",
"following_url": "https://api.github.com/users/NeilGirdhar/following{/other_user}",
"gists_url": "https://api.github.com/users/NeilGirdhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeilGirdhar",
"id": 730137,
"login": "NeilGirdhar",
"node_id": "MDQ6VXNlcjczMDEzNw==",
"organizations_url": "https://api.github.com/users/NeilGirdhar/orgs",
"received_events_url": "https://api.github.com/users/NeilGirdhar/received_events",
"repos_url": "https://api.github.com/users/NeilGirdhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeilGirdhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeilGirdhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeilGirdhar"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1,718,753,422,000 | 1,719,048,017,000 | null | NONE | null | null | null | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy 2.0 adopts the Array API standard, which bridges the gap between machine learning libraries. Many users of Hugging Face libraries are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 has been available for testing [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755) so that libraries can ensure compatibility. What needs to be done for Hugging Face to support NumPy 2?
- [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976
- [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6980/timeline | null | null | false |
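To illustrate the Array API motivation in the issue above, a minimal sketch; it assumes NumPy >= 2.0, where the main namespace follows the Array API standard and `ndarray` exposes `__array_namespace__`:

```python
import numpy as np

def standardize(x):
    # Array API-style dispatch: fetch the array's namespace instead of
    # hard-coding numpy, so the same code can serve other array libraries.
    xp = x.__array_namespace__()
    return (x - xp.mean(x)) / xp.std(x)

print(standardize(np.asarray([1.0, 2.0, 3.0])))  # approx [-1.2247, 0.0, 1.2247]
```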
https://api.github.com/repos/huggingface/datasets/issues/6979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6979/comments | https://api.github.com/repos/huggingface/datasets/issues/6979/events | https://github.com/huggingface/datasets/issues/6979 | 2,360,175,363 | I_kwDODunzps6MrWsD | 6,979 | How can I load partial parquet files only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucasjinreal",
"id": 21303438,
"login": "lucasjinreal",
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucasjinreal"
} | [] | closed | false | null | [] | null | [
"Hello,\r\n\r\nHave you tried loading the dataset in streaming mode? [Documentation](https://huggingface.co./docs/datasets/v2.20.0/stream)\r\n\r\nThis way you wouldn't have to load it all. Also, let's be nice to Parquet, it's a really nice technology and we don't need to be mean :)",
"I have downloaded part of it, just want to know how to load part of it, stream mode is not work for me since my network (in china) not stable, I don't want do it all again and again.\r\n\r\nJust curious, doesn't there a way to load part of it?",
"Could you convert the IterableDataset to a Dataset after taking the first 100 rows with `.take`? This way, you would have a local copy of the first 100 rows on your system and thus won't need to download. Would that work?\r\n\r\nHere is a [SO question](https://stackoverflow.com/questions/76227219/can-i-convert-an-iterabledataset-to-dataset) detailing how to do the conversion.",
"I mean, the parquet is like:\r\n\r\n00000-0143554\r\n00001-0143554\r\n00002-0143554\r\n...\r\n00100-0143554\r\n...\r\n09100-0143554\r\n\r\nI just downloaded the first 9900 part of it. \r\n\r\nI can not load with load_dataset, it throw an error says my file is not same as parquet all amount.\r\n\r\nHow could I load the only I have? \r\n\r\n( I really don't want downlaod them all, cause, I don't need all, and pulus, its huge.... )\r\n\r\nAs I said, I have donwloaded about 9999... It's not about stream... I just wnat to konw how to load offline... part....",
"Hi, @lucasjinreal.\r\n\r\nI am not sure of understanding your issue. What is the error message and stack trace you get? What version of `datasets` are you using? Could you provide a reproducible example?\r\n\r\nWithout knowing all those details, I would naively say that you can load whatever number of Parquet files by using the \"parquet\" loader: https://huggingface.co./docs/datasets/loading#parquet\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"data/train-001*-of-00314.parquet\", split=\"train\")\r\n```",
"@albertvillanova Not sure you have tested with this or not, but I have tried,\r\n\r\nthe only error I got is it still laodding all parquet with a progress bar maxium to the whole number 014354, and it loads my 0 - 000999 part, then throws an error.\r\n\r\nSays Numinfo is not same.\r\n\r\nI am so confused,",
"Yes, my code snippet works.\n\nCould you copy-paste your code and the output? Otherwise we are not able to know what the issue is.",
"@albertvillanova Hi, thanks for the tracing of the issue.\r\n\r\nThis is the output:\r\n\r\n```\r\nython get_llava_recap_cc3m.py\r\nGenerating train split: 3%|███▋ | 101910/3199866 [00:16<08:30, 6065.67 examples/s]\r\nTraceback (most recent call last):\r\n File \"get_llava_recap_cc3m.py\", line 31, in <module>\r\n dataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1118, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/info_utils.py\", line 101, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=156885281898.75, num_examples=3199866, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=4994080770, num_examples=101910, shard_lengths=[10191, 10291, 10291, 10291, 10291, 10191, 10191, 10291, 10291, 9591], dataset_name='llava-recap-cc3m')}]\r\n```\r\n\r\nthis is my code:\r\n\r\n```\r\ndataset = load_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\")\r\n```\r\n\r\nMy situation and requirements:\r\n\r\n00314 is all, but I downlaode about 150, half of it, as you can see, i used `0000*-of-00314.` which should be at most 99 file being loaded.\r\n\r\nBut it just fail.\r\n\r\nCan u understand my issue now?\r\n\r\nIf so, then **do not** suggest me with stream, Just want to know, is there a way to load part if it...... **and please don't say you can not replicate my issue when you have downloaded them all**, my english is not good, but I think all situations and all prerequists I have addressed already.\r\n\r\n",
"I see you did not use the \"parquet\" loader as I suggested in my code snippet above: https://github.com/huggingface/datasets/issues/6979#issuecomment-2182031415\r\nPlease try passing \"parquet\" instead of \"llava-recap-cc3m/\" to `load_dataset`, and the complete path to data files in `data_files`:\r\n```python\r\nload_dataset(\"parquet\", data_files=\"llava-recap-cc3m/data/train-001*-of-00314.parquet\")\r\n```",
"Let me explain that you get the error because of this content within the `dataset_info` YAML tag in the `llava-recap-cc3m/README.md`:\r\n```\r\n - name: train\r\n num_bytes: 156885281898.75\r\n num_examples: 3199866\r\n```\r\n\r\nBy default, if there is that content in the README file, `load_dataset` performs a basic check to verify it the generated number of examples matches the expected one and raises a `NonMatchingSplitsSizesError` if that is not the case. \r\n\r\nYou can avoid this basic check by passing `verification_mode=\"no_checks\"`:\r\n```python\r\nload_dataset(\"llava-recap-cc3m/\", data_files=\"data/train-0000*-of-00314.parquet\", verification_mode=\"no_checks\")\r\n```",
"And please, next time you have an issue, please fill the Bug template issue with all the necessary information: https://github.com/huggingface/datasets/issues/new?assignees=&labels=&projects=&template=bug-report.yml\r\n\r\nOtherwise it is very difficult for us to understand the underlying problem and to propose a pertinent solution.",
"thank u albert!\r\n\r\nIt solved my issue!"
] | 1,718,725,456,000 | 1,718,989,772,000 | 1,718,976,770,000 | NONE | null | null | null | I have a HUGE dataset, about 14 TB, and I am unable to download all the Parquet files. I just want about 100 of them.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I load just parts 000 - 100 out of all 00314?
I searched the whole net and didn't find a solution. **This is stupid if it isn't supported, and I swear I won't use stupid Parquet any more.**
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6979/timeline | null | completed | false |
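The resolution of the thread above condenses into a short sketch. Both calls are hedged on the reporter's local layout (a `llava-recap-cc3m/` directory holding partially downloaded shards); option 1 is the maintainer's first suggestion, option 2 the final resolution:

```python
from datasets import load_dataset

# Option 1: the generic "parquet" loader only considers the shards matched
# by the glob, so no split-size metadata from the README is checked.
ds = load_dataset(
    "parquet",
    data_files="llava-recap-cc3m/data/train-0000*-of-00314.parquet",
    split="train",
)

# Option 2: keep the dataset-directory loader but skip the split-size
# verification that raised NonMatchingSplitsSizesError.
ds = load_dataset(
    "llava-recap-cc3m/",
    data_files="data/train-0000*-of-00314.parquet",
    verification_mode="no_checks",
)
```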
https://api.github.com/repos/huggingface/datasets/issues/6978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6978/comments | https://api.github.com/repos/huggingface/datasets/issues/6978/events | https://github.com/huggingface/datasets/pull/6978 | 2,359,511,469 | PR_kwDODunzps5yz0h6 | 6,978 | Fix regression for pandas < 2.0.0 in JSON loader | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6978). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005144 / 0.011353 (-0.006209) | 0.003500 / 0.011008 (-0.007509) | 0.063670 / 0.038508 (0.025162) | 0.031793 / 0.023109 (0.008683) | 0.239611 / 0.275898 (-0.036287) | 0.276681 / 0.323480 (-0.046799) | 0.004148 / 0.007986 (-0.003838) | 0.002713 / 0.004328 (-0.001615) | 0.048832 / 0.004250 (0.044582) | 0.043066 / 0.037052 (0.006014) | 0.256835 / 0.258489 (-0.001655) | 0.292224 / 0.293841 (-0.001617) | 0.027530 / 0.128546 (-0.101017) | 0.010509 / 0.075646 (-0.065137) | 0.203370 / 0.419271 (-0.215901) | 0.035643 / 0.043533 (-0.007890) | 0.252161 / 0.255139 (-0.002978) | 0.271883 / 0.283200 (-0.011316) | 0.018658 / 0.141683 (-0.123024) | 1.081676 / 1.452155 (-0.370479) | 1.142146 / 1.492716 (-0.350571) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093484 / 0.018006 (0.075477) | 0.298607 / 0.000490 (0.298117) | 0.000220 / 0.000200 (0.000020) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019021 / 0.037411 (-0.018390) | 0.062471 / 0.014526 (0.047946) | 0.075393 / 0.176557 (-0.101163) | 0.121040 / 0.737135 (-0.616095) | 0.077613 / 0.296338 (-0.218726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294857 / 0.215209 (0.079648) | 2.931143 / 2.077655 (0.853489) | 1.510866 / 1.504120 (0.006746) | 1.379574 / 1.541195 (-0.161621) | 1.352358 / 
1.468490 (-0.116133) | 0.561670 / 4.584777 (-4.023107) | 2.378434 / 3.745712 (-1.367278) | 2.713203 / 5.269862 (-2.556658) | 1.706416 / 4.565676 (-2.859260) | 0.062355 / 0.424275 (-0.361920) | 0.004971 / 0.007607 (-0.002636) | 0.336498 / 0.226044 (0.110453) | 3.316464 / 2.268929 (1.047535) | 1.833035 / 55.444624 (-53.611589) | 1.532808 / 6.876477 (-5.343668) | 1.537323 / 2.142072 (-0.604749) | 0.639430 / 4.805227 (-4.165798) | 0.115808 / 6.500664 (-6.384856) | 0.043545 / 0.075469 (-0.031924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974428 / 1.841788 (-0.867360) | 11.368914 / 8.074308 (3.294606) | 9.754488 / 10.191392 (-0.436904) | 0.146277 / 0.680424 (-0.534146) | 0.013917 / 0.534201 (-0.520284) | 0.286809 / 0.579283 (-0.292474) | 0.267144 / 0.434364 (-0.167219) | 0.326161 / 0.540337 (-0.214177) | 0.418059 / 1.386936 (-0.968877) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003460 / 0.011008 (-0.007548) | 0.050135 / 0.038508 (0.011627) | 0.032014 / 0.023109 (0.008905) | 0.259835 / 0.275898 (-0.016063) | 0.286275 / 0.323480 (-0.037205) | 0.004350 / 0.007986 (-0.003636) | 0.002800 / 0.004328 (-0.001529) | 0.049358 / 0.004250 (0.045107) | 0.040182 / 0.037052 (0.003130) | 0.278352 / 0.258489 (0.019863) | 0.307869 / 0.293841 (0.014028) | 0.029151 / 0.128546 (-0.099395) | 0.010091 / 0.075646 (-0.065555) | 0.058814 / 0.419271 (-0.360458) | 0.033150 / 0.043533 (-0.010383) | 0.263594 / 0.255139 (0.008455) | 0.284065 / 0.283200 (0.000866) | 0.017968 / 0.141683 (-0.123714) | 1.145605 / 1.452155 (-0.306550) | 1.196884 / 1.492716 (-0.295832) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094045 / 0.018006 (0.076039) | 0.299031 / 0.000490 (0.298541) | 0.000210 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022510 / 0.037411 (-0.014901) | 0.077478 / 0.014526 (0.062953) | 0.087746 / 0.176557 (-0.088811) | 0.129311 / 0.737135 (-0.607825) | 0.089921 / 0.296338 (-0.206418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290279 / 0.215209 (0.075070) | 2.880725 / 2.077655 (0.803070) | 1.541262 / 1.504120 (0.037142) | 1.424475 / 1.541195 (-0.116719) | 1.436397 / 1.468490 (-0.032093) | 0.578237 / 4.584777 (-4.006540) | 0.965249 / 3.745712 (-2.780463) | 2.682534 / 5.269862 (-2.587327) | 1.732859 / 4.565676 (-2.832817) | 0.065523 / 0.424275 (-0.358752) | 0.005466 / 0.007607 (-0.002141) | 0.343985 / 0.226044 (0.117940) | 3.397463 / 2.268929 (1.128534) | 1.929370 / 55.444624 (-53.515255) | 1.605135 / 6.876477 (-5.271342) | 1.753926 / 2.142072 (-0.388146) | 0.659929 / 4.805227 (-4.145298) | 0.118093 / 6.500664 (-6.382571) | 0.041252 / 0.075469 (-0.034217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009177 / 1.841788 (-0.832610) | 11.959624 / 8.074308 (3.885316) | 10.484672 / 10.191392 (0.293280) | 0.142085 / 0.680424 (-0.538339) | 0.015955 / 0.534201 (-0.518245) | 0.283649 / 0.579283 (-0.295634) | 0.125681 / 0.434364 (-0.308683) | 0.320490 / 0.540337 (-0.219847) | 0.440353 / 1.386936 (-0.946583) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e47a746bcda4b97db2467542b76d3215b3569ff0 \"CML watermark\")\n",
"Maybe a patch release will be needed with this fix."
] | 1,718,706,394,000 | 1,718,778,204,000 | 1,718,776,218,000 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6978",
"merged_at": "2024-06-19T05:50:18",
"patch_url": "https://github.com/huggingface/datasets/pull/6978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6978"
} | A regression was introduced for pandas < 2.0.0 in PR:
- #6914
As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html
This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on pandas version.
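A minimal sketch of that kind of version gate (illustrative only, not the PR's actual diff; the helper name below is made up):

```python
import pandas as pd
from packaging import version

def read_json_df(path_or_buf):
    # pandas.read_json only accepts dtype_backend from pandas 2.0.0 onwards,
    # so older pandas versions must not receive the keyword at all.
    if version.parse(pd.__version__) >= version.parse("2.0.0"):
        return pd.read_json(path_or_buf, dtype_backend="pyarrow")
    return pd.read_json(path_or_buf)
```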
Maybe, in a future 3.0 `datasets` release, we could just require pandas >= 2.0.
Reported by:
- #6977 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6977/comments | https://api.github.com/repos/huggingface/datasets/issues/6977/events | https://github.com/huggingface/datasets/issues/6977 | 2,359,295,045 | I_kwDODunzps6Mn_xF | 6,977 | load json file error with v2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15037766?v=4",
"events_url": "https://api.github.com/users/xiaoyaolangzhi/events{/privacy}",
"followers_url": "https://api.github.com/users/xiaoyaolangzhi/followers",
"following_url": "https://api.github.com/users/xiaoyaolangzhi/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaoyaolangzhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xiaoyaolangzhi",
"id": 15037766,
"login": "xiaoyaolangzhi",
"node_id": "MDQ6VXNlcjE1MDM3NzY2",
"organizations_url": "https://api.github.com/users/xiaoyaolangzhi/orgs",
"received_events_url": "https://api.github.com/users/xiaoyaolangzhi/received_events",
"repos_url": "https://api.github.com/users/xiaoyaolangzhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xiaoyaolangzhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaoyaolangzhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xiaoyaolangzhi"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @xiaoyaolangzhi.\r\n\r\nIndeed, we are currently requiring `pandas` >= 2.0.0.\r\n\r\nYou will need to update pandas in your local environment:\r\n```\r\npip install -U pandas\r\n``` ",
"Thank you very much."
] | 1,718,700,061,000 | 1,718,705,170,000 | 1,718,705,169,000 | NONE | null | null | null | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single
for _, table in generator:
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables
df = pd.read_json(f, dtype_backend="pyarrow")
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/t1.py", line 11, in <module>
load_dataset(path=data_path, data_files="./t2.json")
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
```
import pandas as pd
with open("./test.json", "r") as f:
df = pd.read_json(f, dtype_backend="pyarrow")
```
```
Traceback (most recent call last):
File "/app/t3.py", line 3, in <module>
df = pd.read_json(f, dtype_backend="pyarrow")
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'
```
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
```
datasets 2.20.0
pandas 1.5.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6977/timeline | null | completed | false |
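As the maintainer's comment above notes, `datasets` 2.20.0 expected pandas >= 2.0.0 at the time (the fix for older pandas came in #6978). A quick environment sanity check one might run, sketched with the `packaging` library:

```python
import pandas as pd
from packaging import version

# The reporter had pandas 1.5.3, which predates the dtype_backend keyword
# added to pandas.read_json in 2.0.0.
if version.parse(pd.__version__) < version.parse("2.0.0"):
    raise RuntimeError(
        f"pandas {pd.__version__} is too old for the datasets 2.20.0 JSON "
        "loader; run `pip install -U pandas`"
    )
```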
https://api.github.com/repos/huggingface/datasets/issues/6976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6976/comments | https://api.github.com/repos/huggingface/datasets/issues/6976/events | https://github.com/huggingface/datasets/pull/6976 | 2,357,107,203 | PR_kwDODunzps5yrmNP | 6,976 | Ensure compatibility with numpy 2.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KennethEnevoldsen",
"id": 23721977,
"login": "KennethEnevoldsen",
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KennethEnevoldsen"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6976). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005361 / 0.011353 (-0.005992) | 0.003983 / 0.011008 (-0.007025) | 0.062865 / 0.038508 (0.024357) | 0.029880 / 0.023109 (0.006771) | 0.261465 / 0.275898 (-0.014433) | 0.269791 / 0.323480 (-0.053689) | 0.004198 / 0.007986 (-0.003788) | 0.002942 / 0.004328 (-0.001387) | 0.049002 / 0.004250 (0.044751) | 0.043232 / 0.037052 (0.006180) | 0.328774 / 0.258489 (0.070285) | 0.297308 / 0.293841 (0.003467) | 0.030552 / 0.128546 (-0.097994) | 0.012632 / 0.075646 (-0.063015) | 0.204156 / 0.419271 (-0.215116) | 0.036014 / 0.043533 (-0.007519) | 0.241224 / 0.255139 (-0.013915) | 0.268358 / 0.283200 (-0.014842) | 0.019227 / 0.141683 (-0.122456) | 1.114515 / 1.452155 (-0.337639) | 1.147029 / 1.492716 (-0.345688) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094925 / 0.018006 (0.076919) | 0.301548 / 0.000490 (0.301059) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018875 / 0.037411 (-0.018536) | 0.062824 / 0.014526 (0.048298) | 0.075657 / 0.176557 (-0.100900) | 0.121926 / 0.737135 (-0.615209) | 0.077102 / 0.296338 (-0.219236) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286018 / 0.215209 (0.070808) | 2.832222 / 2.077655 (0.754567) | 1.462629 / 1.504120 (-0.041491) | 1.354746 / 1.541195 (-0.186449) | 1.339504 / 
1.468490 (-0.128986) | 0.718381 / 4.584777 (-3.866396) | 2.401456 / 3.745712 (-1.344256) | 3.013518 / 5.269862 (-2.256343) | 1.944892 / 4.565676 (-2.620784) | 0.078793 / 0.424275 (-0.345482) | 0.005219 / 0.007607 (-0.002388) | 0.349551 / 0.226044 (0.123507) | 3.417844 / 2.268929 (1.148916) | 1.830669 / 55.444624 (-53.613956) | 1.502134 / 6.876477 (-5.374343) | 1.529242 / 2.142072 (-0.612830) | 0.793732 / 4.805227 (-4.011495) | 0.133571 / 6.500664 (-6.367093) | 0.042588 / 0.075469 (-0.032881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988167 / 1.841788 (-0.853620) | 11.926728 / 8.074308 (3.852420) | 9.806971 / 10.191392 (-0.384421) | 0.173951 / 0.680424 (-0.506473) | 0.015308 / 0.534201 (-0.518893) | 0.310768 / 0.579283 (-0.268515) | 0.268261 / 0.434364 (-0.166103) | 0.342962 / 0.540337 (-0.197375) | 0.431255 / 1.386936 (-0.955681) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005680 / 0.011353 (-0.005673) | 0.004231 / 0.011008 (-0.006778) | 0.051009 / 0.038508 (0.012501) | 0.031431 / 0.023109 (0.008322) | 0.268582 / 0.275898 (-0.007316) | 0.287942 / 0.323480 (-0.035538) | 0.004442 / 0.007986 (-0.003543) | 0.002818 / 0.004328 (-0.001511) | 0.050241 / 0.004250 (0.045991) | 0.039933 / 0.037052 (0.002881) | 0.285814 / 0.258489 (0.027325) | 0.316082 / 0.293841 (0.022241) | 0.032416 / 0.128546 (-0.096130) | 0.012398 / 0.075646 (-0.063248) | 0.060779 / 0.419271 (-0.358493) | 0.033706 / 0.043533 (-0.009827) | 0.273915 / 0.255139 (0.018776) | 0.289752 / 0.283200 (0.006553) | 0.017859 / 0.141683 (-0.123824) | 1.150224 / 1.452155 (-0.301930) | 1.197467 / 1.492716 (-0.295250) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093810 / 0.018006 (0.075803) | 0.302529 / 0.000490 (0.302039) | 0.000221 / 0.000200 (0.000021) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022903 / 0.037411 (-0.014508) | 0.077445 / 0.014526 (0.062919) | 0.089335 / 0.176557 (-0.087222) | 0.130848 / 0.737135 (-0.606287) | 0.091106 / 0.296338 (-0.205232) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294194 / 0.215209 (0.078985) | 2.886983 / 2.077655 (0.809328) | 1.557768 / 1.504120 (0.053648) | 1.424467 / 1.541195 (-0.116727) | 1.440625 / 1.468490 (-0.027865) | 0.724793 / 4.584777 (-3.859984) | 0.985216 / 3.745712 (-2.760496) | 2.856826 / 5.269862 (-2.413036) | 1.911638 / 4.565676 (-2.654039) | 0.080350 / 0.424275 (-0.343925) | 0.005616 / 0.007607 (-0.001991) | 0.348713 / 0.226044 (0.122668) | 3.414764 / 2.268929 (1.145835) | 1.925056 / 55.444624 (-53.519568) | 1.635752 / 6.876477 (-5.240725) | 1.761117 / 2.142072 (-0.380955) | 0.808309 / 4.805227 (-3.996918) | 0.136893 / 6.500664 (-6.363771) | 0.042116 / 0.075469 (-0.033354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004740 / 1.841788 (-0.837048) | 12.495859 / 8.074308 (4.421550) | 10.681233 / 10.191392 (0.489841) | 0.133320 / 0.680424 (-0.547104) | 0.015943 / 0.534201 (-0.518258) | 0.304869 / 0.579283 (-0.274414) | 0.128616 / 0.434364 (-0.305748) | 0.345930 / 0.540337 (-0.194407) | 0.457434 / 1.386936 (-0.929502) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84d9dea52098c9403efb43d5b542dd6d45000bec \"CML watermark\")\n"
] | 1,718,623,762,000 | 1,718,807,432,000 | 1,718,805,874,000 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6976",
"merged_at": "2024-06-19T14:04:34",
"patch_url": "https://github.com/huggingface/datasets/pull/6976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6976"
} | Following the NumPy 2.0 migration guide, `copy=False` no longer means "copy only if needed": `np.array(..., copy=False)` now raises an error whenever a copy is actually required, and `np.asarray` should be used instead: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.
The following fix should resolve the issue.
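A before/after sketch of the kind of change involved (illustrative, not the PR's exact diff):

```python
import numpy as np

x = [1, 2, 3]

# NumPy 1.x: copy=False meant "avoid a copy if possible".
# NumPy 2.0: np.array(x, copy=False) raises ValueError whenever a copy is
# actually required, as it is here when converting a plain Python list.
# arr = np.array(x, copy=False)

# Migration-guide replacement: np.asarray copies only when it has to.
arr = np.asarray(x)
```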
The error was found during testing in the MTEB repository, e.g. [here](https://github.com/embeddings-benchmark/mteb/pull/938) | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6976/timeline | null | null | true |
End of preview.