The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed with the following error:
```
Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type timestamp[s] to null

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in cast_array_to_feature
    arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2020, in <listcomp>
    arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
    return array_cast(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
    raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type timestamp[s] to null

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (sequence) | state (string) | locked (bool) | assignee (null) | assignees (sequence) | milestone (null) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (float64) | pull_request (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6507/comments | https://api.github.com/repos/huggingface/datasets/issues/6507/events | https://github.com/huggingface/datasets/issues/6507 | 2,045,152,928 | I_kwDODunzps555o6g | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | {
"avatar_url": "https://avatars.githubusercontent.com/u/119146162?v=4",
"events_url": "https://api.github.com/users/Mcccccc1024/events{/privacy}",
"followers_url": "https://api.github.com/users/Mcccccc1024/followers",
"following_url": "https://api.github.com/users/Mcccccc1024/following{/other_user}",
"gists_url": "https://api.github.com/users/Mcccccc1024/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mcccccc1024",
"id": 119146162,
"login": "Mcccccc1024",
"node_id": "U_kgDOBxoGsg",
"organizations_url": "https://api.github.com/users/Mcccccc1024/orgs",
"received_events_url": "https://api.github.com/users/Mcccccc1024/received_events",
"repos_url": "https://api.github.com/users/Mcccccc1024/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mcccccc1024/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mcccccc1024/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mcccccc1024"
} | [] | open | false | null | [] | null | [] | 1,702,807,105,000 | 1,702,807,105,000 | null | NONE | null | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
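For context, a hedged sketch of what that suggestion amounts to (the local path and the "mrpc" config are placeholders, and `load_metric` is the legacy `datasets` API that has since been superseded by the `evaluate` library):
```python
from datasets import load_metric

# Point load_metric at the metric script (glue_metric.py), not the dataset script (glue.py).
metric = load_metric("path/to/glue_metric.py", "mrpc")  # "path/to" is a placeholder
```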
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6507/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6506/comments | https://api.github.com/repos/huggingface/datasets/issues/6506/events | https://github.com/huggingface/datasets/issues/6506 | 2,044,975,038 | I_kwDODunzps5549e- | 6,506 | Incorrect test set labels for RTE and CoLA datasets via load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/73316684?v=4",
"events_url": "https://api.github.com/users/emreonal11/events{/privacy}",
"followers_url": "https://api.github.com/users/emreonal11/followers",
"following_url": "https://api.github.com/users/emreonal11/following{/other_user}",
"gists_url": "https://api.github.com/users/emreonal11/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emreonal11",
"id": 73316684,
"login": "emreonal11",
"node_id": "MDQ6VXNlcjczMzE2Njg0",
"organizations_url": "https://api.github.com/users/emreonal11/orgs",
"received_events_url": "https://api.github.com/users/emreonal11/received_events",
"repos_url": "https://api.github.com/users/emreonal11/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emreonal11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emreonal11/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emreonal11"
} | [] | open | false | null | [] | null | [] | 1,702,764,368,000 | 1,702,765,666,000 | null | NONE | null | ### Describe the bug
The test set labels for the RTE and CoLA datasets are all -1 when loaded via `datasets.load_dataset`.
Edit: It appears this is also the case for every other GLUE config except MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax).
### Steps to reproduce the bug
```python
!pip install datasets
from datasets import load_dataset

rte_data = load_dataset('glue', 'rte')
cola_data = load_dataset('glue', 'cola')
print(rte_data['test'][0:30]['label'])
print(cola_data['test'][0:30]['label'])
```
Output:
```
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
```
The non-label test data seems to be fine; e.g. `rte_data['test'][1]` is:
```python
{'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.",
 'sentence2': 'Authorities in Brazil hold 200 people as hostage.',
 'label': -1,
 'idx': 1}
```
Training and validation data are also fine; e.g. `rte_data['train'][0]` is:
```python
{'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.',
 'sentence2': 'Weapons of Mass Destruction Found in Iraq.',
 'label': 1,
 'idx': 0}
```
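A slightly broader check (a sketch that only extends the reproduction above; the expected pattern in the comment reflects what this report describes) shows the distinct label values per split in one pass:
```python
from datasets import load_dataset

rte_data = load_dataset('glue', 'rte')
# Per the report above, train/validation should show {0, 1} while test shows only -1.
for split_name, split in rte_data.items():
    print(split_name, sorted(set(split['label'])))
```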
### Expected behavior
Expected the labels to be binary 0/1 values; got all -1s instead.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6506/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6505/comments | https://api.github.com/repos/huggingface/datasets/issues/6505/events | https://github.com/huggingface/datasets/issues/6505 | 2,044,721,288 | I_kwDODunzps553_iI | 6,505 | Got stuck when I trying to load a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/18232551?v=4",
"events_url": "https://api.github.com/users/yirenpingsheng/events{/privacy}",
"followers_url": "https://api.github.com/users/yirenpingsheng/followers",
"following_url": "https://api.github.com/users/yirenpingsheng/following{/other_user}",
"gists_url": "https://api.github.com/users/yirenpingsheng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yirenpingsheng",
"id": 18232551,
"login": "yirenpingsheng",
"node_id": "MDQ6VXNlcjE4MjMyNTUx",
"organizations_url": "https://api.github.com/users/yirenpingsheng/orgs",
"received_events_url": "https://api.github.com/users/yirenpingsheng/received_events",
"repos_url": "https://api.github.com/users/yirenpingsheng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yirenpingsheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yirenpingsheng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yirenpingsheng"
} | [] | open | false | null | [] | null | [] | 1,702,727,467,000 | 1,702,727,467,000 | null | NONE | null | ### Describe the bug
Hello, everyone. I ran into a problem when trying to load a data file with the load_dataset method on a Debian 10 system. The file is not very large: only 1.63 MB, with 600 records.
Here is my code:
```python
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```
I waited for 20 minutes and there was still no response. I could not cancel the command with Ctrl+C and had to kill it with Ctrl+Z. I also tried a txt file, and it likewise hung for a long time.
I can load the same file successfully on my laptop (Windows 10, Python 3.8.5, datasets==2.14.5) and on another computer (Ubuntu 20.04.5 LTS, Python 3.10.13, datasets 2.14.7), where it only takes 1-2 minutes.
Could you give me some suggestions? Thank you.
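A minimal debugging sketch (an added suggestion, not part of the original report: it uses the standard `datasets.logging` utilities and `Dataset.from_json`, and reuses the file path quoted above) that may help narrow down where the hang occurs:
```python
import datasets
from datasets import Dataset

# Debug logging makes the last step before the hang visible.
datasets.logging.set_verbosity_debug()

# Read the file directly, bypassing the generic 'json' loader resolution and caching.
ds = Dataset.from_json('mypath/oaast_rm_zh.json')
print(ds)
```
If `Dataset.from_json` returns quickly while `load_dataset('json', ...)` hangs, the problem is more likely in loader resolution or caching than in the file itself.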
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```
### Expected behavior
I hope it can load the file successfully.
### Environment info
OS: Debian GNU/Linux 10
Python: Python 3.10.13
Pip list:
Package Version
------------------------- ------------
accelerate 0.25.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
aliyun-python-sdk-core 2.14.0
aliyun-python-sdk-kms 2.16.2
altair 5.2.0
annotated-types 0.6.0
anyio 3.7.1
async-timeout 4.0.3
attrs 23.1.0
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
contourpy 1.2.0
crcmod 1.7
cryptography 41.0.7
cycler 0.12.1
datasets 2.14.7
dill 0.3.7
docstring-parser 0.15
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.105.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.46.0
frozenlist 1.4.1
fsspec 2023.10.0
gast 0.5.4
gradio 3.50.2
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.2
httpx 0.25.2
huggingface-hub 0.19.4
idna 3.6
importlib-metadata 7.0.0
importlib-resources 6.1.1
jieba 0.42.1
Jinja2 3.1.2
jmespath 0.10.0
joblib 1.3.2
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
kiwisolver 1.4.5
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.8.2
mdurl 0.1.2
modelscope 1.10.0
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
nltk 3.8.1
numpy 1.26.2
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.18.1
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
orjson 3.9.10
oss2 2.18.3
packaging 23.2
pandas 2.1.4
peft 0.7.1
Pillow 10.1.0
pip 23.3.1
platformdirs 4.1.0
protobuf 4.25.1
psutil 5.9.6
pyarrow 14.0.1
pyarrow-hotfix 0.6
pycparser 2.21
pycryptodome 3.19.0
pydantic 2.5.2
pydantic_core 2.14.5
pydub 0.25.1
Pygments 2.17.2
pyparsing 3.1.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
referencing 0.32.0
regex 2023.10.3
requests 2.31.0
rich 13.7.0
rouge-chinese 1.0.3
rpds-py 0.13.2
safetensors 0.4.1
scipy 1.11.4
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.2.2
shtab 1.6.5
simplejson 3.19.2
six 1.16.0
sniffio 1.3.0
sortedcontainers 2.4.0
sse-starlette 1.8.2
starlette 0.27.0
sympy 1.12
tiktoken 0.5.2
tokenizers 0.15.0
tomli 2.0.1
toolz 0.12.0
torch 2.1.2
tqdm 4.66.1
transformers 4.36.1
triton 2.1.0
trl 0.7.4
typing_extensions 4.9.0
tyro 0.6.0
tzdata 2023.3
urllib3 2.1.0
uvicorn 0.24.0.post1
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.17.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6505/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6504/comments | https://api.github.com/repos/huggingface/datasets/issues/6504/events | https://github.com/huggingface/datasets/issues/6504 | 2,044,541,154 | I_kwDODunzps553Tji | 6,504 | Error Pushing to Hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4",
"events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}",
"followers_url": "https://api.github.com/users/Jiayi-Pan/followers",
"following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jiayi-Pan",
"id": 55055083,
"login": "Jiayi-Pan",
"node_id": "MDQ6VXNlcjU1MDU1MDgz",
"organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs",
"received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events",
"repos_url": "https://api.github.com/users/Jiayi-Pan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jiayi-Pan"
} | [] | closed | false | null | [] | null | [] | 1,702,688,722,000 | 1,702,707,653,000 | 1,702,707,653,000 | NONE | null | ### Describe the bug
Error when trying to push a dataset in a special format (an `Array2D` feature) to the Hub
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```
Error:
```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "<unicode string>", line 8, column 16:
shape: !!python/tuple
^
```
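A possible workaround sketch (untested here, and an assumption rather than a confirmed fix: it simply avoids the `Array2D` feature whose tuple-valued `shape` the YAML metadata writer appears to reject) is to keep the column as plain nested sequences before pushing:
```python
from datasets import Dataset, Sequence, Value

dataset_dict = {
    "filename": ["apple", "banana"],
    "token": [[[1, 2], [3, 4]], [[1, 2], [3, 4]]],
    "label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)

# Nested Sequence features serialize to plain YAML lists, unlike Array2D's tuple shape.
dataset = dataset.cast_column("token", Sequence(Sequence(Value("int16"))))
dataset.push_to_hub("SequenceModel/imagenet_val_256")  # repo name taken from the report above
```
This trades the fixed-shape `Array2D` type for generic nested lists, so it only applies when the 2x2 shape does not need to be enforced by the schema.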
### Expected behavior
Dataset being pushed to hub
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6504/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6503/comments | https://api.github.com/repos/huggingface/datasets/issues/6503/events | https://github.com/huggingface/datasets/pull/6503 | 2,043,847,591 | PR_kwDODunzps5iHgZf | 6,503 | Fix streaming xnli | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005003 / 0.011353 (-0.006350) | 0.003020 / 0.011008 (-0.007988) | 0.061370 / 0.038508 (0.022862) | 0.050996 / 0.023109 (0.027887) | 0.243434 / 0.275898 (-0.032464) | 0.266317 / 0.323480 (-0.057163) | 0.003888 / 0.007986 (-0.004098) | 0.002607 / 0.004328 (-0.001721) | 0.047541 / 0.004250 (0.043290) | 0.037933 / 0.037052 (0.000881) | 0.259695 / 0.258489 (0.001206) | 0.279374 / 0.293841 (-0.014467) | 0.027258 / 0.128546 (-0.101288) | 0.010184 / 0.075646 (-0.065462) | 0.207412 / 0.419271 (-0.211860) | 0.034978 / 0.043533 (-0.008554) | 0.247871 / 0.255139 (-0.007267) | 0.265273 / 0.283200 (-0.017927) | 0.017886 / 0.141683 (-0.123796) | 1.090451 / 1.452155 (-0.361704) | 1.152034 / 1.492716 (-0.340682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094383 / 0.018006 (0.076377) | 0.301151 / 0.000490 (0.300661) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018927 / 0.037411 (-0.018484) | 0.062152 / 0.014526 (0.047626) | 0.072177 / 0.176557 (-0.104380) | 0.119792 / 0.737135 (-0.617343) | 0.073333 / 0.296338 (-0.223005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282671 / 0.215209 (0.067462) | 2.721148 / 2.077655 (0.643494) | 1.472689 / 1.504120 (-0.031431) | 1.355226 / 1.541195 (-0.185969) | 1.375935 / 
1.468490 (-0.092556) | 0.562600 / 4.584777 (-4.022177) | 2.364046 / 3.745712 (-1.381666) | 2.714984 / 5.269862 (-2.554878) | 1.738413 / 4.565676 (-2.827263) | 0.062564 / 0.424275 (-0.361711) | 0.004964 / 0.007607 (-0.002643) | 0.341300 / 0.226044 (0.115255) | 3.345187 / 2.268929 (1.076259) | 1.857822 / 55.444624 (-53.586803) | 1.581002 / 6.876477 (-5.295475) | 1.585919 / 2.142072 (-0.556153) | 0.640105 / 4.805227 (-4.165122) | 0.117880 / 6.500664 (-6.382784) | 0.042032 / 0.075469 (-0.033437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962701 / 1.841788 (-0.879086) | 11.309251 / 8.074308 (3.234943) | 10.462520 / 10.191392 (0.271128) | 0.127399 / 0.680424 (-0.553025) | 0.014549 / 0.534201 (-0.519652) | 0.297017 / 0.579283 (-0.282266) | 0.266152 / 0.434364 (-0.168212) | 0.349252 / 0.540337 (-0.191085) | 0.457015 / 1.386936 (-0.929921) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005341 / 0.011353 (-0.006012) | 0.003108 / 0.011008 (-0.007900) | 0.048862 / 0.038508 (0.010353) | 0.053354 / 0.023109 (0.030245) | 0.274499 / 0.275898 (-0.001399) | 0.296698 / 0.323480 (-0.026782) | 0.003974 / 0.007986 (-0.004012) | 0.002631 / 0.004328 (-0.001697) | 0.048013 / 0.004250 (0.043762) | 0.040416 / 0.037052 (0.003363) | 0.276581 / 0.258489 (0.018092) | 0.301296 / 0.293841 (0.007455) | 0.029049 / 0.128546 (-0.099497) | 0.010253 / 0.075646 (-0.065393) | 0.057157 / 0.419271 (-0.362114) | 0.031830 / 0.043533 (-0.011703) | 0.274341 / 0.255139 (0.019202) | 0.292583 / 0.283200 (0.009383) | 0.018449 / 0.141683 (-0.123234) | 1.145099 / 1.452155 (-0.307055) | 1.192958 / 1.492716 (-0.299758) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091596 / 0.018006 (0.073590) | 0.300917 / 0.000490 (0.300427) | 0.000225 / 0.000200 (0.000025) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015754) | 0.068464 / 0.014526 (0.053938) | 0.079869 / 0.176557 (-0.096687) | 0.117523 / 0.737135 (-0.619613) | 0.081257 / 0.296338 (-0.215082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294876 / 0.215209 (0.079667) | 2.879372 / 2.077655 (0.801718) | 1.619887 / 1.504120 (0.115767) | 1.482154 / 1.541195 (-0.059041) | 1.494656 / 1.468490 (0.026166) | 0.558914 / 4.584777 (-4.025862) | 2.420948 / 3.745712 (-1.324765) | 2.728992 / 5.269862 (-2.540869) | 1.722135 / 4.565676 (-2.843542) | 0.062182 / 0.424275 (-0.362093) | 0.004933 / 0.007607 (-0.002674) | 0.342759 / 0.226044 (0.116715) | 3.424083 / 2.268929 (1.155154) | 1.950673 / 55.444624 (-53.493951) | 1.683126 / 6.876477 (-5.193351) | 1.673135 / 2.142072 (-0.468937) | 0.633711 / 4.805227 (-4.171516) | 0.114898 / 6.500664 (-6.385766) | 0.040332 / 0.075469 (-0.035137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975102 / 1.841788 (-0.866685) | 11.975731 / 8.074308 (3.901423) | 10.961103 / 10.191392 (0.769711) | 0.131152 / 0.680424 (-0.549272) | 0.016268 / 0.534201 (-0.517933) | 0.285031 / 0.579283 (-0.294252) | 0.279556 / 0.434364 (-0.154808) | 0.324183 / 0.540337 (-0.216154) | 0.571404 / 1.386936 (-0.815532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f67312956fc15572b6a0ca0dfcc0ceb90fbb794 \"CML watermark\")\n"
] | 1,702,651,257,000 | 1,702,651,866,000 | 1,702,651,487,000 | MEMBER | null | This code was failing
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.values())
```
```
File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict)
102 return translation_dict
103 elif self.languages and set(translation_dict) - lang_set:
--> 104 raise ValueError(
105 f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).'
106 )
108 # Convert dictionary into tuples, splitting out cases where there are
109 # multiple translations for a single language.
110 translation_tuples = []
ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es).
```
because in streaming mode we expect features encode methods to be no-ops if the example is already encoded.
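An illustrative sketch of that no-op expectation (a simplified, self-contained stand-in, not the actual patch in this PR):
```python
# Simplified stand-in for TranslationVariableLanguages.encode_example (illustrative only).
def encode_translation_example(translation_dict, lang_set):
    # Streaming re-applies encoding, so an already-encoded example must pass through unchanged.
    if set(translation_dict) == {"language", "translation"}:
        return translation_dict
    unexpected = set(translation_dict) - lang_set
    if lang_set and unexpected:
        raise ValueError(
            f"Some languages in example ({sorted(unexpected)}) are not in valid set ({sorted(lang_set)})."
        )
    # Otherwise encode the {lang: text} mapping into two parallel, sorted lists.
    pairs = sorted(translation_dict.items())
    return {"language": [lang for lang, _ in pairs], "translation": [text for _, text in pairs]}
```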
I fixed `TranslationVariableLanguages` to account for that | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6503/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6503.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6503",
"merged_at": "2023-12-15T14:44:46",
"patch_url": "https://github.com/huggingface/datasets/pull/6503.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6503"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6502/comments | https://api.github.com/repos/huggingface/datasets/issues/6502/events | https://github.com/huggingface/datasets/pull/6502 | 2,043,771,731 | PR_kwDODunzps5iHPt- | 6,502 | Pickle support for `torch.Generator` objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005472 / 0.011353 (-0.005881) | 0.003715 / 0.011008 (-0.007293) | 0.063257 / 0.038508 (0.024749) | 0.060683 / 0.023109 (0.037574) | 0.250885 / 0.275898 (-0.025013) | 0.271685 / 0.323480 (-0.051795) | 0.003051 / 0.007986 (-0.004934) | 0.002799 / 0.004328 (-0.001530) | 0.049113 / 0.004250 (0.044863) | 0.038965 / 0.037052 (0.001912) | 0.252688 / 0.258489 (-0.005801) | 0.282536 / 0.293841 (-0.011305) | 0.028722 / 0.128546 (-0.099824) | 0.010586 / 0.075646 (-0.065060) | 0.205145 / 0.419271 (-0.214127) | 0.036996 / 0.043533 (-0.006537) | 0.248874 / 0.255139 (-0.006265) | 0.266148 / 0.283200 (-0.017051) | 0.018540 / 0.141683 (-0.123143) | 1.120216 / 1.452155 (-0.331938) | 1.191072 / 1.492716 (-0.301644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095721 / 0.018006 (0.077714) | 0.313401 / 0.000490 (0.312911) | 0.000234 / 0.000200 (0.000034) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018604 / 0.037411 (-0.018807) | 0.061571 / 0.014526 (0.047045) | 0.075343 / 0.176557 (-0.101213) | 0.121272 / 0.737135 (-0.615864) | 0.076448 / 0.296338 (-0.219890) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286885 / 0.215209 (0.071676) | 2.809100 / 2.077655 (0.731445) | 1.485365 / 1.504120 (-0.018755) | 1.367672 / 1.541195 (-0.173523) | 1.423570 / 
1.468490 (-0.044920) | 0.571063 / 4.584777 (-4.013714) | 2.385248 / 3.745712 (-1.360464) | 2.855251 / 5.269862 (-2.414610) | 1.799371 / 4.565676 (-2.766306) | 0.063491 / 0.424275 (-0.360784) | 0.004942 / 0.007607 (-0.002665) | 0.346181 / 0.226044 (0.120137) | 3.388123 / 2.268929 (1.119195) | 1.819093 / 55.444624 (-53.625532) | 1.552998 / 6.876477 (-5.323479) | 1.627930 / 2.142072 (-0.514143) | 0.653438 / 4.805227 (-4.151789) | 0.123831 / 6.500664 (-6.376833) | 0.043340 / 0.075469 (-0.032129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.952167 / 1.841788 (-0.889621) | 12.149515 / 8.074308 (4.075207) | 10.665085 / 10.191392 (0.473693) | 0.127768 / 0.680424 (-0.552656) | 0.014022 / 0.534201 (-0.520179) | 0.285959 / 0.579283 (-0.293324) | 0.269727 / 0.434364 (-0.164637) | 0.336646 / 0.540337 (-0.203692) | 0.442932 / 1.386936 (-0.944005) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005351 / 0.011353 (-0.006002) | 0.003561 / 0.011008 (-0.007448) | 0.048890 / 0.038508 (0.010382) | 0.054093 / 0.023109 (0.030984) | 0.274397 / 0.275898 (-0.001501) | 0.296980 / 0.323480 (-0.026500) | 0.004126 / 0.007986 (-0.003860) | 0.002751 / 0.004328 (-0.001578) | 0.049131 / 0.004250 (0.044880) | 0.040769 / 0.037052 (0.003716) | 0.279147 / 0.258489 (0.020658) | 0.302014 / 0.293841 (0.008173) | 0.029847 / 0.128546 (-0.098699) | 0.010710 / 0.075646 (-0.064936) | 0.057626 / 0.419271 (-0.361645) | 0.032801 / 0.043533 (-0.010732) | 0.272698 / 0.255139 (0.017559) | 0.289238 / 0.283200 (0.006039) | 0.017876 / 0.141683 (-0.123807) | 1.152059 / 1.452155 (-0.300096) | 1.212289 / 1.492716 (-0.280427) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092914 / 0.018006 (0.074908) | 0.303092 / 0.000490 (0.302603) | 0.000214 / 0.000200 (0.000014) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022074 / 0.037411 (-0.015337) | 0.070109 / 0.014526 (0.055583) | 0.083360 / 0.176557 (-0.093196) | 0.122445 / 0.737135 (-0.614690) | 0.083625 / 0.296338 (-0.212714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282788 / 0.215209 (0.067579) | 2.789229 / 2.077655 (0.711574) | 1.571077 / 1.504120 (0.066957) | 1.452627 / 1.541195 (-0.088567) | 1.493176 / 1.468490 (0.024686) | 0.556892 / 4.584777 (-4.027885) | 2.442771 / 3.745712 (-1.302941) | 2.826316 / 5.269862 (-2.443545) | 1.758276 / 4.565676 (-2.807401) | 0.063039 / 0.424275 (-0.361236) | 0.004928 / 0.007607 (-0.002679) | 0.338247 / 0.226044 (0.112202) | 3.346344 / 2.268929 (1.077416) | 1.952520 / 55.444624 (-53.492104) | 1.664520 / 6.876477 (-5.211956) | 1.701528 / 2.142072 (-0.440544) | 0.634746 / 4.805227 (-4.170481) | 0.116879 / 6.500664 (-6.383786) | 0.040990 / 0.075469 (-0.034479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969521 / 1.841788 (-0.872267) | 12.431395 / 8.074308 (4.357087) | 10.907503 / 10.191392 (0.716111) | 0.131028 / 0.680424 (-0.549396) | 0.015239 / 0.534201 (-0.518962) | 0.290793 / 0.579283 (-0.288490) | 0.275072 / 0.434364 (-0.159292) | 0.331036 / 0.540337 (-0.209301) | 0.567858 / 1.386936 (-0.819078) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#092118fc00f7dd718ab3643739d7b23ff16c9eff \"CML watermark\")\n"
] | 1,702,648,512,000 | 1,702,652,673,000 | 1,702,652,302,000 | CONTRIBUTOR | null | Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6502/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6502/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6502.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6502",
"merged_at": "2023-12-15T14:58:22",
"patch_url": "https://github.com/huggingface/datasets/pull/6502.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6502"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6501/comments | https://api.github.com/repos/huggingface/datasets/issues/6501/events | https://github.com/huggingface/datasets/issues/6501 | 2,043,377,240 | I_kwDODunzps55y3ZY | 6,501 | OverflowError: value too large to convert to int32_t | {
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangfan-algo",
"id": 47747764,
"login": "zhangfan-algo",
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangfan-algo"
} | [] | open | false | null | [] | null | [] | 1,702,635,021,000 | 1,702,635,021,000 | null | NONE | null | ### Describe the bug
![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)
### Steps to reproduce the bug
just loading datasets
### Expected behavior
How can I fix this?
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6501/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6500/comments | https://api.github.com/repos/huggingface/datasets/issues/6500/events | https://github.com/huggingface/datasets/pull/6500 | 2,043,258,633 | PR_kwDODunzps5iFc6e | 6,500 | Enable setting config as default when push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6500). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review @huggingface/datasets. ",
"Also what if the config is being overwritten and it was the default config and the user doesn't pass `set_default` ?\r\nI'd expect the config to keep being the default one but lmk what you think",
"How can you unset a config as the default one? In the case you mentioned, I would expect the config not being the default one.",
"Maybe by passing `set_default=False` ? (set_default can be None by default)"
] | 1,702,631,861,000 | 1,702,653,747,000 | null | MEMBER | null | Fix #6497. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6500/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6500",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6500"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6499/comments | https://api.github.com/repos/huggingface/datasets/issues/6499/events | https://github.com/huggingface/datasets/pull/6499 | 2,043,166,976 | PR_kwDODunzps5iFIUF | 6,499 | docs: add reference Git over SSH | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003546 / 0.011008 (-0.007463) | 0.063335 / 0.038508 (0.024827) | 0.051987 / 0.023109 (0.028878) | 0.240429 / 0.275898 (-0.035469) | 0.260659 / 0.323480 (-0.062820) | 0.003866 / 0.007986 (-0.004120) | 0.002617 / 0.004328 (-0.001712) | 0.048653 / 0.004250 (0.044403) | 0.038176 / 0.037052 (0.001124) | 0.245496 / 0.258489 (-0.012993) | 0.277141 / 0.293841 (-0.016700) | 0.027886 / 0.128546 (-0.100660) | 0.010738 / 0.075646 (-0.064908) | 0.211255 / 0.419271 (-0.208016) | 0.045205 / 0.043533 (0.001672) | 0.243062 / 0.255139 (-0.012077) | 0.262877 / 0.283200 (-0.020323) | 0.023426 / 0.141683 (-0.118257) | 1.092247 / 1.452155 (-0.359908) | 1.161074 / 1.492716 (-0.331642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090488 / 0.018006 (0.072482) | 0.300993 / 0.000490 (0.300504) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018543 / 0.037411 (-0.018868) | 0.061418 / 0.014526 (0.046892) | 0.073242 / 0.176557 (-0.103314) | 0.120757 / 0.737135 (-0.616378) | 0.073967 / 0.296338 (-0.222372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282341 / 0.215209 (0.067132) | 2.741106 / 2.077655 (0.663451) | 1.416573 / 1.504120 (-0.087547) | 1.287904 / 1.541195 (-0.253291) | 1.309425 / 
1.468490 (-0.159065) | 0.582592 / 4.584777 (-4.002184) | 2.404866 / 3.745712 (-1.340846) | 2.895397 / 5.269862 (-2.374464) | 1.799864 / 4.565676 (-2.765812) | 0.064386 / 0.424275 (-0.359889) | 0.004920 / 0.007607 (-0.002687) | 0.330879 / 0.226044 (0.104835) | 3.287064 / 2.268929 (1.018135) | 1.765169 / 55.444624 (-53.679456) | 1.490442 / 6.876477 (-5.386034) | 1.530960 / 2.142072 (-0.611113) | 0.655939 / 4.805227 (-4.149288) | 0.118529 / 6.500664 (-6.382135) | 0.042350 / 0.075469 (-0.033119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959027 / 1.841788 (-0.882761) | 11.911284 / 8.074308 (3.836976) | 10.576898 / 10.191392 (0.385506) | 0.141038 / 0.680424 (-0.539386) | 0.014184 / 0.534201 (-0.520017) | 0.305335 / 0.579283 (-0.273948) | 0.267531 / 0.434364 (-0.166832) | 0.353362 / 0.540337 (-0.186975) | 0.466258 / 1.386936 (-0.920678) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005194 / 0.011353 (-0.006159) | 0.003561 / 0.011008 (-0.007448) | 0.049181 / 0.038508 (0.010673) | 0.056664 / 0.023109 (0.033555) | 0.267142 / 0.275898 (-0.008756) | 0.291871 / 0.323480 (-0.031609) | 0.003996 / 0.007986 (-0.003990) | 0.003147 / 0.004328 (-0.001181) | 0.048527 / 0.004250 (0.044276) | 0.040239 / 0.037052 (0.003187) | 0.269728 / 0.258489 (0.011239) | 0.295531 / 0.293841 (0.001690) | 0.030316 / 0.128546 (-0.098231) | 0.010666 / 0.075646 (-0.064981) | 0.058176 / 0.419271 (-0.361095) | 0.033218 / 0.043533 (-0.010315) | 0.265383 / 0.255139 (0.010244) | 0.285102 / 0.283200 (0.001902) | 0.018295 / 0.141683 (-0.123388) | 1.117830 / 1.452155 (-0.334325) | 1.196919 / 1.492716 (-0.295798) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088547 / 0.018006 (0.070541) | 0.293220 / 0.000490 (0.292730) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022060 / 0.037411 (-0.015351) | 0.071973 / 0.014526 (0.057448) | 0.081721 / 0.176557 (-0.094836) | 0.119990 / 0.737135 (-0.617145) | 0.081639 / 0.296338 (-0.214700) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293712 / 0.215209 (0.078503) | 2.872986 / 2.077655 (0.795331) | 1.568944 / 1.504120 (0.064824) | 1.434555 / 1.541195 (-0.106639) | 1.457747 / 1.468490 (-0.010743) | 0.559296 / 4.584777 (-4.025481) | 2.471845 / 3.745712 (-1.273867) | 2.840916 / 5.269862 (-2.428946) | 1.754909 / 4.565676 (-2.810768) | 0.064585 / 0.424275 (-0.359690) | 0.004992 / 0.007607 (-0.002615) | 0.349149 / 0.226044 (0.123104) | 3.385906 / 2.268929 (1.116977) | 1.940644 / 55.444624 (-53.503980) | 1.638300 / 6.876477 (-5.238177) | 1.649939 / 2.142072 (-0.492133) | 0.645680 / 4.805227 (-4.159547) | 0.118080 / 6.500664 (-6.382584) | 0.040643 / 0.075469 (-0.034826) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969965 / 1.841788 (-0.871822) | 12.099766 / 8.074308 (4.025457) | 10.550650 / 10.191392 (0.359258) | 0.131736 / 0.680424 (-0.548688) | 0.015483 / 0.534201 (-0.518718) | 0.289231 / 0.579283 (-0.290052) | 0.287505 / 0.434364 (-0.146858) | 0.327326 / 0.540337 (-0.213011) | 0.570364 / 1.386936 (-0.816572) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#533c38cef16111e9e8154eeb76c207f1f4936ddf \"CML watermark\")\n"
] | 1,702,629,511,000 | 1,702,640,927,000 | 1,702,640,558,000 | CONTRIBUTOR | null | see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6499/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6499.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6499",
"merged_at": "2023-12-15T11:42:38",
"patch_url": "https://github.com/huggingface/datasets/pull/6499.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6499"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6498/comments | https://api.github.com/repos/huggingface/datasets/issues/6498/events | https://github.com/huggingface/datasets/pull/6498 | 2,042,075,969 | PR_kwDODunzps5iBcFj | 6,498 | Fallback on dataset script if user wants to load default config | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> I was just thinking: what if the user does not pass a config name and the dataset has only a config with a name different from \"default\"?\r\n\r\nYou mean if there is a DEFAULT_CONFIG_NAME defined in the script but the dataset only has one configuration ? We can't easily get the number of configs without running the python code so I don't think we can support detect this case\r\n",
"Most datasets with a script don't define DEFAULT_CONFIG_NAME if there is only one configuration anyway.\r\n\r\nSo there is no issue e.g. for `squad`",
"> I was trying to mean the case where DEFAULT_CONFIG_NAME is None but there is only a single config in BUILDER_CONFIGS, with a name different from \"default\".\r\n\r\nIn this case we can detect if \"DEFAULT_CONFIG_NAME\" is not mentioned and use the Parquet export. If it is mentioned (and maybe it is set to None or to the single config) I consider that it may have multiple configs and fall back on using the script",
"... but the user does not pass the config name.",
"In this case we load the single configuration (this is how a DatasetBuilder works)",
"see \r\n\r\nhttps://github.com/huggingface/datasets/blob/2feaa589de86dd85941301fc8c3fa091731a67c0/src/datasets/builder.py#L532-L532",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005122 / 0.011353 (-0.006231) | 0.003565 / 0.011008 (-0.007443) | 0.062706 / 0.038508 (0.024198) | 0.049314 / 0.023109 (0.026205) | 0.247325 / 0.275898 (-0.028573) | 0.269788 / 0.323480 (-0.053692) | 0.003895 / 0.007986 (-0.004090) | 0.002788 / 0.004328 (-0.001540) | 0.048615 / 0.004250 (0.044365) | 0.037591 / 0.037052 (0.000539) | 0.253495 / 0.258489 (-0.004994) | 0.281200 / 0.293841 (-0.012641) | 0.027712 / 0.128546 (-0.100834) | 0.010901 / 0.075646 (-0.064745) | 0.205577 / 0.419271 (-0.213694) | 0.035989 / 0.043533 (-0.007544) | 0.252978 / 0.255139 (-0.002161) | 0.268042 / 0.283200 (-0.015157) | 0.017857 / 0.141683 (-0.123826) | 1.096633 / 1.452155 (-0.355521) | 1.147026 / 1.492716 (-0.345691) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095609 / 0.018006 (0.077603) | 0.311941 / 0.000490 (0.311451) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019042 / 0.037411 (-0.018369) | 0.060549 / 0.014526 (0.046023) | 0.074761 / 0.176557 (-0.101796) | 0.121729 / 0.737135 (-0.615406) | 0.075661 / 0.296338 (-0.220677) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284774 / 0.215209 (0.069565) | 2.764576 / 2.077655 (0.686921) | 1.489926 / 1.504120 (-0.014194) | 1.387276 / 1.541195 (-0.153919) | 1.400931 / 
1.468490 (-0.067559) | 0.555623 / 4.584777 (-4.029154) | 2.409488 / 3.745712 (-1.336224) | 2.781053 / 5.269862 (-2.488808) | 1.750472 / 4.565676 (-2.815204) | 0.062232 / 0.424275 (-0.362043) | 0.004974 / 0.007607 (-0.002633) | 0.336324 / 0.226044 (0.110280) | 3.286619 / 2.268929 (1.017691) | 1.825070 / 55.444624 (-53.619554) | 1.537993 / 6.876477 (-5.338484) | 1.586520 / 2.142072 (-0.555553) | 0.640090 / 4.805227 (-4.165138) | 0.117637 / 6.500664 (-6.383027) | 0.042318 / 0.075469 (-0.033151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964051 / 1.841788 (-0.877736) | 11.706259 / 8.074308 (3.631951) | 10.752311 / 10.191392 (0.560919) | 0.128117 / 0.680424 (-0.552307) | 0.014001 / 0.534201 (-0.520200) | 0.286255 / 0.579283 (-0.293028) | 0.263810 / 0.434364 (-0.170554) | 0.329347 / 0.540337 (-0.210991) | 0.437349 / 1.386936 (-0.949587) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003586 / 0.011008 (-0.007422) | 0.049339 / 0.038508 (0.010831) | 0.051287 / 0.023109 (0.028178) | 0.274397 / 0.275898 (-0.001501) | 0.292977 / 0.323480 (-0.030503) | 0.004029 / 0.007986 (-0.003957) | 0.002727 / 0.004328 (-0.001602) | 0.048779 / 0.004250 (0.044528) | 0.040075 / 0.037052 (0.003022) | 0.277676 / 0.258489 (0.019187) | 0.301963 / 0.293841 (0.008122) | 0.029340 / 0.128546 (-0.099206) | 0.010714 / 0.075646 (-0.064932) | 0.057253 / 0.419271 (-0.362018) | 0.033426 / 0.043533 (-0.010107) | 0.276673 / 0.255139 (0.021534) | 0.291053 / 0.283200 (0.007854) | 0.017660 / 0.141683 (-0.124023) | 1.122354 / 1.452155 (-0.329800) | 1.180381 / 1.492716 (-0.312335) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091903 / 0.018006 (0.073897) | 0.300720 / 0.000490 (0.300231) | 0.000288 / 0.000200 (0.000088) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021521 / 0.037411 (-0.015890) | 0.068233 / 0.014526 (0.053707) | 0.081245 / 0.176557 (-0.095312) | 0.119996 / 0.737135 (-0.617139) | 0.082483 / 0.296338 (-0.213856) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302776 / 0.215209 (0.087567) | 2.950776 / 2.077655 (0.873122) | 1.631032 / 1.504120 (0.126912) | 1.502021 / 1.541195 (-0.039174) | 1.514213 / 1.468490 (0.045723) | 0.578246 / 4.584777 (-4.006531) | 2.443768 / 3.745712 (-1.301944) | 2.827811 / 5.269862 (-2.442051) | 1.771529 / 4.565676 (-2.794148) | 0.064479 / 0.424275 (-0.359797) | 0.005061 / 0.007607 (-0.002546) | 0.350966 / 0.226044 (0.124922) | 3.458616 / 2.268929 (1.189687) | 1.967917 / 55.444624 (-53.476707) | 1.704661 / 6.876477 (-5.171815) | 1.698895 / 2.142072 (-0.443178) | 0.663259 / 4.805227 (-4.141968) | 0.122140 / 6.500664 (-6.378525) | 0.041099 / 0.075469 (-0.034371) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972080 / 1.841788 (-0.869708) | 12.123286 / 8.074308 (4.048978) | 10.819854 / 10.191392 (0.628462) | 0.131486 / 0.680424 (-0.548938) | 0.015785 / 0.534201 (-0.518416) | 0.290048 / 0.579283 (-0.289235) | 0.277822 / 0.434364 (-0.156542) | 0.325949 / 0.540337 (-0.214388) | 0.577681 / 1.386936 (-0.809255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30f6a2d9af183eba4501f0b8d90e9200bdca6bb1 \"CML watermark\")\n"
] | 1,702,572,361,000 | 1,702,646,216,000 | 1,702,645,848,000 | MEMBER | null | Right now this code is failing on `main`:
```python
load_dataset("openbookqa")
```
This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one.
I fixed this by simply falling back on using the dataset script (which tells the user to pass `trust_remote_code=True`):
```python
load_dataset("openbookqa", trust_remote_code=True)
```
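For reference, a minimal sketch of when the Parquet export can be used (a hypothetical helper for illustration, not the actual `load.py` logic); the note below spells out the same rule:
```python
def can_use_parquet_export(config_name) -> bool:
    # With an explicit config name the Parquet export can be used directly;
    # without one we would need to know the default config, which the export
    # doesn't record, so we fall back on the dataset script instead.
    return config_name is not None
```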
Note that if the user happened to specify a config name, I don't fall back on the script, since we can use the Parquet export in this case (no need to know which config is the default):
```python
load_dataset("openbookqa", "main")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6498/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6498.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6498",
"merged_at": "2023-12-15T13:10:48",
"patch_url": "https://github.com/huggingface/datasets/pull/6498.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6498"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6497/comments | https://api.github.com/repos/huggingface/datasets/issues/6497/events | https://github.com/huggingface/datasets/issues/6497 | 2,041,994,274 | I_kwDODunzps55tlwi | 6,497 | Support setting a default config name in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 1,702,569,543,000 | 1,702,628,780,000 | null | MEMBER | null | In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6497/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6496/comments | https://api.github.com/repos/huggingface/datasets/issues/6496/events | https://github.com/huggingface/datasets/issues/6496 | 2,041,589,386 | I_kwDODunzps55sC6K | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | {
"avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4",
"events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}",
"followers_url": "https://api.github.com/users/GeorgesLorre/followers",
"following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}",
"gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeorgesLorre",
"id": 35808396,
"login": "GeorgesLorre",
"node_id": "MDQ6VXNlcjM1ODA4Mzk2",
"organizations_url": "https://api.github.com/users/GeorgesLorre/orgs",
"received_events_url": "https://api.github.com/users/GeorgesLorre/received_events",
"repos_url": "https://api.github.com/users/GeorgesLorre/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeorgesLorre"
} | [] | open | false | null | [] | null | [
"I transferred from datasets-server, since the issue is more about `datasets` and the integration with `huggingface_hub`."
] | 1,702,553,094,000 | 1,702,556,541,000 | null | NONE | null | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co./api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets
huggingface_hub.login(token=os.getenv("HF_TOKEN"))
data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)
schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
I would expect the write to the Hub to succeed without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
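A possible workaround, assuming the 412 indeed comes from the Dask workers racing to commit against the same parent revision (a hedged sketch reusing `dataframe` and `schema` from the reproducer above, not a confirmed fix): write the Parquet files locally first and push them in a single commit with `huggingface_hub`:
```python
# Hedged sketch: avoid concurrent commits by uploading everything in one commit.
# "local_out" and the repo id are placeholders.
import dask.dataframe as dd
from huggingface_hub import HfApi

dd.to_parquet(dataframe, path="local_out/data", schema=schema)

api = HfApi()
api.upload_folder(
    folder_path="local_out/data",
    path_in_repo="data",
    repo_id="GLorr/test-dask",
    repo_type="dataset",
)
```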
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6496/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6494/comments | https://api.github.com/repos/huggingface/datasets/issues/6494/events | https://github.com/huggingface/datasets/issues/6494 | 2,039,684,839 | I_kwDODunzps55kx7n | 6,494 | Image Data loaded Twice | {
"avatar_url": "https://avatars.githubusercontent.com/u/28867010?v=4",
"events_url": "https://api.github.com/users/baowuzhida/events{/privacy}",
"followers_url": "https://api.github.com/users/baowuzhida/followers",
"following_url": "https://api.github.com/users/baowuzhida/following{/other_user}",
"gists_url": "https://api.github.com/users/baowuzhida/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/baowuzhida",
"id": 28867010,
"login": "baowuzhida",
"node_id": "MDQ6VXNlcjI4ODY3MDEw",
"organizations_url": "https://api.github.com/users/baowuzhida/orgs",
"received_events_url": "https://api.github.com/users/baowuzhida/received_events",
"repos_url": "https://api.github.com/users/baowuzhida/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/baowuzhida/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baowuzhida/subscriptions",
"type": "User",
"url": "https://api.github.com/users/baowuzhida"
} | [] | open | false | null | [] | null | [] | 1,702,473,102,000 | 1,702,473,102,000 | null | NONE | null | ### Describe the bug
![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6)
While following https://huggingface.co./docs/datasets/image_load to load image data from a folder, I noticed that each image was read twice in the returned data. As you can see in the attached screenshot, there are only four images in the train folder, but loading brings up eight images.
### Steps to reproduce the bug
from datasets import Dataset, load_dataset
dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# print(dataset["train"][0]["image"] == dataset["train"][1]["image"])
print(dataset)
print(dataset["train"]["image"])
print(len(dataset["train"]["image"]))
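One way to see which files are actually being picked up (a hedged debugging snippet; `Image(decode=False)` keeps the underlying file paths instead of decoding the images):
```python
from datasets import Image, load_dataset

dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
undecoded = dataset["train"].cast_column("image", Image(decode=False))
# Each entry is a dict with "path" and "bytes"; duplicated paths would explain the doubled rows.
print([example["path"] for example in undecoded["image"]])
```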
### Expected behavior
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 8
})
})
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>]
8
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6494/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6495/comments | https://api.github.com/repos/huggingface/datasets/issues/6495/events | https://github.com/huggingface/datasets/issues/6495 | 2,039,708,529 | I_kwDODunzps55k3tx | 6,495 | Newline characters don't behave as expected when calling dataset.info | {
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona"
} | [] | open | false | null | [] | null | [] | 1,702,422,471,000 | 1,702,473,862,000 | null | NONE | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@marios
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[Source](https://huggingface.co./docs/datasets/v2.2.1/en/access)
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
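For what it's worth, the `\n` characters are only shown escaped because this is the dataclass `repr()`; printing an individual string field renders the newlines normally (a small sketch using the dataset loaded above):
```python
# The repr escapes newlines, but the underlying strings contain real line breaks:
print(dataset.info.description)
print(dataset.info.citation)
```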
### Expected behavior
```
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n',
citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398',
license='',
features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')},
download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}},
download_size=1494541,
post_processing_size=None,
dataset_size=1492156,
size_in_bytes=2986697
) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6495/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6493/comments | https://api.github.com/repos/huggingface/datasets/issues/6493/events | https://github.com/huggingface/datasets/pull/6493 | 2,038,221,490 | PR_kwDODunzps5h0XJK | 6,493 | Lazy data files resolution and offline cache reload | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6493). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Naive question: is there any breaking change when loading?\r\n\r\nNo breaking changes except that the cache folders are different\r\n\r\ne.g. for glue sst2 (has parquet export)\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/fd8e86499fa5c264fcaad392a8f49ddf58bf4037\r\nOn main\r\n~/.cache/huggingface/datasets/glue/sst2/0.0.0/74a75637ac4acd3f\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad\r\n```\r\n\r\ne.g. for wikimedia/wikipedia 20231101.ab (has metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/4cb9b0d719291f1a10f96f67d609c5d442980dc9\r\nOn main (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/cfa627e27933df13\r\nOn 2.15.0 (takes ages to load)\r\n~/.cache/huggingface/datasets/wikimedia___wikipedia/20231101.ab/0.0.0/e92ee7a91c466564\r\n```\r\n\r\n\r\ne.g. for lhoestq/demo1 (no metadata configs)\r\n\r\n\r\n```\r\nThis branch (new format is config/version/commit_sha)\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default/0.0.0/87ecf163bedca9d80598b528940a9c4f99e14c11\r\nOn main\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-8a4a0b7a240d3c5e/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\nOn 2.15.0\r\n~/.cache/huggingface/datasets/lhoestq___demo1/default-59d4029e0bb36ae0/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d\r\n```",
"There was a last bug I just fixed: if you modify a dataset and reload it from the hub it won't download the new version - I think I need to use another hash to name the cache directory\r\nedit: fixed",
"I switched to using the git commit sha for the cache directory, which is now `config/version/commit_sha` :) much cleaner than before.\r\n\r\nAnd for local file it's a hash that takes into account the resolved files (and their last modified dates)",
"I also ran the `transformers` CI on this branch and it's green",
"FYI `huggingface_hub` will have a release on tuesday/wednesday (will speed up load_dataset data files resolution which is now needed for datasets loaded from parquet export) so we can aim on merging this around the same time and do a release on thursday"
] | 1,702,401,317,000 | 1,702,664,178,000 | null | MEMBER | null | Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459
This PR should be merged instead of the two individually, since they are conflicting
## Offline cache reload
It can reload datasets that were pushed to the Hub if they exist in the cache.
Example:
```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
and later, without connection:
```python
>>> load_dataset("lhoestq/tmp")
Using the latest cached version of the dataset since lhoestq/tmp couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/default/0.0.0/da0e902a945afeb9 (last modified on Wed Dec 13 14:55:52 2023).
DatasetDict({
train: Dataset({
features: ['a'],
num_rows: 2
})
})
```
- Updated `CachedDatasetModuleFactory` to look for datasets in the cache at `<namespace>___<dataset_name>/<config_id>`
- Since the metadata configs parameters are not available in offline mode, we don't know which folder to load (config_id and hash change), so I simply load the latest one
- I instantiate a BuilderConfig even if there is no metadata config with the right config_name
- Its config_id is equal to the config_name to be able to retrieve it in the cache (no more suffix for configs from metadata configs)
- We can reload this config if offline mode by specifying the right config_name (same as online !)
- Consequences of this change:
- Only when there are user's parameters it creates a custom builder config with config_id = config_name + user parameters hash
- the hash used to name the cache folder takes into account the metadata config and the dataset info, so that the right cache can be reloaded when there is internet connection without redownloading the data or resolving the data files. For local directories I hash the builder configs and dataset info, and for datasets on the hub I use the commit sha as hash.
- cache directories now look like `config/version/commit_sha` for hub datasets which is clean :)
Fix https://github.com/huggingface/datasets/issues/3547
## Lazy data files resolution
This makes this code run in 2 sec instead of >10 sec:
```python
from datasets import load_dataset
ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```
For some datasets with many configs and files it can be up to 100x faster.
This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.
The data files are only resolved in the builder `__init__`. To do so, I added `DataFilesPatternsList` and `DataFilesPatternsDict`, which have a `.resolve()` method that returns resolved `DataFilesList` and `DataFilesDict` objects.
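A rough, hypothetical sketch of the idea (simplified; the real classes and signatures in `datasets.data_files` may differ):
```python
# Patterns are stored cheaply and only expanded into concrete files when
# .resolve() is called, i.e. inside the builder's __init__ rather than at
# load_dataset() module-resolution time.
import fsspec

class DataFilesPatternsList(list):
    def resolve(self, base_path):
        fs, _, _ = fsspec.get_fs_token_paths(base_path)
        resolved = []
        for pattern in self:
            resolved.extend(fs.glob(f"{base_path.rstrip('/')}/{pattern}"))
        return resolved  # stands in for a resolved DataFilesList

class DataFilesPatternsDict(dict):
    def resolve(self, base_path):
        return {split: patterns.resolve(base_path) for split, patterns in self.items()}
```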
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6493/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6493",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6493"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6492/comments | https://api.github.com/repos/huggingface/datasets/issues/6492/events | https://github.com/huggingface/datasets/pull/6492 | 2,037,987,267 | PR_kwDODunzps5hzjhQ | 6,492 | Make push_to_hub return CommitInfo | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6492). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This PR is ready to review @huggingface/datasets.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005093 / 0.011353 (-0.006259) | 0.003695 / 0.011008 (-0.007313) | 0.064648 / 0.038508 (0.026140) | 0.054677 / 0.023109 (0.031568) | 0.242007 / 0.275898 (-0.033891) | 0.265216 / 0.323480 (-0.058264) | 0.003847 / 0.007986 (-0.004138) | 0.003773 / 0.004328 (-0.000556) | 0.048595 / 0.004250 (0.044345) | 0.038122 / 0.037052 (0.001070) | 0.245698 / 0.258489 (-0.012791) | 0.278095 / 0.293841 (-0.015746) | 0.027488 / 0.128546 (-0.101058) | 0.011002 / 0.075646 (-0.064644) | 0.211443 / 0.419271 (-0.207829) | 0.035664 / 0.043533 (-0.007869) | 0.244754 / 0.255139 (-0.010385) | 0.261078 / 0.283200 (-0.022121) | 0.017768 / 0.141683 (-0.123915) | 1.130765 / 1.452155 (-0.321390) | 1.189825 / 1.492716 (-0.302891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093027 / 0.018006 (0.075021) | 0.302193 / 0.000490 (0.301703) | 0.000207 / 0.000200 (0.000007) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018413 / 0.037411 (-0.018999) | 0.062715 / 0.014526 (0.048190) | 0.073287 / 0.176557 (-0.103269) | 0.120394 / 0.737135 (-0.616741) | 0.077573 / 0.296338 (-0.218765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284445 / 0.215209 (0.069236) | 2.780718 / 2.077655 (0.703063) | 1.460988 / 1.504120 (-0.043132) | 1.345799 / 1.541195 (-0.195395) | 1.399892 / 
1.468490 (-0.068598) | 0.576051 / 4.584777 (-4.008726) | 2.418792 / 3.745712 (-1.326921) | 2.901330 / 5.269862 (-2.368532) | 1.765083 / 4.565676 (-2.800593) | 0.063555 / 0.424275 (-0.360720) | 0.004991 / 0.007607 (-0.002616) | 0.339657 / 0.226044 (0.113613) | 3.372963 / 2.268929 (1.104034) | 1.853667 / 55.444624 (-53.590958) | 1.552022 / 6.876477 (-5.324454) | 1.616452 / 2.142072 (-0.525620) | 0.652309 / 4.805227 (-4.152919) | 0.121125 / 6.500664 (-6.379539) | 0.042420 / 0.075469 (-0.033049) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954514 / 1.841788 (-0.887274) | 11.853736 / 8.074308 (3.779428) | 10.624571 / 10.191392 (0.433179) | 0.134118 / 0.680424 (-0.546306) | 0.014200 / 0.534201 (-0.520001) | 0.290106 / 0.579283 (-0.289177) | 0.270637 / 0.434364 (-0.163727) | 0.336155 / 0.540337 (-0.204182) | 0.443962 / 1.386936 (-0.942974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005282 / 0.011353 (-0.006071) | 0.003526 / 0.011008 (-0.007482) | 0.048994 / 0.038508 (0.010486) | 0.055345 / 0.023109 (0.032236) | 0.271587 / 0.275898 (-0.004311) | 0.294676 / 0.323480 (-0.028804) | 0.003989 / 0.007986 (-0.003996) | 0.002594 / 0.004328 (-0.001735) | 0.048310 / 0.004250 (0.044059) | 0.039945 / 0.037052 (0.002893) | 0.277304 / 0.258489 (0.018815) | 0.312017 / 0.293841 (0.018176) | 0.028364 / 0.128546 (-0.100182) | 0.010683 / 0.075646 (-0.064963) | 0.057990 / 0.419271 (-0.361281) | 0.032418 / 0.043533 (-0.011115) | 0.273835 / 0.255139 (0.018697) | 0.288585 / 0.283200 (0.005385) | 0.018964 / 0.141683 (-0.122719) | 1.148863 / 1.452155 (-0.303292) | 1.195684 / 1.492716 (-0.297032) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091967 / 0.018006 (0.073960) | 0.303236 / 0.000490 (0.302747) | 0.000214 / 0.000200 (0.000015) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021960 / 0.037411 (-0.015452) | 0.068744 / 0.014526 (0.054218) | 0.081167 / 0.176557 (-0.095390) | 0.119623 / 0.737135 (-0.617513) | 0.084965 / 0.296338 (-0.211373) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297740 / 0.215209 (0.082531) | 2.924856 / 2.077655 (0.847201) | 1.602080 / 1.504120 (0.097960) | 1.494083 / 1.541195 (-0.047112) | 1.544662 / 1.468490 (0.076172) | 0.581212 / 4.584777 (-4.003565) | 2.451064 / 3.745712 (-1.294648) | 2.875213 / 5.269862 (-2.394649) | 1.780777 / 4.565676 (-2.784900) | 0.063751 / 0.424275 (-0.360524) | 0.004967 / 0.007607 (-0.002641) | 0.350321 / 0.226044 (0.124276) | 3.449585 / 2.268929 (1.180657) | 1.977666 / 55.444624 (-53.466958) | 1.685125 / 6.876477 (-5.191351) | 1.734466 / 2.142072 (-0.407606) | 0.657477 / 4.805227 (-4.147750) | 0.116767 / 6.500664 (-6.383898) | 0.041400 / 0.075469 (-0.034069) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985751 / 1.841788 (-0.856037) | 12.300065 / 8.074308 (4.225756) | 10.608238 / 10.191392 (0.416846) | 0.139907 / 0.680424 (-0.540517) | 0.015379 / 0.534201 (-0.518822) | 0.283528 / 0.579283 (-0.295755) | 0.278751 / 0.434364 (-0.155613) | 0.328811 / 0.540337 (-0.211527) | 0.584041 / 1.386936 (-0.802895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef0f986518bd252c5314a7e3a419dedcbb166630 \"CML watermark\")\n"
] | 1,702,394,296,000 | 1,702,477,741,000 | 1,702,477,361,000 | MEMBER | null | Make `push_to_hub` return `CommitInfo`.
This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID.
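For instance (a hedged usage sketch; `pr_url` and `pr_num` are fields of `huggingface_hub.CommitInfo`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
commit_info = ds.push_to_hub("username/my-dataset", create_pr=True)  # placeholder repo id
print(commit_info.pr_url)  # URL of the created pull request
print(commit_info.pr_num)  # its number
```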
CC: @severo for the use case in https://huggingface.co./datasets/jmhessel/newyorker_caption_contest/discussions/4 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6492/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6492.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6492",
"merged_at": "2023-12-13T14:22:41",
"patch_url": "https://github.com/huggingface/datasets/pull/6492.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6492"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6491/comments | https://api.github.com/repos/huggingface/datasets/issues/6491/events | https://github.com/huggingface/datasets/pull/6491 | 2,037,690,643 | PR_kwDODunzps5hyiTY | 6,491 | Fix metrics dead link | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702,385,509,000 | 1,702,385,905,000 | null | CONTRIBUTOR | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6491/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6491",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6491"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6490/comments | https://api.github.com/repos/huggingface/datasets/issues/6490/events | https://github.com/huggingface/datasets/issues/6490 | 2,037,204,892 | I_kwDODunzps55bUec | 6,490 | `load_dataset(...,save_infos=True)` not working without loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4",
"events_url": "https://api.github.com/users/morganveyret/events{/privacy}",
"followers_url": "https://api.github.com/users/morganveyret/followers",
"following_url": "https://api.github.com/users/morganveyret/following{/other_user}",
"gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/morganveyret",
"id": 114978051,
"login": "morganveyret",
"node_id": "U_kgDOBtptAw",
"organizations_url": "https://api.github.com/users/morganveyret/orgs",
"received_events_url": "https://api.github.com/users/morganveyret/received_events",
"repos_url": "https://api.github.com/users/morganveyret/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions",
"type": "User",
"url": "https://api.github.com/users/morganveyret"
} | [] | open | false | null | [] | null | [
"Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema "
] | 1,702,368,558,000 | 1,702,370,182,000 | null | NONE | null | ### Describe the bug
It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script.
After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory.
Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`).
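To illustrate the mechanism with the standard library only (an illustrative sketch, not the actual `datasets` internals): a dynamically created subclass that keeps its parent's `__module__` makes `inspect` resolve to the parent module's source file.
```python
import inspect
import json

# Dynamically create a subclass but keep the parent's __module__,
# roughly what happens when a packaged builder is configured at runtime.
Dynamic = type("Dynamic", (json.JSONDecoder,), {"__module__": json.JSONDecoder.__module__})

# inspect resolves the class to the parent module's file, so anything
# "saved next to the builder" ends up inside the installed package tree.
print(inspect.getfile(Dynamic))  # e.g. .../lib/python3.10/json/decoder.py
```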
### Steps to reproduce the bug
1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True`
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`)
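A minimal sketch of the steps above (the local path and the tiny JSON file are assumptions made for illustration):
```python
import json
import pathlib

from datasets import load_dataset

root = pathlib.Path("my_dataset")  # a local dataset without a loading script
root.mkdir(exist_ok=True)
(root / "data.json").write_text(json.dumps([{"text": "hello"}, {"text": "world"}]))

ds = load_dataset(str(root), save_infos=True)

# Expected: the dataset's own README.md is created/updated with the infos.
# Observed: nothing changes here, and a README.md appears next to the packaged json builder.
print((root / "README.md").exists())
```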
### Expected behavior
The dataset README.md should be updated and no file should be created in the python environment.
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6490/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6489/comments | https://api.github.com/repos/huggingface/datasets/issues/6489/events | https://github.com/huggingface/datasets/issues/6489 | 2,036,743,777 | I_kwDODunzps55Zj5h | 6,489 | load_dataset imageflder for aws s3 path | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/segalinc",
"id": 9353106,
"login": "segalinc",
"node_id": "MDQ6VXNlcjkzNTMxMDY=",
"organizations_url": "https://api.github.com/users/segalinc/orgs",
"received_events_url": "https://api.github.com/users/segalinc/received_events",
"repos_url": "https://api.github.com/users/segalinc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segalinc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/segalinc"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1,702,339,723,000 | 1,702,339,767,000 | null | NONE | null | ### Feature request
I would like to load a dataset from S3 using the imagefolder option, something like:
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
### Motivation
No need for `data_files`.
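A sketch of one possible workaround today (the bucket and prefix are placeholders, `s3fs` credentials are assumed to be configured, and whether `data_files` accepts `s3://` URLs this way may depend on the `datasets` version):
```python
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem()  # credentials picked up from the environment (assumed)
files = ["s3://" + p for p in fs.glob("my-bucket/lsun/train/bedroom/**/*.jpg")]

ds = load_dataset("imagefolder", data_files=files, streaming=True)
```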
### Your contribution
no experience with this | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6489/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6488/comments | https://api.github.com/repos/huggingface/datasets/issues/6488/events | https://github.com/huggingface/datasets/issues/6488 | 2,035,899,898 | I_kwDODunzps55WV36 | 6,488 | 429 Client Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sasaadi",
"id": 7882383,
"login": "sasaadi",
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sasaadi"
} | [] | open | false | null | [] | null | [
"Transferring repos as this is a datasets issue "
] | 1,702,307,161,000 | 1,702,308,863,000 | null | NONE | null | Hello, I was downloading the following dataset and after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it?
Thanks
Dataset:
https://huggingface.co./datasets/cerebras/SlimPajama-627B
Error:
`requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co./datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
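A hedged mitigation sketch while waiting for a proper fix (not an official recommendation; it simply retries with exponential backoff and relies on the local cache keeping the files that were already downloaded):
```python
import time

from datasets import load_dataset

ds = None
for attempt in range(5):
    try:
        ds = load_dataset("cerebras/SlimPajama-627B", split="train")
        break
    except Exception as err:  # e.g. requests.exceptions.HTTPError: 429
        wait = 30 * 2 ** attempt
        print(f"Download failed with {err!r}, retrying in {wait} seconds")
        time.sleep(wait)
```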
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6488/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6487/comments | https://api.github.com/repos/huggingface/datasets/issues/6487/events | https://github.com/huggingface/datasets/pull/6487 | 2,035,424,254 | PR_kwDODunzps5hqyfV | 6,487 | Update builder hash with info | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Closing this one in favor of https://github.com/huggingface/datasets/pull/6458/commits/565c294fc12bc547730a023a610ed4f92313d8fb in https://github.com/huggingface/datasets/pull/6458"
] | 1,702,292,956,000 | 1,702,294,894,000 | 1,702,294,894,000 | MEMBER | null | Currently if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change.
This is problematic because you want to regenerate a dataset if you change the features or the split sizes (e.g. after `push_to_hub`).
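Roughly, the idea is to fold the dataset info into the builder's cache fingerprint so that editing the YAML metadata invalidates the cached Arrow files. A simplified sketch (not the actual implementation in this PR; the names below are illustrative):
```python
import json
from hashlib import sha256

def builder_fingerprint(config_name: str, version: str, dataset_info: dict) -> str:
    # Hash the config identity together with the declared dataset info,
    # so any change to the info produces a new cache directory suffix.
    payload = json.dumps(
        {"config_name": config_name, "version": version, "info": dataset_info},
        sort_keys=True,
    )
    return sha256(payload.encode("utf-8")).hexdigest()[:16]

# Changing e.g. the declared feature types yields a different fingerprint.
print(builder_fingerprint("default", "1.0.0", {"features": {"text": "string"}}))
print(builder_fingerprint("default", "1.0.0", {"features": {"text": "large_string"}}))
```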
Ideally we should take the resolved files into account as well but this will be for another PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6487/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6487.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6487",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6487.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6487"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6486/comments | https://api.github.com/repos/huggingface/datasets/issues/6486/events | https://github.com/huggingface/datasets/pull/6486 | 2,035,206,206 | PR_kwDODunzps5hqCSc | 6,486 | Fix docs phrasing about supported formats when sharing a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6486). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005042 / 0.011353 (-0.006311) | 0.003452 / 0.011008 (-0.007557) | 0.061845 / 0.038508 (0.023337) | 0.052042 / 0.023109 (0.028933) | 0.241791 / 0.275898 (-0.034107) | 0.264639 / 0.323480 (-0.058841) | 0.003940 / 0.007986 (-0.004045) | 0.002768 / 0.004328 (-0.001560) | 0.047851 / 0.004250 (0.043600) | 0.037599 / 0.037052 (0.000547) | 0.251462 / 0.258489 (-0.007028) | 0.274737 / 0.293841 (-0.019104) | 0.027723 / 0.128546 (-0.100823) | 0.010510 / 0.075646 (-0.065137) | 0.205581 / 0.419271 (-0.213691) | 0.035504 / 0.043533 (-0.008029) | 0.242380 / 0.255139 (-0.012759) | 0.259791 / 0.283200 (-0.023409) | 0.017752 / 0.141683 (-0.123931) | 1.089289 / 1.452155 (-0.362865) | 1.161958 / 1.492716 (-0.330759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094288 / 0.018006 (0.076282) | 0.303253 / 0.000490 (0.302763) | 0.000216 / 0.000200 (0.000016) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018496 / 0.037411 (-0.018915) | 0.060411 / 0.014526 (0.045885) | 0.074294 / 0.176557 (-0.102262) | 0.122934 / 0.737135 (-0.614201) | 0.074710 / 0.296338 (-0.221629) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286394 / 0.215209 (0.071185) | 2.806145 / 2.077655 (0.728490) | 1.497071 / 1.504120 (-0.007049) | 1.362254 / 1.541195 (-0.178940) | 1.389642 / 
1.468490 (-0.078848) | 0.554503 / 4.584777 (-4.030274) | 2.348029 / 3.745712 (-1.397684) | 2.780862 / 5.269862 (-2.489000) | 1.728058 / 4.565676 (-2.837619) | 0.062617 / 0.424275 (-0.361658) | 0.004901 / 0.007607 (-0.002707) | 0.346267 / 0.226044 (0.120223) | 3.363744 / 2.268929 (1.094815) | 1.826994 / 55.444624 (-53.617630) | 1.560656 / 6.876477 (-5.315820) | 1.561083 / 2.142072 (-0.580990) | 0.643395 / 4.805227 (-4.161832) | 0.116206 / 6.500664 (-6.384458) | 0.042008 / 0.075469 (-0.033461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.953416 / 1.841788 (-0.888371) | 11.461665 / 8.074308 (3.387357) | 10.623865 / 10.191392 (0.432473) | 0.128071 / 0.680424 (-0.552353) | 0.014277 / 0.534201 (-0.519924) | 0.288810 / 0.579283 (-0.290474) | 0.267575 / 0.434364 (-0.166788) | 0.327422 / 0.540337 (-0.212916) | 0.435151 / 1.386936 (-0.951785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005242 / 0.011353 (-0.006111) | 0.003515 / 0.011008 (-0.007493) | 0.048483 / 0.038508 (0.009975) | 0.051684 / 0.023109 (0.028575) | 0.276564 / 0.275898 (0.000666) | 0.297582 / 0.323480 (-0.025898) | 0.004117 / 0.007986 (-0.003869) | 0.002610 / 0.004328 (-0.001719) | 0.047811 / 0.004250 (0.043561) | 0.040622 / 0.037052 (0.003569) | 0.280265 / 0.258489 (0.021776) | 0.311719 / 0.293841 (0.017878) | 0.028811 / 0.128546 (-0.099735) | 0.010600 / 0.075646 (-0.065047) | 0.056660 / 0.419271 (-0.362611) | 0.032638 / 0.043533 (-0.010894) | 0.276434 / 0.255139 (0.021295) | 0.299095 / 0.283200 (0.015896) | 0.018483 / 0.141683 (-0.123200) | 1.156382 / 1.452155 (-0.295773) | 1.252205 / 1.492716 (-0.240511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097868 / 0.018006 (0.079862) | 0.309438 / 0.000490 (0.308948) | 0.000229 / 0.000200 (0.000029) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021838 / 0.037411 (-0.015573) | 0.068358 / 0.014526 (0.053832) | 0.080432 / 0.176557 (-0.096125) | 0.119788 / 0.737135 (-0.617348) | 0.081742 / 0.296338 (-0.214597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301239 / 0.215209 (0.086030) | 2.962242 / 2.077655 (0.884587) | 1.693918 / 1.504120 (0.189798) | 1.573663 / 1.541195 (0.032468) | 1.583125 / 1.468490 (0.114635) | 0.557267 / 4.584777 (-4.027510) | 2.440048 / 3.745712 (-1.305664) | 2.727572 / 5.269862 (-2.542290) | 1.713557 / 4.565676 (-2.852120) | 0.062526 / 0.424275 (-0.361749) | 0.004982 / 0.007607 (-0.002625) | 0.353850 / 0.226044 (0.127806) | 3.530887 / 2.268929 (1.261958) | 2.047864 / 55.444624 (-53.396761) | 1.770776 / 6.876477 (-5.105701) | 1.757621 / 2.142072 (-0.384451) | 0.633847 / 4.805227 (-4.171381) | 0.114055 / 6.500664 (-6.386609) | 0.040078 / 0.075469 (-0.035391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983721 / 1.841788 (-0.858066) | 11.896537 / 8.074308 (3.822229) | 10.529883 / 10.191392 (0.338491) | 0.129593 / 0.680424 (-0.550831) | 0.016213 / 0.534201 (-0.517988) | 0.289623 / 0.579283 (-0.289660) | 0.280073 / 0.434364 (-0.154291) | 0.327446 / 0.540337 (-0.212892) | 0.574847 / 1.386936 (-0.812089) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2684a98fe38e0c87bb11e050586004108e32b79d \"CML watermark\")\n"
] | 1,702,286,482,000 | 1,702,477,289,000 | 1,702,476,921,000 | MEMBER | null | Fix docs phrasing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6486/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6486.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6486",
"merged_at": "2023-12-13T14:15:21",
"patch_url": "https://github.com/huggingface/datasets/pull/6486.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6486"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6485/comments | https://api.github.com/repos/huggingface/datasets/issues/6485/events | https://github.com/huggingface/datasets/issues/6485 | 2,035,141,884 | I_kwDODunzps55Tcz8 | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | {
"avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4",
"events_url": "https://api.github.com/users/amanyara/events{/privacy}",
"followers_url": "https://api.github.com/users/amanyara/followers",
"following_url": "https://api.github.com/users/amanyara/following{/other_user}",
"gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amanyara",
"id": 73683903,
"login": "amanyara",
"node_id": "MDQ6VXNlcjczNjgzOTAz",
"organizations_url": "https://api.github.com/users/amanyara/orgs",
"received_events_url": "https://api.github.com/users/amanyara/received_events",
"repos_url": "https://api.github.com/users/amanyara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanyara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amanyara"
} | [] | closed | false | null | [] | null | [
"Hi! It seems like the problem is your environment. Maybe this issue can help: https://github.com/pytest-dev/pytest/issues/9519. "
] | 1,702,284,733,000 | 1,702,541,348,000 | 1,702,541,348,000 | NONE | null | ### Describe the bug
It seems that something is wrong with my environment. When I run this code, `import datasets`,
I get this error: `FileNotFoundError: [Errno 2] No such file or directory: 'nul'`
![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d)
![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04)
### Steps to reproduce the bug
1. `import datasets`
### Expected behavior
I just run a single line of code and get stuck on this bug.
### Environment info
OS: Windows10
Datasets==2.15.0
python=3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6485/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6483/comments | https://api.github.com/repos/huggingface/datasets/issues/6483/events | https://github.com/huggingface/datasets/issues/6483 | 2,032,946,981 | I_kwDODunzps55LE8l | 6,483 | Iterable Dataset: rename column clashes with remove column | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | [] | null | [
"Column \"text\" doesn't exist anymore so you can't remove it",
"You can get the expected result by fixing typos in the snippet :)\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset.features) - COLUMNS_TO_KEEP)\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Fixed code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# load LS in streaming mode\r\ndataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# check original features\r\ndataset_features = dataset.features.keys()\r\nprint(\"Original features: \", dataset_features)\r\n\r\n# rename \"text\" -> \"sentence\"\r\ndataset = dataset.rename_column(\"text\", \"sentence\")\r\ndataset_features = dataset.features.keys()\r\n\r\n# remove unwanted columns\r\nCOLUMNS_TO_KEEP = {\"audio\", \"sentence\"}\r\ndataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))\r\n\r\n# stream first sample, should return \"audio\" and \"sentence\" columns\r\nprint(next(iter(dataset)))\r\n```",
"Whoops 😅 Thanks for the swift reply both! Works like a charm!"
] | 1,702,051,890,000 | 1,702,052,836,000 | 1,702,052,824,000 | CONTRIBUTOR | null | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, the combination of renaming and then removing columns in an iterable dataset doesn't work: the removal step still looks for the original `"text"` column, meaning we can't combine the datasets.
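For reference, the intended end state once both streams expose the same columns would look something like the sketch below (the second dataset name is just a placeholder; depending on the sources you may also need to cast the `audio` columns to a common sampling rate):
```python
from datasets import interleave_datasets, load_dataset

KEEP = {"audio", "sentence"}

ds_a = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
ds_a = ds_a.rename_column("text", "sentence")
ds_a = ds_a.remove_columns([c for c in ds_a.features if c not in KEEP])

ds_b = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="validation", streaming=True)
ds_b = ds_b.remove_columns([c for c in ds_b.features if c not in KEEP])

mixed = interleave_datasets([ds_a, ds_b])
print(next(iter(mixed)))
```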
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 17
14 COLUMNS_TO_KEEP = {"audio", "sentence"}
15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))
File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
1350 yield formatter.format_row(pa_table)
1351 return
-> 1353 for key, example in ex_iterable:
1354 if self.features:
1355 # `IterableDataset` automatically fills missing columns with None.
1356 # This is done with `_apply_feature_types_on_example`.
1357 example = _apply_feature_types_on_example(
1358 example, self.features, token_per_repo_id=self._token_per_repo_id
1359 )
File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
650 yield from ArrowExamplesIterable(self._iter_arrow, {})
651 else:
--> 652 yield from self._iter()
File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
727 if self.remove_columns:
728 for c in self.remove_columns:
--> 729 del transformed_example[c]
730 yield key, transformed_example
731 current_idx += 1
KeyError: 'text'
```
=> we see that `datasets` is looking for the column "text", even though we've renamed this to "sentence" and then removed the unwanted "text" column from our dataset.
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6483/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6484/comments | https://api.github.com/repos/huggingface/datasets/issues/6484/events | https://github.com/huggingface/datasets/issues/6484 | 2,033,333,294 | I_kwDODunzps55MjQu | 6,484 | [Feature Request] Dataset versioning | {
"avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4",
"events_url": "https://api.github.com/users/kenfus/events{/privacy}",
"followers_url": "https://api.github.com/users/kenfus/followers",
"following_url": "https://api.github.com/users/kenfus/following{/other_user}",
"gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kenfus",
"id": 47979198,
"login": "kenfus",
"node_id": "MDQ6VXNlcjQ3OTc5MTk4",
"organizations_url": "https://api.github.com/users/kenfus/orgs",
"received_events_url": "https://api.github.com/users/kenfus/received_events",
"repos_url": "https://api.github.com/users/kenfus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenfus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kenfus"
} | [] | open | false | null | [] | null | [
"Hello @kenfus, this is meant to be possible to do yes. Let me ping @lhoestq or @mariosasko from the `datasets` team (`huggingface_hub` is only the underlying library to download files from the Hub but here it looks more like a `datasets` problem). ",
"Hi! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 1,702,051,295,000 | 1,702,322,026,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was not redownloading the data; it was reading in some cached data until I put `download_mode="force_redownload"`, even though the revision was different.
Of course, I may have done something wrong or missed a setting somewhere!
**Describe the solution you'd like**
The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this:
`dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
'kenfus/xy', revision='v1.0.2',
)
```
this downloads the new version and does not load a different revision, and all future `map`, `filter`, etc. operations are done on this dataset and not loaded from a cache produced for a different revision.
- if I rerun the run, the caching should be smart enough in every step to not reuse a mapping operation on a different revision.
**Describe alternatives you've considered**
I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False` everywhere.
**Additional context**
Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.
This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
    DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)
## LOAD DATASET
if os.path.exists(prepared_dataset_path):
    print("Loading prepared dataset from disk...")
    dataset_prepared = load_from_disk(prepared_dataset_path)
else:
    print("Loading dataset from HuggingFace Datasets...")
    dataset = load_dataset(
        PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
    )
    print("Preparing dataset...")
    dataset_prepared = dataset.map(
        prepare_dataset,
        remove_columns=["audio", "transcription"],
        num_proc=os.cpu_count(),
        load_from_cache_file=False,
    )
    dataset_prepared.save_to_disk(prepared_dataset_path)
    del dataset
if CHECK_DATASET:
    ## CHECK DATASET
    dataset_prepared = dataset_prepared.map(
        check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
    )
    dataset_filtered = dataset_prepared.filter(
        lambda example: not example["incorrect_dimension"],
        load_from_cache_file=False,
    )
    for example in dataset_prepared.filter(
        lambda example: example["incorrect_dimension"], load_from_cache_file=False
    ):
        print(example["path"])
    print(
        f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
    )
    print("Number of examples train: ", len(dataset_filtered["train"]))
    print("Number of examples test: ", len(dataset_filtered["test"]))
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6484/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6482/comments | https://api.github.com/repos/huggingface/datasets/issues/6482/events | https://github.com/huggingface/datasets/pull/6482 | 2,032,675,918 | PR_kwDODunzps5hhl23 | 6,482 | Fix max lock length on unix | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I'm getting `AttributeError: module 'os' has no attribute 'statvfs'` on windows - reverting",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005294 / 0.011353 (-0.006059) | 0.003562 / 0.011008 (-0.007446) | 0.062030 / 0.038508 (0.023522) | 0.053335 / 0.023109 (0.030226) | 0.233303 / 0.275898 (-0.042595) | 0.252029 / 0.323480 (-0.071451) | 0.002835 / 0.007986 (-0.005151) | 0.002732 / 0.004328 (-0.001597) | 0.047973 / 0.004250 (0.043723) | 0.038380 / 0.037052 (0.001328) | 0.235028 / 0.258489 (-0.023461) | 0.265555 / 0.293841 (-0.028286) | 0.027136 / 0.128546 (-0.101410) | 0.010806 / 0.075646 (-0.064840) | 0.205040 / 0.419271 (-0.214231) | 0.035063 / 0.043533 (-0.008470) | 0.236351 / 0.255139 (-0.018788) | 0.254556 / 0.283200 (-0.028643) | 0.019528 / 0.141683 (-0.122155) | 1.099012 / 1.452155 (-0.353142) | 1.156250 / 1.492716 (-0.336466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093952 / 0.018006 (0.075946) | 0.304181 / 0.000490 (0.303692) | 0.000227 / 0.000200 (0.000027) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018568 / 0.037411 (-0.018844) | 0.060323 / 0.014526 (0.045798) | 0.073010 / 0.176557 (-0.103546) | 0.121723 / 0.737135 (-0.615412) | 0.075668 / 0.296338 (-0.220670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288429 / 0.215209 (0.073220) | 2.797834 / 2.077655 (0.720180) | 1.480957 / 1.504120 (-0.023163) | 1.360872 / 1.541195 (-0.180323) | 1.406828 / 
1.468490 (-0.061663) | 0.587596 / 4.584777 (-3.997181) | 2.533997 / 3.745712 (-1.211715) | 2.906697 / 5.269862 (-2.363164) | 1.801753 / 4.565676 (-2.763923) | 0.064360 / 0.424275 (-0.359915) | 0.005016 / 0.007607 (-0.002591) | 0.347334 / 0.226044 (0.121290) | 3.426344 / 2.268929 (1.157416) | 1.856014 / 55.444624 (-53.588610) | 1.581774 / 6.876477 (-5.294703) | 1.640036 / 2.142072 (-0.502037) | 0.656096 / 4.805227 (-4.149131) | 0.120212 / 6.500664 (-6.380452) | 0.044003 / 0.075469 (-0.031466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943933 / 1.841788 (-0.897855) | 11.846572 / 8.074308 (3.772263) | 10.330705 / 10.191392 (0.139313) | 0.129767 / 0.680424 (-0.550657) | 0.013508 / 0.534201 (-0.520693) | 0.289672 / 0.579283 (-0.289611) | 0.266427 / 0.434364 (-0.167937) | 0.342766 / 0.540337 (-0.197571) | 0.452068 / 1.386936 (-0.934868) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005308 / 0.011353 (-0.006045) | 0.003712 / 0.011008 (-0.007296) | 0.048848 / 0.038508 (0.010340) | 0.055156 / 0.023109 (0.032047) | 0.271942 / 0.275898 (-0.003956) | 0.293166 / 0.323480 (-0.030314) | 0.004056 / 0.007986 (-0.003930) | 0.002722 / 0.004328 (-0.001606) | 0.048418 / 0.004250 (0.044167) | 0.039320 / 0.037052 (0.002268) | 0.277184 / 0.258489 (0.018695) | 0.312398 / 0.293841 (0.018557) | 0.029392 / 0.128546 (-0.099155) | 0.011314 / 0.075646 (-0.064332) | 0.057883 / 0.419271 (-0.361389) | 0.032603 / 0.043533 (-0.010930) | 0.273025 / 0.255139 (0.017886) | 0.289265 / 0.283200 (0.006065) | 0.017553 / 0.141683 (-0.124129) | 1.127725 / 1.452155 (-0.324430) | 1.202293 / 1.492716 (-0.290423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097179 / 0.018006 (0.079173) | 0.309712 / 0.000490 (0.309222) | 0.000269 / 0.000200 (0.000069) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024742 / 0.037411 (-0.012670) | 0.070097 / 0.014526 (0.055571) | 0.082273 / 0.176557 (-0.094283) | 0.121696 / 0.737135 (-0.615439) | 0.082983 / 0.296338 (-0.213355) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292688 / 0.215209 (0.077479) | 2.853436 / 2.077655 (0.775781) | 1.588999 / 1.504120 (0.084879) | 1.454547 / 1.541195 (-0.086648) | 1.476342 / 1.468490 (0.007852) | 0.559464 / 4.584777 (-4.025313) | 2.564597 / 3.745712 (-1.181115) | 2.900460 / 5.269862 (-2.369402) | 1.782156 / 4.565676 (-2.783520) | 0.061768 / 0.424275 (-0.362507) | 0.005042 / 0.007607 (-0.002565) | 0.345168 / 0.226044 (0.119124) | 3.412273 / 2.268929 (1.143344) | 1.953154 / 55.444624 (-53.491470) | 1.667347 / 6.876477 (-5.209130) | 1.685138 / 2.142072 (-0.456934) | 0.643270 / 4.805227 (-4.161958) | 0.115955 / 6.500664 (-6.384709) | 0.041090 / 0.075469 (-0.034379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976324 / 1.841788 (-0.865464) | 12.252294 / 8.074308 (4.177986) | 10.598062 / 10.191392 (0.406670) | 0.129779 / 0.680424 (-0.550644) | 0.015697 / 0.534201 (-0.518504) | 0.287241 / 0.579283 (-0.292042) | 0.287331 / 0.434364 (-0.147033) | 0.331710 / 0.540337 (-0.208628) | 0.574571 / 1.386936 (-0.812365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#702344140461b7a111139860c944d3dd0a2689e3 \"CML watermark\")\n"
] | 1,702,042,770,000 | 1,702,382,012,000 | 1,702,381,647,000 | MEMBER | null | reported in https://github.com/huggingface/datasets/pull/6482 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6482/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6482",
"merged_at": "2023-12-12T11:47:27",
"patch_url": "https://github.com/huggingface/datasets/pull/6482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6482"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6481/comments | https://api.github.com/repos/huggingface/datasets/issues/6481/events | https://github.com/huggingface/datasets/issues/6481 | 2,032,650,003 | I_kwDODunzps55J8cT | 6,481 | using torchrun, save_to_disk suddenly shows SIGTERM | {
"avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4",
"events_url": "https://api.github.com/users/Ariya12138/events{/privacy}",
"followers_url": "https://api.github.com/users/Ariya12138/followers",
"following_url": "https://api.github.com/users/Ariya12138/following{/other_user}",
"gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ariya12138",
"id": 85916625,
"login": "Ariya12138",
"node_id": "MDQ6VXNlcjg1OTE2NjI1",
"organizations_url": "https://api.github.com/users/Ariya12138/orgs",
"received_events_url": "https://api.github.com/users/Ariya12138/received_events",
"repos_url": "https://api.github.com/users/Ariya12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ariya12138"
} | [] | open | false | null | [] | null | [] | 1,702,041,723,000 | 1,702,041,723,000 | null | NONE | null | ### Describe the bug
When I run my code using the `torchrun` command and it reaches the `save_to_disk` part, I suddenly get the following warning and error messages:
Because the dataset is large, the `save_to_disk` function splits it into 70 shards for saving. However, an error suddenly occurs when it reaches the 14th shard.
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
### Steps to reproduce the bug
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
Saving the dataset (14/70 shards): 20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-12-08_20:09:04
rank : 0 (local_rank: 0)
exitcode : -7 (pid: 2224967)
error_file: <N/A>
traceback : Signal 7 (SIGBUS) received by PID 2224967
### Expected behavior
I hope it can save successfully without any issues, but it seems there is a problem.
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6481/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6480/comments | https://api.github.com/repos/huggingface/datasets/issues/6480/events | https://github.com/huggingface/datasets/pull/6480 | 2,031,116,653 | PR_kwDODunzps5hcS7P | 6,480 | Add IterableDataset `__repr__` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005392 / 0.011353 (-0.005960) | 0.003120 / 0.011008 (-0.007888) | 0.062017 / 0.038508 (0.023509) | 0.048824 / 0.023109 (0.025715) | 0.232300 / 0.275898 (-0.043598) | 0.262045 / 0.323480 (-0.061435) | 0.002909 / 0.007986 (-0.005077) | 0.003916 / 0.004328 (-0.000413) | 0.049469 / 0.004250 (0.045218) | 0.038965 / 0.037052 (0.001913) | 0.247841 / 0.258489 (-0.010648) | 0.268259 / 0.293841 (-0.025582) | 0.027588 / 0.128546 (-0.100958) | 0.010334 / 0.075646 (-0.065312) | 0.205811 / 0.419271 (-0.213460) | 0.035456 / 0.043533 (-0.008077) | 0.242774 / 0.255139 (-0.012365) | 0.260377 / 0.283200 (-0.022823) | 0.017469 / 0.141683 (-0.124214) | 1.199665 / 1.452155 (-0.252489) | 1.259316 / 1.492716 (-0.233400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092357 / 0.018006 (0.074350) | 0.303745 / 0.000490 (0.303255) | 0.000212 / 0.000200 (0.000012) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018820 / 0.037411 (-0.018592) | 0.061548 / 0.014526 (0.047022) | 0.072527 / 0.176557 (-0.104030) | 0.119696 / 0.737135 (-0.617440) | 0.074153 / 0.296338 (-0.222185) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283952 / 0.215209 (0.068743) | 2.769844 / 2.077655 (0.692189) | 1.526100 / 1.504120 (0.021980) | 1.417584 / 1.541195 (-0.123611) | 1.440523 / 
1.468490 (-0.027967) | 0.556994 / 4.584777 (-4.027783) | 2.400392 / 3.745712 (-1.345320) | 2.727794 / 5.269862 (-2.542068) | 1.724671 / 4.565676 (-2.841006) | 0.062111 / 0.424275 (-0.362164) | 0.004925 / 0.007607 (-0.002682) | 0.342748 / 0.226044 (0.116704) | 3.376790 / 2.268929 (1.107862) | 1.856498 / 55.444624 (-53.588127) | 1.574143 / 6.876477 (-5.302334) | 1.591828 / 2.142072 (-0.550245) | 0.644416 / 4.805227 (-4.160811) | 0.116862 / 6.500664 (-6.383802) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975704 / 1.841788 (-0.866084) | 11.196447 / 8.074308 (3.122139) | 10.567518 / 10.191392 (0.376126) | 0.126786 / 0.680424 (-0.553638) | 0.013768 / 0.534201 (-0.520433) | 0.284531 / 0.579283 (-0.294752) | 0.260855 / 0.434364 (-0.173509) | 0.328888 / 0.540337 (-0.211450) | 0.439911 / 1.386936 (-0.947025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005108 / 0.011353 (-0.006245) | 0.003006 / 0.011008 (-0.008003) | 0.048673 / 0.038508 (0.010165) | 0.051066 / 0.023109 (0.027957) | 0.279578 / 0.275898 (0.003680) | 0.298356 / 0.323480 (-0.025123) | 0.003965 / 0.007986 (-0.004020) | 0.002662 / 0.004328 (-0.001667) | 0.049037 / 0.004250 (0.044786) | 0.039385 / 0.037052 (0.002333) | 0.284545 / 0.258489 (0.026055) | 0.314240 / 0.293841 (0.020399) | 0.028493 / 0.128546 (-0.100053) | 0.010400 / 0.075646 (-0.065247) | 0.057375 / 0.419271 (-0.361896) | 0.032382 / 0.043533 (-0.011151) | 0.283163 / 0.255139 (0.028024) | 0.298967 / 0.283200 (0.015768) | 0.017564 / 0.141683 (-0.124119) | 1.172425 / 1.452155 (-0.279730) | 1.219975 / 1.492716 (-0.272742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.298419 / 0.000490 (0.297929) | 0.000211 / 0.000200 (0.000011) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.068274 / 0.014526 (0.053748) | 0.080820 / 0.176557 (-0.095736) | 0.119809 / 0.737135 (-0.617326) | 0.081612 / 0.296338 (-0.214727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.303346 / 0.215209 (0.088137) | 2.971648 / 2.077655 (0.893993) | 1.634828 / 1.504120 (0.130708) | 1.510851 / 1.541195 (-0.030344) | 1.515236 / 1.468490 (0.046745) | 0.558487 / 4.584777 (-4.026289) | 2.436263 / 3.745712 (-1.309449) | 2.718525 / 5.269862 (-2.551336) | 1.727421 / 4.565676 (-2.838255) | 0.061396 / 0.424275 (-0.362879) | 0.004951 / 0.007607 (-0.002656) | 0.352950 / 0.226044 (0.126906) | 3.473766 / 2.268929 (1.204838) | 1.971299 / 55.444624 (-53.473325) | 1.712173 / 6.876477 (-5.164304) | 1.711334 / 2.142072 (-0.430738) | 0.627291 / 4.805227 (-4.177936) | 0.113779 / 6.500664 (-6.386885) | 0.046561 / 0.075469 (-0.028908) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.989507 / 1.841788 (-0.852280) | 11.777883 / 8.074308 (3.703575) | 10.525453 / 10.191392 (0.334061) | 0.129118 / 0.680424 (-0.551306) | 0.014989 / 0.534201 (-0.519212) | 0.282324 / 0.579283 (-0.296959) | 0.280688 / 0.434364 (-0.153676) | 0.322579 / 0.540337 (-0.217758) | 0.554327 / 1.386936 (-0.832609) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79e94fcdf3d4378ddcdf7e130bb1ae23d99c6fce \"CML watermark\")\n"
] | 1,701,966,710,000 | 1,702,042,386,000 | 1,702,042,014,000 | MEMBER | null | Example for glue sst2:
Dataset
```
DatasetDict({
    test: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 1821
    })
    train: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 67349
    })
    validation: Dataset({
        features: ['sentence', 'label', 'idx'],
        num_rows: 872
    })
})
```
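For reference, a minimal sketch of how this `DatasetDict` repr can be reproduced (the `glue`/`sst2` names come from the example above; the exact call is not stated in the PR and is only illustrative):
```python
from datasets import load_dataset

# Map-style loading: returns a DatasetDict whose repr lists each split's
# features and num_rows, as shown above.
ds = load_dataset("glue", "sst2")
print(ds)
```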
IterableDataset (new)
```
IterableDatasetDict({
    test: IterableDataset({
        features: ['sentence', 'label', 'idx'],
        n_shards: 1
    })
    train: IterableDataset({
        features: ['sentence', 'label', 'idx'],
        n_shards: 1
    })
    validation: IterableDataset({
        features: ['sentence', 'label', 'idx'],
        n_shards: 1
    })
})
```
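And a sketch of the streaming path this PR improves: with `streaming=True`, `load_dataset` returns an `IterableDatasetDict` whose repr now lists each split's features and `n_shards`, instead of printing as a plain dict of `IterableDataset` objects (see the "before" snippet just below):
```python
from datasets import load_dataset

# Streaming loading: with this change the returned IterableDatasetDict has a
# readable repr listing features and n_shards for each split.
ids = load_dataset("glue", "sst2", streaming=True)
print(ids)

# Iterating a streaming split still yields plain example dicts, e.g.
# {'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0}
print(next(iter(ids["train"])))
```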
IterableDataset (before)
```
{'test': <datasets.iterable_dataset.IterableDataset object at 0x130d421f0>, 'train': <datasets.iterable_dataset.IterableDataset object at 0x136f3aaf0>, 'validation': <datasets.iterable_dataset.IterableDataset object at 0x136f4b100>}
{'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6480/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6480",
"merged_at": "2023-12-08T13:26:54",
"patch_url": "https://github.com/huggingface/datasets/pull/6480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6480"
} | true |
End of preview.