url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 51-51) | id (int64 1.92B-2.7B) | node_id (stringlengths 18-18) | number (int64 6.27k-7.3k) | title (stringlengths 2-150) | user (dict) | labels (listlengths 0-2) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0-1) | milestone (null) | comments (sequencelengths 0-23) | created_at (timestamp[ns]) | updated_at (int64 1.7k-1.73k) | closed_at (timestamp[ns]) | author_association (stringclasses 4 values) | active_lock_reason (null) | body (stringlengths 3-47.9k ⌀) | closed_by (dict) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (null) | pull_request (null) | is_pull_request (bool, 1 class) | time_to_close (float64 0-0 ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7299/comments | https://api.github.com/repos/huggingface/datasets/issues/7299/events | https://github.com/huggingface/datasets/issues/7299 | 2,695,378,251 | I_kwDODunzps6gqDVL | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/46443190?v=4",
"events_url": "https://api.github.com/users/fabiozappo/events{/privacy}",
"followers_url": "https://api.github.com/users/fabiozappo/followers",
"following_url": "https://api.github.com/users/fabiozappo/following{/other_user}",
"gists_url": "https://api.github.com/users/fabiozappo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fabiozappo",
"id": 46443190,
"login": "fabiozappo",
"node_id": "MDQ6VXNlcjQ2NDQzMTkw",
"organizations_url": "https://api.github.com/users/fabiozappo/orgs",
"received_events_url": "https://api.github.com/users/fabiozappo/received_events",
"repos_url": "https://api.github.com/users/fabiozappo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fabiozappo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabiozappo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fabiozappo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,732 | null | NONE | null | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to fix the inconsistent image sizes in the dataset and perform some on-the-fly image augmentation. The only approach I can think of is the `collate_fn`, but that seems quite inefficient.
I'm new to the Hugging Face datasets library and didn't find anything about this in the documentation or in the issues here on GitHub.
Is there an existing way to add image transformations directly to the dataset loading pipeline?
### Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

def collate_fn(batch):
    images = [item['image'] for item in batch]
    texts = [item['text'] for item in batch]
    return {
        'images': images,
        'texts': texts
    }

dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
# Output shows varying image sizes:
# [(1280, 1280), (431, 431), (789, 789), (769, 769)]
```
### Expected behavior
I'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's `Dataset.__getitem__` functionality. This would be more efficient than handling resizing in the `collate_fn`.
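For reference, one way to get on-the-fly transforms with the current `datasets` API is `with_transform()`, which applies a function lazily whenever examples are accessed. The sketch below is only an illustration (the target size and the use of torchvision are assumptions, not a confirmed recommendation):

```python
from datasets import load_dataset
from torchvision import transforms

# Resize + tensor conversion applied lazily at access time; 224 is an arbitrary size.
resize = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def apply_transforms(batch):
    batch["image"] = [resize(img.convert("RGB")) for img in batch["image"]]
    return batch

dataset = load_dataset("Yuki20/pokemon_caption", split="train")
dataset = dataset.with_transform(apply_transforms)  # no data is rewritten on disk
```

With a fixed output size, the images in a batch can then be stacked into a single tensor by the collate function.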
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7299/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7298/comments | https://api.github.com/repos/huggingface/datasets/issues/7298/events | https://github.com/huggingface/datasets/issues/7298 | 2,694,196,968 | I_kwDODunzps6gli7o | 7,298 | loading dataset issue with load_dataset() when training controlnet | {
"avatar_url": "https://avatars.githubusercontent.com/u/81594044?v=4",
"events_url": "https://api.github.com/users/bigbraindump/events{/privacy}",
"followers_url": "https://api.github.com/users/bigbraindump/followers",
"following_url": "https://api.github.com/users/bigbraindump/following{/other_user}",
"gists_url": "https://api.github.com/users/bigbraindump/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bigbraindump",
"id": 81594044,
"login": "bigbraindump",
"node_id": "MDQ6VXNlcjgxNTk0MDQ0",
"organizations_url": "https://api.github.com/users/bigbraindump/orgs",
"received_events_url": "https://api.github.com/users/bigbraindump/received_events",
"repos_url": "https://api.github.com/users/bigbraindump/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bigbraindump/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigbraindump/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bigbraindump",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,732 | null | NONE | null | ### Describe the bug
I'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using `load_dataset()`; however, `load_from_disk()` seems to work.
I would appreciate it if someone could explain why that's the case.
1. For reference, here's the structure of the original training files _before_ dataset creation:
```
- dir train
- dir A (illustrations)
- dir B (SignWriting)
- prompt.json containing:
{"source": "B/file.png", "target": "A/file.png", "prompt": "..."}
```
2. Here are the features _after_ dataset creation:
```
"features": {
"control_image": {
"_type": "Image"
},
"image": {
"_type": "Image"
},
"caption": {
"dtype": "string",
"_type": "Value"
}
```
3. I've also attempted to upload the dataset to the Hugging Face Hub, with the same error output.
### Steps to reproduce the bug
1. [dataset creation script](https://github.com/sign-language-processing/signwriting-illustration/blob/main/signwriting_illustration/controlnet_huggingface/dataset.py)
2. ControlNet [training script](examples/controlnet/train_controlnet.py) used
3. Training parameters:
! accelerate launch diffusers/examples/controlnet/train_controlnet.py \
--pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
--output_dir="$OUTPUT_DIR" \
--train_data_dir="$HF_DATASET_DIR" \
--conditioning_image_column=control_image \
--image_column=image \
--caption_column=caption \
--resolution=512 \
--learning_rate=1e-5 \
--validation_image "./validation/0a4b3c71265bb3a726457837428dda78.png" "./validation/0a5922fe2c638e6776bd62f623145004.png" "./validation/1c9f1a53106f64c682cf5d009ee7156f.png" \
--validation_prompt "An illustration of a man with short hair" "An illustration of a woman with short hair" "An illustration of Barack Obama" \
--train_batch_size=4 \
--num_train_epochs=500 \
--tracker_project_name="sd-controlnet-signwriting-test" \
--hub_model_id="sarahahtee/signwriting-illustration-test" \
--checkpointing_steps=5000 \
--validation_steps=1000 \
--report_to wandb \
--push_to_hub
4. Command:
` sbatch --export=HUGGINGFACE_TOKEN=hf_token,WANDB_API_KEY=api_key script.sh`
### Expected behavior
```
11/25/2024 17:12:18 - INFO - __main__ - Initializing controlnet weights from unet
Generating train split: 1 examples [00:00, 334.85 examples/s]
Traceback (most recent call last):
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 1189, in <module>
main(args)
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 923, in main
train_dataset = make_train_dataset(args, tokenizer, accelerator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/user/user/signwriting_illustration/controlnet_huggingface/diffusers/examples/controlnet/train_controlnet.py", line 639, in make_train_dataset
raise ValueError(
ValueError: `--image_column` value 'image' not found in dataset columns. Dataset columns are: _data_files, _fingerprint, _format_columns, _format_kwargs, _format_type, _output_all_columns, _split
```
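For context (not confirmed as the cause here), the leaked column names in that error (`_data_files`, `_fingerprint`, `_format_columns`, ...) are the fields of the state file written by `save_to_disk()`, which is why a directory produced that way is normally reopened with `load_from_disk()` rather than `load_dataset()`. A minimal sketch with placeholder paths:

```python
from datasets import load_from_disk, load_dataset

# Placeholder path/repo id for illustration only
ds_local = load_from_disk("/path/to/dir_created_with_save_to_disk")

# A dataset uploaded with push_to_hub() (or made of plain data files) is loaded with load_dataset()
ds_hub = load_dataset("username/dataset-name", split="train")
```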
### Environment info
accelerate 1.1.1
huggingface-hub 0.26.2
python 3.11
torch 2.5.1
transformers 4.46.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7298/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7297/comments | https://api.github.com/repos/huggingface/datasets/issues/7297/events | https://github.com/huggingface/datasets/issues/7297 | 2,683,977,430 | I_kwDODunzps6f-j7W | 7,297 | wrong return type for `IterableDataset.shard()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4",
"events_url": "https://api.github.com/users/ysngshn/events{/privacy}",
"followers_url": "https://api.github.com/users/ysngshn/followers",
"following_url": "https://api.github.com/users/ysngshn/following{/other_user}",
"gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ysngshn",
"id": 47225236,
"login": "ysngshn",
"node_id": "MDQ6VXNlcjQ3MjI1MjM2",
"organizations_url": "https://api.github.com/users/ysngshn/orgs",
"received_events_url": "https://api.github.com/users/ysngshn/received_events",
"repos_url": "https://api.github.com/users/ysngshn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ysngshn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,732 | null | NONE | null | ### Describe the bug
`IterableDataset.shard()` has the wrong return type annotation: it is annotated as `"Dataset"` but should be `"IterableDataset"`. This makes my IDE unhappy.
### Steps to reproduce the bug
look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?
### Expected behavior
The return type should be annotated as `"IterableDataset"`.
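A minimal sketch of the suggested fix (the parameter list is simplified here, not copied from the upstream definition; only the annotation matters):

```python
def shard(self, num_shards: int, index: int) -> "IterableDataset":
    ...
```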
### Environment info
datasets==3.1.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7297/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7295/comments | https://api.github.com/repos/huggingface/datasets/issues/7295/events | https://github.com/huggingface/datasets/issues/7295 | 2,672,003,384 | I_kwDODunzps6fQ4k4 | 7,295 | [BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'` | {
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casper-hansen",
"id": 27340033,
"login": "casper-hansen",
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casper-hansen",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,732 | null | NONE | null | ### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which causes incompatibility issues even with the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`
```
s3creator = self.session.create_client(
"s3", config=conf, **init_kwargs, **client_kwargs
)
```
4. The `session` tries to create an `aiohttp`-backed client, but the extra `client_kwargs` (`requote_redirect_url` and `trust_env`) are forwarded as individual keyword arguments that `AioSession._create_client()` does not accept.
Error:
```
Traceback (most recent call last):
File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
batch = next(iter(ds))
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
for key, example in ex_iterable:
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
elif xisdir(urlpath, download_config=download_config):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
return fs.isdir(inner_path)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
return self.info(path)["type"] == "directory"
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
self._get_dirs()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
out = await self._call_s3(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
await self.set_session()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
s3creator = self.session.create_client(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```
### Steps to reproduce the bug
1. Install the necessary libraries (`datasets` needs to be at least 2.19.0):
```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```
2. Run this code:
```
from datasets import load_dataset
ds = load_dataset(
"json",
data_files="s3://your_path/*.jsonl.gz",
streaming=True,
split="train",
)
batch = next(iter(ds))
print(batch)
```
3. You get the `unexpected keyword argument 'requote_redirect_url'` error.
### Expected behavior
`datasets` should be able to load a batch from the dataset stored on S3 without triggering this `requote_redirect_url` error.
Fix: I was able to work around this by directly removing `requote_redirect_url` and `trust_env` from the arguments; after that it loads properly.
<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">
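A hedged sketch of the same workaround done in code rather than by editing installed packages: strip the problematic keys before they reach `aiobotocore`. This assumes the keys arrive through `S3FileSystem`'s `client_kwargs`, as the traceback suggests, and it must run before any filesystem instance is created:

```python
import s3fs

# Workaround sketch: drop the kwargs that AioSession._create_client() rejects.
_original_init = s3fs.S3FileSystem.__init__

def _patched_init(self, *args, client_kwargs=None, **kwargs):
    client_kwargs = dict(client_kwargs or {})
    client_kwargs.pop("requote_redirect_url", None)
    client_kwargs.pop("trust_env", None)
    _original_init(self, *args, client_kwargs=client_kwargs, **kwargs)

s3fs.S3FileSystem.__init__ = _patched_init  # apply before calling load_dataset(...)
```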
### Environment info
- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7295/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7292/comments | https://api.github.com/repos/huggingface/datasets/issues/7292/events | https://github.com/huggingface/datasets/issues/7292 | 2,664,250,855 | I_kwDODunzps6ezT3n | 7,292 | DataFilesNotFoundError for datasets `OpenMol/PubChemSFT` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4",
"events_url": "https://api.github.com/users/xnuohz/events{/privacy}",
"followers_url": "https://api.github.com/users/xnuohz/followers",
"following_url": "https://api.github.com/users/xnuohz/following{/other_user}",
"gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xnuohz",
"id": 17878022,
"login": "xnuohz",
"node_id": "MDQ6VXNlcjE3ODc4MDIy",
"organizations_url": "https://api.github.com/users/xnuohz/orgs",
"received_events_url": "https://api.github.com/users/xnuohz/received_events",
"repos_url": "https://api.github.com/users/xnuohz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xnuohz",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! If the dataset owner uses `push_to_hub()` instead of `save_to_disk()` and upload the local files it will fix the issue.\r\nRight now `datasets` sees the train/test/valid pickle files but they are not supported file formats.",
"Alternatively you can load the arrow file instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('OpenMol/PubChemSFT', data_files='stage1/*.arrow')\r\n```",
"Thanks! I'll have a try."
] | 1970-01-01T00:00:00.000001 | 1,731 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Cannot load the dataset https://huggingface.co./datasets/OpenMol/PubChemSFT
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```
### Expected behavior
```
---------------------------------------------------------------------------
DataFilesNotFoundError Traceback (most recent call last)
Cell In[7], [line 2](vscode-notebook-cell:?execution_count=7&line=2)
[1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset
----> [2](vscode-notebook-cell:?execution_count=7&line=2) dataset = load_dataset('OpenMol/PubChemSFT')
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
[2582](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2582) verification_mode = VerificationMode(
[2583](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2583) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
[2584](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2584) )
[2586](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2586) # Create a dataset builder
-> [2587](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587) builder_instance = load_dataset_builder(
[2588](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2588) path=path,
[2589](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2589) name=name,
[2590](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2590) data_dir=data_dir,
[2591](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2591) data_files=data_files,
[2592](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2592) cache_dir=cache_dir,
[2593](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2593) features=features,
[2594](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2594) download_config=download_config,
[2595](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2595) download_mode=download_mode,
[2596](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2596) revision=revision,
[2597](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2597) token=token,
[2598](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2598) storage_options=storage_options,
[2599](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2599) trust_remote_code=trust_remote_code,
[2600](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2600) _require_default_config_name=name is None,
[2601](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2601) **config_kwargs,
[2602](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2602) )
[2604](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2604) # Return iterable dataset in case of streaming
[2605](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2605) if streaming:
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
[2257](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2257) download_config = download_config.copy() if download_config else DownloadConfig()
[2258](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2258) download_config.storage_options.update(storage_options)
-> [2259](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259) dataset_module = dataset_module_factory(
[2260](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2260) path,
[2261](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2261) revision=revision,
[2262](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2262) download_config=download_config,
[2263](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2263) download_mode=download_mode,
[2264](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2264) data_dir=data_dir,
[2265](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2265) data_files=data_files,
[2266](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2266) cache_dir=cache_dir,
[2267](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2267) trust_remote_code=trust_remote_code,
[2268](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2268) _require_default_config_name=_require_default_config_name,
[2269](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2269) _require_custom_configs=bool(config_kwargs),
[2270](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2270) )
[2271](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2271) # Get dataset builder class from the processing script
[2272](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2272) builder_kwargs = dataset_module.builder_kwargs
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
[1902](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1902) raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
[1903](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1903) if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> [1904](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904) raise e1 from None
[1905](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1905) if isinstance(e1, FileNotFoundError):
[1906](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1906) raise FileNotFoundError(
[1907](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1907) f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
[1908](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1908) f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
[1909](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1909) ) from None
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
[1876](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1876) return HubDatasetModuleFactoryWithScript(
[1877](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1877) path,
[1878](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1878) revision=revision,
(...)
[1882](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1882) trust_remote_code=trust_remote_code,
[1883](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1883) ).get_module()
[1884](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1884) else:
-> [1885](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885) return HubDatasetModuleFactoryWithoutScript(
[1886](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1886) path,
[1887](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1887) revision=revision,
[1888](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1888) data_dir=data_dir,
[1889](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1889) data_files=data_files,
[1890](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1890) download_config=download_config,
[1891](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1891) download_mode=download_mode,
[1892](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1892) ).get_module()
[1893](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1893) except Exception as e1:
[1894](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1894) # All the attempts failed, before raising the error we should check if the module is already cached
[1895](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1895) try:
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
[1263](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1263) patterns = get_data_patterns(base_path, download_config=self.download_config)
[1264](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1264) data_files = DataFilesDict.from_patterns(
[1265](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1265) patterns,
[1266](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1266) base_path=base_path,
[1267](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1267) allowed_extensions=ALL_ALLOWED_EXTENSIONS,
[1268](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1268) download_config=self.download_config,
[1269](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1269) )
-> [1270](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270) module_name, default_builder_kwargs = infer_module_for_data_files(
[1271](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1271) data_files=data_files,
[1272](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1272) path=self.name,
[1273](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1273) download_config=self.download_config,
[1274](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1274) )
[1275](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1275) data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
[1276](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1276) # Collect metadata files if the module supports them
File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
[595](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:595) raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
[596](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:596) if not module_name:
--> [597](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597) raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
[598](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:598) return module_name, default_builder_kwargs
DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT
```
### Environment info
```
- `datasets` version: 3.1.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- `huggingface_hub` version: 0.25.2
- PyArrow version: 18.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4",
"events_url": "https://api.github.com/users/xnuohz/events{/privacy}",
"followers_url": "https://api.github.com/users/xnuohz/followers",
"following_url": "https://api.github.com/users/xnuohz/following{/other_user}",
"gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xnuohz",
"id": 17878022,
"login": "xnuohz",
"node_id": "MDQ6VXNlcjE3ODc4MDIy",
"organizations_url": "https://api.github.com/users/xnuohz/orgs",
"received_events_url": "https://api.github.com/users/xnuohz/received_events",
"repos_url": "https://api.github.com/users/xnuohz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xnuohz",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7292/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7291/comments | https://api.github.com/repos/huggingface/datasets/issues/7291/events | https://github.com/huggingface/datasets/issues/7291 | 2,662,244,643 | I_kwDODunzps6erqEj | 7,291 | Why return_tensors='pt' doesn't work? | {
"avatar_url": "https://avatars.githubusercontent.com/u/86752851?v=4",
"events_url": "https://api.github.com/users/bw-wang19/events{/privacy}",
"followers_url": "https://api.github.com/users/bw-wang19/followers",
"following_url": "https://api.github.com/users/bw-wang19/following{/other_user}",
"gists_url": "https://api.github.com/users/bw-wang19/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bw-wang19",
"id": 86752851,
"login": "bw-wang19",
"node_id": "MDQ6VXNlcjg2NzUyODUx",
"organizations_url": "https://api.github.com/users/bw-wang19/orgs",
"received_events_url": "https://api.github.com/users/bw-wang19/received_events",
"repos_url": "https://api.github.com/users/bw-wang19/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bw-wang19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bw-wang19/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bw-wang19",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`",
"> Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`\r\n\r\nIt does work! Thanks for your suggestion!"
] | 1970-01-01T00:00:00.000001 | 1,731 | null | NONE | null | ### Describe the bug
I tried to add `input_ids` to a dataset with `map()`, using `return_tensors='pt'`, but the values I get back are of type `List`. Why is that?
![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c)
### Steps to reproduce the bug
![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17)
### Expected behavior
Sorry if this is a silly question, I'm new to this tool. I think it should return tensors, since I passed `return_tensors='pt'`.
When I tokenize a single sentence with `tokenized_input = tokenizer(input, return_tensors='pt')`, it does return tensors. Why doesn't it work inside `map()`?
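For what it's worth, here is a minimal sketch of the behaviour (the model name and data are placeholders): `map()` writes its results into Arrow storage, which is framework-agnostic, so tensors come back as plain Python lists until a torch format is requested with `with_format()`:

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
ds = Dataset.from_dict({"text": ["hello world", "hi there"]})
ds = ds.map(lambda batch: tokenizer(batch["text"]), batched=True)

print(type(ds[0]["input_ids"]))  # list, because the data is stored in Arrow
ds = ds.with_format("torch")
print(type(ds[0]["input_ids"]))  # torch.Tensor
```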
### Environment info
transformers>=4.41.2,<=4.45.0
datasets>=2.16.0,<=2.21.0
accelerate>=0.30.1,<=0.34.2
peft>=0.11.1,<=0.12.0
trl>=0.8.6,<=0.9.6
gradio>=4.0.0
pandas>=2.0.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
packaging
pyyaml
numpy<2.0.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7291/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7290/comments | https://api.github.com/repos/huggingface/datasets/issues/7290/events | https://github.com/huggingface/datasets/issues/7290 | 2,657,620,816 | I_kwDODunzps6eaBNQ | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22243463?v=4",
"events_url": "https://api.github.com/users/JohannesAck/events{/privacy}",
"followers_url": "https://api.github.com/users/JohannesAck/followers",
"following_url": "https://api.github.com/users/JohannesAck/following{/other_user}",
"gists_url": "https://api.github.com/users/JohannesAck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JohannesAck",
"id": 22243463,
"login": "JohannesAck",
"node_id": "MDQ6VXNlcjIyMjQzNDYz",
"organizations_url": "https://api.github.com/users/JohannesAck/orgs",
"received_events_url": "https://api.github.com/users/JohannesAck/received_events",
"repos_url": "https://api.github.com/users/JohannesAck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JohannesAck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohannesAck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JohannesAck",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,731 | null | NONE | null | ### Describe the bug
Hi, I've encountered a small issue when saving datasets that can lead to saving taking up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co./docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`
The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation on how to enable it.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
n_samples = int(4e6)
n_tokens_sample = 100
data_dict = {
'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)),
}
dataset = Dataset.from_dict(data_dict)
dataset.save_to_disk('test_dataset', num_proc=1)
dataset.save_to_disk('test_dataset', num_proc=4)
dataset.save_to_disk('test_dataset', num_proc=8)
```
This results in:
```
>>> dataset.save_to_disk('test_dataset', num_proc=1)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=4)
Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s]
>>> dataset.save_to_disk('test_dataset', num_proc=8)
Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s]
```
With larger datasets it can take hours, but I didn't benchmark that for this bug report.
### Expected behavior
I would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`.
### Environment info
- `datasets` version: 3.1.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7290/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7289/comments | https://api.github.com/repos/huggingface/datasets/issues/7289/events | https://github.com/huggingface/datasets/issues/7289 | 2,648,019,507 | I_kwDODunzps6d1ZIz | 7,289 | Dataset viewer displays wrong statists | {
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/speedcell4",
"id": 3585459,
"login": "speedcell4",
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/speedcell4",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"i think this issue is more for https://github.com/huggingface/dataset-viewer"
] | 1970-01-01T00:00:00.000001 | 1,731 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
In [my dataset](https://huggingface.co./datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, and for those the viewer reports the correct number.
<img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c">
### Steps to reproduce the bug
```python3
from datasets import load_dataset
ds = load_dataset('speedcell4/opus-unigram2').unique('lang2')
for key, lang2 in ds.items():
print(key, len(lang2))
```
This script prints the following, showing that the `train` split has 94 unique values in the `lang2` column.
```
train 94
dev 94
test 94
zero 5
```
### Expected behavior
94 in the viewer.
### Environment info
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.2.2004 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7542 32-Core Processor
Stepping: 0
CPU MHz: 3389.114
BogoMIPS: 5789.40
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchdevice==0.1.1
[pip3] torchglyph==0.3.2
[pip3] torchmetrics==1.5.0
[pip3] torchrua==0.5.1
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[pip3] datasets==3.0.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.1+cu121 pypi_0 pypi
[conda] torchaudio 2.4.1+cu121 pypi_0 pypi
[conda] torchdevice 0.1.1 pypi_0 pypi
[conda] torchglyph 0.3.2 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchrua 0.5.1 pypi_0 pypi
[conda] torchvision 0.19.1+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | {
"avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4",
"events_url": "https://api.github.com/users/speedcell4/events{/privacy}",
"followers_url": "https://api.github.com/users/speedcell4/followers",
"following_url": "https://api.github.com/users/speedcell4/following{/other_user}",
"gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/speedcell4",
"id": 3585459,
"login": "speedcell4",
"node_id": "MDQ6VXNlcjM1ODU0NTk=",
"organizations_url": "https://api.github.com/users/speedcell4/orgs",
"received_events_url": "https://api.github.com/users/speedcell4/received_events",
"repos_url": "https://api.github.com/users/speedcell4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/speedcell4",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7289/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7287/comments | https://api.github.com/repos/huggingface/datasets/issues/7287/events | https://github.com/huggingface/datasets/issues/7287 | 2,646,958,393 | I_kwDODunzps6dxWE5 | 7,287 | Support for identifier-based automated split construction | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! You can already configure the README.md to have multiple sets of splits, e.g.\r\n\r\n```yaml\r\nconfigs:\r\n- config_name: my_first_set_of_split\r\n data_files:\r\n - split: train\r\n path: *.csv\r\n- config_name: my_second_set_of_split\r\n data_files:\r\n - split: train\r\n path: train-*.csv\r\n - split: test\r\n path: test-*.csv\r\n```",
"Hi - I had something slightly different in mind:\r\n\r\nCurrently the yaml splits specified like this only allow specifying which filenames to pass to each split.\r\nBut what if I have a situation where I know which individual *training examples* I want to put in each split.\r\n\r\nI could build split-specific files, however for large datasets with overlapping (e.g. multiple sets of) splits this could result in significant duplication of data.\r\n\r\nI can see that this could actually be very much intended (i.e. to discourage overlapping splits), but wondered whether some support for handling splits based on individual identifiers is something that could be considered. ",
"This is not supported right now :/ Though you can load the data in two steps like this\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nfull_dataset = load_dataset(\"username/dataset\", split=\"train\")\r\nmy_first_set_indices = load_dataset(\"username/dataset\", \"my_first_set_of_split\", split=\"train\")\r\n\r\nmy_first_set = full_dataset.select(my_first_set_indices[\"indices\"])\r\n```\r\n\r\nyou can create such a dataset by adapting this code for example\r\n```python\r\n# upload the full dataset\r\nfull_dataset.push_to_hub(\"username/dataset\")\r\n# then upload the indices for each set\r\nDatasetDict({\r\n \"train\": Dataset.from_dict({\"indices\": [0, 1, 2, 3]}),\r\n \"test\": Dataset.from_dict({\"indices\": [4, 5]}),\r\n}).push_to_hub(\"username/dataset\", \"my_first_set_of_split\")"
] | 1970-01-01T00:00:00.000001 | 1,732 | null | CONTRIBUTOR | null | ### Feature request
As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co./docs/datasets/en/repository_structure))
It would seem pretty useful to also allow splits to be defined in terms of identifiers of individual examples.
This could be configured like `{"split_name": {"column_name": [column values in split]}}`.
(This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user).
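For illustration, a minimal sketch of the effect this would have; today it can only be approximated client-side with `filter`, at the cost of scanning the full data (the dataset name and the `id` column below are hypothetical):
```python
from datasets import load_dataset

# Hypothetical names: a single-directory dataset with a unique "id" column,
# plus a user-maintained mapping of identifiers to split names.
split_ids = {"train": {"id-0001", "id-0002"}, "test": {"id-0003"}}

full = load_dataset("username/dataset", split="train")
splits = {
    name: full.filter(lambda example, ids=ids: example["id"] in ids)
    for name, ids in split_ids.items()
}
```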
I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub?
### Motivation
The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits.
This could all be configured via the README.md yaml configs
### Your contribution
May be able to contribute if it seems like a good idea | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7287/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7286/comments | https://api.github.com/repos/huggingface/datasets/issues/7286/events | https://github.com/huggingface/datasets/issues/7286 | 2,645,350,151 | I_kwDODunzps6drNcH | 7,286 | Concurrent loading in `load_from_disk` - `num_proc` as a param | {
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unography",
"id": 5240449,
"login": "unography",
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"organizations_url": "https://api.github.com/users/unography/orgs",
"received_events_url": "https://api.github.com/users/unography/received_events",
"repos_url": "https://api.github.com/users/unography/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unography",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,731 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param for loading a dataset from disk, but I can't find it anywhere in the documentation or the code
### Motivation
Make loading large datasets from disk faster
### Your contribution
Happy to contribute if given pointers | {
"avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4",
"events_url": "https://api.github.com/users/unography/events{/privacy}",
"followers_url": "https://api.github.com/users/unography/followers",
"following_url": "https://api.github.com/users/unography/following{/other_user}",
"gists_url": "https://api.github.com/users/unography/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unography",
"id": 5240449,
"login": "unography",
"node_id": "MDQ6VXNlcjUyNDA0NDk=",
"organizations_url": "https://api.github.com/users/unography/orgs",
"received_events_url": "https://api.github.com/users/unography/received_events",
"repos_url": "https://api.github.com/users/unography/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unography/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unography",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7286/timeline | null | not_planned | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7282/comments | https://api.github.com/repos/huggingface/datasets/issues/7282/events | https://github.com/huggingface/datasets/issues/7282 | 2,642,075,491 | I_kwDODunzps6det9j | 7,282 | Faulty datasets.exceptions.ExpectedMoreSplitsError | {
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,731 | null | CONTRIBUTOR | null | ### Describe the bug
Trying to download only the 'validation' split of my dataset; instead I hit the error `datasets.exceptions.ExpectedMoreSplitsError`.
Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`.
Here is the Traceback:
```
Traceback (most recent call last):
File "/home/user/app/app.py", line 12, in <module>
ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits
raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))
datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'}
```
Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027
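A possible interim workaround sketch, assuming the split verification is the only blocker, is to disable verification entirely, although that also skips checks one would normally want (`GATED_IMAGENET` as in the call shown in the traceback above):
```python
from datasets import load_dataset

ds = load_dataset(
    'datacomp/imagenet-1k-random0.0',
    token=GATED_IMAGENET,
    data_files={'validation': 'data/val*'},
    split='validation',
    verification_mode='no_checks',  # assumption: skipping split/size verification is acceptable here
    trust_remote_code=True,
)
```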
### Steps to reproduce the bug
1. Create a Space with the default blank 'gradio' SDK https://huggingface.co./new-space
2. Create a file 'app.py' that loads a dataset to only extract a 'validation' split:
`ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)`
### Expected behavior
Only the 'validation' split is downloaded.
### Environment info
Default environment for creating a new Space. Relevant to this bug, that is:
```
FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608
--> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7282/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7281/comments | https://api.github.com/repos/huggingface/datasets/issues/7281/events | https://github.com/huggingface/datasets/issues/7281 | 2,640,346,339 | I_kwDODunzps6dYHzj | 7,281 | File not found error | {
"avatar_url": "https://avatars.githubusercontent.com/u/37507786?v=4",
"events_url": "https://api.github.com/users/MichielBontenbal/events{/privacy}",
"followers_url": "https://api.github.com/users/MichielBontenbal/followers",
"following_url": "https://api.github.com/users/MichielBontenbal/following{/other_user}",
"gists_url": "https://api.github.com/users/MichielBontenbal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MichielBontenbal",
"id": 37507786,
"login": "MichielBontenbal",
"node_id": "MDQ6VXNlcjM3NTA3Nzg2",
"organizations_url": "https://api.github.com/users/MichielBontenbal/orgs",
"received_events_url": "https://api.github.com/users/MichielBontenbal/received_events",
"repos_url": "https://api.github.com/users/MichielBontenbal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MichielBontenbal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichielBontenbal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MichielBontenbal",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Link to the dataset: https://huggingface.co./datasets/MichielBontenbal/UrbanSounds "
] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
I get a FileNotFoundError:
<img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87">
### Steps to reproduce the bug
See screenshot.
### Expected behavior
I want to load one audio file from the dataset.
### Environment info
macOS (Intel) 14.6.1 (23G93)
Python 3.10.9
Numpy 1.23
Datasets latest version | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7281/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7280/comments | https://api.github.com/repos/huggingface/datasets/issues/7280/events | https://github.com/huggingface/datasets/issues/7280 | 2,639,977,077 | I_kwDODunzps6dWtp1 | 7,280 | Add filename in error message when ReadError or similar occur | {
"avatar_url": "https://avatars.githubusercontent.com/u/37046039?v=4",
"events_url": "https://api.github.com/users/elisa-aleman/events{/privacy}",
"followers_url": "https://api.github.com/users/elisa-aleman/followers",
"following_url": "https://api.github.com/users/elisa-aleman/following{/other_user}",
"gists_url": "https://api.github.com/users/elisa-aleman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/elisa-aleman",
"id": 37046039,
"login": "elisa-aleman",
"node_id": "MDQ6VXNlcjM3MDQ2MDM5",
"organizations_url": "https://api.github.com/users/elisa-aleman/orgs",
"received_events_url": "https://api.github.com/users/elisa-aleman/received_events",
"repos_url": "https://api.github.com/users/elisa-aleman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/elisa-aleman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisa-aleman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/elisa-aleman",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi Elisa, please share the error traceback here, and if you manage to find the location in the `datasets` code where the error occurs, feel free to open a PR to add the necessary logging / improve the error message.",
"> please share the error traceback\n\nI don't have access to it but it should be during [this exception](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/builder.py#L1643) which happens during the loading of a dataset. If one of the downloaded files is corrupted, the for loop will not yield correctly, and the error will come from, say, in the case of tar files, [this iterable](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/utils/file_utils.py#L1293) which has no explicit error handling that leaves clues as to which file has failed.\n\nI only know the case for tar files, but I consider this issue could be happening across different file types too.",
"I think having a better error handling for this tar iterable would be useful already, maybe a simple try/except in `_iter_from_urlpath` that checks for `tarfile.ReadError` and raises an error with the `urlpath` mentioned in the error ?",
"I think not just from higher calls like the `_iter_from_urlpath` but directly wherever a file is attempted to be opened would be the best case, as the traceback would simply lead to that.",
"so maybe there should be better error messages in each dataset builder definition ? e.g. in https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py for webdataset TAR archives"
] | 1970-01-01T00:00:00.000001 | 1,732 | null | NONE | null | Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files.
Whenever downloading a full dataset, some files might be corrupted (either corrupt at the source or corrupted during the download).
However, the errors often only let me know that a tar file was involved, i.e. when `tarfile.ReadError` appears in the traceback, and I imagine it is similar for other file types.
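A minimal sketch of the kind of wrapping that would help, assuming a tar archive iterated with Python's `tarfile` (the exact place in `datasets` would differ):
```python
import tarfile

def iter_tar_members(path):
    try:
        with tarfile.open(path) as tar:
            for member in tar:
                yield member
    except tarfile.ReadError as err:
        # Surface the offending archive so it can be re-downloaded selectively.
        raise tarfile.ReadError(f"{err} (while reading {path!r})") from err
```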
This makes it really hard to debug which file is corrupted, and when dealing with very large datasets, it shouldn't be necessary to force download everything again. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7280/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7276/comments | https://api.github.com/repos/huggingface/datasets/issues/7276/events | https://github.com/huggingface/datasets/issues/7276 | 2,631,917,431 | I_kwDODunzps6c3993 | 7,276 | Accessing audio dataset value throws Format not recognised error | {
"avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4",
"events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}",
"followers_url": "https://api.github.com/users/fawazahmed0/followers",
"following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}",
"gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fawazahmed0",
"id": 20347013,
"login": "fawazahmed0",
"node_id": "MDQ6VXNlcjIwMzQ3MDEz",
"organizations_url": "https://api.github.com/users/fawazahmed0/orgs",
"received_events_url": "https://api.github.com/users/fawazahmed0/received_events",
"repos_url": "https://api.github.com/users/fawazahmed0/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fawazahmed0",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! can you try if this works ?\r\n\r\n```python\r\nimport soundfile as sf\r\n\r\nwith open('C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3', 'rb') as f:\r\n print(sf.read(f))\r\n```",
"@lhoestq Same error, here is the output:\r\n\r\n```bash\r\n(mypy) C:\\Users\\Nawaz-Server\\Documents\\ml>python myest.py\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Nawaz-Server\\Documents\\ml\\myest.py\", line 5, in <module>\r\n print(sf.read(f))\r\n ^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 285, in read\r\n with SoundFile(file, 'r', samplerate, channels,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 658, in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 1216, in _open\r\n raise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\nsoundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3'>: Format not recognised.\r\n\r\n```",
"upstream bug: https://github.com/bastibe/python-soundfile/issues/439"
] | 1970-01-01T00:00:00.000001 | 1,731 | null | NONE | null | ### Describe the bug
Accessing an audio dataset value throws a `Format not recognised` error
### Steps to reproduce the bug
**code:**
```py
from datasets import load_dataset
dataset = load_dataset("fawazahmed0/bug-audio")
for data in dataset["train"]:
    print(data)
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
{'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ...,
-1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'}
Traceback (most recent call last):
File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module>
for data in dataset["train"]:
~~~~~~~^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__
formatted_output = format_table(
^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 222, in decode_row
return self.features.decode_example(row) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read
with SoundFile(file, 'r', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised.
```
### Expected behavior
Everything should work fine, as loading the problematic audio file directly with the soundfile package works fine.
**code:**
```py
import soundfile as sf
print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'))
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
(array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ...,
-7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050)
```
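As a side note, a sketch of one way to locate the offending file(s) without the built-in decoding, assuming the examples carry local paths as in the traceback above: disable decoding and probe each path with soundfile directly.
```python
import soundfile as sf
from datasets import Audio, load_dataset

ds = load_dataset("fawazahmed0/bug-audio", split="train")
ds = ds.cast_column("audio", Audio(decode=False))  # keep paths, skip decoding

for row in ds:
    path = row["audio"]["path"]
    try:
        sf.read(path)
    except sf.LibsndfileError as err:
        print(path, err)
```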
### Environment info
- `datasets` version: 3.0.2
- Platform: Windows-11-10.0.22621-SP0
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.10.0
- soundfile: 0.12.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7276/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7275/comments | https://api.github.com/repos/huggingface/datasets/issues/7275/events | https://github.com/huggingface/datasets/issues/7275 | 2,631,713,397 | I_kwDODunzps6c3MJ1 | 7,275 | load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/46941974?v=4",
"events_url": "https://api.github.com/users/santiagobp99/events{/privacy}",
"followers_url": "https://api.github.com/users/santiagobp99/followers",
"following_url": "https://api.github.com/users/santiagobp99/following{/other_user}",
"gists_url": "https://api.github.com/users/santiagobp99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/santiagobp99",
"id": 46941974,
"login": "santiagobp99",
"node_id": "MDQ6VXNlcjQ2OTQxOTc0",
"organizations_url": "https://api.github.com/users/santiagobp99/orgs",
"received_events_url": "https://api.github.com/users/santiagobp99/received_events",
"repos_url": "https://api.github.com/users/santiagobp99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/santiagobp99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santiagobp99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/santiagobp99",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
I am performing two operations I saw in a Hugging Face tutorial (Fine-tune a language model), and I have to define everything inside the mapped functions, including library imports, because nothing defined outside the function that maps the dataset elements is recognized:
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B
```python
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    batch_size=batch_size,
    num_proc=4,
)

tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])

def tokenize_function(examples):
    model_checkpoint = 'gpt2'
    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    return tokenizer(examples["text"])
```
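For comparison, a sketch of the pattern the notebook itself follows, with the tokenizer created once at module level and only referenced inside the mapped function (using the wikitext-2 data from the notebook):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

model_checkpoint = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)

def tokenize_function(examples):
    # The tokenizer defined above is captured by the function and shipped to the workers.
    return tokenizer(examples["text"])

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenized_datasets = raw_datasets.map(
    tokenize_function, batched=True, num_proc=4, remove_columns=["text"]
)
```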
### Steps to reproduce the bug
Currently I handle all the imports inside the function.
### Expected behavior
The code must work as expected in the notebook, but currently this is not happening.
https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B
### Environment info
print(transformers.__version__)
4.46.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7275/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7269/comments | https://api.github.com/repos/huggingface/datasets/issues/7269/events | https://github.com/huggingface/datasets/issues/7269 | 2,626,873,843 | I_kwDODunzps6ckunz | 7,269 | Memory leak when streaming | {
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jourdelune",
"id": 64205064,
"login": "Jourdelune",
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jourdelune",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I seem to have encountered the same problem when loading non streaming datasets. load_from_disk. Causing hundreds of GB of memory, but the dataset actually only has 50GB",
"FYI when streaming parquet data, only one row group per worker is loaded in memory at a time.\r\n\r\nBtw for datasets of embeddings you can surely optimize your RAM by reading the data as torch tensors directly instead of the default python lists\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"WaveGenAI/dataset\", streaming=True).with_format(\"torch\")\r\n\r\ndataloader = DataLoader(dataset[\"train\"], num_workers=3)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,731 | null | NONE | null | ### Describe the bug
I am trying to use a dataset with `streaming=True`; the issue I have is that the RAM usage grows higher and higher until it is no longer sustainable.
I understand that Hugging Face stores data in RAM while streaming, and that the more dataloader workers there are, the more shards are held in RAM at once, but the issue is that the RAM usage is not constant: after each new shard is loaded, the usage ends up higher than before.
### Steps to reproduce the bug
You can run this code and watch your RAM usage: after each shard of 255 examples, the usage increases further.
```py
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("WaveGenAI/dataset", streaming=True)
dataloader = DataLoader(dataset["train"], num_workers=3)
for i, data in enumerate(dataloader):
    print(i, end="\r")
```
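A sketch of one way to quantify the growth, assuming `psutil` is available; note that it only tracks the parent process, so the dataloader workers have to be watched separately (e.g. with `htop`):
```python
import os

import psutil
from datasets import load_dataset
from torch.utils.data import DataLoader

process = psutil.Process(os.getpid())
dataset = load_dataset("WaveGenAI/dataset", streaming=True)
dataloader = DataLoader(dataset["train"], num_workers=3)

for i, data in enumerate(dataloader):
    if i % 1000 == 0:
        # Print the resident set size of the parent process in MiB.
        print(i, process.memory_info().rss // 1024**2, "MiB")
```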
### Expected behavior
The RAM usage should stay roughly constant (just 3 shards loaded in RAM at a time).
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7269/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7268/comments | https://api.github.com/repos/huggingface/datasets/issues/7268/events | https://github.com/huggingface/datasets/issues/7268 | 2,626,664,687 | I_kwDODunzps6cj7jv | 7,268 | load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/71670961?v=4",
"events_url": "https://api.github.com/users/ghaith-mq/events{/privacy}",
"followers_url": "https://api.github.com/users/ghaith-mq/followers",
"following_url": "https://api.github.com/users/ghaith-mq/following{/other_user}",
"gists_url": "https://api.github.com/users/ghaith-mq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghaith-mq",
"id": 71670961,
"login": "ghaith-mq",
"node_id": "MDQ6VXNlcjcxNjcwOTYx",
"organizations_url": "https://api.github.com/users/ghaith-mq/orgs",
"received_events_url": "https://api.github.com/users/ghaith-mq/received_events",
"repos_url": "https://api.github.com/users/ghaith-mq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghaith-mq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghaith-mq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghaith-mq",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hello, It's an interesting issue here. I have the same problem, I have a local dataset and I want to push the dataset to the hub but huggingface does a copy of it.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"webdataset\", data_files=\"/media/works/data/*.tar\") # copy here\r\ndataset.push_to_hub(\"WaveGenAI/audios2\")\r\n```\r\n\r\nEdit: I can use HfApi for my use case\r\n"
] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
I have data saved with `save_to_disk`. The data is big (700 GB). When I try loading it, the only option is `load_from_disk`, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?
### Steps to reproduce the bug
Try to load data using `load_from_disk` after it was saved using `save_to_disk`.
### Expected behavior
The data should load without being copied to a tmp directory; instead I run out of disk space.
### Environment info
latest version
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7268/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7267/comments | https://api.github.com/repos/huggingface/datasets/issues/7267/events | https://github.com/huggingface/datasets/issues/7267 | 2,626,490,029 | I_kwDODunzps6cjQ6t | 7,267 | Source installation fails on Macintosh with python 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4",
"events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}",
"followers_url": "https://api.github.com/users/mayankagarwals/followers",
"following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}",
"gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mayankagarwals",
"id": 39498938,
"login": "mayankagarwals",
"node_id": "MDQ6VXNlcjM5NDk4OTM4",
"organizations_url": "https://api.github.com/users/mayankagarwals/orgs",
"received_events_url": "https://api.github.com/users/mayankagarwals/received_events",
"repos_url": "https://api.github.com/users/mayankagarwals/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mayankagarwals",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I encountered the same problem on M1, a workaround I did was to simply comment out the dependency:\r\n\r\n```python\r\n...\r\n \"zstandard\",\r\n \"polars[timezone]>=0.20.0\",\r\n # \"decord==0.6.0\",\r\n]\r\n```\r\n\r\nThis worked for me as the adjustments I did to the code do not use the dependency, but I do not know if the same holds for you.\r\n\r\nI also do not think it is a good idea to rely on a dependency (I mean decord) that has not been maintained for 2 years, but I saw that even eva-decord hasn't been maintained since last year.\r\n\r\nDid you get it to work with eva-decord?"
] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
Hi,
Decord is a dev dependency that has not been maintained for a couple of years.
It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs.
The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem.
Happy to raise a PR
### Steps to reproduce the bug
Source installation as mentioned in CONTRIBUTING.md
### Expected behavior
Installation should succeed, without decord failing to install.
### Environment info
python=3.10, M3 Mac | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7267/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7266/comments | https://api.github.com/repos/huggingface/datasets/issues/7266/events | https://github.com/huggingface/datasets/issues/7266 | 2,624,666,087 | I_kwDODunzps6ccTnn | 7,266 | The dataset viewer should be available soon. Please retry later. | {
"avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4",
"events_url": "https://api.github.com/users/viiika/events{/privacy}",
"followers_url": "https://api.github.com/users/viiika/followers",
"following_url": "https://api.github.com/users/viiika/following{/other_user}",
"gists_url": "https://api.github.com/users/viiika/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/viiika",
"id": 39821659,
"login": "viiika",
"node_id": "MDQ6VXNlcjM5ODIxNjU5",
"organizations_url": "https://api.github.com/users/viiika/orgs",
"received_events_url": "https://api.github.com/users/viiika/received_events",
"repos_url": "https://api.github.com/users/viiika/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viiika/subscriptions",
"type": "User",
"url": "https://api.github.com/users/viiika",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Waiting is all you need. 10 hours later, it works."
] | 1970-01-01T00:00:00.000001 | 1,730 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
After waiting for 2 hours, it still presents "The dataset viewer should be available soon. Please retry later."
### Steps to reproduce the bug
dataset link: https://huggingface.co./datasets/BryanW/HI_EDIT
### Expected behavior
Present the dataset viewer.
### Environment info
NA | {
"avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4",
"events_url": "https://api.github.com/users/viiika/events{/privacy}",
"followers_url": "https://api.github.com/users/viiika/followers",
"following_url": "https://api.github.com/users/viiika/following{/other_user}",
"gists_url": "https://api.github.com/users/viiika/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/viiika",
"id": 39821659,
"login": "viiika",
"node_id": "MDQ6VXNlcjM5ODIxNjU5",
"organizations_url": "https://api.github.com/users/viiika/orgs",
"received_events_url": "https://api.github.com/users/viiika/received_events",
"repos_url": "https://api.github.com/users/viiika/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viiika/subscriptions",
"type": "User",
"url": "https://api.github.com/users/viiika",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7266/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7261/comments | https://api.github.com/repos/huggingface/datasets/issues/7261/events | https://github.com/huggingface/datasets/issues/7261 | 2,620,510,840 | I_kwDODunzps6cMdJ4 | 7,261 | Cannot load the cache when mapping the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43033959?v=4",
"events_url": "https://api.github.com/users/zhangn77/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangn77/followers",
"following_url": "https://api.github.com/users/zhangn77/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangn77/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangn77",
"id": 43033959,
"login": "zhangn77",
"node_id": "MDQ6VXNlcjQzMDMzOTU5",
"organizations_url": "https://api.github.com/users/zhangn77/orgs",
"received_events_url": "https://api.github.com/users/zhangn77/received_events",
"repos_url": "https://api.github.com/users/zhangn77/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangn77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangn77/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangn77",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
I'm training the Flux ControlNet. The `train_dataset.map()` call takes a long time to finish. However, when I kill one training process and want to restart a new training run with the same dataset, I can't reuse the mapped result, even though I defined the cache dir for the dataset.
```python
with accelerator.main_process_first():
    from datasets.fingerprint import Hasher

    # fingerprint used by the cache for the other processes to load the result
    # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401
    new_fingerprint = Hasher.hash(args)
    train_dataset = train_dataset.map(
        compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint, batch_size=10,
    )
```
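A sketch of a possible workaround, assuming a fixed and writable cache path can be chosen, is to pin the cache file explicitly so a restarted run finds the same Arrow file regardless of fingerprinting:
```python
train_dataset = train_dataset.map(
    compute_embeddings_fn,
    batched=True,
    batch_size=10,
    new_fingerprint=new_fingerprint,
    cache_file_name="/path/to/cache/embeddings.arrow",  # assumption: stable, writable location
    load_from_cache_file=True,
)
```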
### Steps to reproduce the bug
Train the Flux ControlNet, kill the process, and start the training again with the same dataset.
### Expected behavior
The dataset should not be mapped again; the cached result should be reused.
### Environment info
latest diffusers
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7261/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7260/comments | https://api.github.com/repos/huggingface/datasets/issues/7260/events | https://github.com/huggingface/datasets/issues/7260 | 2,620,014,285 | I_kwDODunzps6cKj7N | 7,260 | cache can't cleaned or disabled | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007828?v=4",
"events_url": "https://api.github.com/users/charliedream1/events{/privacy}",
"followers_url": "https://api.github.com/users/charliedream1/followers",
"following_url": "https://api.github.com/users/charliedream1/following{/other_user}",
"gists_url": "https://api.github.com/users/charliedream1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/charliedream1",
"id": 15007828,
"login": "charliedream1",
"node_id": "MDQ6VXNlcjE1MDA3ODI4",
"organizations_url": "https://api.github.com/users/charliedream1/orgs",
"received_events_url": "https://api.github.com/users/charliedream1/received_events",
"repos_url": "https://api.github.com/users/charliedream1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/charliedream1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliedream1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/charliedream1",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,730 | null | NONE | null | ### Describe the bug
I tried the following ways, but the cache can't be disabled.
I have 2 TB of data, but I also get more than 2 TB of cache files, which puts pressure on storage. I need to disable the cache, or clean it immediately after processing. None of the following ways work; please give some help!
```python
from datasets import disable_caching, load_dataset
from transformers import AutoTokenizer
disable_caching()
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)
def tokenization_fn(examples):
    column_name = 'text' if 'text' in examples else 'data'
    tokenized_inputs = tokenizer(
        examples[column_name], return_special_tokens_mask=True, truncation=False,
        max_length=tokenizer.model_max_length
    )
    return tokenized_inputs
data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None)
data.cleanup_cache_files()
updated_dataset = data.map(tokenization_fn, load_from_cache_file=False)
updated_dataset.cleanup_cache_files()
```
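A sketch of an alternative that avoids materializing Arrow cache files altogether, assuming the downstream code can consume an iterable dataset, is to stream and map lazily (continuing the snippet above):
```python
data = load_dataset("json", data_files=save_local_path, split="train", streaming=True)
updated_dataset = data.map(tokenization_fn)  # lazy: applied on the fly, no cache files written
```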
### Expected behavior
No cache files should be generated.
### Environment info
Ubuntu 20.04.6 LTS
datasets 3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7260/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7254/comments | https://api.github.com/repos/huggingface/datasets/issues/7254/events | https://github.com/huggingface/datasets/issues/7254 | 2,616,174,996 | I_kwDODunzps6b76mU | 7,254 | mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy") | {
"avatar_url": "https://avatars.githubusercontent.com/u/97193607?v=4",
"events_url": "https://api.github.com/users/Akhil-CM/events{/privacy}",
"followers_url": "https://api.github.com/users/Akhil-CM/followers",
"following_url": "https://api.github.com/users/Akhil-CM/following{/other_user}",
"gists_url": "https://api.github.com/users/Akhil-CM/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Akhil-CM",
"id": 97193607,
"login": "Akhil-CM",
"node_id": "U_kgDOBcsOhw",
"organizations_url": "https://api.github.com/users/Akhil-CM/orgs",
"received_events_url": "https://api.github.com/users/Akhil-CM/received_events",
"repos_url": "https://api.github.com/users/Akhil-CM/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Akhil-CM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Akhil-CM/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Akhil-CM",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It seems that https://github.com/huggingface/datasets/issues/5517 is exactly the same issue.\r\n\r\nIt was mentioned there that this would be fixed in version 3.x"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
If the user provides a `Features` value to `datasets.Dataset` with members having `Array2D` and a user-specified `dtype`, it is not respected by `with_format("numpy")`, which should return a `np.array` with the `dtype` the user provided for `Array2D`. It seems that floats are set to `float32` and ints to `int64` instead.
### Steps to reproduce the bug
```python
import numpy as np
import datasets
from datasets import Dataset, Features, Array2D
print(f"datasets version: {datasets.__version__}")
data_info = {
"arr_float" : "float64",
"arr_int" : "int32"
}
sample = {key : [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()}
features = {key : Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()}
features = Features(features)
dataset = Dataset.from_dict(sample, features=features)
ds = dataset.with_format("numpy")
for key in features:
    print(f"{key} feature dtype: ", ds.features[key].dtype)
    print(f"{key} dtype:", ds[key].dtype)
```
Output:
```bash
datasets version: 3.0.2
arr_float feature dtype: float64
arr_float dtype: float32
arr_int feature dtype: int32
arr_int dtype: int64
```
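A sketch of a possible interim workaround is to cast back after retrieval, at the cost of an extra copy:
```python
# Cast the formatted numpy arrays back to the dtypes declared in the Features.
arr_float = ds["arr_float"].astype(np.float64)
arr_int = ds["arr_int"].astype(np.int32)
```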
### Expected behavior
It should return a `np.array` with the `dtype` that the user provided for the corresponding member of the `Features` value.
### Environment info
- `datasets` version: 3.0.2
- Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7254/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7253/comments | https://api.github.com/repos/huggingface/datasets/issues/7253/events | https://github.com/huggingface/datasets/issues/7253 | 2,615,862,202 | I_kwDODunzps6b6uO6 | 7,253 | Unable to upload a large dataset zip either from command line or UI | {
"avatar_url": "https://avatars.githubusercontent.com/u/159609047?v=4",
"events_url": "https://api.github.com/users/vakyansh/events{/privacy}",
"followers_url": "https://api.github.com/users/vakyansh/followers",
"following_url": "https://api.github.com/users/vakyansh/following{/other_user}",
"gists_url": "https://api.github.com/users/vakyansh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vakyansh",
"id": 159609047,
"login": "vakyansh",
"node_id": "U_kgDOCYNw1w",
"organizations_url": "https://api.github.com/users/vakyansh/orgs",
"received_events_url": "https://api.github.com/users/vakyansh/received_events",
"repos_url": "https://api.github.com/users/vakyansh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vakyansh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vakyansh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vakyansh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
Unable to upload a large dataset archive either from the command line or the UI. The UI simply says "Error". I am trying to upload a tar.gz file of 17 GB.
<img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34">
<img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa">
### Steps to reproduce the bug
Upload a large file
### Expected behavior
The file should upload without any issue.
### Environment info
None | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7253/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7249/comments | https://api.github.com/repos/huggingface/datasets/issues/7249/events | https://github.com/huggingface/datasets/issues/7249 | 2,610,136,636 | I_kwDODunzps6bk4Y8 | 7,249 | How to debugging | {
"avatar_url": "https://avatars.githubusercontent.com/u/49576595?v=4",
"events_url": "https://api.github.com/users/ShDdu/events{/privacy}",
"followers_url": "https://api.github.com/users/ShDdu/followers",
"following_url": "https://api.github.com/users/ShDdu/following{/other_user}",
"gists_url": "https://api.github.com/users/ShDdu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShDdu",
"id": 49576595,
"login": "ShDdu",
"node_id": "MDQ6VXNlcjQ5NTc2NTk1",
"organizations_url": "https://api.github.com/users/ShDdu/orgs",
"received_events_url": "https://api.github.com/users/ShDdu/received_events",
"repos_url": "https://api.github.com/users/ShDdu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShDdu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShDdu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShDdu",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
I wanted to use my own script to handle the processing, and followed the tutorial documentation by writing the MyDatasetConfig and MyDataset builder classes (the latter contains the `_info`, `_split_generators` and `_generate_examples` methods). Testing with simple data I was able to output the results of the processing, but when I wanted to do more complex processing I found that I was unable to debug (even the simple samples were inaccessible). No errors are reported, and I can see the prints from `_info`, `_split_generators` and `_generate_examples`, but the breakpoints are never hit.
### Steps to reproduce the bug
# my_dataset.py
import json
import datasets
class MyDatasetConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super(MyDatasetConfig, self).__init__(**kwargs)
class MyDataset(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
BUILDER_CONFIGS = [
MyDatasetConfig(
name="default",
version=VERSION,
description="myDATASET"
),
]
def _info(self):
print("info") # breakpoints
return datasets.DatasetInfo(
description="myDATASET",
features=datasets.Features(
{
"id": datasets.Value("int32"),
"text": datasets.Value("string"),
"label": datasets.ClassLabel(names=["negative", "positive"]),
}
),
supervised_keys=("text", "label"),
)
def _split_generators(self, dl_manager):
print("generate") # breakpoints
data_file = "data.json"
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
),
]
def _generate_examples(self, filepath):
print("example") # breakpoints
with open(filepath, encoding="utf-8") as f:
data = json.load(f)
for idx, sample in enumerate(data):
yield idx, {
"id": sample["id"],
"text": sample["text"],
"label": sample["label"],
}
```

```python
# main.py
import os
os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"
from datasets import load_dataset
dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```
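As a generic fallback (a sketch, not a datasets-specific API): running the loading step under the standard-library debugger pauses execution even when IDE breakpoints do not bind, which can happen when the script file is copied to a modules cache before being imported.

```python
# debug_main.py - a sketch: same call as main.py above, but run under pdb so
# you can step into _info / _split_generators / _generate_examples manually.
import pdb

from datasets import load_dataset

if __name__ == "__main__":
    pdb.run('load_dataset("my_dataset.py", split="train", cache_dir=None)')
```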
### Expected behavior
Execution should pause at breakpoints while debugging.
### Environment info
pycharm
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7249/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7248/comments | https://api.github.com/repos/huggingface/datasets/issues/7248/events | https://github.com/huggingface/datasets/issues/7248 | 2,609,926,089 | I_kwDODunzps6bkE_J | 7,248 | ModuleNotFoundError: No module named 'datasets.tasks' | {
"avatar_url": "https://avatars.githubusercontent.com/u/93593941?v=4",
"events_url": "https://api.github.com/users/shoowadoo/events{/privacy}",
"followers_url": "https://api.github.com/users/shoowadoo/followers",
"following_url": "https://api.github.com/users/shoowadoo/following{/other_user}",
"gists_url": "https://api.github.com/users/shoowadoo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shoowadoo",
"id": 93593941,
"login": "shoowadoo",
"node_id": "U_kgDOBZQhVQ",
"organizations_url": "https://api.github.com/users/shoowadoo/orgs",
"received_events_url": "https://api.github.com/users/shoowadoo/received_events",
"repos_url": "https://api.github.com/users/shoowadoo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shoowadoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoowadoo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shoowadoo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"tasks was removed in v3: #6999 \r\n\r\nI also don't see why TextClassification is imported, since it's not used after. So the fix is simple: delete this line.",
"I opened https://huggingface.co./datasets/knowledgator/events_classification_biotech/discussions/7 to remove the line, hopefully the dataset owner will merge it soon"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[<ipython-input-9-13b5f31bd391>](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in <cell line: 1>()
----> 1 dataset = load_dataset('knowledgator/events_classification_biotech')
11 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2130
2131 # Create a dataset builder
-> 2132 builder_instance = load_dataset_builder(
2133 path=path,
2134 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
1886 raise ValueError(error_msg)
1887
-> 1888 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
1889 # Instantiate the dataset builder
1890 builder_instance: DatasetBuilder = builder_cls(
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in get_dataset_builder_class(dataset_module, dataset_name)
246 dataset_module.importable_file_path
247 ) if dataset_module.importable_file_path else nullcontext():
--> 248 builder_cls = import_main_class(dataset_module.module_path)
249 if dataset_module.builder_configs_parameters.builder_configs:
250 dataset_name = dataset_name or dataset_module.builder_kwargs.get("dataset_name")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in import_main_class(module_path)
167 def import_main_class(module_path) -> Optional[Type[DatasetBuilder]]:
168 """Import a module at module_path and return its main class: a DatasetBuilder"""
--> 169 module = importlib.import_module(module_path)
170 # Find the main class in our imported module
171 module_main_cls = None
[/usr/lib/python3.10/importlib/__init__.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
[~/.cache/huggingface/modules/datasets_modules/datasets/knowledgator--events_classification_biotech/9c8086d498c3104de3a3c5b6640837e18ccd829dcaca49f1cdffe3eb5c4a6361/events_classification_biotech.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in <module>
1 import datasets
2 from datasets import load_dataset
----> 3 from datasets.tasks import TextClassification
4
5 DESCRIPTION = """
ModuleNotFoundError: No module named 'datasets.tasks'
### Steps to reproduce the bug
!pip install datasets
from datasets import load_dataset
dataset = load_dataset('knowledgator/events_classification_biotech')
### Expected behavior
no ModuleNotFoundError
### Environment info
google colab | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7248/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7247/comments | https://api.github.com/repos/huggingface/datasets/issues/7247/events | https://github.com/huggingface/datasets/issues/7247 | 2,606,230,029 | I_kwDODunzps6bV-oN | 7,247 | Adding column with dict structure when mapping leads to wrong order | {
"avatar_url": "https://avatars.githubusercontent.com/u/114604968?v=4",
"events_url": "https://api.github.com/users/chchch0109/events{/privacy}",
"followers_url": "https://api.github.com/users/chchch0109/followers",
"following_url": "https://api.github.com/users/chchch0109/following{/other_user}",
"gists_url": "https://api.github.com/users/chchch0109/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chchch0109",
"id": 114604968,
"login": "chchch0109",
"node_id": "U_kgDOBtS7qA",
"organizations_url": "https://api.github.com/users/chchch0109/orgs",
"received_events_url": "https://api.github.com/users/chchch0109/received_events",
"repos_url": "https://api.github.com/users/chchch0109/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chchch0109/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chchch0109/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chchch0109",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
In the `map()` function, I want to add a new column with a dict structure.
```python
def map_fn(example):
example['text'] = {'user': ..., 'assistant': ...}
return example
```
However, this results in the keys being stored in a different order, `{'assistant': ..., 'user': ...}`, in the dataset.
Thus I can't concatenate two datasets due to the different feature structures.
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example.
This seems to be an issue in the low-level pyarrow library rather than in datasets itself; still, I think datasets should allow concatenating two datasets that effectively have the same structure.
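For convenience, a self-contained sketch of the same problem (the toy data is made up; only the key insertion order differs between the two `map` calls):

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"id": [1, 2]})
ds_b = Dataset.from_dict({"id": [3, 4]})

# Same keys, but inserted in a different order in each map function.
ds_a = ds_a.map(lambda ex: {"text": {"user": "hi", "assistant": "hello"}})
ds_b = ds_b.map(lambda ex: {"text": {"assistant": "hello", "user": "hi"}})

print(ds_a.features["text"])  # struct field order follows insertion order
print(ds_b.features["text"])

# Raises an error because the struct field order differs, even though both
# schemas describe the same logical structure.
concatenate_datasets([ds_a, ds_b])
```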
### Steps to reproduce the bug
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example
### Expected behavior
two datasets could be concatenated.
### Environment info
N/A | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7247/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7243/comments | https://api.github.com/repos/huggingface/datasets/issues/7243/events | https://github.com/huggingface/datasets/issues/7243 | 2,602,853,172 | I_kwDODunzps6bJGM0 | 7,243 | ArrayXD with None as leading dim incompatible with DatasetCardData | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"It looks like `CardData` in `huggingface_hub` removes None values where it shouldn't. Indeed it calls `_remove_none` on the return of `to_dict()`:\r\n\r\n```python\r\n def to_dict(self) -> Dict[str, Any]:\r\n \"\"\"Converts CardData to a dict.\r\n\r\n Returns:\r\n `dict`: CardData represented as a dictionary ready to be dumped to a YAML\r\n block for inclusion in a README.md file.\r\n \"\"\"\r\n\r\n data_dict = copy.deepcopy(self.__dict__)\r\n self._to_dict(data_dict)\r\n return _remove_none(data_dict)\r\n```\r\n\r\nWould it be ok to remove `list()` from being scanned in `_remove_none` ? it could also be a specific behavior to DatasetCardData if necessary @Wauplin ",
"I have actually no idea why none values are removed in model and dataset card data... :see_no_evil:\r\nLooks like `_remove_none` has been introduced at the same time as the entire repocard module (see https://github.com/huggingface/huggingface_hub/pull/940). I would be tempted to remove `_remove_none` entirely actually and only remove \"top-level\" None values (i.e. if something like `pipeline_tag=None` due to a default value in kwargs => we remove it). Hard to tell what could be the side effects but I'm not against trying.\r\n\r\n\r\nHowever, I'm not really in favor in making an exception only for lists. It would mean that tuples, sets and dicts are filtered but not lists, which is pretty inconsistent.",
"let's do it for top level attributes yes",
"I opened https://github.com/huggingface/huggingface_hub/pull/2626 to address it :)",
"thanks !"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | CONTRIBUTOR | null | ### Describe the bug
Creating a dataset with ArrayXD features that have `None` as the leading dimension leads to errors when downloading it from the Hub, because `DatasetCardData` removes the `None`s from the feature shapes.
@lhoestq
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset
def examples_generator():
for i in range(4):
yield {
"array_1d": np.zeros((10,1), dtype="uint16"),
"array_2d": np.zeros((10, 1), dtype="uint16"),
}
features = Features(array_1d=Array2D((None,1), "uint16"), array_2d=Array2D((None, 1), "uint16"))
dataset = Dataset.from_generator(examples_generator, features=features)
dataset.push_to_hub("alex-hh/test_array_1d2d")
ds = load_dataset("alex-hh/test_array_1d2d")
```
Source of error appears to be DatasetCardData.to_dict invoking DatasetCardData._remove_none
```python
from huggingface_hub import DatasetCardData
from datasets.info import DatasetInfosDict
dataset_card_data = DatasetCardData()
DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data)
print(dataset_card_data.to_dict()) # removes Nones in shape
```
### Expected behavior
Should be possible to load datasets saved with shape None in leading dimension
### Environment info
3.0.2 and latest huggingface_hub | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7243/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7241/comments | https://api.github.com/repos/huggingface/datasets/issues/7241/events | https://github.com/huggingface/datasets/issues/7241 | 2,599,899,156 | I_kwDODunzps6a91AU | 7,241 | `push_to_hub` overwrite argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ceferisbarov",
"id": 60838378,
"login": "ceferisbarov",
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ceferisbarov",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?",
"Hi! I meant the latter.",
"I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?",
"What is the goal exactly of deleting all the git history without deleting the repo? ",
"You can use [`super_squash_commit`](https://huggingface.co./docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.super_squash_history) to squash all the commits into a single one, hence deleting the git history. This is not exactly what you asked for since it squashes the commits for a specific revision (example: \"all commits on main\"). This means that if other branches exists, they are kept the same. Also if some PRs are already opened on the repo, they will become unmergeable since the commits will have diverted.",
"So the solution is:\r\n\r\n```python\r\nfrom huggingface_hub import HfApi\r\nrepo_id = \"username/dataset_name\"\r\nds.push_to_hub(repo_id)\r\nHfApi().super_squash_commit(repo_id)\r\n```\r\n\r\nThis way you erase previous git history to end up with only 1 commit containing your dataset.\r\nStill, I'd be curious why it's important in your case. Is it to save storage space ? or to disallow loading old versions of the data ?",
"Thanks, everyone! I am building a new dataset and playing around with column names, splits, etc. Sometimes I push to the hub to share it with other teammates, I don't want those variations to be part of the repo. Deleting the repo from the website takes a little time, but it also loses repo settings that I have set, since I always set it to public with manually approved requests.\r\n\r\nBTW, I had to write `HfApi().super_squash_history(repo_id, repo_type=\"dataset\")`, but otherwise it works.",
"@ceferisbarov just to let you know, recreating a gated repo + granting access to your teammates is something that you can automate with something like this (not fully tested but should work):\r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.delete_repo(repo_id, repo_type=\"dataset\", missing_ok=True)\r\napi.create_repo(repo_id, repo_type=\"dataset\", private=False)\r\napi.update_repo_settings(repo_id, repo_type=\"dataset\", gated=\"manual\")\r\nfor user in [\"user1\", \"user2\"] # list of teammates\r\n api.grant_access(repo_id, user, repo_type=\"dataset\")\r\n```\r\n\r\nI think it'd be a better solution than squashing commits (which is more of a hack), typically if you are using the dataset viewer.",
"This is great, @Wauplin. If we can achieve this with HfApi, then we probably don't need to add another parameter to push_to_hub. I am closing the issue."
] | 1970-01-01T00:00:00.000001 | 1,729 | 1970-01-01T00:00:00.000001 | NONE | null | ### Feature request
Add an `overwrite` argument to the `push_to_hub` method.
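From the caller's side, the requested behaviour might look roughly like this (the `overwrite` parameter is the hypothetical addition; the repo id is a placeholder):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# Hypothetical `overwrite` flag: recreate the repo content from scratch
# instead of layering a new commit on top of the existing git history.
ds.push_to_hub("username/dataset_name", overwrite=True)
```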
### Motivation
I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials.
### Your contribution
I can create a PR. | {
"avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4",
"events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}",
"followers_url": "https://api.github.com/users/ceferisbarov/followers",
"following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}",
"gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ceferisbarov",
"id": 60838378,
"login": "ceferisbarov",
"node_id": "MDQ6VXNlcjYwODM4Mzc4",
"organizations_url": "https://api.github.com/users/ceferisbarov/orgs",
"received_events_url": "https://api.github.com/users/ceferisbarov/received_events",
"repos_url": "https://api.github.com/users/ceferisbarov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ceferisbarov",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7241/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7238/comments | https://api.github.com/repos/huggingface/datasets/issues/7238/events | https://github.com/huggingface/datasets/issues/7238 | 2,598,409,993 | I_kwDODunzps6a4JcJ | 7,238 | Incompatibility issue when using load_dataset with datasets==3.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/74985234?v=4",
"events_url": "https://api.github.com/users/jupiterMJM/events{/privacy}",
"followers_url": "https://api.github.com/users/jupiterMJM/followers",
"following_url": "https://api.github.com/users/jupiterMJM/following{/other_user}",
"gists_url": "https://api.github.com/users/jupiterMJM/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jupiterMJM",
"id": 74985234,
"login": "jupiterMJM",
"node_id": "MDQ6VXNlcjc0OTg1MjM0",
"organizations_url": "https://api.github.com/users/jupiterMJM/orgs",
"received_events_url": "https://api.github.com/users/jupiterMJM/received_events",
"repos_url": "https://api.github.com/users/jupiterMJM/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jupiterMJM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jupiterMJM/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jupiterMJM",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
There is a bug when using `load_dataset` with `datasets` version 3.0.1.
Please see the "Steps to reproduce the bug" section below.
To resolve the bug, I had to downgrade to version 2.21.0.
OS: Ubuntu 24 (AWS instance)
Python: same bug under 3.12 and 3.10
The error I had was:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/load.py", line 2096, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
super()._download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 977, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_0/cb17afd34f5799f97e8f48398748f83006335b702bd785f9880797838d541b81/common_voice_6_0.py", line 159, in _split_generators
archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_manager.py", line 150, in download
download_config = self.download_config.copy()
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_config.py", line 73, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params'
### Steps to reproduce the bug
1. install datasets with ```pip install datasets --upgrade```
2. launch python; from datasets import load_dataset
3. run load_dataset("mozilla-foundation/common_voice_6_0")
4. exit python
5. uninstall datasets; then ```pip install datasets==2.21.0```
6. launch python; from datasets import load_dataset
7. run load_dataset("mozilla-foundation/common_voice_6_0")
8. Everything runs great now (the same steps are condensed into a single snippet below)
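The same reproduction condensed into one snippet (this mirrors the steps above; it assumes access to the gated Common Voice dataset has already been granted):

```python
import datasets
from datasets import load_dataset

print(datasets.__version__)  # fails on 3.0.1, works after downgrading to 2.21.0

# On 3.0.1 this raises:
#   TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params'
load_dataset("mozilla-foundation/common_voice_6_0")
```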
### Expected behavior
Be able to download a dataset without error
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7238/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7233/comments | https://api.github.com/repos/huggingface/datasets/issues/7233/events | https://github.com/huggingface/datasets/issues/7233 | 2,593,903,113 | I_kwDODunzps6am9IJ | 7,233 | Problem with the number of dataset samples | {
"avatar_url": "https://avatars.githubusercontent.com/u/180297268?v=4",
"events_url": "https://api.github.com/users/want-well/events{/privacy}",
"followers_url": "https://api.github.com/users/want-well/followers",
"following_url": "https://api.github.com/users/want-well/following{/other_user}",
"gists_url": "https://api.github.com/users/want-well/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/want-well",
"id": 180297268,
"login": "want-well",
"node_id": "U_kgDOCr8eNA",
"organizations_url": "https://api.github.com/users/want-well/orgs",
"received_events_url": "https://api.github.com/users/want-well/received_events",
"repos_url": "https://api.github.com/users/want-well/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/want-well/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/want-well/subscriptions",
"type": "User",
"url": "https://api.github.com/users/want-well",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
I am fine-tuning a large model here. When the dataset contains 718 samples, fine-tuning works fine, but as soon as I add one more sample, whether it is a duplicate of one of the first 718 samples or a completely new one, an error is raised.
### Steps to reproduce the bug
1.
The last two samples of the dataset that fine-tunes successfully are:
```json
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
```
2.
When I append a copy of the second-to-last sample to the dataset, the last three samples become:
```json
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "为什么闭环带宽要求会影响串联超前校正的效果?"
},
{
"role": "assistant",
"content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。"
}
]
}
{
"messages": [
{
"role": "user",
"content": "完成校正装置设计后需要进行哪些工作?"
},
{
"role": "assistant",
"content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。"
}
]
}
```
At this point the system reports the following error:
root@autodl-container-027f4cad3d-6baf4e64:~/autodl-tmp# python GLM-4/finetune_demo/finetune.py datasets/ ZhipuAI/glm-4-9b-chat GLM-4/finetune_demo/configs/lora.yaml
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.04it/s]
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
trainable params: 2,785,280 || all params: 9,402,736,640 || trainable%: 0.0296
Generating train split: 0 examples [00:00, ? examples/s]Failed to load JSON from file '/root/autodl-tmp/datasets/train.jsonl' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 718
Generating train split: 0 examples [00:00, ? examples/s]
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:153 in _generate_tables │
│ │
│ 150 │ │ │ │ │ │ │ │ with open( │
│ 151 │ │ │ │ │ │ │ │ │ file, encoding=self.config.encoding, errors=self.con │
│ 152 │ │ │ │ │ │ │ │ ) as f: │
│ ❱ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ 156 │ │ │ │ │ │ │ │ raise e │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:815 in read_json │
│ │
│ 812 │ if chunksize: │
│ 813 │ │ return json_reader │
│ 814 │ else: │
│ ❱ 815 │ │ return json_reader.read() │
│ 816 │
│ 817 │
│ 818 class JsonReader(abc.Iterator, Generic[FrameSeriesStrT]): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1025 in read │
│ │
│ 1022 │ │ │ │ │ │ data_lines = data.split("\n") │
│ 1023 │ │ │ │ │ │ obj = self._get_object_parser(self._combine_lines(data_lines)) │
│ 1024 │ │ │ │ else: │
│ ❱ 1025 │ │ │ │ │ obj = self._get_object_parser(self.data) │
│ 1026 │ │ │ │ if self.dtype_backend is not lib.no_default: │
│ 1027 │ │ │ │ │ return obj.convert_dtypes( │
│ 1028 │ │ │ │ │ │ infer_objects=False, dtype_backend=self.dtype_backend │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1051 in _get_object_parser │
│ │
│ 1048 │ │ } │
│ 1049 │ │ obj = None │
│ 1050 │ │ if typ == "frame": │
│ ❱ 1051 │ │ │ obj = FrameParser(json, **kwargs).parse() │
│ 1052 │ │ │
│ 1053 │ │ if typ == "series" or obj is None: │
│ 1054 │ │ │ if not isinstance(dtype, bool): │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1187 in parse │
│ │
│ 1184 │ │
│ 1185 │ @final │
│ 1186 │ def parse(self): │
│ ❱ 1187 │ │ self._parse() │
│ 1188 │ │ │
│ 1189 │ │ if self.obj is None: │
│ 1190 │ │ │ return None │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1403 in _parse │
│ │
│ 1400 │ │ │
│ 1401 │ │ if orient == "columns": │
│ 1402 │ │ │ self.obj = DataFrame( │
│ ❱ 1403 │ │ │ │ ujson_loads(json, precise_float=self.precise_float), dtype=None │
│ 1404 │ │ │ ) │
│ 1405 │ │ elif orient == "split": │
│ 1406 │ │ │ decoded = { │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Trailing data
During handling of the above exception, another exception occurred:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1997 in _prepare_split_single │
│ │
│ 1994 │ │ │ ) │
│ 1995 │ │ │ try: │
│ 1996 │ │ │ │ _time = time.time() │
│ ❱ 1997 │ │ │ │ for _, table in generator: │
│ 1998 │ │ │ │ │ if max_shard_size is not None and writer._num_bytes > max_shard_size │
│ 1999 │ │ │ │ │ │ num_examples, num_bytes = writer.finalize() │
│ 2000 │ │ │ │ │ │ writer.close() │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:156 in _generate_tables │
│ │
│ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │
│ 154 │ │ │ │ │ │ │ except ValueError: │
│ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │
│ ❱ 156 │ │ │ │ │ │ │ │ raise e │
│ 157 │ │ │ │ │ │ │ if df.columns.tolist() == [0]: │
│ 158 │ │ │ │ │ │ │ │ df.columns = list(self.config.features) if self.config.f │
│ 159 │ │ │ │ │ │ │ try: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:130 in _generate_tables │
│ │
│ 127 │ │ │ │ │ │ try: │
│ 128 │ │ │ │ │ │ │ while True: │
│ 129 │ │ │ │ │ │ │ │ try: │
│ ❱ 130 │ │ │ │ │ │ │ │ │ pa_table = paj.read_json( │
│ 131 │ │ │ │ │ │ │ │ │ │ io.BytesIO(batch), read_options=paj.ReadOptions( │
│ 132 │ │ │ │ │ │ │ │ │ ) │
│ 133 │ │ │ │ │ │ │ │ │ break │
│ │
│ in pyarrow._json.read_json:308 │
│ │
│ in pyarrow.lib.pyarrow_internal_check_status:154 │
│ │
│ in pyarrow.lib.check_status:91 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ArrowInvalid: JSON parse error: Missing a name for object member. in row 718
The above exception was the direct cause of the following exception:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:406 in main │
│ │
│ 403 ): │
│ 404 │ ft_config = FinetuningConfig.from_file(config_file) │
│ 405 │ tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_co │
│ ❱ 406 │ data_manager = DataManager(data_dir, ft_config.data_config) │
│ 407 │ │
│ 408 │ train_dataset = data_manager.get_dataset( │
│ 409 │ │ Split.TRAIN, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:204 in __init__ │
│ │
│ 201 │ def __init__(self, data_dir: str, data_config: DataConfig): │
│ 202 │ │ self._num_proc = data_config.num_proc │
│ 203 │ │ │
│ ❱ 204 │ │ self._dataset_dct = _load_datasets( │
│ 205 │ │ │ data_dir, │
│ 206 │ │ │ data_config.data_format, │
│ 207 │ │ │ data_config.data_files, │
│ │
│ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:189 in _load_datasets │
│ │
│ 186 │ │ num_proc: Optional[int], │
│ 187 ) -> DatasetDict: │
│ 188 │ if data_format == '.jsonl': │
│ ❱ 189 │ │ dataset_dct = load_dataset( │
│ 190 │ │ │ data_dir, │
│ 191 │ │ │ data_files=data_files, │
│ 192 │ │ │ split=None, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/load.py:2616 in load_dataset │
│ │
│ 2613 │ │ return builder_instance.as_streaming_dataset(split=split) │
│ 2614 │ │
│ 2615 │ # Download and prepare data │
│ ❱ 2616 │ builder_instance.download_and_prepare( │
│ 2617 │ │ download_config=download_config, │
│ 2618 │ │ download_mode=download_mode, │
│ 2619 │ │ verification_mode=verification_mode, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1029 in download_and_prepare │
│ │
│ 1026 │ │ │ │ │ │ │ prepare_split_kwargs["max_shard_size"] = max_shard_size │
│ 1027 │ │ │ │ │ │ if num_proc is not None: │
│ 1028 │ │ │ │ │ │ │ prepare_split_kwargs["num_proc"] = num_proc │
│ ❱ 1029 │ │ │ │ │ │ self._download_and_prepare( │
│ 1030 │ │ │ │ │ │ │ dl_manager=dl_manager, │
│ 1031 │ │ │ │ │ │ │ verification_mode=verification_mode, │
│ 1032 │ │ │ │ │ │ │ **prepare_split_kwargs, │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1124 in _download_and_prepare │
│ │
│ 1121 │ │ │ │
│ 1122 │ │ │ try: │
│ 1123 │ │ │ │ # Prepare split will record examples associated to the split │
│ ❱ 1124 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │
│ 1125 │ │ │ except OSError as e: │
│ 1126 │ │ │ │ raise OSError( │
│ 1127 │ │ │ │ │ "Cannot find data file. " │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1884 in _prepare_split │
│ │
│ 1881 │ │ │ gen_kwargs = split_generator.gen_kwargs │
│ 1882 │ │ │ job_id = 0 │
│ 1883 │ │ │ with pbar: │
│ ❱ 1884 │ │ │ │ for job_id, done, content in self._prepare_split_single( │
│ 1885 │ │ │ │ │ gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args │
│ 1886 │ │ │ │ ): │
│ 1887 │ │ │ │ │ if done: │
│ │
│ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:2040 in _prepare_split_single │
│ │
│ 2037 │ │ │ │ e = e.__context__ │
│ 2038 │ │ │ if isinstance(e, DatasetGenerationError): │
│ 2039 │ │ │ │ raise │
│ ❱ 2040 │ │ │ raise DatasetGenerationError("An error occurred while generating the dataset │
│ 2041 │ │ │
│ 2042 │ │ yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_ │
│ 2043 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
DatasetGenerationError: An error occurred while generating the dataset
3. Could you please help me resolve this? (See the validation sketch below.)
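Regarding the question above, a hedged sketch of a quick check, assuming `train.jsonl` is meant to contain one JSON object per line (the file path is taken from the traceback):

```python
import json

# Print the first line that is not a valid standalone JSON object.
with open("/root/autodl-tmp/datasets/train.jsonl", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        stripped = line.strip()
        if not stripped:
            continue
        try:
            json.loads(stripped)
        except json.JSONDecodeError as err:
            print(f"Line {lineno} is not valid JSON: {err}")
            break
```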
### Expected behavior
I hope this issue can be resolved.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.8
- `huggingface_hub` version: 0.24.6
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.12.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7233/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7228/comments | https://api.github.com/repos/huggingface/datasets/issues/7228/events | https://github.com/huggingface/datasets/issues/7228 | 2,587,310,094 | I_kwDODunzps6aNzgO | 7,228 | Composite (multi-column) features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | CONTRIBUTOR | null | ### Feature request
Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding.
Although it is currently possible to nest features as structs, my impression is that, in particular for a feature composed of multiple numpy arrays / ArrayXDs, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much).
Perhaps specification / implementation could be supported by something like:
```python
features = Features({
    ("feature0", "feature1"): Features(
        feature0=Array2D((None, 10), dtype="float32"),
        feature1=Array2D((None, 10), dtype="float32"),
    )
})
```
### Motivation
Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging.
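For comparison, a sketch of what is possible today: each attribute of the composite feature stored as its own top-level column and recombined by hand (the graph-style field names are illustrative):

```python
import numpy as np
from datasets import Array2D, Dataset, Features

# Each attribute of the composite "graph" feature lives in its own column.
features = Features(
    node_feats=Array2D((None, 10), dtype="float32"),
    edge_feats=Array2D((None, 10), dtype="float32"),
)

ds = Dataset.from_dict(
    {
        "node_feats": [np.zeros((5, 10), dtype="float32")],
        "edge_feats": [np.zeros((7, 10), dtype="float32")],
    },
    features=features,
)

# Today the recombination into one composite object has to be done by hand.
example = ds[0]
graph = {
    "node_feats": np.asarray(example["node_feats"]),
    "edge_feats": np.asarray(example["edge_feats"]),
}
print(graph["node_feats"].shape, graph["edge_feats"].shape)
```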
### Your contribution
Possibly able to contribute | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7228/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7226/comments | https://api.github.com/repos/huggingface/datasets/issues/7226/events | https://github.com/huggingface/datasets/issues/7226 | 2,586,920,351 | I_kwDODunzps6aMUWf | 7,226 | Add R (via the Polars R library) as a "How to use" option | {
"avatar_url": "https://avatars.githubusercontent.com/u/45013044?v=4",
"events_url": "https://api.github.com/users/ran-codes/events{/privacy}",
"followers_url": "https://api.github.com/users/ran-codes/followers",
"following_url": "https://api.github.com/users/ran-codes/following{/other_user}",
"gists_url": "https://api.github.com/users/ran-codes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ran-codes",
"id": 45013044,
"login": "ran-codes",
"node_id": "MDQ6VXNlcjQ1MDEzMDQ0",
"organizations_url": "https://api.github.com/users/ran-codes/orgs",
"received_events_url": "https://api.github.com/users/ran-codes/received_events",
"repos_url": "https://api.github.com/users/ran-codes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ran-codes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ran-codes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ran-codes",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Feature request
The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add the following.
## Add Polars (R) option
The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has the Hugging Face functionality as well.
```r
library(polars)
df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```
## Polars (python) option
![image](https://github.com/user-attachments/assets/8f1bcd19-e578-4b18-b324-7cc00b80ac0a)
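(The Python snippet in the screenshot above is roughly equivalent to the following, using the same parquet path as the R example:)

```python
import polars as pl

df = pl.read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet")
```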
## Libraries Currently
![image](https://github.com/user-attachments/assets/0cf50063-f9db-443c-97b4-3ef0664b6e6e)
### Motivation
There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars), and having this included could really help in bringing this community into the Hugging Face ecosystem.
**This is a small, low-hanging-fruit front-end change but would make a big impact in expanding the community.**
### Your contribution
I am not sure which repository this should be in, but I have experience in R, Python and JS and am happy to submit a PR in the appropriate repository. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7226/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7225/comments | https://api.github.com/repos/huggingface/datasets/issues/7225/events | https://github.com/huggingface/datasets/issues/7225 | 2,586,229,216 | I_kwDODunzps6aJrng | 7,225 | Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padmalcom",
"id": 3961950,
"login": "padmalcom",
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padmalcom",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push.
### Steps to reproduce the bug
A basic Kotlin application:
```kotlin
val person = PersonIdent(
"padmalcom",
"[email protected]"
)
val cp = UsernamePasswordCredentialsProvider(
"padmalcom",
"mysecrettoken"
)
val git =
KGit.cloneRepository {
setURI("https://huggingface.co./datasets/sth/images")
setTimeout(60)
setProgressMonitor(TextProgressMonitor())
setCredentialsProvider(cp)
}
FileOutputStream("./images/images.csv").apply { writeCsv(images) }
git.add {
addFilepattern("images.csv")
}
for (i in images) {
FileUtils.copyFile(
File("./files/${i.id}"),
File("./images/${i.id + File(i.fileName).extension }")
)
git.add {
addFilepattern("${i.id + File(i.fileName).extension }")
}
}
val revCommit = git.commit {
author = person
message = "Uploading images at " + LocalDateTime.now()
.format(DateTimeFormatter.ISO_DATE_TIME)
setCredentialsProvider(cp)
}
val push = git.push {
setCredentialsProvider(cp)
}
```
### Expected behavior
The git server is expected to return the Content-Type _application/x-git-receive-pack-result_.
### Environment info
It is independent of the `datasets` library. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7225/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7223/comments | https://api.github.com/repos/huggingface/datasets/issues/7223/events | https://github.com/huggingface/datasets/issues/7223 | 2,583,231,590 | I_kwDODunzps6Z-Pxm | 7,223 | Fallback to arrow defaults when loading dataset with custom features that aren't registered locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | CONTRIBUTOR | null | ### Describe the bug
Datasets allows users to create and register custom features.
However, if such datasets are then pushed to the Hub, anyone calling load_dataset without registering the custom feature in the same way as the dataset creator will get an error message.
It would be nice to offer a fallback in this case.
### Steps to reproduce the bug
```python
load_dataset("alex-hh/custom-features-example")
```
(Dataset creation process; this must be run in a separate session so that NewFeature is not registered in the session in which the download is attempted:)
```python
from dataclasses import dataclass, field
import pyarrow as pa
from datasets.features.features import register_feature
from datasets import Dataset, Features, Value, load_dataset
from datasets import Feature
@dataclass
class NewFeature(Feature):
_type: str = field(default="NewFeature", init=False, repr=False)
def __call__(self):
return pa.int32()
def examples_generator():
for i in range(5):
yield {"feature": i}
ds = Dataset.from_generator(examples_generator, features=Features(feature=NewFeature()))
ds.push_to_hub("alex-hh/custom-features-example")
register_feature(NewFeature, "NewFeature")
```
### Expected behavior
It would be nice, and offer greater extensibility, if there was some kind of graceful fallback mechanism in place for cases where user-defined features are stored in the dataset but not available locally.
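Until such a fallback exists, the consumer-side workaround seems to be re-defining and registering an equivalent feature before loading (a sketch that simply reuses the class definition from above):

```python
from dataclasses import dataclass, field

import pyarrow as pa
from datasets import Feature, load_dataset
from datasets.features.features import register_feature


@dataclass
class NewFeature(Feature):
    _type: str = field(default="NewFeature", init=False, repr=False)

    def __call__(self):
        return pa.int32()


# Register the stand-in under the same name used by the dataset creator,
# then the download can proceed.
register_feature(NewFeature, "NewFeature")
ds = load_dataset("alex-hh/custom-features-example")
```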
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7223/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7222/comments | https://api.github.com/repos/huggingface/datasets/issues/7222/events | https://github.com/huggingface/datasets/issues/7222 | 2,582,678,033 | I_kwDODunzps6Z8IoR | 7,222 | TypeError: Couldn't cast array of type string to null in long json | {
"avatar_url": "https://avatars.githubusercontent.com/u/5142577?v=4",
"events_url": "https://api.github.com/users/nokados/events{/privacy}",
"followers_url": "https://api.github.com/users/nokados/followers",
"following_url": "https://api.github.com/users/nokados/following{/other_user}",
"gists_url": "https://api.github.com/users/nokados/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nokados",
"id": 5142577,
"login": "nokados",
"node_id": "MDQ6VXNlcjUxNDI1Nzc=",
"organizations_url": "https://api.github.com/users/nokados/orgs",
"received_events_url": "https://api.github.com/users/nokados/received_events",
"repos_url": "https://api.github.com/users/nokados/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nokados/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nokados/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nokados",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation.
However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk.
Consequently, if all values in the first chunk of a field are, for example, null, the field will be locked as type null, and if a string appears in that field in the second chunk, it will trigger this error:
<details>
<summary>Traceback </summary>
```
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 try:
-> 1869 writer.write_table(table)
1870 except CastError as cast_error:
14 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
579 pa_table = pa_table.combine_chunks()
--> 580 pa_table = table_cast(pa_table, self._schema)
581 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2291 if table.schema != schema:
-> 2292 return cast_table_to_schema(table, schema)
2293 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2244 )
-> 2245 arrays = [
2246 cast_array_to_feature(
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
2245 arrays = [
-> 2246 cast_array_to_feature(
2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str)
2101 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 2102 return array_cast(
2103 array,
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1796 else:
-> 1797 return func(array, *args, **kwargs)
1798
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
-> 1948 raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
1949 return array.cast(pa_type)
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-353-e02f83980611>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dd = load_dataset("json", data_files=["TEST.json"])
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2094
2095 # Download and prepare data
-> 2096 builder_instance.download_and_prepare(
2097 download_config=download_config,
2098 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
997 try:
998 # Prepare split will record examples associated to the split
--> 999 self._prepare_split(split_generator, **prepare_split_kwargs)
1000 except OSError as e:
1001 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1738 job_id = 0
1739 with pbar:
-> 1740 for job_id, done, content in self._prepare_split_single(
1741 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1742 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1894 if isinstance(e, DatasetGenerationError):
1895 raise
-> 1896 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1897
1898 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset
with open("TEST.json", "w") as f:
    row = {"ballast": "qwerty" * 1000, "b": None}
    row_str = json.dumps(row) + "\n"
    line_size = len(row_str)
    chunk_size = 10 << 20
    lines_in_chunk = chunk_size // line_size + 1
    print(f"Writing {lines_in_chunk} lines")
    for i in range(lines_in_chunk):
        f.write(row_str)
    null_row = {"ballast": "Gotcha", "b": "Not Null"}
    f.write(json.dumps(null_row) + "\n")
load_dataset("json", data_files=["TEST.json"])
```
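One way to avoid the per-chunk schema inference (my suggestion, not part of the original report) is to pass explicit `features` to `load_dataset`, so the nullable column is declared as a string up front:
```python
from datasets import Features, Value, load_dataset

# Hedged workaround sketch: declare the schema instead of letting it be inferred chunk by chunk.
features = Features({"ballast": Value("string"), "b": Value("string")})
ds = load_dataset("json", data_files=["TEST.json"], features=features)
```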
### Expected behavior
Concatenation of the chunks without errors
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.24.7
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7222/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7220/comments | https://api.github.com/repos/huggingface/datasets/issues/7220/events | https://github.com/huggingface/datasets/issues/7220 | 2,582,036,110 | I_kwDODunzps6Z5r6O | 7,220 | Custom features not compatible with special encoding/decoding logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I think you can fix this simply by replacing the line with hardcoded features with `hastattr(schema, \"encode_example\")` actually",
"#7284 "
] | 1970-01-01T00:00:00.000001 | 1,731 | null | CONTRIBUTOR | null | ### Describe the bug
It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727)
However, such features are not compatible with `Features.encode_example`/`decode_example` when they require special encoding/decoding logic, because `encode_nested_example`/`decode_nested_example` only checks whether the feature belongs to a fixed list of encodable types:
https://github.com/huggingface/datasets/blob/16a121d7821a7691815a966270f577e2c503473f/src/datasets/features/features.py#L1349
This prevents the extensibility of features to complex cases
### Steps to reproduce the bug
```python
from datasets import Features

class ListOfStrs:
    def encode_example(self, value):
        if isinstance(value, str):
            return [value]  # presumably [value] was meant rather than [str]
        else:
            return value

feats = Features(strlist=ListOfStrs())
assert feats.encode_example({"strlist": "a"})["strlist"] == feats["strlist"].encode_example("a")
```
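For context, a minimal sketch of how such a feature might be registered (my assumption, based on the `register_feature` helper from #6727 taking the class and a type name):
```python
from datasets.features.features import register_feature

# Hypothetical registration call; the exact signature is an assumption based on #6727.
register_feature(ListOfStrs, "ListOfStrs")
```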
### Expected behavior
Registered feature types should be encoded based on some property of the feature (e.g. requires_encoding)?
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7220/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7217/comments | https://api.github.com/repos/huggingface/datasets/issues/7217/events | https://github.com/huggingface/datasets/issues/7217 | 2,581,095,098 | I_kwDODunzps6Z2GK6 | 7,217 | ds.map(f, num_proc=10) is slower than df.apply | {
"avatar_url": "https://avatars.githubusercontent.com/u/178981231?v=4",
"events_url": "https://api.github.com/users/lanlanlanlanlanlan365/events{/privacy}",
"followers_url": "https://api.github.com/users/lanlanlanlanlanlan365/followers",
"following_url": "https://api.github.com/users/lanlanlanlanlanlan365/following{/other_user}",
"gists_url": "https://api.github.com/users/lanlanlanlanlanlan365/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lanlanlanlanlanlan365",
"id": 178981231,
"login": "lanlanlanlanlanlan365",
"node_id": "U_kgDOCqsJbw",
"organizations_url": "https://api.github.com/users/lanlanlanlanlanlan365/orgs",
"received_events_url": "https://api.github.com/users/lanlanlanlanlanlan365/received_events",
"repos_url": "https://api.github.com/users/lanlanlanlanlanlan365/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lanlanlanlanlanlan365/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanlanlanlanlanlan365/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lanlanlanlanlanlan365",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! `map()` reads all the columns and writes the resulting dataset with all the columns as well, while df.column_name.apply only reads and writes one column and does it in memory. So this is speed difference is actually expected.\r\n\r\nMoreover using multiprocessing on a dataset that lives in memory (from_pandas uses the same in-memory data as the pandas DataFrame while load_dataset or from_generator load from disk) requires to copy the data to each subprocess which can also be slow. Data loaded from disk don't need to be copied though since they work as a form of shared memory thanks to memory mapping.\r\n\r\nHowever you can make you map() call much faster by making it read and write only the column you want:\r\n\r\n```python\r\nhas_cover_ds = ds.map(lambda song_name: {'has_cover': has_cover(song_name)}, input_columns=[\"song_name\"], remove_columns=ds.column_names) # outputs a dataset with 1 column\r\nds = ds.concatenate_datasets([ds, has_cover_ds], axis=1)\r\n```\r\n\r\nand if your dataset is loaded from disk you can pass num_proc=10 and get a nice speed up as well (no need to copy the data to subprocesses)"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
pandas columns: song_id, song_name
```python
ds = Dataset.from_pandas(df)

def has_cover(song_name):
    if song_name is None or pd.isna(song_name):
        return False
    return 'cover' in song_name.lower()

df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
```
time cost:
1. df.apply: 100%|██████████| 12500592/12500592 [00:13<00:00, 959825.47it/s]
2. ds.map: Map (num_proc=10): 31% 3899028/12500592 [00:28<00:38, 222532.89 examples/s]
### Steps to reproduce the bug
pandas columns: song_id, song_name
```python
ds = Dataset.from_pandas(df)

def has_cover(song_name):
    if song_name is None or pd.isna(song_name):
        return False
    return 'cover' in song_name.lower()

df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10)
```
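For comparison, a variant that only reads the needed column and processes batches should be much closer to the pandas timing; this is my sketch (not from the original report) and assumes the same `ds` and `has_cover` as above:
```python
# Hedged sketch: read only `song_name`, return only `has_cover`, and work in batches.
has_cover_ds = ds.map(
    lambda song_names: {"has_cover": [has_cover(s) for s in song_names]},
    input_columns=["song_name"],
    remove_columns=ds.column_names,
    batched=True,
)
```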
### Expected behavior
ds.map with num_proc=10 should be roughly num_proc times faster than df.apply, not slower.
### Environment info
pandas: 2.2.2
datasets: 2.19.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7217/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7215/comments | https://api.github.com/repos/huggingface/datasets/issues/7215/events | https://github.com/huggingface/datasets/issues/7215 | 2,579,942,939 | I_kwDODunzps6Zxs4b | 7,215 | Iterable dataset map with explicit features causes slowdown for Sequence features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | CONTRIBUTOR | null | ### Describe the bug
When performing map, it's nice to be able to pass the new feature type, and this is indeed required by interleave_datasets and concatenate_datasets.
However, this can cause a major slowdown for certain types of array features due to the features being re-encoded.
This is separate to the slowdown reported in #7206
### Steps to reproduce the bug
```
from datasets import Dataset, Features, Array3D, Sequence, Value
import numpy as np
import time
features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [5000,10000]*25] for i in range(2)}, features=features)
```
```
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x)
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~1.5 s on main
```
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x, features=features)
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~ 3 s on main
### Expected behavior
I'm not 100% sure whether passing new feature types to formatted outputs of map should be supported or not, but assuming it should, then there should be a cost-free way to specify the new feature type - knowing the feature type is required by interleave_datasets and concatenate_datasets, for example.
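For context, a minimal sketch of the kind of pipeline this has in mind (my assumption, not from the original report, reusing the `dataset` and `features` defined above):
```python
from datasets import interleave_datasets

# Hedged sketch: interleaving two mapped iterable datasets, which is the situation
# where the feature types have to be declared up front via map(..., features=...).
ds1 = dataset.to_iterable_dataset().map(lambda x: x, features=features)
ds2 = dataset.to_iterable_dataset().map(lambda x: x, features=features)
mixed = interleave_datasets([ds1, ds2])
```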
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7215/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7214/comments | https://api.github.com/repos/huggingface/datasets/issues/7214/events | https://github.com/huggingface/datasets/issues/7214 | 2,578,743,713 | I_kwDODunzps6ZtIGh | 7,214 | Formatted map + with_format(None) changes array dtype for iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"possibly due to this logic:\r\n\r\n```python\r\n def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:\r\n if isinstance(pa_array, pa.ChunkedArray):\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and all(\r\n not _is_array_with_nulls(chunk) for chunk in pa_array.chunks\r\n )\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and not _is_array_with_nulls(pa_array)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | CONTRIBUTOR | null | ### Describe the bug
When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed
### Steps to reproduce the bug
```python
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10), dtype=np.float32)]*25}, features=features)
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features)
ex_0 = next(iter(ds))
ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features).with_format(None)
ex_1 = next(iter(ds))
assert ex_1["array0"].dtype == ex_0["array0"].dtype, f"{ex_1['array0'].dtype} {ex_0['array0'].dtype}"
```
### Expected behavior
Dtypes should be preserved.
### Environment info
3.0.2 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7214/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7213/comments | https://api.github.com/repos/huggingface/datasets/issues/7213/events | https://github.com/huggingface/datasets/issues/7213 | 2,578,675,565 | I_kwDODunzps6Zs3dt | 7,213 | Add with_rank to Dataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/17828087?v=4",
"events_url": "https://api.github.com/users/muthissar/events{/privacy}",
"followers_url": "https://api.github.com/users/muthissar/followers",
"following_url": "https://api.github.com/users/muthissar/following{/other_user}",
"gists_url": "https://api.github.com/users/muthissar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muthissar",
"id": 17828087,
"login": "muthissar",
"node_id": "MDQ6VXNlcjE3ODI4MDg3",
"organizations_url": "https://api.github.com/users/muthissar/orgs",
"received_events_url": "https://api.github.com/users/muthissar/received_events",
"repos_url": "https://api.github.com/users/muthissar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muthissar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muthissar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muthissar",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Feature request
Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`.
### Motivation
As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files with multiple GPUs, where the rank can be used to select GPU IDs. For now, the rank can be passed in the `gen_kwargs` argument; however, this in turn includes the rank when computing the fingerprint.
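A minimal sketch of the current workaround described here (hypothetical generator and rank handling, not the proposed API):
```python
from datasets import Dataset

def gen(shards, rank=0):
    # Hypothetical generator: in practice the rank would pick a GPU, e.g. f"cuda:{rank}".
    for shard in shards:
        yield {"shard": shard, "rank": rank}

# Passing the rank through gen_kwargs works today, but it then becomes part of the fingerprint.
ds = Dataset.from_generator(gen, gen_kwargs={"shards": list(range(8)), "rank": 0})
```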
### Your contribution
Added #7199 which passes rank based on the `job_id` set by `num_proc`. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7213/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7212/comments | https://api.github.com/repos/huggingface/datasets/issues/7212/events | https://github.com/huggingface/datasets/issues/7212 | 2,578,641,259 | I_kwDODunzps6ZsvFr | 7,212 | Windows do not supprot signal.alarm and singal.signal | {
"avatar_url": "https://avatars.githubusercontent.com/u/33832672?v=4",
"events_url": "https://api.github.com/users/TomasJavurek/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasJavurek/followers",
"following_url": "https://api.github.com/users/TomasJavurek/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasJavurek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomasJavurek",
"id": 33832672,
"login": "TomasJavurek",
"node_id": "MDQ6VXNlcjMzODMyNjcy",
"organizations_url": "https://api.github.com/users/TomasJavurek/orgs",
"received_events_url": "https://api.github.com/users/TomasJavurek/received_events",
"repos_url": "https://api.github.com/users/TomasJavurek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomasJavurek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasJavurek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomasJavurek",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows.
### Steps to reproduce the bug
lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output_dir "./evals"
### Expected behavior
Proceed with the `input(..)` prompt instead of failing on Windows.
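A rough sketch of the kind of guard this asks for (hypothetical handler, timeout, and prompt text; not the actual `load.py` code):
```python
import signal

def _timeout_handler(signum, frame):
    raise TimeoutError("no answer received in time")

TIMEOUT = 15  # hypothetical timeout in seconds

if hasattr(signal, "SIGALRM"):  # signal.alarm / SIGALRM are unavailable on Windows
    signal.signal(signal.SIGALRM, _timeout_handler)
    signal.alarm(TIMEOUT)
    try:
        answer = input("Do you wish to run the custom code? [y/N] ")
    finally:
        signal.alarm(0)
else:
    answer = input("Do you wish to run the custom code? [y/N] ")  # no timeout on Windows
```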
### Environment info
Windows 11 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7212/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7211/comments | https://api.github.com/repos/huggingface/datasets/issues/7211/events | https://github.com/huggingface/datasets/issues/7211 | 2,576,400,502 | I_kwDODunzps6ZkMB2 | 7,211 | Describe only selected fields in README | {
"avatar_url": "https://avatars.githubusercontent.com/u/67658835?v=4",
"events_url": "https://api.github.com/users/alozowski/events{/privacy}",
"followers_url": "https://api.github.com/users/alozowski/followers",
"following_url": "https://api.github.com/users/alozowski/following{/other_user}",
"gists_url": "https://api.github.com/users/alozowski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alozowski",
"id": 67658835,
"login": "alozowski",
"node_id": "MDQ6VXNlcjY3NjU4ODM1",
"organizations_url": "https://api.github.com/users/alozowski/orgs",
"received_events_url": "https://api.github.com/users/alozowski/received_events",
"repos_url": "https://api.github.com/users/alozowski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alozowski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alozowski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alozowski",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co./datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields
### Motivation
The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50
### Your contribution
I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7211/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7210/comments | https://api.github.com/repos/huggingface/datasets/issues/7210/events | https://github.com/huggingface/datasets/issues/7210 | 2,575,883,939 | I_kwDODunzps6ZiN6j | 7,210 | Convert Array features to numpy arrays rather than lists by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | CONTRIBUTOR | null | ### Feature request
It is currently quite easy to cause massive slowdowns when using datasets without being familiar with the underlying data conversions, e.g. by making bad formatting choices.
Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays rather than python lists
### Motivation
Default array formatting leads to slow performance: e.g.
```python
import numpy as np
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
```python
t0 = time.time()
for ex in dataset:  # assuming the plain `dataset` created above was meant here (ds is not defined yet)
    pass
t1 = time.time()
```
~1.4 s
```python
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~10s
```python
ds = dataset.with_format("numpy")
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~0.04s
```python
ds = dataset.to_iterable_dataset().with_format("numpy")
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
```
~0.04s
### Your contribution
May be able to contribute | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7210/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7208/comments | https://api.github.com/repos/huggingface/datasets/issues/7208/events | https://github.com/huggingface/datasets/issues/7208 | 2,575,484,256 | I_kwDODunzps6ZgsVg | 7,208 | Iterable dataset.filter should not override features | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"closed by https://github.com/huggingface/datasets/pull/7209, thanks @alex-hh !"
] | 1970-01-01T00:00:00.000001 | 1,728 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
When calling filter on an iterable dataset, the features get set to None
### Steps to reproduce the bug
```python
import numpy as np
import time
from datasets import Dataset, Features, Array3D

features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
ds = dataset.to_iterable_dataset()
orig_column_names = ds.column_names
ds = ds.filter(lambda x: True)
assert ds.column_names == orig_column_names
```
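Until this is fixed (see #7209), a possible workaround sketch (my assumption, not from the original report) is to re-attach the feature types after filtering, assuming `IterableDataset.cast` simply replaces the declared features:
```python
# Hedged workaround sketch: re-declare the features after filter().
ds = dataset.to_iterable_dataset()
ds = ds.filter(lambda x: True).cast(features)
assert ds.column_names == list(features)
```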
### Expected behavior
Filter should preserve features information
### Environment info
3.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7208/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7206/comments | https://api.github.com/repos/huggingface/datasets/issues/7206/events | https://github.com/huggingface/datasets/issues/7206 | 2,573,567,467 | I_kwDODunzps6ZZYXr | 7,206 | Slow iteration for iterable dataset with numpy formatting for array data | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The below easily eats up 32G of RAM. Leaving it for a while bricked the laptop with 16GB.\r\n\r\n```\r\ndataset = load_dataset(\"Voxel51/OxfordFlowers102\", data_dir=\"data\").with_format(\"numpy\")\r\nprocessed_dataset = dataset.map(lambda x: x)\r\n```\r\n\r\n![image](https://github.com/user-attachments/assets/c1863a69-b18f-4014-89dc-98994336df96)\r\n\r\nSimilar problems occur if using a real transform function in `.map()`."
] | 1970-01-01T00:00:00.000001 | 1,729 | null | CONTRIBUTOR | null | ### Describe the bug
When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets.
### Steps to reproduce the bug
```python
import numpy as np
import time
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
```
Then
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy").map(lambda x: x)
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
print(t1-t0)
```
takes 27 s, whereas
```python
ds = dataset.to_iterable_dataset()
ds = ds.with_format("numpy")
ds = dataset.to_iterable_dataset()
t0 = time.time()
for ex in ds:
    pass
t1 = time.time()
print(t1 - t0)
```
takes ~1s
### Expected behavior
Map should not introduce a slowdown when formatting is enabled.
### Environment info
3.0.2 | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7206/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7202/comments | https://api.github.com/repos/huggingface/datasets/issues/7202/events | https://github.com/huggingface/datasets/issues/7202 | 2,572,583,798 | I_kwDODunzps6ZVoN2 | 7,202 | `from_parquet` return type annotation | {
"avatar_url": "https://avatars.githubusercontent.com/u/45285915?v=4",
"events_url": "https://api.github.com/users/saiden89/events{/privacy}",
"followers_url": "https://api.github.com/users/saiden89/followers",
"following_url": "https://api.github.com/users/saiden89/following{/other_user}",
"gists_url": "https://api.github.com/users/saiden89/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/saiden89",
"id": 45285915,
"login": "saiden89",
"node_id": "MDQ6VXNlcjQ1Mjg1OTE1",
"organizations_url": "https://api.github.com/users/saiden89/orgs",
"received_events_url": "https://api.github.com/users/saiden89/received_events",
"repos_url": "https://api.github.com/users/saiden89/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/saiden89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saiden89/subscriptions",
"type": "User",
"url": "https://api.github.com/users/saiden89",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor.
Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information.
### Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_parquet(path_or_paths="file")
dataset.map(lambda x: {"new": x["old"]}, batched=True)
```
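Until the return annotation is broadened, one workaround sketch (my suggestion, not from the report) is to narrow the type for the checker explicitly:
```python
from typing import cast

from datasets import Dataset

# Hedged sketch: tell the type checker that from_parquet returns a Dataset here.
dataset = cast(Dataset, Dataset.from_parquet(path_or_paths="file"))
dataset.map(lambda x: {"new": x["old"]}, batched=True)
```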
### Expected behavior
`map` is a [valid](https://huggingface.co./docs/datasets/v3.0.1/en/package_reference/main_classes#datasets.Dataset.map) method of `Dataset`, so no error should be reported.
### Environment info
- `datasets` version: 3.0.1
- Platform: macOS-15.0.1-arm64-arm-64bit
- Python version: 3.12.6
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7202/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7201/comments | https://api.github.com/repos/huggingface/datasets/issues/7201/events | https://github.com/huggingface/datasets/issues/7201 | 2,569,837,015 | I_kwDODunzps6ZLJnX | 7,201 | `load_dataset()` of images from a single directory where `train.png` image exists | {
"avatar_url": "https://avatars.githubusercontent.com/u/56922146?v=4",
"events_url": "https://api.github.com/users/SagiPolaczek/events{/privacy}",
"followers_url": "https://api.github.com/users/SagiPolaczek/followers",
"following_url": "https://api.github.com/users/SagiPolaczek/following{/other_user}",
"gists_url": "https://api.github.com/users/SagiPolaczek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SagiPolaczek",
"id": 56922146,
"login": "SagiPolaczek",
"node_id": "MDQ6VXNlcjU2OTIyMTQ2",
"organizations_url": "https://api.github.com/users/SagiPolaczek/orgs",
"received_events_url": "https://api.github.com/users/SagiPolaczek/received_events",
"repos_url": "https://api.github.com/users/SagiPolaczek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SagiPolaczek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SagiPolaczek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SagiPolaczek",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
Hey!
Firstly, thanks for maintaining such a framework!
I had a small issue, where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image.
I guess it's related to "train" as a split name, but it's definitely an unexpected behavior :)
Unfortunately I don't have time to submit a proper PR. I'm attaching a toy example to reproduce the issue.
Thanks,
Sagi
### Steps to reproduce the bug
All of the steps I'm attaching are in a fresh env :)
```
(base) sagipolaczek@Sagis-MacBook-Pro ~ % conda activate hf_issue_env
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python --version
Python 3.10.15
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % pip list | grep datasets
datasets 3.0.1
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % ls -la Documents/hf_datasets_issue
total 352
drwxr-xr-x 6 sagipolaczek staff 192 Oct 7 11:59 .
drwx------@ 23 sagipolaczek staff 736 Oct 7 11:46 ..
-rw-r--r--@ 1 sagipolaczek staff 72 Oct 7 11:59 metadata.csv
-rw-r--r--@ 1 sagipolaczek staff 160154 Oct 6 18:00 pika.png
-rw-r--r--@ 1 sagipolaczek staff 5495 Oct 6 12:02 pika_pika.png
-rw-r--r--@ 1 sagipolaczek staff 1753 Oct 6 11:50 train.png
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % cat Documents/hf_datasets_issue/metadata.csv
file_name,text
train.png,A train
pika.png,Pika
pika_pika.png,Pika Pika!
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python
Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/")
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 1
})
})
>>> dataset["train"][0]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=354x84 at 0x10B50FD90>, 'text': 'A train'}
### DELETING `train.png` sample ###
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % vim Documents/hf_datasets_issue/metadata.csv
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % rm Documents/hf_datasets_issue/train.png
(hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python
Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/")
Generating train split: 2 examples [00:00, 65.99 examples/s]
>>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'text'],
num_rows: 2
})
})
>>> dataset["train"]
Dataset({
features: ['image', 'text'],
num_rows: 2
})
>>> dataset["train"][0],dataset["train"][1]
({'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=2356x1054 at 0x10DD11E70>, 'text': 'Pika'}, {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=343x154 at 0x10E258C70>, 'text': 'Pika Pika!'})
```
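A possible workaround sketch (my assumption, not from the report) is to pass `data_files` explicitly, which might sidestep the filename-based split detection that keys off `train.png`:
```python
from datasets import load_dataset

# Hedged sketch: an explicit split mapping so "train" in a file name is not treated as a split.
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "Documents/hf_datasets_issue/**"},
)
```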
### Expected behavior
My expected behavior would be to get a dataset with the sample `train.png` in it (along with the others data points).
### Environment info
I've attached it in the example:
Python 3.10.15
datasets 3.0.1
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7201/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7197/comments | https://api.github.com/repos/huggingface/datasets/issues/7197/events | https://github.com/huggingface/datasets/issues/7197 | 2,565,924,788 | I_kwDODunzps6Y8Oe0 | 7,197 | ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError) - the dataset won't download, what's going on? | {
"avatar_url": "https://avatars.githubusercontent.com/u/114299344?v=4",
"events_url": "https://api.github.com/users/Mrgengli/events{/privacy}",
"followers_url": "https://api.github.com/users/Mrgengli/followers",
"following_url": "https://api.github.com/users/Mrgengli/following{/other_user}",
"gists_url": "https://api.github.com/users/Mrgengli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mrgengli",
"id": 114299344,
"login": "Mrgengli",
"node_id": "U_kgDOBtAR0A",
"organizations_url": "https://api.github.com/users/Mrgengli/orgs",
"received_events_url": "https://api.github.com/users/Mrgengli/received_events",
"repos_url": "https://api.github.com/users/Mrgengli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mrgengli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mrgengli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mrgengli",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Also cant download \"allenai/c4\", but with different error reported:\r\n```\r\nTraceback (most recent call last): \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 2074, in load_dataset \r\n builder_instance = load_dataset_builder( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1795, in load_dataset_builder \r\n dataset_module = dataset_module_factory( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1659, in dataset_module_factory \r\n raise e1 from None \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1647, in dataset_module_factory \r\n ).get_module() \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1069, in get_module \r\n module_name, default_builder_kwargs = infer_module_for_data_files( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 594, in infer_module_for_data_files \r\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\")) \r\ndatasets.exceptions.DataFilesNotFoundError: No (supported) data files found in allenai/c4 \r\n```\r\n\r\n## Code to reproduce\r\n```\r\ndataset = load_dataset(\"allenai/c4\", \"en\", split=\"train\", streaming=True,trust_remote_code=True,\r\n cache_dir=\"dataset/en\",\r\n download_mode=\"force_redownload\")\r\n```\r\n\r\n## Environment\r\ndatasets 3.0.1 \r\nhuggingface_hub 0.25.1"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
```python
from datasets import load_dataset

print("11")
traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train')
print("22")
valdata = load_dataset('ptb_text_only',
                       'penn_treebank',
                       split='validation')
```
### Steps to reproduce the bug
1
### Expected behavior
1
### Environment info
1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7197/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7196/comments | https://api.github.com/repos/huggingface/datasets/issues/7196/events | https://github.com/huggingface/datasets/issues/7196 | 2,564,218,566 | I_kwDODunzps6Y1t7G | 7,196 | concatenate_datasets does not preserve shuffling state | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | null | CONTRIBUTOR | null | ### Describe the bug
After concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156
This means concatenation can't be used to resolve uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623
I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not
### Steps to reproduce the bug
```python
import datasets
import torch.utils.data
def gen(shards):
    yield {"shards": shards}

def main():
    dataset1 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25))}  # TODO: how to understand this?
    )
    dataset2 = datasets.IterableDataset.from_generator(
        gen, gen_kwargs={"shards": list(range(25, 50))}  # TODO: how to understand this?
    )
    dataset1 = dataset1.shuffle(buffer_size=1)
    dataset2 = dataset2.shuffle(buffer_size=1)
    print(dataset1.n_shards)
    print(dataset2.n_shards)
    dataset = datasets.concatenate_datasets(
        [dataset1, dataset2]
    )
    print(dataset.n_shards)
    # dataset = dataset1
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=0,
    )
    for i, batch in enumerate(dataloader):
        print(batch)

    print("\nNew epoch")
    dataset.set_epoch(1)  # set_epoch modifies the dataset in place and returns None
    for i, batch in enumerate(dataloader):
        print(batch)

if __name__ == "__main__":
    main()
```
### Expected behavior
Shuffling state should be preserved
### Environment info
Latest datasets | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7196/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7195/comments | https://api.github.com/repos/huggingface/datasets/issues/7195/events | https://github.com/huggingface/datasets/issues/7195 | 2,564,070,809 | I_kwDODunzps6Y1J2Z | 7,195 | Add support for 3D datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
] | null | [
"maybe related: https://github.com/huggingface/datasets/issues/6388",
"Also look at https://github.com/huggingface/dataset-viewer/blob/f5fd117ceded990a7766e705bba1203fa907d6ad/services/worker/src/worker/job_runners/dataset/modalities.py#L241 which lists the 3D file formats that will assign the 3D modality to a dataset.",
"~~we can brainstorm about the UX maybe (i don't expect we should load all models on the page at once – IMO there should be a manual action from user to load + maybe load first couple of row by default) cc @gary149 @cfahlgren1~~\r\n\r\nit's more for the viewer issue (https://github.com/huggingface/dataset-viewer/issues/1003)"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | COLLABORATOR | null | See https://huggingface.co./datasets/allenai/objaverse for example | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7195/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7194/comments | https://api.github.com/repos/huggingface/datasets/issues/7194/events | https://github.com/huggingface/datasets/issues/7194 | 2,563,364,199 | I_kwDODunzps6YydVn | 7,194 | datasets.exceptions.DatasetNotFoundError for private dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20212179?v=4",
"events_url": "https://api.github.com/users/kdutia/events{/privacy}",
"followers_url": "https://api.github.com/users/kdutia/followers",
"following_url": "https://api.github.com/users/kdutia/following{/other_user}",
"gists_url": "https://api.github.com/users/kdutia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kdutia",
"id": 20212179,
"login": "kdutia",
"node_id": "MDQ6VXNlcjIwMjEyMTc5",
"organizations_url": "https://api.github.com/users/kdutia/orgs",
"received_events_url": "https://api.github.com/users/kdutia/received_events",
"repos_url": "https://api.github.com/users/kdutia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kdutia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kdutia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kdutia",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Actually there is no such dataset available, that is why you are getting that error.",
"Fixed with @kdutia in Slack chat. Generating a new token fixed this issue. "
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work.
``` py
from datasets import load_dataset
_ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly")
```
This seems to be just an issue with my machine config, as the code above works on a colleague's machine. So far I have tried:
- logging back out and in from the Huggingface CLI using `huggingface-cli logout`
- manually removing the token cache at `/Users/kalyan/.cache/huggingface/token` (found using `huggingface-cli env`)
- manually passing a token in `load_dataset`
My output of `huggingface-cli whoami`:
```
kdutia
orgs: ClimatePolicyRadar
```
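As noted in the issue comments, generating a fresh access token resolved this. A minimal sketch for registering a new token explicitly and retrying, where `hf_xxx` is a placeholder:
```python
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_xxx")  # paste the newly generated token here
ds = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly", token="hf_xxx")
```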
### Steps to reproduce the bug
```
python
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 2074, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1659, in dataset_module_factory
raise e1 from None
File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1597, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.
>>>
```
### Expected behavior
The dataset downloads successfully.
### Environment info
From `huggingface-cli env`:
```
- huggingface_hub version: 0.25.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.12.2
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /Users/kalyan/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: kdutia
- Configured git credential helpers: osxkeychain
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.4
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: N/A
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 2.1.1
- pydantic: N/A
- aiohttp: 3.10.8
- ENDPOINT: https://huggingface.co.
- HF_HUB_CACHE: /Users/kalyan/.cache/huggingface/hub
- HF_ASSETS_CACHE: /Users/kalyan/.cache/huggingface/assets
- HF_TOKEN_PATH: /Users/kalyan/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
from `datasets-cli env`:
```
- `datasets` version: 3.0.1
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davanstrien",
"id": 8995957,
"login": "davanstrien",
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davanstrien",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7194/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7193/comments | https://api.github.com/repos/huggingface/datasets/issues/7193/events | https://github.com/huggingface/datasets/issues/7193 | 2,562,392,887 | I_kwDODunzps6YuwM3 | 7,193 | Support of num_workers (multiprocessing) in map for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4",
"events_url": "https://api.github.com/users/getao/events{/privacy}",
"followers_url": "https://api.github.com/users/getao/followers",
"following_url": "https://api.github.com/users/getao/following{/other_user}",
"gists_url": "https://api.github.com/users/getao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/getao",
"id": 12735658,
"login": "getao",
"node_id": "MDQ6VXNlcjEyNzM1NjU4",
"organizations_url": "https://api.github.com/users/getao/orgs",
"received_events_url": "https://api.github.com/users/getao/received_events",
"repos_url": "https://api.github.com/users/getao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/getao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/getao",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I was curious about the same - since map is applied on the fly I was assuming that setting num_workers>1 in DataLoader would effectively do the map in parallel, have you tried that?"
] | 1970-01-01T00:00:00.000001 | 1,727 | null | NONE | null | ### Feature request
Currently, `IterableDataset` doesn't support setting `num_workers` in `.map()`, which results in slow processing here. Could we add support for it? Since `.map()` can be run in batched fashion (e.g., `batch_size` defaults to 1000 in `datasets`), it seems doable for `IterableDataset` just as for the regular `Dataset`.
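In the meantime, because `.map()` on an `IterableDataset` is applied lazily at iteration time, a partial workaround (suggested in the comments) is to let a PyTorch `DataLoader` with several workers run the mapped pipeline in parallel. A rough sketch, where the dataset name and the preprocessing function are placeholders:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

def heavy_preprocess(batch):
    # stand-in for an expensive per-batch transform
    batch["text"] = [t.lower() for t in batch["text"]]
    return batch

ids = load_dataset("allenai/c4", "en", split="train", streaming=True)  # placeholder dataset
ids = ids.map(heavy_preprocess, batched=True, batch_size=1000)  # lazy: runs while iterating

# each DataLoader worker is assigned its own shards and applies the map function in parallel
loader = DataLoader(ids, batch_size=32, num_workers=8)
```
This only helps when the dataset has enough shards (`ids.n_shards >= num_workers`), and it doesn't parallelize `.map()` itself, so native multiprocessing support in `.map()` would still be valuable.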
### Motivation
Improving data processing efficiency
### Your contribution
Testing | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7193/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7192/comments | https://api.github.com/repos/huggingface/datasets/issues/7192/events | https://github.com/huggingface/datasets/issues/7192 | 2,562,289,642 | I_kwDODunzps6YuW_q | 7,192 | Add repeat() for iterable datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"perhaps concatenate_datasets can already be used to achieve almost the same effect? ",
"`concatenate_datasets` does the job when there is a finite number of repetitions, but in case of `.repeat()` forever we need a new logic in `iterable_dataset.py`"
] | 1970-01-01T00:00:00.000001 | 1,727 | null | CONTRIBUTOR | null | ### Feature request
It would be useful to be able to repeat iterable datasets indefinitely in a straightforward way, giving the user complete control over when iteration starts and ends.
An `IterableDataset.repeat(n)` method could do this automatically.
### Motivation
This feature was discussed in issue https://github.com/huggingface/datasets/issues/7147 and would remove the need for the hack of interleaving datasets with probability 0 as a simple way to achieve this functionality.
An additional benefit might be the simplification of the use of iterable datasets in a distributed setting:
If the user can assume that datasets will repeat indefinitely, then issues around different numbers of samples appearing on different devices (e.g. https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719) can potentially be straightforwardly resolved by simply doing:
`ids.repeat(None).take(n_samples_per_epoch)`
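Until such an API exists, one way to approximate `repeat(None)` is to wrap the iterable dataset in a generator that restarts it whenever it is exhausted. A rough single-process sketch (the dataset choice is a placeholder; each pass re-opens the underlying shards, and finite repetition can also be done with `concatenate_datasets` as noted in the comments):
```python
from datasets import load_dataset, IterableDataset

base = load_dataset("allenai/c4", "en", split="train", streaming=True)  # placeholder dataset

def forever():
    while True:
        yield from base  # start again from the first shard once the dataset is exhausted

repeated = IterableDataset.from_generator(forever)
epoch = repeated.take(10_000)  # plays the role of the proposed ids.repeat(None).take(n_samples_per_epoch)
```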
### Your contribution
I'm not familiar enough with the codebase to assess how straightforward this would be to implement.
If it might be very straightforward, I could possibly have a go. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7192/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7190/comments | https://api.github.com/repos/huggingface/datasets/issues/7190/events | https://github.com/huggingface/datasets/issues/7190 | 2,562,162,725 | I_kwDODunzps6Yt4Al | 7,190 | Datasets conflicts with fsspec 2024.9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/162599174?v=4",
"events_url": "https://api.github.com/users/cw-igormorgado/events{/privacy}",
"followers_url": "https://api.github.com/users/cw-igormorgado/followers",
"following_url": "https://api.github.com/users/cw-igormorgado/following{/other_user}",
"gists_url": "https://api.github.com/users/cw-igormorgado/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cw-igormorgado",
"id": 162599174,
"login": "cw-igormorgado",
"node_id": "U_kgDOCbERBg",
"organizations_url": "https://api.github.com/users/cw-igormorgado/orgs",
"received_events_url": "https://api.github.com/users/cw-igormorgado/received_events",
"repos_url": "https://api.github.com/users/cw-igormorgado/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cw-igormorgado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cw-igormorgado/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cw-igormorgado",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Yes, I need to use the latest version of fsspec and datasets for my usecase. \r\nhttps://github.com/fsspec/s3fs/pull/888#issuecomment-2404204606\r\nhttps://github.com/apache/arrow/issues/34363#issuecomment-2403553473\r\n\r\nlast version where things install without conflict is: 2.14.4\r\n\r\nSo this issue starts from:\r\nhttps://github.com/huggingface/datasets/releases/tag/2.14.5"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
Installing both at their latest versions is not possible:
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
But using an older version of `datasets` is OK:
`pip install "datasets==1.24.4" "fsspec==2024.9.0"`
### Steps to reproduce the bug
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
### Expected behavior
Both packages install together without a dependency conflict.
### Environment info
Debian 11
Python 3.10.15
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7190/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7189/comments | https://api.github.com/repos/huggingface/datasets/issues/7189/events | https://github.com/huggingface/datasets/issues/7189 | 2,562,152,845 | I_kwDODunzps6Yt1mN | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | {
"avatar_url": "https://avatars.githubusercontent.com/u/7157234?v=4",
"events_url": "https://api.github.com/users/Lauler/events{/privacy}",
"followers_url": "https://api.github.com/users/Lauler/followers",
"following_url": "https://api.github.com/users/Lauler/following{/other_user}",
"gists_url": "https://api.github.com/users/Lauler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lauler",
"id": 7157234,
"login": "Lauler",
"node_id": "MDQ6VXNlcjcxNTcyMzQ=",
"organizations_url": "https://api.github.com/users/Lauler/orgs",
"received_events_url": "https://api.github.com/users/Lauler/received_events",
"repos_url": "https://api.github.com/users/Lauler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lauler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lauler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lauler",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | null | NONE | null | ### Feature request
Hugging Face has quite a comprehensive set of guides for [audio datasets](https://huggingface.co./docs/datasets/en/audio_dataset). It seems, however, that all these guides assume the audio array data decoded/inserted into a HF dataset always originates from individual files. The [Audio dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind: looking at its source code, it returns a dictionary with the keys `path`, `array` and `sampling_rate`.
However, sometimes users have different pipelines where they decode the audio array themselves. This feature request asks for some clarification in the guides on whether it is possible, and if so how, to insert already decoded audio array data into datasets (pandas DataFrame, HF dataset or whatever) that are later saved as parquet, and still get a functioning audio preview in the dataset viewer.
Do I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`?
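For what it's worth, something along these lines appears to work without writing any temporary file: passing the decoded array plus its sampling rate to the `Audio` feature lets `datasets` encode it to WAV bytes itself (column names and values below are made up, and `soundfile` needs to be installed):
```python
import numpy as np
from datasets import Audio, Dataset, Features, Value

features = Features({"audio": Audio(sampling_rate=16000), "text": Value("string")})
data = {
    "audio": [{"array": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}],
    "text": ["an example caption"],
}
ds = Dataset.from_dict(data, features=features)  # Audio.encode_example writes WAV bytes, "path" stays None
ds.to_parquet("audio_slices.parquet")            # or ds.push_to_hub(...) to keep the Audio feature metadata
```
The key point seems to be that the column is typed as `Audio(...)`, so the feature metadata the viewer relies on is carried along with the saved data.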
### Motivation
I'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available.
The pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files.
I tried [test-uploading parquet files](https://huggingface.co./datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly.
### Your contribution
I could contribute a tool agnostic guide of creating HF audio datasets directly as parquet to the HF documentation if there is an interest. Provided you help me figure out the secret sauce of what the dataset viewer expects to display the preview correctly. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7189/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7187/comments | https://api.github.com/repos/huggingface/datasets/issues/7187/events | https://github.com/huggingface/datasets/issues/7187 | 2,560,501,308 | I_kwDODunzps6YniY8 | 7,187 | shard_data_sources() got an unexpected keyword argument 'worker_id' | {
"avatar_url": "https://avatars.githubusercontent.com/u/27758466?v=4",
"events_url": "https://api.github.com/users/Qinghao-Hu/events{/privacy}",
"followers_url": "https://api.github.com/users/Qinghao-Hu/followers",
"following_url": "https://api.github.com/users/Qinghao-Hu/following{/other_user}",
"gists_url": "https://api.github.com/users/Qinghao-Hu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Qinghao-Hu",
"id": 27758466,
"login": "Qinghao-Hu",
"node_id": "MDQ6VXNlcjI3NzU4NDY2",
"organizations_url": "https://api.github.com/users/Qinghao-Hu/orgs",
"received_events_url": "https://api.github.com/users/Qinghao-Hu/received_events",
"repos_url": "https://api.github.com/users/Qinghao-Hu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Qinghao-Hu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qinghao-Hu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Qinghao-Hu",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | null | NONE | null | ### Describe the bug
```
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
[rank0]: for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/packaged_modules/generator/generator.py", line 32, in _generate_examples
[rank0]: for idx, ex in enumerate(self.config.generator(**gen_kwargs)):
[rank0]: File "/home/qinghao/workdir/doremi/doremi/dataloader.py", line 337, in take_data_generator
[rank0]: for ex in ds:
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1791, in __iter__
[rank0]: yield from self._iter_pytorch()
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1704, in _iter_pytorch
[rank0]: ex_iterable = ex_iterable.shard_data_sources(worker_id=worker_info.id, num_workers=worker_info.num_workers)
[rank0]: TypeError: UpdatableRandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources() got an unexpected keyword argument 'worker_id'
```
### Steps to reproduce the bug
Iterate an `IterableDataset` built with a custom ex_iterable (here `UpdatableRandomlyCyclingMultiSourcesExamplesIterable`) through a PyTorch DataLoader with `num_workers > 0`; the call to `shard_data_sources()` then fails as shown in the traceback above.
### Expected behavior
The same code works on datasets==2.10, but raises this error on later versions.
### Environment info
datasets==3.0.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7187/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7186/comments | https://api.github.com/repos/huggingface/datasets/issues/7186/events | https://github.com/huggingface/datasets/issues/7186 | 2,560,323,917 | I_kwDODunzps6Ym3FN | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | {
"avatar_url": "https://avatars.githubusercontent.com/u/38372682?v=4",
"events_url": "https://api.github.com/users/shubhbapna/events{/privacy}",
"followers_url": "https://api.github.com/users/shubhbapna/followers",
"following_url": "https://api.github.com/users/shubhbapna/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhbapna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shubhbapna",
"id": 38372682,
"login": "shubhbapna",
"node_id": "MDQ6VXNlcjM4MzcyNjgy",
"organizations_url": "https://api.github.com/users/shubhbapna/orgs",
"received_events_url": "https://api.github.com/users/shubhbapna/received_events",
"repos_url": "https://api.github.com/users/shubhbapna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shubhbapna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhbapna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shubhbapna",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9`, which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for `multiprocess`, something like `multiprocess<=0.70.16`, so that the `dill` version stays compatible?
### Steps to reproduce the bug
NA
### Expected behavior
NA
### Environment info
NA | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7186/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7185/comments | https://api.github.com/repos/huggingface/datasets/issues/7185/events | https://github.com/huggingface/datasets/issues/7185 | 2,558,508,748 | I_kwDODunzps6Yf77M | 7,185 | CI benchmarks are broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | null | [] | null | [
"Fixed by #7205"
] | 1970-01-01T00:00:00.000001 | 1,728 | 1970-01-01T00:00:00.000001 | MEMBER | null | Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975
```
{"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n\\n<details>\\n<summary>Show updated benchmarks!</summary>\\n\\n### Benchmark: benchmark_array_xd.json\\n\\n| metric | read_batch_formatted_as_numpy after write_array2d |
...
"headers":{"accept":"application/vnd.github.v3+json","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8","user-agent":"octokit-rest.js/18.0.0 octokit-core.js/3.6.0 Node.js/16.20.2 (linux; x64)"},"method":"POST","request":{"agent":{"_events":{},"_eventsCount":2,"cache":
...
"response":{"data":{"documentation_url":"https://docs.github.com/rest/issues/comments#create-an-issue-comment","message":"Resource not accessible by integration","status":"403"},
...
"stack":"HttpError: Resource not accessible by integration\n at /usr/lib/node_modules/@dvcorg/cml/node_modules/@octokit/request/dist-node/index.js:86:21\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Job.doExecute (/usr/lib/node_modules/@dvcorg/cml/node_modules/bottleneck/light.js:405:18)","status":403}
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7185/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7183/comments | https://api.github.com/repos/huggingface/datasets/issues/7183/events | https://github.com/huggingface/datasets/issues/7183 | 2,556,789,055 | I_kwDODunzps6YZYE_ | 7,183 | CI is broken for deps-latest | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890
```
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}] != [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}]
First differing element 1:
{'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}
{'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}
[{'filename': '/tmp/tmp6xcyyjs4/dataset0.arrow'},
- {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}]
? ^^^^^ --------
+ {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}]
? ++++++++++ ^^ +
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk - AssertionError: Lists differ: [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}] != [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}]
First differing element 0:
{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}
{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}
- [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}]
? ^^ -----------
+ [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}]
? +++++++++++ ^^
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_regex - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ignores_line_definition_of_function - AssertionError: '52e56ee04ad92499' != '0a4f75cec280f634'
- 52e56ee04ad92499
+ 0a4f75cec280f634
FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ipython_function - AssertionError: 'a6bd2041ca63d6c0' != '517bf36b7eecdef5'
- a6bd2041ca63d6c0
+ 517bf36b7eecdef5
FAILED tests/test_fingerprint.py::HashingTest::test_hash_tiktoken_encoding - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_compiled_module - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_generator - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_set_stable - NameError: name 'log' is not defined
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NameError: name 'log' is not defined
= 11 failed, 2850 passed, 3 skipped, 23 warnings, 1 error in 191.06s (0:03:11) =
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7183/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7180/comments | https://api.github.com/repos/huggingface/datasets/issues/7180/events | https://github.com/huggingface/datasets/issues/7180 | 2,554,244,750 | I_kwDODunzps6YPq6O | 7,180 | Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion | {
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data will stay in RAM, and get paged out once your system needs RAM for something else (no OOM).\r\n\r\nrelated: https://github.com/huggingface/datasets/issues/4883"
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Steps to reproduce:
Create a PyTorch Dataset wrapper for 'nebula/cc12m':
```python
import io

from PIL import Image
from torch.utils.data import Dataset
from tqdm import tqdm
from datasets import load_dataset
from torchvision import transforms

Image.MAX_IMAGE_PIXELS = None
class CC12M(Dataset):
def __init__(self, path_or_name='nebula/cc12m', split='train', transform=None, single_caption=True):
self.raw_dataset = load_dataset(path_or_name)[split]
if transform is None:
self.transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711]
)
])
else:
self.transform = transforms.Compose(transform)
self.single_caption = single_caption
self.length = len(self.raw_dataset)
def __len__(self):
return self.length
def __getitem__(self, index):
item = self.raw_dataset[index]
caption = item['txt']
with io.BytesIO(item['webp']) as buffer:
image = Image.open(buffer).convert('RGB')
if self.transform:
image = self.transform(image)
# del item # Uncomment this line to prevent the memory leak
return image, caption
```
Iterate through the dataset without the `del item` line in `__getitem__`.
Observe RAM usage increasing constantly.
Add `del item` at the end of `__getitem__`:
```
def __getitem__(self, index):
item = self.raw_dataset[index]
caption = item['txt']
with io.BytesIO(item['webp']) as buffer:
image = Image.open(buffer).convert('RGB')
if self.transform:
image = self.transform(image)
del item # This line prevents the memory leak
return image, caption
```
Iterate through the dataset again and observe that RAM usage remains stable.
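To make the comparison concrete, one can watch the process RSS while iterating; a minimal sketch using the `CC12M` wrapper defined above (assuming `psutil` is installed, and the step counts are arbitrary):
```python
import os

import psutil
from torch.utils.data import DataLoader

proc = psutil.Process(os.getpid())
loader = DataLoader(CC12M(), batch_size=32, num_workers=0)
for step, (images, captions) in enumerate(loader):
    if step % 100 == 0:
        print(f"step {step}: rss = {proc.memory_info().rss / 1e9:.2f} GB")
    if step == 1000:
        break
```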
### Expected behavior
Expected behavior:
RAM usage should remain stable during iteration without needing to explicitly delete items.
Actual behavior:
RAM usage constantly increases unless items are explicitly deleted after use.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
| {
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7180/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7178/comments | https://api.github.com/repos/huggingface/datasets/issues/7178/events | https://github.com/huggingface/datasets/issues/7178 | 2,552,378,330 | I_kwDODunzps6YIjPa | 7,178 | Support Python 3.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,728 | 1970-01-01T00:00:00.000001 | MEMBER | null | Support Python 3.11: https://peps.python.org/pep-0664/ | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7178/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7175/comments | https://api.github.com/repos/huggingface/datasets/issues/7175/events | https://github.com/huggingface/datasets/issues/7175 | 2,550,957,337 | I_kwDODunzps6YDIUZ | 7,175 | [FSTimeoutError] load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4",
"events_url": "https://api.github.com/users/cosmo3769/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmo3769/followers",
"following_url": "https://api.github.com/users/cosmo3769/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cosmo3769",
"id": 53268607,
"login": "cosmo3769",
"node_id": "MDQ6VXNlcjUzMjY4NjA3",
"organizations_url": "https://api.github.com/users/cosmo3769/orgs",
"received_events_url": "https://api.github.com/users/cosmo3769/received_events",
"repos_url": "https://api.github.com/users/cosmo3769/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cosmo3769",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?",
"It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after downgrading to datasets==2.21.0, the problem was solved.",
"Same as https://github.com/huggingface/datasets/issues/7164\r\n\r\nThis dataset is made of a python script that downloads data from elsewhere than HF, so availability depends on the original host. Ultimately it would be nice to host the files of this dataset on HF\r\n\r\nin `datasets` <3.0 there were lots of mechanisms that got removed after the decision to make datasets with python loading scripts legacy for security and maintenance reasons (we only do very basic support now)",
"@lhoestq Thank you for the clarification! Closing the issue.",
"I'm getting this too, and also at 5 minutes. But for `CSTR-Edinburgh/vctk`, so it's not just this dataset, it seems to be a timeout that was introduced and needs to be raised. The progress bar was moving along just fine before the timeout, and I get more or less of it depending on how fast the network is.",
"You can change the `aiohttp` timeout from 5min to 1h like this:\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,729 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When using `load_dataset` to load [HuggingFaceM4/VQAv2](https://huggingface.co./datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`.
### Error
```
TimeoutError:
The above exception was the direct cause of the following exception:
FSTimeoutError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py](https://klh9mr78js-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240924-060116_RC00_678132060#) in sync(loop, func, timeout, *args, **kwargs)
99 if isinstance(return_result, asyncio.TimeoutError):
100 # suppress asyncio.TimeoutError, raise FSTimeoutError
--> 101 raise FSTimeoutError from return_result
102 elif isinstance(return_result, BaseException):
103 raise return_result
FSTimeoutError:
```
It usually fails around 5-6 GB.
<img width="847" alt="Screenshot 2024-09-26 at 9 10 19 PM" src="https://github.com/user-attachments/assets/ff91995a-fb55-4de6-8214-94025d6c8470">
### Steps to reproduce the bug
To reproduce it, run this in a Colab notebook:
```
!pip install -q -U datasets
from datasets import load_dataset
ds = load_dataset('HuggingFaceM4/VQAv2', split="train[:10%]")
```
### Expected behavior
It should download properly.
### Environment info
Using Colab Notebook. | {
"avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4",
"events_url": "https://api.github.com/users/cosmo3769/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmo3769/followers",
"following_url": "https://api.github.com/users/cosmo3769/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cosmo3769",
"id": 53268607,
"login": "cosmo3769",
"node_id": "MDQ6VXNlcjUzMjY4NjA3",
"organizations_url": "https://api.github.com/users/cosmo3769/orgs",
"received_events_url": "https://api.github.com/users/cosmo3769/received_events",
"repos_url": "https://api.github.com/users/cosmo3769/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cosmo3769",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7175/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7171/comments | https://api.github.com/repos/huggingface/datasets/issues/7171/events | https://github.com/huggingface/datasets/issues/7171 | 2,549,738,919 | I_kwDODunzps6X-e2n | 7,171 | CI is broken: No solution found when resolving dependencies | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297
```
Run uv pip install --system -r additional-tests-requirements.txt --no-deps
× No solution found when resolving dependencies:
╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9
and torchdata==0.10.0a0+1a98f21 depends on Python>=3.9, we can conclude
that torchdata==0.10.0a0+1a98f21 cannot be used.
And because only torchdata==0.10.0a0+1a98f21 is available and
you require torchdata, we can conclude that your requirements are
unsatisfiable.
Error: Process completed with exit code 1.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7171/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7169/comments | https://api.github.com/repos/huggingface/datasets/issues/7169/events | https://github.com/huggingface/datasets/issues/7169 | 2,546,894,076 | I_kwDODunzps6XzoT8 | 7,169 | JSON lines with missing columns raise CastError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | MEMBER | null | JSON lines with missing columns raise CastError:
> CastError: Couldn't cast ... to ... because column names don't match
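A minimal repro sketch of the symptom, assuming two local JSON Lines files (hypothetical file names) whose rows do not share the same top-level columns:
```python
# Hedged repro sketch: two JSON Lines files with mismatched top-level columns.
from datasets import load_dataset

with open("a.jsonl", "w") as f:
    f.write('{"id": 1, "text": "foo"}\n')
with open("b.jsonl", "w") as f:
    f.write('{"id": 2}\n')  # the "text" column is missing in this file

# Before the fix, this was expected to raise CastError ("column names don't match").
ds = load_dataset("json", data_files=["a.jsonl", "b.jsonl"])
```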
Related to:
- #7159
- #7161 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7169/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7168/comments | https://api.github.com/repos/huggingface/datasets/issues/7168/events | https://github.com/huggingface/datasets/issues/7168 | 2,546,710,631 | I_kwDODunzps6Xy7hn | 7,168 | sd1.5 diffusers controlnet training script gives new error | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"not sure why the issue is formatting oddly",
"I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071",
"this turned out to be because of a bad image in dataset"
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
This error now pops up randomly during training:
```
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main
for step, batch in enumerate(train_dataloader):
File "/usr/local/lib/python3.11/dist-packages/accelerate/data_loader.py", line 561, in __iter__
next_batch = next(dataloader_iter)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 673, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/_utils/fetch.py", line 50, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2746, in __getitems__
batch = self.__getitem__(keys)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2742, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2727, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 407, in __call__
return self.format_batch(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 521, in format_batch
batch = self.python_features_decoder.decode_batch(batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 228, in decode_batch
return self.features.decode_batch(batch) if self.features else batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2084, in decode_batch
[
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2085, in <listcomp>
decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id)
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
```
### Steps to reproduce the bug
Train with the diffusers SD 1.5 ControlNet example script.
The error pops up randomly; in the wandb chart below you can see the points where I manually resumed the run each time it appeared.
![image](https://github.com/user-attachments/assets/87e9a6af-cb3c-4398-82e7-d6a90add8d31)
### Expected behavior
Training should continue without the above error.
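Since the comments above conclude the root cause was a corrupt image in the dataset, a minimal sketch for locating such files before training (the directory path is hypothetical; adapt it to your dataset layout):
```python
# Hedged sketch: find images that PIL cannot fully decode before training starts.
from pathlib import Path
from PIL import Image

bad_files = []
for path in Path("path/to/train_images").rglob("*"):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    try:
        with Image.open(path) as img:
            img.load()  # force a full decode, the same call that fails in the traceback
    except Exception as exc:
        bad_files.append((path, exc))

print(f"{len(bad_files)} unreadable image(s)")
for path, exc in bad_files:
    print(path, exc)
```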
### Environment info
- datasets version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- huggingface_hub version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- fsspec version: 2024.6.1
Training on 4090 | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7168/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7167/comments | https://api.github.com/repos/huggingface/datasets/issues/7167/events | https://github.com/huggingface/datasets/issues/7167 | 2,546,708,014 | I_kwDODunzps6Xy64u | 7,167 | Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, batch_size=16, new_fingerprint=new_fingerprint)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```
Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single
writer.write_batch(batch)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table
pa_table = pa_table.combine_chunks()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher
```
### Steps to reproduce the bug
The same dataset trains without problems with the SD 1.5 ControlNet training script.
### Expected behavior
The script should not randomly fail with the error above.
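For reference, the workaround reported in the comments is to reduce the `map` batch size so the Arrow writer never has to combine oversized chunks. A sketch of that change (`compute_embeddings_fn` and `new_fingerprint` are the names used in the training script shown in the traceback):
```python
# Hedged sketch of the reported workaround: cap the map() batch size so that
# pyarrow does not hit "offset overflow" when concatenating large arrays.
train_dataset = train_dataset.map(
    compute_embeddings_fn,            # defined in the diffusers training script
    batched=True,
    batch_size=16,                    # smaller than the default of 1000
    new_fingerprint=new_fingerprint,
)
```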
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
training on A100 | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7167/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7164/comments | https://api.github.com/repos/huggingface/datasets/issues/7164/events | https://github.com/huggingface/datasets/issues/7164 | 2,544,757,297 | I_kwDODunzps6Xreox | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38216460?v=4",
"events_url": "https://api.github.com/users/timonmerk/events{/privacy}",
"followers_url": "https://api.github.com/users/timonmerk/followers",
"following_url": "https://api.github.com/users/timonmerk/following{/other_user}",
"gists_url": "https://api.github.com/users/timonmerk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timonmerk",
"id": 38216460,
"login": "timonmerk",
"node_id": "MDQ6VXNlcjM4MjE2NDYw",
"organizations_url": "https://api.github.com/users/timonmerk/orgs",
"received_events_url": "https://api.github.com/users/timonmerk/received_events",
"repos_url": "https://api.github.com/users/timonmerk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timonmerk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timonmerk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timonmerk",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! If you check the dataset loading script [here](https://huggingface.co./datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\n\r\nIn the meantime I can only recommend to try again later :/",
"Ok, still many thanks!",
"I'm also getting this same error but for `CSTR-Edinburgh/vctk`, so I don't think it's the remote host that's timing out, since I also time out at exactly 5 minutes. It seems there is a universal fsspec timeout that's getting hit starting in v3.",
"in v3 we cleaned the download parts of the library to make it more robust for HF downloads and to simplify support of script-based datasets. As a side effect it's not the same code that is used for other hosts, maybe time out handling changed. Anyway it should be possible to tweak fsspec to use retries\r\n\r\nFor example using [aiohttp_retry](https://github.com/inyutin/aiohttp_retry) maybe (haven't tried) ?\r\n\r\n```python\r\nimport fsspec\r\nfrom aiohttp_retry import RetryClient\r\n\r\nfsspec.filesystem(\"http\")._session = RetryClient()\r\n```\r\n\r\nrelated topic : https://github.com/huggingface/datasets/issues/7175",
"Adding a timeout argument to the `fs.get_file` call in `fsspec_get` in `datasets/utils/file_utils.py` might fix this ([source code](https://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L330)):\r\n\r\n```python\r\nfs.get_file(path, temp_file.name, callback=callback, timeout=3600)\r\n```\r\n\r\nSetting `timeout=1` fails after about one second, so setting it to 3600 should give us 1h. Havn't really tested this though. I'm also not sure what implications this has and if it causes errors for other `fs` implementations/configurations.\r\n\r\nThis is using `datasets==3.0.1` and Python 3.11.6.\r\n\r\n---\r\n\r\nEdit: This doesn't seem to change the timeout time, but add a second timeout counter (probably in `fsspec/asyn.py/sync`). So one can reduce the time for downloading like this, but not expand.\r\n\r\n---\r\n\r\nEdit 2: `fs` is of type `fsspec.implementations.http.HTTPFileSystem` which initializes a `aiohttp.ClientSession` using `client_kwargs`. We can pass these when calling `load_dataset`.\r\n\r\n**TLDR; This fixes it:**\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
> result[0] = await coro
> ^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
> chunk = await r.content.read(chunk_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
> await self._wait("read")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
> with self._timer:
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
> raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
> datasets.load_dataset("librispeech_asr", "clean")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
> builder_instance.download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
> self._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
> super()._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
> archive_path = dl_manager.download(_DL_URLS[self.config.name])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
> downloaded_path_or_paths = map_nested(
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
> _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
> return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
> self._download_single(url_or_filename, download_config=download_config)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
> out = cached_path(url_or_filename, download_config=download_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
> output_path = get_from_cache(
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
> fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
> fs.get_file(path, temp_file.name, callback=callback)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
> return sync(self.loop, func, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
> raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:09, 13.0MB/s]
### Expected behavior
The download should complete.
### Environment info
Python version 3.12.6
Dependencies:
> dependencies = [
> "accelerate>=0.34.2",
> "datasets[audio]>=3.0.0",
> "ipython>=8.18.1",
> "librosa>=0.10.2.post1",
> "torch>=2.4.1",
> "torchaudio>=2.4.1",
> "transformers>=4.44.2",
> ]
MacOS 14.6.1 (23G93) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7164/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7163/comments | https://api.github.com/repos/huggingface/datasets/issues/7163/events | https://github.com/huggingface/datasets/issues/7163 | 2,542,361,234 | I_kwDODunzps6XiVqS | 7,163 | Set explicit seed in iterable dataset ddp shuffling example | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"thanks for reporting !"
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Describe the bug
In the examples section of the iterable dataset docs https://huggingface.co./docs/datasets/en/package_reference/main_classes#datasets.IterableDataset
the DDP example shuffles without an explicit seed:
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
for example in ids:
pass
```
This code would - I think - raise an error due to the lack of an explicit seed:
https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711
### Steps to reproduce the bug
Run the example code.
### Expected behavior
Add explicit seeding to the example code.
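A sketch of what the corrected docs snippet might look like, simply adding a `seed` to the shuffle call (`ds` is the base dataset from the quoted example):
```python
# Hedged sketch of the corrected example: pass an explicit seed when shuffling
# before splitting by node, so every rank shuffles the shards consistently.
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(seed=42, buffer_size=10_000)  # explicit seed added here
ids = split_dataset_by_node(ids, world_size=8, rank=0)
dataloader = DataLoader(ids, num_workers=4)
for example in ids:
    pass
```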
### Environment info
latest datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7163/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7161/comments | https://api.github.com/repos/huggingface/datasets/issues/7161/events | https://github.com/huggingface/datasets/issues/7161 | 2,541,971,931 | I_kwDODunzps6Xg2nb | 7,161 | JSON lines with empty struct raise ArrowTypeError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | MEMBER | null | JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order
See example: https://huggingface.co./datasets/wikimedia/structured-wikipedia/discussions/5
> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64>
Related to:
- #7159 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7161/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7159/comments | https://api.github.com/repos/huggingface/datasets/issues/7159/events | https://github.com/huggingface/datasets/issues/7159 | 2,541,865,613 | I_kwDODunzps6XgcqN | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co./datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co./datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wikipedia dataset locally but when loading I have the same issue.\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr\")\r\n```\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<content_url: string, width: int64, height: int64, alternative_text: string>\r\nto\r\n{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n```\r\nMy version of datasets is 3.0.1"
] | 1970-01-01T00:00:00.000001 | 1,729 | 1970-01-01T00:00:00.000001 | MEMBER | null | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co./datasets/wikimedia/structured-wikipedia/discussions/5
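A minimal sketch of the failure mode, assuming two JSON Lines files (hypothetical names) where the nested struct is missing a field in the second file:
```python
# Hedged repro sketch: nested struct fields differ between files, which raised
# TypeError ("Couldn't cast array of type ...") instead of null-filling the field.
from datasets import load_dataset

with open("a.jsonl", "w") as f:
    f.write('{"image": {"content_url": "http://x/1.png", "width": 10, "height": 20}}\n')
with open("b.jsonl", "w") as f:
    f.write('{"image": {"content_url": "http://x/2.png", "width": 10}}\n')  # "height" missing

ds = load_dataset("json", data_files=["a.jsonl", "b.jsonl"])  # expected to fail before the fix
```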
One would expect that the struct missing fields are added with null values. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7159/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7156/comments | https://api.github.com/repos/huggingface/datasets/issues/7156/events | https://github.com/huggingface/datasets/issues/7156 | 2,539,360,617 | I_kwDODunzps6XW5Fp | 7,156 | interleave_datasets resets shuffle state | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,726 | null | NONE | null | ### Describe the bug
```
import datasets
import torch.utils.data
def gen(shards):
yield {"shards": shards}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={'shards': list(range(25))}
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
if __name__ == "__main__":
main()
```
### Steps to reproduce the bug
Run the script; it will output:
```
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([ 1, 9, 17, 1, 9, 17, 1, 9])]}
{'shards': [tensor([ 2, 10, 18, 2, 10, 18, 2, 10])]}
{'shards': [tensor([ 3, 11, 19, 3, 11, 19, 3, 11])]}
{'shards': [tensor([ 4, 12, 20, 4, 12, 20, 4, 12])]}
{'shards': [tensor([ 5, 13, 21, 5, 13, 21, 5, 13])]}
{'shards': [tensor([ 6, 14, 22, 6, 14, 22, 6, 14])]}
{'shards': [tensor([ 7, 15, 23, 7, 15, 23, 7, 15])]}
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([17, 1, 9, 17, 1, 9, 17, 1])]}
{'shards': [tensor([18, 2, 10, 18, 2, 10, 18, 2])]}
```
### Expected behavior
The shards should be shuffled.
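One possible workaround to try, an untested assumption rather than a verified fix: shuffle the interleaved dataset itself with an explicit seed, instead of relying on the shuffle applied before `interleave_datasets` (`gen` is the generator from the repro above):
```python
# Untested workaround sketch: shuffle after interleaving, so interleave_datasets()
# cannot reset the shuffle state set earlier. Not verified against this bug.
import datasets

dataset = datasets.IterableDataset.from_generator(gen, gen_kwargs={"shards": list(range(25))})
dataset = datasets.interleave_datasets(
    [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(seed=42, buffer_size=1)  # shuffle after interleaving instead of before
```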
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.25.0
- PyArrow version: 17.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7156/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7155/comments | https://api.github.com/repos/huggingface/datasets/issues/7155/events | https://github.com/huggingface/datasets/issues/7155 | 2,533,641,870 | I_kwDODunzps6XBE6O | 7,155 | Dataset viewer not working! Failure due to more than 32 splits. | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. "
] | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | NONE | null | Hello guys,
I have a dataset, and I didn't know I couldn't upload more than 32 splits. Now my dataset viewer is not working. I no longer have the dataset locally on my node, and recreating it would take a week, but I have to publish the dataset this coming Monday. I have read about the recommended practice and how to avoid this issue in the future, but right now I need a quick fix for two of my datasets.
I don't want to mess up or change anything, and I want everyone to be able to view and interact with the dataset publicly. Can you please help me?
https://huggingface.co./datasets/laion/Wikipedia-X
https://huggingface.co./datasets/laion/Wikipedia-X-Full | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7155/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7153/comments | https://api.github.com/repos/huggingface/datasets/issues/7153/events | https://github.com/huggingface/datasets/issues/7153 | 2,532,788,555 | I_kwDODunzps6W90lL | 7,153 | Support data files with .ndjson extension | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | MEMBER | null | ### Feature request
Support data files with `.ndjson` extension.
### Motivation
We already support data files with `.jsonl` extension.
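For illustration, a minimal sketch of what this enables, assuming `.ndjson` files get resolved to the same JSON Lines loader as `.jsonl` (the repository and file names here are hypothetical):
```python
from datasets import load_dataset

# A repo or folder containing e.g. `train.ndjson` (newline-delimited JSON)
# would be picked up automatically, just like a `.jsonl` file is today.
ds = load_dataset("username/my-ndjson-dataset", split="train")

# The explicit JSON builder already accepts arbitrary file names,
# so this works independently of the extension-based inference.
ds = load_dataset("json", data_files="train.ndjson", split="train")
```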
### Your contribution
I am opening a PR. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7153/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7150/comments | https://api.github.com/repos/huggingface/datasets/issues/7150/events | https://github.com/huggingface/datasets/issues/7150 | 2,527,571,175 | I_kwDODunzps6Wp6zn | 7,150 | WebDataset loader splits keys differently than WebDataset library | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | MEMBER | null | As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.
For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
- datasets library: `/some/path/22` and `0/1.1.png`
- webdataset library: `/some/path/22.0/1`, `1.png`
```python
import webdataset as wds
wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png")
# ('/some/path/22.0/1', '1.png')
```
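For reference, a small sketch that reproduces the `webdataset` splitting rule shown above — the key is everything up to the first dot of the file name (after the last slash), the extension is the rest. This is a paraphrase for illustration, not the exact library source:
```python
import re

def base_plus_ext(path: str):
    # Split "<dirs>/<name>.<ext...>" at the first dot of the final path component.
    match = re.match(r"^((?:.*/|)[^.]+)[.]([^/]*)$", path)
    if not match:
        return None, None
    return match.group(1), match.group(2)

print(base_plus_ext("/some/path/22.0/1.1.png"))  # ('/some/path/22.0/1', '1.png')
```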
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7150/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7149/comments | https://api.github.com/repos/huggingface/datasets/issues/7149/events | https://github.com/huggingface/datasets/issues/7149 | 2,524,497,448 | I_kwDODunzps6WeMYo | 7,149 | Datasets Unknown Keyword Argument Error - task_templates | {
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varungupta31",
"id": 51288316,
"login": "varungupta31",
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varungupta31",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co./datasets/facebook/winoground/discussions/8\r\n\r\n",
"Hello @albertvillanova \r\n\r\nI got the same error while loading this dataset: https://huggingface.co./datasets/alaleye/aloresb...\r\n\r\nHow can I fix it ? \r\nThanks"
] | 1970-01-01T00:00:00.000001 | 1,731 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Issue
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token="<YOUR USER ACCESS TOKEN>")
```
Gives error
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to the older `datasets` v2.21.0 solves it.
### Steps to reproduce the bug
1. `pip install datsets`
2.
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token="<YOUR USER ACCESS TOKEN>")
```
### Expected behavior
Should load the dataset correctly.
### Environment info
- Datasets version `3.0.0`
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.35.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7149/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7148/comments | https://api.github.com/repos/huggingface/datasets/issues/7148/events | https://github.com/huggingface/datasets/issues/7148 | 2,523,833,413 | I_kwDODunzps6WbqRF | 7,148 | Bug: Error when downloading mteb/mtop_domain | {
"avatar_url": "https://avatars.githubusercontent.com/u/77958037?v=4",
"events_url": "https://api.github.com/users/ZiyiXia/events{/privacy}",
"followers_url": "https://api.github.com/users/ZiyiXia/followers",
"following_url": "https://api.github.com/users/ZiyiXia/following{/other_user}",
"gists_url": "https://api.github.com/users/ZiyiXia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZiyiXia",
"id": 77958037,
"login": "ZiyiXia",
"node_id": "MDQ6VXNlcjc3OTU4MDM3",
"organizations_url": "https://api.github.com/users/ZiyiXia/orgs",
"received_events_url": "https://api.github.com/users/ZiyiXia/received_events",
"repos_url": "https://api.github.com/users/ZiyiXia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZiyiXia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZiyiXia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZiyiXia",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```",
"Seems the error is still there",
"I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```",
"Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!"
] | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
When downloading the dataset "mteb/mtop_domain", I ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module
local_path = self.download_loading_script()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache
fsspec_get(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file
http_get(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get
raise EnvironmentError(
OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py).
We are sorry for the inconvenience. Please retry with `force_download=True`.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
I tried to download through the HF `datasets` library directly but got the same error as above.
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en")
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en", force_download=True)
```
Both with and without `force_download=True`, I ran into the same error.
### Expected behavior
Should download the dataset successfully.
### Environment info
- datasets version: 2.21.0
- huggingface-hub version: 0.24.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7148/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7147/comments | https://api.github.com/repos/huggingface/datasets/issues/7147/events | https://github.com/huggingface/datasets/issues/7147 | 2,523,129,465 | I_kwDODunzps6WY-Z5 | 7,147 | IterableDataset strange deadlock | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard to 8 workers, so some workers may not have an example that satisfies your condition `if shard < 25`). It creates an infinite loop, trying to get samples from empty datasets with probability 1.",
"Opened https://github.com/huggingface/datasets/issues/7156\r\n\r\nCan the deadlock be fixed somehow? The point of IterableDataset is so we don't need to preload the entire dataset, which loses some meaning if we need to see how many examples are in the dataset in order to set shards correctly.",
"~~And it is kinda strange that `Commenting out the final shuffle avoids the issue` since if the infinite loop is inside interleave_datasets you'd expect that to happen regardless of the additional shuffle call?~~\r\n\r\nEdit: oh I guess without the shuffle it's guaranteed every worker gets something, but the shuffle makes it so some workers could have nothing\r\n\r\n~~Edit2: maybe the shuffle can be changed so initially it gives one example to each worker, and only starts the random shuffle after that~~ wait it's not about the workers not getting any shards, it's about a worker getting shards but all of the shards it gets are empty shards\r\n\r\nEdit3: If it's trying to get samples from empty datasets, it should be getting back a StopIteration -- and \"all_exhausted\" should mean it eventually discovers all its datasets are empty, and then it should just raise a StopIteration itself. So it seems like there is a reasonable behavior result for this?",
"well the second dataset passed to interleave_datasets is never exhausted, since it's never sampled. But we could also state that the stream of examples from the second dataset is empty if it has probability 0, so I opened https://github.com/huggingface/datasets/pull/7157 to fix the infinite loop issue by ignoring datasets with probability 0, let me know what you think !",
"Thanks for taking a look!\r\n\r\nI think you're right that this is ultimately an issue that the user opts into by specifying a dataset with probability 0, because the user is basically saying \"I want to force this `interleave_datasets` call to run forever\" and yet one of the workers can end up having only empty shards to mix...\r\n\r\nThat said it's probably not a good idea to randomly change the behavior of `interleave_datasets` with probability 0, I can't be the only one that uses it to repeat many different datasets (since there is no `datasets.repeat()` function). https://xkcd.com/1172/\r\n\r\nI think just the knowledge that filtering out probability 0 datasets fixes the deadlock is good enough for me. I can filter it out on my side and add a restart loop around the dataloader instead.\r\n\r\nThanks again for investigating.",
"Ok I see ! We can also add .repeat() as well"
] | 1970-01-01T00:00:00.000001 | 1,727 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
```python
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": list(range(num_shards))},
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(buffer_size=1)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
print()
if __name__ == "__main__":
for _ in range(100):
main()
```
### Steps to reproduce the bug
Running the script above, at some point it will freeze.
- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue
As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets.
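For reference, a minimal sketch of the workaround discussed in the comments on this issue — drop zero-probability datasets before interleaving, since they can leave a worker with only empty streams. Variable names follow the script above:
```python
# Keep only the sources that are actually sampled (probability > 0).
sources = [dataset, dataset]
probs = [1.0, 0.0]
kept = [(d, p) for d, p in zip(sources, probs) if p > 0]
if len(kept) > 1:
    dataset = datasets.interleave_datasets(
        [d for d, _ in kept],
        probabilities=[p for _, p in kept],
        stopping_strategy="all_exhausted",
    )
else:
    dataset = kept[0][0]  # nothing left to interleave with
```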
### Expected behavior
The script should not freeze.
### Environment info
- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro. | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7147/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7142/comments | https://api.github.com/repos/huggingface/datasets/issues/7142/events | https://github.com/huggingface/datasets/issues/7142 | 2,512,244,938 | I_kwDODunzps6VvdDK | 7,142 | Specifying datatype when adding a column to a dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"#self-assign"
] | 1970-01-01T00:00:00.000001 | 1,726 | 1970-01-01T00:00:00.000001 | CONTRIBUTOR | null | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function.
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
### Your contribution
I can submit a PR for this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7142/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7141/comments | https://api.github.com/repos/huggingface/datasets/issues/7141/events | https://github.com/huggingface/datasets/issues/7141 | 2,510,797,653 | I_kwDODunzps6Vp7tV | 7,141 | Older datasets throwing safety errors with 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4",
"events_url": "https://api.github.com/users/alvations/events{/privacy}",
"followers_url": "https://api.github.com/users/alvations/followers",
"following_url": "https://api.github.com/users/alvations/following{/other_user}",
"gists_url": "https://api.github.com/users/alvations/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvations",
"id": 1050316,
"login": "alvations",
"node_id": "MDQ6VXNlcjEwNTAzMTY=",
"organizations_url": "https://api.github.com/users/alvations/orgs",
"received_events_url": "https://api.github.com/users/alvations/received_events",
"repos_url": "https://api.github.com/users/alvations/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvations/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvations",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I am also getting this error with this dataset: https://huggingface.co./datasets/google/IFEval",
"Me too, didn't have this issue few hours ago.",
"same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n",
"Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n",
"Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.",
"Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```",
"i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. anyone found a fix?",
"hi, me and my team have the same problem",
"Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n",
"Also discussed here:\r\nhttps://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/1",
"i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!",
"As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)",
"> Here is a temporary fix for the problem: https://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/12?u=mlscientist\r\n\r\nthis doesn't seem to work!",
"In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file",
"No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n",
"We've reverted the deployment, please let us know if the issue still persists!",
"thanks @muellerzr!"
] | 1970-01-01T00:00:00.000001 | 1,725 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
The dataset loading was throwing some safety errors for this popular dataset `wmt14`.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>()
2
3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train")
----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train")
5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
12 frames
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs)
636 if security is not None:
637 security = BlobSecurityInfo(
--> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"]
639 )
640 self.security = security
KeyError: 'safe'
```
### Steps to reproduce the bug
See above.
### Expected behavior
Dataset properly loaded.
### Environment info
version: 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muellerzr",
"id": 7831895,
"login": "muellerzr",
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muellerzr",
"user_view_type": "public"
} | {
"+1": 26,
"-1": 0,
"confused": 2,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 28,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7141/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7139/comments | https://api.github.com/repos/huggingface/datasets/issues/7139/events | https://github.com/huggingface/datasets/issues/7139 | 2,508,078,858 | I_kwDODunzps6Vfj8K | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/105094708?v=4",
"events_url": "https://api.github.com/users/fscdc/events{/privacy}",
"followers_url": "https://api.github.com/users/fscdc/followers",
"following_url": "https://api.github.com/users/fscdc/following{/other_user}",
"gists_url": "https://api.github.com/users/fscdc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fscdc",
"id": 105094708,
"login": "fscdc",
"node_id": "U_kgDOBkOeNA",
"organizations_url": "https://api.github.com/users/fscdc/orgs",
"received_events_url": "https://api.github.com/users/fscdc/received_events",
"repos_url": "https://api.github.com/users/fscdc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fscdc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fscdc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fscdc",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set = load_dataset('imagenet-1k', split='train', use_auth_token=True)\r\n``` ",
"Thanks a lot! It helps me"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
CenterCrop(224),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
def transform_train_examples(examples):
transform = Compose([
RandomResizedCrop(224),
RandomHorizontalFlip(),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
    # @fengsicheng: This way is very slow for a big dataset like ImageNet-1K (but it can work around the network problem by using a local dataset)
# train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
# test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
print(train_set["label"])
train_set.set_transform(transform_train_examples)
test_set.set_transform(transform_val_examples)
return train_set, test_set
```
Above is the code; the output of the print is a list of `None`:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. Just run the code.
2. See the print output.
### Expected behavior
I do not know how to fix this; can anyone provide help? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7139/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7138/comments | https://api.github.com/repos/huggingface/datasets/issues/7138/events | https://github.com/huggingface/datasets/issues/7138 | 2,507,738,308 | I_kwDODunzps6VeQzE | 7,138 | Cache only changed columns? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.",
"yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files"
] | 1970-01-01T00:00:00.000001 | 1,726 | null | CONTRIBUTOR | null | ### Feature request
Cache only the actual changes to the dataset, i.e. the changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets: if one only wants to change another column, e.g. some metadata, the complete 5 TB still has to be saved again.
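For context, a minimal sketch of the workaround mentioned in the comments on this issue — map only the cheap columns, then join the result back with `concatenate_datasets(..., axis=1)`. The column names are made up:
```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"image_path": ["a.png", "b.png"], "caption": ["x", "y"]})

# Process only the small column so its cache file does not duplicate the heavy columns.
updated = ds.select_columns(["caption"]).map(lambda ex: {"caption": ex["caption"].upper()})

# Join the updated column back onto the untouched columns.
ds = concatenate_datasets([ds.remove_columns(["caption"]), updated], axis=1)
```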
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7138/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7137/comments | https://api.github.com/repos/huggingface/datasets/issues/7137/events | https://github.com/huggingface/datasets/issues/7137 | 2,506,851,048 | I_kwDODunzps6Va4Lo | 7,137 | [BUG] dataset_info sequence unexpected behavior in README.md YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: string\r\n\r\n\r\n# data\r\n{\"answers\": {\"text\": \"ADDRESS\", \"label\": \"abc\"}}\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,725 | null | NONE | null | ### Describe the bug
While working on the `dataset_info` YAML, I found that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
**Error log**:
```
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct
```
## Potential Reason
After some analysis, it turns out that my YAML config actually requires `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I changed my data to
```
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
The following two different `dataset_info` configs are actually equivalent.
```
dataset_info:
- config_name: default
features:
- name: answers
dtype:
- name: text
sequence: string
- name: label
      sequence: string

dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
### Steps to reproduce the bug
```
# README.md
---
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "test.jsonl"
---
# test.jsonl
# expected but not working
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
# unexpected but working
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
### Expected behavior
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
Should work on following data format:
```
{"answers":[{"text":"ADDRESS", "label": "abc"}]}
```
### Environment info
- `datasets` version: 2.21.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7137/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7135/comments | https://api.github.com/repos/huggingface/datasets/issues/7135/events | https://github.com/huggingface/datasets/issues/7135 | 2,503,318,328 | I_kwDODunzps6VNZs4 | 7,135 | Bug: Type Mismatch in Dataset Mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/45327989?v=4",
"events_url": "https://api.github.com/users/marko1616/events{/privacy}",
"followers_url": "https://api.github.com/users/marko1616/followers",
"following_url": "https://api.github.com/users/marko1616/following{/other_user}",
"gists_url": "https://api.github.com/users/marko1616/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marko1616",
"id": 45327989,
"login": "marko1616",
"node_id": "MDQ6VXNlcjQ1MzI3OTg5",
"organizations_url": "https://api.github.com/users/marko1616/orgs",
"received_events_url": "https://api.github.com/users/marko1616/received_events",
"repos_url": "https://api.github.com/users/marko1616/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marko1616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marko1616/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marko1616",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dict(data)\r\n\r\n# Mapping function to convert label to string\r\ndef add_one(example):\r\n example['label'] += 1\r\n return example\r\n\r\n# Applying the mapping function\r\ndataset = dataset.map(add_one)\r\n\r\n# Iterating over the dataset to show results\r\nfor item in dataset:\r\n print(item)\r\n print(type(item['label']))\r\n```",
"Hello, thanks for submitting an issue.\r\n\r\nFWIU, the issue is that `datasets` tries to limit casting [ref](https://github.com/huggingface/datasets/blob/ca58154bba185c1916ca5eea4e33b27258642044/src/datasets/arrow_writer.py#L526) and as such will try to convert your strings back to int to preserve the `Features`. \r\n\r\nA quick solution would be to use `dataset.cast` or to supply `features` when calling `dataset.map`.\r\n\r\n\r\n```python\r\n# using Dataset.cast\r\ndataset = dataset.cast_column('label', Value('string'))\r\n\r\n# Alternative, supply features\r\ndataset = dataset.map(add_one, features=Features({**dataset.features, 'label': Value('string')}))\r\n```",
"LGTM! Thanks for the review.\r\n\r\nJust to clarify, is this intended behavior, or is it something that might be addressed in a future update?\r\nI'll leave this issue open until it's fixed if this is not the intended behavior."
] | 1970-01-01T00:00:00.000001 | 1,725 | null | NONE | null | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string.
## Reproduction Code
Below is a Python script that demonstrates the problem:
```python
from datasets import Dataset
# Original data
data = {
'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],
'label': [0, 1, 0, 1, 1, 0]
}
# Creating a Dataset object
dataset = Dataset.from_dict(data)
# Mapping function to convert label to string
def add_one(example):
example['label'] = str(example['label'])
return example
# Applying the mapping function
dataset = dataset.map(add_one)
# Iterating over the dataset to show results
for item in dataset:
print(item)
print(type(item['label']))
```
## Expected Output
After applying the mapping function, the expected output should have the `label` field as strings:
```plaintext
{'text': 'Hello', 'label': '0'}
<class 'str'>
{'text': 'world', 'label': '1'}
<class 'str'>
{'text': 'this', 'label': '0'}
<class 'str'>
{'text': 'is', 'label': '1'}
<class 'str'>
{'text': 'a', 'label': '1'}
<class 'str'>
{'text': 'test', 'label': '0'}
<class 'str'>
```
## Actual Output
The actual output still shows the `label` field values as integers:
```plaintext
{'text': 'Hello', 'label': 0}
<class 'int'>
{'text': 'world', 'label': 1}
<class 'int'>
{'text': 'this', 'label': 0}
<class 'int'>
{'text': 'is', 'label': 1}
<class 'int'>
{'text': 'a', 'label': 1}
<class 'int'>
{'text': 'test', 'label': 0}
<class 'int'>
```
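A workaround sketch based on the maintainer suggestion in the comments (not part of the original report): supplying the target `Features` to `map`, or casting the column, keeps the string type.
```python
# Workaround sketch: tell `map` what the output schema should be,
# or cast the column explicitly afterwards.
from datasets import Dataset, Features, Value

dataset = Dataset.from_dict({
    "text": ["Hello", "world", "this", "is", "a", "test"],
    "label": [0, 1, 0, 1, 1, 0],
})

def add_one(example):
    example["label"] = str(example["label"])
    return example

dataset_str = dataset.map(
    add_one,
    features=Features({**dataset.features, "label": Value("string")}),
)
# Alternative: dataset_str = dataset_str.cast_column("label", Value("string"))

print(type(dataset_str[0]["label"]))  # <class 'str'>
```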
## Why necessary
In the case of image processing, we often need to convert a PIL image to a tensor under the same column name.
Thanks to every dev who reviews this issue. 🤗 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7135/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7134/comments | https://api.github.com/repos/huggingface/datasets/issues/7134/events | https://github.com/huggingface/datasets/issues/7134 | 2,499,484,041 | I_kwDODunzps6U-xmJ | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | {
"avatar_url": "https://avatars.githubusercontent.com/u/46371349?v=4",
"events_url": "https://api.github.com/users/navidmafi/events{/privacy}",
"followers_url": "https://api.github.com/users/navidmafi/followers",
"following_url": "https://api.github.com/users/navidmafi/following{/other_user}",
"gists_url": "https://api.github.com/users/navidmafi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/navidmafi",
"id": 46371349,
"login": "navidmafi",
"node_id": "MDQ6VXNlcjQ2MzcxMzQ5",
"organizations_url": "https://api.github.com/users/navidmafi/orgs",
"received_events_url": "https://api.github.com/users/navidmafi/received_events",
"repos_url": "https://api.github.com/users/navidmafi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/navidmafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navidmafi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/navidmafi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,725 | null | NONE | null | ### Describe the bug
Background: digital images are often represented as a (Height, Width, Channel) tensor, and the same holds for Hugging Face datasets that contain images. These images are loaded as Pillow objects, which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS.
I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM.
### Steps to reproduce the bug
Below is a minimal example using two methods to get the desired output. Both of which don't work
```py
import tensorflow as tf
import datasets
import numpy as np
ds = datasets.load_dataset("project-sloth/captcha-images")
to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)}
ds_gray = ds.map(to_gray_pillow)
# Alternatively
ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow")
to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)}
ds_gray = ds.map(to_gray_tf)
```
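One possible workaround, sketched here as an assumption rather than taken from the original report: apply the grayscale conversion lazily with `with_transform`, so the (H, W, 1) arrays are produced per access and nothing is re-encoded to Arrow.
```python
# Hedged workaround sketch: lazy, on-access conversion instead of map().
import numpy as np
import datasets

ds = datasets.load_dataset("project-sloth/captcha-images")

def to_gray(batch):
    # batch["image"] is a list of PIL images; convert each to (H, W, 1) uint8
    batch["image"] = [
        np.expand_dims(np.array(img.convert("L")), axis=-1) for img in batch["image"]
    ]
    return batch

ds_gray = ds.with_transform(to_gray)  # applied on the fly, no Arrow re-encoding
```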
### Expected behavior
I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape.
### Environment info
datasets 2.21.0
python tested with both 3.11 and 3.12
host os : linux | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7134/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7129/comments | https://api.github.com/repos/huggingface/datasets/issues/7129/events | https://github.com/huggingface/datasets/issues/7129 | 2,491,942,650 | I_kwDODunzps6UiAb6 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | In the documentation for [ClassLabel](https://huggingface.co./docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import ClassLabel, Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which expects to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because, although `num_classes` is used during the init of the object, it is afterwards ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
I would like to work on this issue if this is something needed 😄
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7129/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7128/comments | https://api.github.com/repos/huggingface/datasets/issues/7128/events | https://github.com/huggingface/datasets/issues/7128 | 2,490,274,775 | I_kwDODunzps6UbpPX | 7,128 | Filter Large Dataset Entry by Entry | {
"avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4",
"events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}",
"followers_url": "https://api.github.com/users/QiyaoWei/followers",
"following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}",
"gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QiyaoWei",
"id": 36057290,
"login": "QiyaoWei",
"node_id": "MDQ6VXNlcjM2MDU3Mjkw",
"organizations_url": "https://api.github.com/users/QiyaoWei/orgs",
"received_events_url": "https://api.github.com/users/QiyaoWei/received_events",
"repos_url": "https://api.github.com/users/QiyaoWei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QiyaoWei",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n",
"Jumping on this as it seems relevant - when I use the `filter` method, it often results in an OOM (or at least unacceptably high memory usage).\r\n\r\nFor example in the [this notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_), we load an object detection dataset from HF and imagine I want to filter such that I only have images which contain a single annotation class. Each row has a JSON field that contains MS-COCO annotations for the image, so we could load that field and filter on it.\r\n\r\nThe test dataset is only about 440 images, probably less than 1GB, but running the following filter crashes the VM (over 12 GB RAM):\r\n\r\n```python\r\nimport json\r\ndef filter_single_class(example, target_class_id):\r\n \"\"\"Filters examples based on whether they contain annotations from a single class.\r\n\r\n Args:\r\n example: A dictionary representing a single example from the dataset.\r\n target_class_id: The target class ID to filter for.\r\n\r\n Returns:\r\n True if the example contains only annotations from the target class, False otherwise.\r\n \"\"\"\r\n if not example['coco_annotations']:\r\n return False\r\n\r\n annotation_category_ids = set([annotation['category_id'] for annotation in json.loads(example['coco_annotations'])])\r\n\r\n return len(annotation_category_ids) == 1 and target_class_id in annotation_category_ids\r\n\r\ntarget_class_id = 1 \r\nfiltered_dataset = dataset['test'].filter(lambda example: filter_single_class(example, target_class_id))\r\n```\r\n\r\n<img width=\"255\" alt=\"image\" src=\"https://github.com/user-attachments/assets/be475f15-5b6b-4df2-b5b5-a1f60ae2b05c\">\r\n\r\nIterating over the dataset works fine:\r\n\r\n```python\r\nfiltered_dataset = []\r\nfor example in dataset['test']:\r\n if filter_single_class(example, target_class_id):\r\n filtered_dataset.append(example)\r\n```\r\n\r\n<img width=\"129\" alt=\"image\" src=\"https://github.com/user-attachments/assets/34fa5612-0394-4c46-9f34-e94650f05d65\">\r\n\r\nIt would be great if there was guidance in the documentation on how to use filters efficiently, or if this is some performance bug that could be addressed. At the very least I would expect a filter operation to use at most 2x the footprint of the database plus some overhead for the lambda (i.e. worst case would be a duplicate copy with all entries retained). Even if the operation is parallelised, each thread/worker should only take a subset of the dataset - so I'm not sure where this ballooning in memory usage comes from.\r\n\r\nFrom some other comments there seems to be a workaround with `writer_batch_size` or caching to file, but in the [docs](https://huggingface.co./docs/datasets/v3.0.0/en/package_reference/main_classes#datasets.Dataset.filter) at least, `keep_in_memory` defaults to `False`.",
"You can try passing input_columns=[\"coco_annotations\"] to only load this column instead of all the columns. In that case your function should take coco_annotations as input instead of example",
"If your filter_function is large and computationally intensive, consider using multi-processing or multi-threading with concurrent.futures to filter the dataset. This approach allows you to process multiple tables concurrently, reducing overall processing time, especially for CPU-bound tasks. Use ThreadPoolExecutor for I/O-bound operations and ProcessPoolExecutor for CPU-bound operations.\r\n"
] | 1970-01-01T00:00:00.000001 | 1,728 | null | NONE | null | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```
from itertools import islice
from datasets import load_dataset

dataset = load_dataset(
    "really-large-dataset",
    streaming=True
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)
# Define a function to filter the data
def filter_function(table):
if some_condition:
return True
else:
return False
# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup + solution makes sense to people, and whether anyone has suggestions!
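For what it's worth, a hedged sketch (assuming a `train` split and reusing the `filter_function` defined above): `filter` on a streaming dataset is already lazy, so the same selection can be expressed without materializing anything.
```python
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", streaming=True, split="train")

# filter() on an IterableDataset is lazy: it runs while you iterate.
filtered_dataset = dataset.filter(filter_function)

for example in filtered_dataset.take(10_000):  # take() also stays lazy
    ...  # work only on the "good" tables
```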
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7128/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7127/comments | https://api.github.com/repos/huggingface/datasets/issues/7127/events | https://github.com/huggingface/datasets/issues/7127 | 2,486,524,966 | I_kwDODunzps6UNVwm | 7,127 | Caching shuffles by np.random.Generator results in unintiutive behavior | {
"avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4",
"events_url": "https://api.github.com/users/el-hult/events{/privacy}",
"followers_url": "https://api.github.com/users/el-hult/followers",
"following_url": "https://api.github.com/users/el-hult/following{/other_user}",
"gists_url": "https://api.github.com/users/el-hult/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/el-hult",
"id": 11832922,
"login": "el-hult",
"node_id": "MDQ6VXNlcjExODMyOTIy",
"organizations_url": "https://api.github.com/users/el-hult/orgs",
"received_events_url": "https://api.github.com/users/el-hult/received_events",
"repos_url": "https://api.github.com/users/el-hult/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/el-hult/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/el-hult/subscriptions",
"type": "User",
"url": "https://api.github.com/users/el-hult",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible."
] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | ### Describe the bug
Create a dataset. Save it to disk. Load it from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The two iterations yield different results, since the supplied `np.random.Generator` has progressed between the shuffles.
Load the dataset from disk again. Shuffle and iterate. See the same result as before. Shuffle and iterate again, and this time it does not give the same shuffling as in the previous run.
The motivation is I have a deep learning loop with
```
for epoch in range(10):
for batch in dataset.shuffle(generator=generator).iter(batch_size=32):
.... # do stuff
```
where I want a new shuffling at every epoch. Instead I get the same shuffling.
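A possible workaround, sketched as an assumption rather than taken from the report: pass an explicit, changing seed instead of a `Generator`, since the fingerprint-based cache does not track the generator's internal state.
```python
# Hedged workaround sketch: derive a fresh seed per epoch.
for epoch in range(10):
    shuffled = dataset.shuffle(seed=epoch)  # new order each epoch, cache-friendly
    for batch in shuffled.iter(batch_size=32):
        ...  # do stuff
```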
### Steps to reproduce the bug
Run the code below two times.
```python
import datasets
import numpy as np
generator = np.random.default_rng(0)
ds = datasets.Dataset.from_dict(mapping={"X":range(1000)})
ds.save_to_disk("tmp")
print("First loop: ", end="")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
print("Second loop: ", end="")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
```
The output is:
```
$ python main.py
Saving the dataset (1/1 shards): 100%|███████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 495019.95 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840,
$ python main.py
Saving the dataset (1/1 shards): 100%|████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 22243.40 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741,
```
The second loop, on the second run, only spits out "741, 741, 741...." which is *not* the desired output
### Expected behavior
I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling.
### Environment info
Datasets version 2.21.0
Ubuntu linux. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7127/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7123/comments | https://api.github.com/repos/huggingface/datasets/issues/7123/events | https://github.com/huggingface/datasets/issues/7123 | 2,484,003,937 | I_kwDODunzps6UDuRh | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\n```\r\n\r\nEDIT: ah maybe it doesn't work because you'd have to provide relative paths from the metadata files to the images",
"Yes, that's part of the issue. Also, `metadata.csv` is a very ambiguous name and we generally try to avoid using the same name for different files within a dataset, as this can quickly lead to confusion.",
"I think supporting `**/*-metadata.csv` or `**/*_metadata.csv` makes sense to me. If it sounds good to you feel free to open a PR to update the patterns here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d4422cc24a56dc7132ddc3fd6b285c5edbd60b8c/src/datasets/data_files.py#L104-L115"
] | 1970-01-01T00:00:00.000001 | 1,729 | null | NONE | null | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed.
### Motivation
When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co./datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co./datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?).
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co./datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7123/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7122/comments | https://api.github.com/repos/huggingface/datasets/issues/7122/events | https://github.com/huggingface/datasets/issues/7122 | 2,482,491,258 | I_kwDODunzps6T9896 | 7,122 | [interleave_dataset] sample batches from a single source at a time | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | ### Feature request
`interleave_datasets` and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (so that each batch only contains data from a single source)?
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source-homogeneous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
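A rough sketch of the idea (hedged: it assumes `IterableDataset.batch` is available, and `sources` and `probs` are placeholders): interleaving pre-batched streams already yields batches that each come from a single source.
```python
# Hedged sketch: source-homogeneous batches via interleaving pre-batched streams.
from datasets import interleave_datasets

batched_sources = [ds.batch(batch_size=32) for ds in sources]  # sources: list of IterableDataset
mixed = interleave_datasets(batched_sources, probabilities=probs, seed=42)
# each yielded example of `mixed` is one batch drawn from exactly one source
```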
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7122/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7117/comments | https://api.github.com/repos/huggingface/datasets/issues/7117/events | https://github.com/huggingface/datasets/issues/7117 | 2,476,555,659 | I_kwDODunzps6TnT2L | 7,117 | Audio dataset load everything in RAM and is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jourdelune",
"id": 64205064,
"login": "Jourdelune",
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jourdelune",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)",
"No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```",
"Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications"
] | 1970-01-01T00:00:00.000001 | 1,724 | null | NONE | null | Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contain, and for that I use whisper. My issue is that the dataset load everything in the RAM when I map the dataset, obviously, when RAM usage is too high, the program crashes.
To work around this, I set `writer_batch_size` to 10, but in that case the mapping of the dataset is extremely slow.
To illustrate this on 50 examples: with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset. Without it, processing takes about ten seconds, but then the process remains blocked (I assume it is writing the dataset and therefore suffers from the same problem as with `writer_batch_size`).
### Steps to reproduce the bug
High RAM usage but fast (though actually slow when saving the dataset):
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio
)
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
Low ram usage but very very slow:
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio, writer_batch_size=10
) # set low writer_batch_size to avoid memory issues
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
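For comparison, here is a hedged variant that is not part of the original report: keep the raw bytes with `Audio(decode=False)` and return only the new column from `map`, so the audio column is never decoded or re-encoded.
```python
from datasets import Audio, load_dataset

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
ds = ds.cast_column("audio", Audio(decode=False))  # keep raw bytes, skip decoding

def transcribe_audio(row):
    audio = row["audio"]  # now a {"bytes": ..., "path": ...} dict; decode it yourself if needed
    return {"transcribed": True}  # return only the new column

ds = ds.map(transcribe_audio)
```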
### Expected behavior
I think the processing should be much faster, on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio).
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1
# Extra
The dataset was generated using the `audiofolder` loader, so I don't think anything specific in my code is causing this problem.
```py
import argparse
from datasets import load_dataset
parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()
dataset = load_dataset("audiofolder", data_dir=args.folder)
# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```
Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` which causes problems, `row["transcribed"] = True `alone does nothing and `audio = row["audio"]` alone sometimes causes problems, sometimes not. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7117/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7116/comments | https://api.github.com/repos/huggingface/datasets/issues/7116/events | https://github.com/huggingface/datasets/issues/7116 | 2,475,522,721 | I_kwDODunzps6TjXqh | 7,116 | datasets cannot handle nested json if features is given. | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```",
"> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n",
"It works."
] | 1970-01-01T00:00:00.000001 | 1,725 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
I have a json named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without giving features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
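For reference, the fix suggested in the comments, as a minimal sketch: use a plain list of a feature dict instead of `Sequence`, which also keeps the `uint16` values.
```python
import datasets

ds = datasets.load_dataset(
    "json",
    data_files="./temp.json",
    features=datasets.Features({
        "ref1": datasets.Value("string"),
        "ref2": datasets.Value("string"),
        # a plain list of a dict declares "list of structs" (unlike Sequence)
        "cuts": [{"cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16")}],
    }),
)
```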
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7116/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7115/comments | https://api.github.com/repos/huggingface/datasets/issues/7115/events | https://github.com/huggingface/datasets/issues/7115 | 2,475,363,142 | I_kwDODunzps6TiwtG | 7,115 | module 'pyarrow.lib' has no attribute 'ListViewType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4",
"events_url": "https://api.github.com/users/neurafusionai/events{/privacy}",
"followers_url": "https://api.github.com/users/neurafusionai/followers",
"following_url": "https://api.github.com/users/neurafusionai/following{/other_user}",
"gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neurafusionai",
"id": 175128880,
"login": "neurafusionai",
"node_id": "U_kgDOCnBBMA",
"organizations_url": "https://api.github.com/users/neurafusionai/orgs",
"received_events_url": "https://api.github.com/users/neurafusionai/received_events",
"repos_url": "https://api.github.com/users/neurafusionai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neurafusionai",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked"
] | 1970-01-01T00:00:00.000001 | 1,725 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Code:
```
!pip uninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
! pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-dir
!pip install pyarrow
!pip install transformers
!pip install --upgrade datasets
!pip install datasets
! pip install pyarrow
! pip install pyarrow.lib
! pip install pyarrow.parquet
!pip install transformers
import pyarrow as pa
print(pa.__version__)
from datasets import load_dataset
import pyarrow.parquet as pq
import pyarrow.lib as lib
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
from transformers import AutoTokenizer
! pip install pyarrow-core libparquet
# Load the dataset for content moderation
dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
# Tokenize the dataset
def tokenize_function(examples):
return tokenizer(examples['text'], padding="max_length", truncation=True)
# Apply tokenization to the entire dataset
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# Check the first few tokenized samples
print(tokenized_datasets['train'][0])
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
# Load the model
model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
eval_strategy="epoch", #
save_strategy="epoch",
logging_dir="./logs",
learning_rate=2e-5,
)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Train the model
trainer.train()
# Evaluate the model
trainer.evaluate()
```
```
AttributeError Traceback (most recent call last)
[<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>()
20
21
---> 22 from datasets import load_dataset
23 import pyarrow.parquet as pq
24 import pyarrow.lib as lib
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
15 __version__ = "2.21.0"
16
---> 17 from .arrow_dataset import Dataset
18 from .arrow_reader import ReadInstruction
19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
74
75 from . import config
---> 76 from .arrow_reader import ArrowReader
77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
78 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
31
32 try:
---> 33 import pyarrow._parquet as _parquet
34 except ImportError as exc:
35 raise ImportError(
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
```
### Steps to reproduce the bug
https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing
### Expected behavior
Looks like there is an issue with datasets and pyarrow
### Environment info
google colab
python
huggingface
Found existing installation: pyarrow 17.0.0
Uninstalling pyarrow-17.0.0:
Successfully uninstalled pyarrow-17.0.0
Collecting pyarrow
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4)
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00
Installing collected packages: pyarrow
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
Successfully installed pyarrow-17.0.0
WARNING: The following packages were previously imported in this runtime:
[pyarrow]
You must restart the runtime in order to use newly installed versions. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7115/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7113/comments | https://api.github.com/repos/huggingface/datasets/issues/7113/events | https://github.com/huggingface/datasets/issues/7113 | 2,475,029,640 | I_kwDODunzps6ThfSI | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```"
] | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The sizes of the datasets can vary (from roughly 100 to roughly 100k). I use dataset.map() with a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1, but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains.
Please see the code below to reproduce the problem.
The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False.
I have to use drop_last_batch=True since it's for distributed training.
### Steps to reproduce the bug
```python
# datasets==2.21.0
import datasets
def data_prepare(examples):
print(examples["sentence1"][0])
return examples
batch_size = 101
# the size of the dataset is 100
# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x),
drop_last_batch=True,
batched=True, batch_size=batch_size)
for ex in dataset:
print(ex)
pass
```
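For context, a comparison echoed from the maintainer's comment (it assumes `torch` is installed): PyTorch's `DataLoader` behaves the same way when `drop_last=True` and the dataset is smaller than one batch.
```python
from torch.utils.data import DataLoader

# a 5-element dataset with batch_size=10 and drop_last=True yields no batches
print(list(DataLoader(list(range(5)), batch_size=10, drop_last=True)))  # []
```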
### Expected behavior
The dataset iterates regardless of the batch size.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7113/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7112/comments | https://api.github.com/repos/huggingface/datasets/issues/7112/events | https://github.com/huggingface/datasets/issues/7112 | 2,475,004,644 | I_kwDODunzps6ThZLk | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/174590283?v=4",
"events_url": "https://api.github.com/users/SoumyaMB10/events{/privacy}",
"followers_url": "https://api.github.com/users/SoumyaMB10/followers",
"following_url": "https://api.github.com/users/SoumyaMB10/following{/other_user}",
"gists_url": "https://api.github.com/users/SoumyaMB10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SoumyaMB10",
"id": 174590283,
"login": "SoumyaMB10",
"node_id": "U_kgDOCmgJSw",
"organizations_url": "https://api.github.com/users/SoumyaMB10/orgs",
"received_events_url": "https://api.github.com/users/SoumyaMB10/received_events",
"repos_url": "https://api.github.com/users/SoumyaMB10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoumyaMB10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SoumyaMB10",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@sayakpaul please advice ",
"Hits the same dependency conflict"
] | 1970-01-01T00:00:00.000001 | 1,726 | null | NONE | null | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
To solve the above error:
!pip install pyarrow==14.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
### Steps to reproduce the bug
!pip install datasets>=2.19.1
### Expected behavior
run without dependency error
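One possible way to sidestep the conflict on Colab is to keep `pyarrow` below 15 and install an older `datasets` release that still accepts it. The exact pins below are an assumption, not from the original report; adjust them to whatever resolves cleanly in your environment:
```
!pip install "pyarrow==14.0.1" "datasets==2.19.1"
```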
### Environment info
Diffusers version: 0.31.0.dev0
Platform: Linux-6.1.85+-x86_64-with-glibc2.35
Running on Google Colab?: Yes
Python version: 3.10.12
PyTorch version (GPU?): 2.3.1+cu121 (True)
Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
Jax version: 0.4.26
JaxLib version: 0.4.26
Huggingface_hub version: 0.23.5
Transformers version: 4.42.4
Accelerate version: 0.32.1
PEFT version: 0.7.0
Bitsandbytes version: not installed
Safetensors version: 0.4.4
xFormers version: not installed
Accelerator: Tesla T4, 15360 MiB
Using GPU in script?:
Using distributed or parallel set-up in script?: | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7112/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7111/comments | https://api.github.com/repos/huggingface/datasets/issues/7111/events | https://github.com/huggingface/datasets/issues/7111 | 2,474,915,845 | I_kwDODunzps6ThDgF | 7,111 | CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2",
"The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion in their repo:\r\n- https://github.com/numba/numba/issues/9708\r\n\r\nLatest numpy-2.1.0 will be supported by the next numba-0.61.0 release in September.\r\n\r\nNote that our CI requires numba with the \"audio\" extra:\r\n- librosa > numba"
] | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | MEMBER | null | Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269
```
Run uv pip install --system "datasets[tests_numpy2] @ ."
Resolved 150 packages in 4.42s
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: llvmlite==0.34.0
Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1
--- stdout:
running bdist_wheel
/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py
LLVM version...
--- stderr:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix
out = subprocess.check_output([llvm_config, '--version'])
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module>
main()
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main
main_posix('linux', '.so')
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix
raise RuntimeError("%s failed executing, please point LLVM_CONFIG "
RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7111/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7109/comments | https://api.github.com/repos/huggingface/datasets/issues/7109/events | https://github.com/huggingface/datasets/issues/7109 | 2,473,367,848 | I_kwDODunzps6TbJko | 7,109 | ConnectionError for gated datasets and unauthenticated users | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | MEMBER | null | Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852
We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before).
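A minimal sketch of how the case could be handled (the helper name is hypothetical, not the actual code path in `load.py`):
```python
from huggingface_hub import HfApi
from datasets.exceptions import DatasetNotFoundError

def check_gated_access(repo_id: str, token=None):
    # The Hub now returns info even for gated datasets and unauthenticated users,
    # so the gated flag can be checked explicitly instead of relying on a GatedRepoError.
    info = HfApi().dataset_info(repo_id, token=token)
    if info.gated and token is None:
        raise DatasetNotFoundError(
            f"Dataset '{repo_id}' is gated; authenticate with a token that has access."
        )
    return info
```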
See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7109/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7108/comments | https://api.github.com/repos/huggingface/datasets/issues/7108/events | https://github.com/huggingface/datasets/issues/7108 | 2,470,665,327 | I_kwDODunzps6TQ1xv | 7,108 | website broken: Create a new dataset repository, doesn't create a new repo in Firefox | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I don't reproduce, I was able to create a new repo: https://huggingface.co./datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?",
"I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.",
"I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.",
"maybe an issue with the cookie. cc @Wauplin @coyotte508 "
] | 1970-01-01T00:00:00.000001 | 1,724 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
This issue is also reported here:
https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644
This page is broken.
https://huggingface.co./new-dataset
I fill in the form with my text, and click `Create Dataset`.
![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6)
Then the form gets wiped. And no repo got created. No error message visible in the developer console.
![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3)
# Idea for improvement
For better UX, if the repo cannot be created, show an error message explaining that something went wrong.
# Workaround that works for me
```python
from huggingface_hub import HfApi, HfFolder  # HfFolder is unused here; HfApi picks up the stored token automatically
repo_id = 'simon-arc-solve-fractal-v3'
api = HfApi()
username = api.whoami()['name']  # sanity check that the token is valid; the repo is created under this namespace
repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
```
### Steps to reproduce the bug
Go https://huggingface.co./new-dataset
Fill in the form.
Click `Create dataset`.
Now the form is cleared. And the page doesn't jump anywhere.
### Expected behavior
The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.
### Environment info
Firefox 128.0.3 (64-bit)
macOS Sonoma 14.5
| {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7108/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7107/comments | https://api.github.com/repos/huggingface/datasets/issues/7107/events | https://github.com/huggingface/datasets/issues/7107 | 2,470,444,732 | I_kwDODunzps6TP_68 | 7,107 | load_dataset broken in 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4",
"events_url": "https://api.github.com/users/anjor/events{/privacy}",
"followers_url": "https://api.github.com/users/anjor/followers",
"following_url": "https://api.github.com/users/anjor/following{/other_user}",
"gists_url": "https://api.github.com/users/anjor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anjor",
"id": 1911631,
"login": "anjor",
"node_id": "MDQ6VXNlcjE5MTE2MzE=",
"organizations_url": "https://api.github.com/users/anjor/orgs",
"received_events_url": "https://api.github.com/users/anjor/received_events",
"repos_url": "https://api.github.com/users/anjor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anjor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anjor",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now",
"+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.",
"I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ",
"There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co./datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset."
] | 1970-01-01T00:00:00.000001 | 1,723 | 1970-01-01T00:00:00.000001 | NONE | null | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
used to work till 2.20.0 but doesn't work in 2.21.0
In 2.20.0:
![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9)
in 2.21.0:
![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f)
### Steps to reproduce the bug
1. Spin up a new google collab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. Will throw an error.
### Expected behavior
Try steps 1-5 again but replace datasets version with 2.20.0, it will work
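For reference, the temporary pin looks like this in a Colab cell (per the comments, the dataset script on the Hub has since been fixed, so this should only be needed as a stop-gap):
```python
!pip install datasets==2.20.0

import datasets
eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)
```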
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7107/timeline | null | completed | null | null | false | 0 |
https://api.github.com/repos/huggingface/datasets/issues/7102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7102/comments | https://api.github.com/repos/huggingface/datasets/issues/7102/events | https://github.com/huggingface/datasets/issues/7102 | 2,466,893,106 | I_kwDODunzps6TCc0y | 7,102 | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | {
"avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4",
"events_url": "https://api.github.com/users/lajd/events{/privacy}",
"followers_url": "https://api.github.com/users/lajd/followers",
"following_url": "https://api.github.com/users/lajd/following{/other_user}",
"gists_url": "https://api.github.com/users/lajd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lajd",
"id": 13192126,
"login": "lajd",
"node_id": "MDQ6VXNlcjEzMTkyMTI2",
"organizations_url": "https://api.github.com/users/lajd/orgs",
"received_events_url": "https://api.github.com/users/lajd/received_events",
"repos_url": "https://api.github.com/users/lajd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lajd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lajd",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow and parquet about the same. However, I was unable to reproduce a drastically slower iteration speed after shuffling in any case when using the revised script -- pasting below:\r\n\r\n```python\r\nimport time\r\nfrom datasets import load_dataset, Dataset, IterableDataset\r\nfrom pathlib import Path\r\nimport torch\r\nimport pandas as pd\r\nimport pickle\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\n\r\ndef generate_random_example():\r\n return {\r\n 'inputs': torch.randn(128).tolist(),\r\n 'indices': torch.randint(0, 10000, (2, 20000)).tolist(),\r\n 'values': torch.randn(20000).tolist(),\r\n }\r\n\r\n\r\ndef generate_shard_data(examples_per_shard: int = 512):\r\n return [generate_random_example() for _ in range(examples_per_shard)]\r\n\r\n\r\ndef save_shard_as_arrow(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a Hugging Face Dataset\r\n dataset = Dataset.from_dict({\r\n 'inputs': [example['inputs'] for example in shard_data],\r\n 'indices': [example['indices'] for example in shard_data],\r\n 'values': [example['values'] for example in shard_data],\r\n })\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}\"\r\n\r\n # Save the dataset to disk using the Arrow format\r\n dataset.save_to_disk(str(shard_write_path))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_parquet(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a pandas DataFrame for easy conversion to Parquet\r\n df = pd.DataFrame(shard_data)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.parquet\"\r\n\r\n # Convert DataFrame to PyArrow Table for Parquet saving\r\n table = pa.Table.from_pandas(df)\r\n\r\n # Save the table as a Parquet file\r\n pq.write_table(table, shard_write_path)\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_binary(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.bin\"\r\n\r\n # Save each example as a serialized binary object using pickle\r\n with open(shard_write_path, 'wb') as f:\r\n for example in shard_data:\r\n f.write(pickle.dumps(example))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef generate_split_shards(save_dir, filetype=\"parquet\", num_shards: int = 16, examples_per_shard: int = 512):\r\n shard_filepaths = []\r\n for shard_idx in range(num_shards):\r\n if filetype == \"parquet\":\r\n shard_filepaths.append(save_shard_as_parquet(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"binary\":\r\n shard_filepaths.append(save_shard_as_binary(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"arrow\":\r\n shard_filepaths.append(save_shard_as_arrow(shard_idx, save_dir, examples_per_shard))\r\n else:\r\n raise ValueError(f\"Unsupported filetype: {filetype}. 
Choose either 'parquet' or 'binary'.\")\r\n return shard_filepaths\r\n\r\n\r\ndef _binary_dataset_generator(files):\r\n for filepath in files:\r\n with open(filepath, 'rb') as f:\r\n while True:\r\n try:\r\n example = pickle.load(f)\r\n yield example\r\n except EOFError:\r\n break\r\n\r\n\r\ndef load_binary_dataset(shard_filepaths):\r\n return IterableDataset.from_generator(\r\n _binary_dataset_generator, gen_kwargs={\"files\": shard_filepaths},\r\n )\r\n\r\n\r\ndef load_parquet_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n return load_dataset(\r\n \"parquet\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_arrow_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n shard_filepaths = [f + \"/data-00000-of-00001.arrow\" for f in shard_filepaths]\r\n return load_dataset(\r\n \"arrow\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_dataset_wrapper(filetype: str, shard_filepaths: list[str]):\r\n if filetype == \"parquet\":\r\n return load_parquet_dataset(shard_filepaths)\r\n if filetype == \"binary\":\r\n return load_binary_dataset(shard_filepaths)\r\n if filetype == \"arrow\":\r\n return load_arrow_dataset(shard_filepaths)\r\n else:\r\n raise ValueError(\"Unsupported filetype\")\r\n\r\n\r\n# Example usage:\r\nsplit = \"train\"\r\nsplit_save_dir = \"/tmp/random_split\"\r\n\r\nfiletype = \"binary\" # or \"parquet\", or \"arrow\"\r\nnum_shards = 16\r\n\r\nshard_filepaths = generate_split_shards(split_save_dir, filetype=filetype, num_shards=num_shards)\r\ndataset = load_dataset_wrapper(filetype=filetype, shard_filepaths=shard_filepaths)\r\n\r\ndataset = dataset.shuffle(buffer_size=100, seed=42)\r\n\r\nstart_time = time.time()\r\nfor count, item in enumerate(dataset):\r\n if count > 0 and count % 100 == 0:\r\n elapsed_time = time.time() - start_time\r\n iterations_per_second = count / elapsed_time\r\n print(f\"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second\")\r\n```",
"update: I was able to reproduce the issue you described -- but ONLY if I do \r\n\r\n```\r\nrandom_dataset = random_dataset.with_format(\"numpy\")\r\n```\r\n\r\nIf I do this, I see similar numbers as what you reported. If I do not use numpy format, parquet and arrow are about 17 iterations per second regardless of whether or not we shuffle. Using binary, (again no numpy format tried with this yet), still shows the fastest speeds on average (shuffle and no shuffle) of about 850 it/sec.\r\n\r\nI suspect some issues with arrow and numpy being optimized for sequential reads, and shuffling cuases issuses... hmm"
] | 1970-01-01T00:00:00.000001 | 1,723 | null | NONE | null | ### Describe the bug
When I load a dataset from a number of arrow files, as in:
```
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
When I shuffle the dataset, the iteration speed is reduced by ~1000x.
It's very possible the way I'm loading dataset shards is not appropriate; if so please advise!
Thanks for the help
### Steps to reproduce the bug
Here's full code to reproduce the issue:
- Generate a random dataset
- Create shards of data independently using Dataset.save_to_disk()
- The below will generate 16 shards (arrow files), of 512 examples each
```
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count
import torch
from datasets import Dataset, load_dataset
split = "train"
split_save_dir = "/tmp/random_split"
def generate_random_example():
    return {
        'inputs': torch.randn(128).tolist(),
        'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
        'values': torch.randn(20000).tolist(),
    }

def generate_shard_dataset(examples_per_shard: int = 512):
    dataset_dict = {
        'inputs': [],
        'indices': [],
        'values': []
    }
    for _ in range(examples_per_shard):
        example = generate_random_example()
        dataset_dict['inputs'].append(example['inputs'])
        dataset_dict['indices'].append(example['indices'])
        dataset_dict['values'].append(example['values'])
    return Dataset.from_dict(dataset_dict)

def save_shard(shard_idx, save_dir, examples_per_shard):
    shard_dataset = generate_shard_dataset(examples_per_shard)
    shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
    shard_dataset.save_to_disk(shard_write_path)
    return str(Path(shard_write_path) / "data-00000-of-00001.arrow")

def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
    with Pool(cpu_count()) as pool:
        args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
        shard_filepaths = pool.starmap(save_shard, args)
    return shard_filepaths
shard_filepaths = generate_split_shards(split_save_dir)
```
Load the dataset as IterableDataset:
```
random_dataset = load_dataset(
    "arrow",
    data_files={split: shard_filepaths},
    streaming=True,
    split=split,
)
random_dataset = random_dataset.with_format("numpy")
```
Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating:
Without shuffling, this gives ~1500 iterations/second
```
start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```
When shuffling, this gives ~3 iterations/second:
```
random_dataset = random_dataset.shuffle(buffer_size=100, seed=42)

start_time = time.time()
for count, item in enumerate(random_dataset):
    if count > 0 and count % 100 == 0:
        elapsed_time = time.time() - start_time
        iterations_per_second = count / elapsed_time
        print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```
### Expected behavior
Iterations per second should be barely affected by shuffling, especially with a small buffer size
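One mitigation worth trying, a sketch based on the observation in the comments that the slowdown only reproduces with `.with_format("numpy")` and not a confirmed fix, is to shuffle the plain python-formatted stream and convert fields to numpy on demand:
```python
import numpy as np
from datasets import load_dataset

random_dataset = load_dataset(
    "arrow",
    data_files={"train": shard_filepaths},  # shard paths generated as above
    streaming=True,
    split="train",
)
random_dataset = random_dataset.shuffle(buffer_size=100, seed=42)  # no .with_format("numpy") here

for item in random_dataset:
    inputs = np.asarray(item["inputs"])  # convert individual fields only where needed
    ...
```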
### Environment info
Datasets version: 2.21.0
Python 3.10
Ubuntu 22.04 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7102/timeline | null | null | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7101/comments | https://api.github.com/repos/huggingface/datasets/issues/7101/events | https://github.com/huggingface/datasets/issues/7101 | 2,466,510,783 | I_kwDODunzps6TA_e_ | 7,101 | `load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hlky",
"id": 106811348,
"login": "hlky",
"node_id": "U_kgDOBl3P1A",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"repos_url": "https://api.github.com/users/hlky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hlky",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would be to just turn the `parquet` into a `WebDataset`, although I'd still need the Dataset Viewer config limit increasing. In other cases using the same format may not be possible.\r\n\r\nRelevant code: \r\n- [HubDatasetModuleFactoryWithoutScript](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/load.py#L964)\r\n- [get_data_patterns](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/data_files.py#L415)"
] | 1970-01-01T00:00:00.000001 | 1,723 | null | NONE | null | Following [documentation](https://huggingface.co./docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co./datasets/bigdata-pw/Dataception), a dataset of datasets:
```yaml
configs:
- config_name: dataception
  data_files:
  - path: dataception.parquet
    split: train
  default: true
- config_name: dataset_5423
  data_files:
  - path: datasets/5423.tar
    split: train
...
- config_name: dataset_721736
  data_files:
  - path: datasets/721736.tar
    split: train
```
The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`.
While testing `load_dataset` I encountered the following error:
```python
>>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691")
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datasets\load.py", line 2145, in load_dataset
builder_instance.download_and_prepare(
File "datasets\builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "datasets\builder.py", line 1100, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
^^^^^^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 2325, in read_schema
file = ParquetFile(
^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 318, in __init__
self.reader.open(
File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
The correct file is downloaded; however, the wrong builder type (`parquet`) is detected because of the other content in the repository. It would appear that the selected config needs to be taken into account.
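As a possible workaround (an assumption on my part, not verified against this repository, and assuming the config follows the `datasets/<id>.tar` pattern shown above), the repo-level format inference can be bypassed by pointing the intended packaged builder at the file directly:
```python
from datasets import load_dataset

dataset = load_dataset(
    "webdataset",
    data_files={"train": "hf://datasets/bigdata-pw/Dataception/datasets/7691.tar"},
    split="train",
)
```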
Note that I have removed the additional configs from the repository because of this issue and there is a limit of 3000 configs anyway so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7101/timeline | null | null | null | null | false | null |