url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64, nullable) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string, nullable) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2970/comments | https://api.github.com/repos/huggingface/datasets/issues/2970/events | https://github.com/huggingface/datasets/issues/2970 | 1,007,340,089 | I_kwDODunzps48Cso5 | 2,970 | Magnetβs | {
"login": "rcacho172",
"id": 90449239,
"node_id": "MDQ6VXNlcjkwNDQ5MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcacho172",
"html_url": "https://github.com/rcacho172",
"followers_url": "https://api.github.com/users/rcacho172/followers",
"following_url": "https://api.github.com/users/rcacho172/following{/other_user}",
"gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions",
"organizations_url": "https://api.github.com/users/rcacho172/orgs",
"repos_url": "https://api.github.com/users/rcacho172/repos",
"events_url": "https://api.github.com/users/rcacho172/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcacho172/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,632,649,829,000 | 1,632,652,739,000 | 1,632,652,739,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2970/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2969/comments | https://api.github.com/repos/huggingface/datasets/issues/2969/events | https://github.com/huggingface/datasets/issues/2969 | 1,007,217,867 | I_kwDODunzps48COzL | 2,969 | medical-dialog error | {
"login": "smeyerhot",
"id": 43877130,
"node_id": "MDQ6VXNlcjQzODc3MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smeyerhot",
"html_url": "https://github.com/smeyerhot",
"followers_url": "https://api.github.com/users/smeyerhot/followers",
"following_url": "https://api.github.com/users/smeyerhot/following{/other_user}",
"gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions",
"organizations_url": "https://api.github.com/users/smeyerhot/orgs",
"repos_url": "https://api.github.com/users/smeyerhot/repos",
"events_url": "https://api.github.com/users/smeyerhot/events{/privacy}",
"received_events_url": "https://api.github.com/users/smeyerhot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=\"force_redownload\", data_dir=\"./Medical-Dialogue-Dataset-English\", ignore_verifications=True)\r\n```"
] | 1,632,611,324,000 | 1,633,938,402,000 | 1,633,938,402,000 | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
When I attempt to download the Hugging Face dataset `medical_dialog`, it errors out midway through.
## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```
## Expected results
A clear and concise description of the expected results.
No error
## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2969/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2968/comments | https://api.github.com/repos/huggingface/datasets/issues/2968/events | https://github.com/huggingface/datasets/issues/2968 | 1,007,209,488 | I_kwDODunzps48CMwQ | 2,968 | `DatasetDict` cannot be exported to parquet if the splits have different features | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This is because you have to specify which split corresponds to what file:\r\n```python\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```\r\n\r\nOtherwise it tries to concatenate the two splits, and it fails because they don't have the same features.\r\n\r\nIt works with save_to_disk/load_from_disk because it also stores json files that contain the information about which files goes into which split",
"Wonderful, thanks for the help!",
"I may be mistaken but I think the following doesn't work either:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n\r\n\r\ndef identical_answers(e):\r\n e['identical_answers'] = len(set(e['answers']['text'])) == 1\r\n return e\r\n\r\n\r\nds['validation'] = ds['validation'].map(identical_answers)\r\nds['train'].to_parquet(\"./ds/train/split.parquet\")\r\nds['validation'].to_parquet(\"./ds/validation/split.parquet\")\r\n\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```",
"It works on my side as soon as the directories named `ds/train` and `ds/validation` exist (otherwise it returns a FileNotFoundError). What error are you getting ?",
"Also we may introduce a default mapping for the data files:\r\n```python\r\n{\r\n \"train\": [\"*train*\"],\r\n \"test\": [\"*test*\"],\r\n \"validation\": [\"*dev*\", \"valid\"],\r\n}\r\n```\r\nthis way if you name your files according to the splits you won't have to specify the data_files parameter. What do you think ?\r\n\r\nI moved this discussion to #3027 ",
"I'm getting the following error:\r\n\r\n```\r\nDownloading and preparing dataset custom_squad/plain_text to /home/lysandre/.cache/huggingface/datasets/lhoestq___custom_squad)/plain_text/1.0.0/397916d1ae99584877e0fb4f5b8b6f01e66fcbbeff4d178afb30c933a8d0d93a...\r\n100%|ββββββββββ| 2/2 [00:00<00:00, 7760.04it/s]\r\n100%|ββββββββββ| 2/2 [00:00<00:00, 2020.38it/s]\r\n 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py\", line 198, in runfile\r\n pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py\", line 12, in <module>\r\n ds = load_dataset(\"lhoestq/custom_squad\")\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py\", line 1207, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 823, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 207, in map_nested\r\n mapped = [\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 208, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 143, in _single_map_nested\r\n return function(data_struct)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 854, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 924, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 217, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 238, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 173, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 308, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 327, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 458, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 45, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n File \"pyarrow/ipc.pxi\", line 563, in pyarrow.lib.RecordBatchReader.read_all\r\n File \"pyarrow/error.pxi\", line 114, in pyarrow.lib.check_status\r\nOSError: Header-type of 
flatbuffer-encoded Message is not RecordBatch.\r\n```\r\n\r\nTried on current master, after updating latest dependencies and obtained the same result",
"The proposal in #3027 sounds good to me!",
"I just tried again on colab by installing `datasets` from source with pyarrow 3.0.0 and didn't get any error.\r\n\r\nYou error seems to happen when doing\r\n```python\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n```\r\n\r\nMore specifically it fails when trying to read the arrow file that just got generated. I haven't issues like this before. Can you make sure you have a recent version of `pyarrow` ? Maybe it was an old version that wrote the arrow file and some header was missing.",
"Thank you for your pointer! This seems to have been linked to Python 3.9.7: it works flawlessly with Python 3.8.6. This can be closed, thanks a lot for your help."
] | 1,632,608,319,000 | 1,633,646,862,000 | 1,633,646,846,000 | MEMBER | null | null | null | ## Describe the bug
I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly.
For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features to neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file.
## Steps to reproduce the bug
The following works as expected:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
Modifying a single split to add a new feature ends up in a crash:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split
writer.write_table(table)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "identical_answers" does not exist in table schema'
```
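As noted in the comments above, the failure comes from `load_dataset` trying to concatenate both parquet files under a single schema. A sketch of the suggested workaround is to state explicitly which file belongs to which split (paths follow the example above):
```python
from datasets import load_dataset

data_files = {"train": "train/split.parquet", "validation": "validation/split.parquet"}
brand_new_dataset_2 = load_dataset("ds", data_files=data_files)
```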
It does work, however, to use the `save_to_disk` and `load_from_disk` methods:
```py
from datasets import load_from_disk
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds.save_to_disk("local_path")
brand_new_dataset = load_from_disk("local_path")
```
## Expected results
The saving works correctly - but the loading fails. I would expect either an error when saving or an error-less instantiation of the dataset through the parquet files.
If it's helpful, I've traced a possible patch to the `write_table` method here:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425
The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255
but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190
Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise:
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset
datasets = utils.map_nested(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested
mapped = [
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested
return function(data_struct)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset
ds = self._as_dataset(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table
return table_cls.from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: Header-type of flatbuffer-encoded Message is not RecordBatch.
```
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2968/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2967/comments | https://api.github.com/repos/huggingface/datasets/issues/2967/events | https://github.com/huggingface/datasets/issues/2967 | 1,007,194,837 | I_kwDODunzps48CJLV | 2,967 | Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets | {
"login": "WadeYin9712",
"id": 42200725,
"node_id": "MDQ6VXNlcjQyMjAwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/42200725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WadeYin9712",
"html_url": "https://github.com/WadeYin9712",
"followers_url": "https://api.github.com/users/WadeYin9712/followers",
"following_url": "https://api.github.com/users/WadeYin9712/following{/other_user}",
"gists_url": "https://api.github.com/users/WadeYin9712/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WadeYin9712/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WadeYin9712/subscriptions",
"organizations_url": "https://api.github.com/users/WadeYin9712/orgs",
"repos_url": "https://api.github.com/users/WadeYin9712/repos",
"events_url": "https://api.github.com/users/WadeYin9712/events{/privacy}",
"received_events_url": "https://api.github.com/users/WadeYin9712/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 1,632,603,495,000 | 1,633,293,262,000 | 1,633,293,262,000 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets?
**Describe the solution you'd like**
N/A
**Describe alternatives you've considered**
N/A
**Additional context**
This is Da Yin at UCLA. Recently, we published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Hugging Face to further promote our project, but I haven't seen many V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare for the loading? Thank you very much!
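For context, a dataset is usually added through a loading script built on `datasets.GeneratorBasedBuilder`. The sketch below is only illustrative: the feature names, split, URL, and file layout are placeholders, not GD-VCR's actual schema.
```python
import json

import datasets


class GDVCR(datasets.GeneratorBasedBuilder):
    """Hypothetical loading-script skeleton for a vision-and-language dataset."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image_path": datasets.Value("string"),  # placeholder fields
                    "question": datasets.Value("string"),
                    "answer": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        # placeholder URL; the real data location would go here
        data_dir = dl_manager.download_and_extract("https://example.com/gd-vcr.zip")
        return [datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"data_dir": data_dir})]

    def _generate_examples(self, data_dir):
        # placeholder file name and record fields
        with open(f"{data_dir}/annotations.json", encoding="utf-8") as f:
            for idx, record in enumerate(json.load(f)):
                yield idx, {
                    "image_path": record["img"],
                    "question": record["question"],
                    "answer": record["answer"],
                }
```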
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2967/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2966/comments | https://api.github.com/repos/huggingface/datasets/issues/2966/events | https://github.com/huggingface/datasets/pull/2966 | 1,007,142,233 | PR_kwDODunzps4sRRMs | 2,966 | Upload greek-legal-code dataset | {
"login": "christospi",
"id": 9130406,
"node_id": "MDQ6VXNlcjkxMzA0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9130406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/christospi",
"html_url": "https://github.com/christospi",
"followers_url": "https://api.github.com/users/christospi/followers",
"following_url": "https://api.github.com/users/christospi/following{/other_user}",
"gists_url": "https://api.github.com/users/christospi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/christospi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/christospi/subscriptions",
"organizations_url": "https://api.github.com/users/christospi/orgs",
"repos_url": "https://api.github.com/users/christospi/repos",
"events_url": "https://api.github.com/users/christospi/events{/privacy}",
"received_events_url": "https://api.github.com/users/christospi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova @lhoestq thank you very much for reviewing! :hugs: \r\n\r\nI 've pushed some updates/changes as requested."
] | 1,632,588,735,000 | 1,634,132,250,000 | 1,634,132,250,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2966",
"html_url": "https://github.com/huggingface/datasets/pull/2966",
"diff_url": "https://github.com/huggingface/datasets/pull/2966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2966.patch",
"merged_at": 1634132250000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2966/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2965/comments | https://api.github.com/repos/huggingface/datasets/issues/2965/events | https://github.com/huggingface/datasets/issues/2965 | 1,007,084,153 | I_kwDODunzps48BuJ5 | 2,965 | Invalid download URL of WMT17 `zh-en` data | {
"login": "Ririkoo",
"id": 3339950,
"node_id": "MDQ6VXNlcjMzMzk5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3339950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ririkoo",
"html_url": "https://github.com/Ririkoo",
"followers_url": "https://api.github.com/users/Ririkoo/followers",
"following_url": "https://api.github.com/users/Ririkoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Ririkoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ririkoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ririkoo/subscriptions",
"organizations_url": "https://api.github.com/users/Ririkoo/orgs",
"repos_url": "https://api.github.com/users/Ririkoo/repos",
"events_url": "https://api.github.com/users/Ririkoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ririkoo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Fixed in the current release. Close this issue."
] | 1,632,575,852,000 | 1,661,928,431,000 | 1,661,928,430,000 | NONE | null | null | null | ## Describe the bug
Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wmt17','zh-en')
```
## Expected results
The dataset downloads without errors.
## Actual results
ConnectionError: Couldn't reach ftp://cwmt-wmt:[email protected]/parallel/casia2015.zip | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2965/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2965/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2964/comments | https://api.github.com/repos/huggingface/datasets/issues/2964/events | https://github.com/huggingface/datasets/issues/2964 | 1,006,605,904 | I_kwDODunzps47_5ZQ | 2,964 | Error when calculating Matthews Correlation Coefficient loaded with `load_metric` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"After some more tests I've realized that this \"issue\" is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as it follows:\r\n\r\n```python\r\ndef compute_metrics(eval_preds):\r\n metric = load_metric(\"matthews_correlation\")\r\n logits, labels = eval_preds\r\n predictions = np.argmax(logits, axis=1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n```\r\n\r\nIt fails when the evaluation metrics are computed in the `Trainer` with the same error code `AttributeError: 'float' object has no attribute 'item'` as the output is not a `numpy.float64`... Maybe I'm doing something wrong, not sure!",
"Ok after some more experiments I've realized that it's an issue from my side, at first I thought it was due to `fp16=True` in `TrainingArguments`, but in the end that may not be the issue, so I'll close this for now and check later, since the mistake is on my side :weary: Sorry for the inconvenience!"
] | 1,632,498,921,000 | 1,632,557,167,000 | 1,632,557,167,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co./metrics/matthews_correlation)" from 🤗 `datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required).
## Steps to reproduce the bug
```python
import torch
predictions = torch.ones((10,))
references = torch.zeros((10,))
from datasets import load_metric
METRIC = load_metric("matthews_correlation")
result = METRIC.compute(predictions=predictions, references=references)
```
## Expected results
We should expect a Python `dict` as it follows:
```
{
"matthews_correlation": float()
}
```
as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix will imply removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` will fail.
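A sketch of what the fix could look like inside the metric script — returning the plain `float` instead of calling `.item()` on it (this mirrors the line shown in the traceback below):
```python
from sklearn.metrics import matthews_corrcoef

# sketch of the metric's _compute method after removing the .item() call
def _compute(self, predictions, references, sample_weight=None):
    return {
        # matthews_corrcoef already returns a Python float
        "matthews_correlation": float(matthews_corrcoef(references, predictions, sample_weight=sample_weight))
    }
```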
## Actual results
```
Traceback (most recent call last):
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main
app()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper
return callback(**use_params) # type: ignore
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train
metrics = trainer.evaluate()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics
res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids)
File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute
"matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(),
AttributeError: 'float' object has no attribute 'item'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2964/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2963/comments | https://api.github.com/repos/huggingface/datasets/issues/2963/events | https://github.com/huggingface/datasets/issues/2963 | 1,006,588,605 | I_kwDODunzps47_1K9 | 2,963 | raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,632,497,711,000 | 1,632,497,904,000 | 1,632,497,904,000 | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
I am trying to use `Dataset` to load my file so that I can use a BERT embeddings model, but when I have finished loading with `Dataset` and want to pass the data to the tokenizer using the `map` function, I get the following error: raise TypeError(
TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
I was able to load my file using `Dataset` before, but since this morning I keep getting this error.
## Steps to reproduce the bug
```python
# Xtrain, ytrain, filename, len_labels = read_file_2(fic)
# Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge)
data_preprocessed = make_new_traindata(Xtrain)
my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemme avec conjonction
dataset = Dataset.from_dict(my_dict)
```
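For reference, the function passed to `Dataset.map` has to return a `dict` of columns (or `None`), as the error message says. Below is a minimal sketch of a tokenization step that satisfies this; the tokenizer checkpoint and the `batched` usage are assumptions, and only the `verbatim` column comes from the snippet above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")  # assumed checkpoint

def tokenize(batch):
    # returns a dict-like BatchEncoding, so `map` can add the new columns
    return tokenizer(batch["verbatim"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
```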
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2963/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2962/comments | https://api.github.com/repos/huggingface/datasets/issues/2962/events | https://github.com/huggingface/datasets/issues/2962 | 1,006,557,666 | I_kwDODunzps47_tni | 2,962 | Enable splits during streaming the dataset | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,632,495,689,000 | 1,632,495,689,000 | null | CONTRIBUTOR | null | null | null | ## Describe the Problem
I'd like to stream only a specific percentage or part of the dataset.
I want to be able to split the dataset while streaming it as well.
## Solution
Enabling splits when `streaming = True` as well.
`e.g. dataset = load_dataset('dataset', split='train[:100]', streaming = True)`
## Alternatives
Below is an alternative way of doing it.
`dataset = load_dataset("dataset", split='train', streaming = True).take(100)`
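Until split slicing is supported with `streaming=True`, `take` can be combined with `skip` (assuming `skip` is available in the installed `datasets` version) to emulate disjoint slices:
```python
from datasets import load_dataset

streamed = load_dataset("dataset", split="train", streaming=True)
train_head = streamed.take(100)                 # first 100 examples
validation_slice = streamed.skip(100).take(50)  # the following 50 examples
```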
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2962/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2961/comments | https://api.github.com/repos/huggingface/datasets/issues/2961/events | https://github.com/huggingface/datasets/pull/2961 | 1,006,453,781 | PR_kwDODunzps4sPTXV | 2,961 | Fix CI doc build | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,489,208,000 | 1,632,489,487,000 | 1,632,489,487,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2961",
"html_url": "https://github.com/huggingface/datasets/pull/2961",
"diff_url": "https://github.com/huggingface/datasets/pull/2961.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2961.patch",
"merged_at": 1632489487000
} | Pin `fsspec`.
Before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1'
Generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2961/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2960/comments | https://api.github.com/repos/huggingface/datasets/issues/2960/events | https://github.com/huggingface/datasets/pull/2960 | 1,006,222,850 | PR_kwDODunzps4sOl0Y | 2,960 | Support pandas 1.3 new `read_csv` parameters | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,472,644,000 | 1,632,482,551,000 | 1,632,482,550,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2960",
"html_url": "https://github.com/huggingface/datasets/pull/2960",
"diff_url": "https://github.com/huggingface/datasets/pull/2960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2960.patch",
"merged_at": 1632482550000
} | Support two new arguments introduced in pandas v1.3.0:
- `encoding_errors`
- `on_bad_lines`
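A sketch of how the two arguments could be used through the CSV loader once they are forwarded to `pandas.read_csv` (the file path and values are placeholders):
```python
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files="data.csv",      # placeholder path
    encoding_errors="replace",  # new in pandas 1.3
    on_bad_lines="skip",        # new in pandas 1.3
)
```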
`read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2960/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2959/comments | https://api.github.com/repos/huggingface/datasets/issues/2959/events | https://github.com/huggingface/datasets/pull/2959 | 1,005,547,632 | PR_kwDODunzps4sMihl | 2,959 | Added computer vision tasks | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks great, thanks ! If the 3d ones are really rare we can remove them for now.\r\n\r\nAnd I can see that `object-detection` and `semantic-segmentation` are both task categories (top-level) and task ids (bottom-level). Maybe there's a way to group them and have less granularity for the task categories. For example `speech-processing` is a high level task category. What do you think ?\r\n\r\nWe can still update the list of tasks later if needed when we have more vision datasets\r\n",
"@lhoestq @osanseviero I used the categories (there were main ones and subcategories) in the paperswithcode, I got rid of some of them that could be too granular. I can put it there if you'd like (I'll wait for your reply before committing it again)",
"We can ignore the ones that are too granular IMO. What we did for audio tasks is to have them all under \"audio-processing\". Maybe we can do the same here for now until we have more comprehensive tasks/applications ?",
"Following the discussion in (private) https://github.com/huggingface/moon-landing/issues/2020, what do you think of aligning the top level tasks list with the model tasks taxonomy ?\r\n\r\n* Image Classification\r\n* Object Detection\r\n* Image Segmentation\r\n* Text-to-Image\r\n* Image-to-Text\r\n",
"I moved it to [a branch](https://github.com/huggingface/datasets/pull/3800) for ease."
] | 1,632,409,647,000 | 1,646,156,511,000 | 1,646,156,511,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2959",
"html_url": "https://github.com/huggingface/datasets/pull/2959",
"diff_url": "https://github.com/huggingface/datasets/pull/2959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2959.patch",
"merged_at": null
} | Added various image processing/computer vision tasks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2959/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2958/comments | https://api.github.com/repos/huggingface/datasets/issues/2958/events | https://github.com/huggingface/datasets/pull/2958 | 1,005,144,601 | PR_kwDODunzps4sLTaB | 2,958 | Add security policy to the project | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,385,255,000 | 1,634,829,404,000 | 1,634,829,403,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2958",
"html_url": "https://github.com/huggingface/datasets/pull/2958",
"diff_url": "https://github.com/huggingface/datasets/pull/2958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2958.patch",
"merged_at": 1634829403000
} | Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository
Close #2953. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2958/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2957/comments | https://api.github.com/repos/huggingface/datasets/issues/2957/events | https://github.com/huggingface/datasets/issues/2957 | 1,004,868,337 | I_kwDODunzps475RLx | 2,957 | MultiWOZ Dataset NonMatchingChecksumError | {
"login": "bradyneal",
"id": 8754873,
"node_id": "MDQ6VXNlcjg3NTQ4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8754873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bradyneal",
"html_url": "https://github.com/bradyneal",
"followers_url": "https://api.github.com/users/bradyneal/followers",
"following_url": "https://api.github.com/users/bradyneal/following{/other_user}",
"gists_url": "https://api.github.com/users/bradyneal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bradyneal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bradyneal/subscriptions",
"organizations_url": "https://api.github.com/users/bradyneal/orgs",
"repos_url": "https://api.github.com/users/bradyneal/repos",
"events_url": "https://api.github.com/users/bradyneal/events{/privacy}",
"received_events_url": "https://api.github.com/users/bradyneal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi Brady! I met the similar issue, it stuck in the downloading stage instead of download anything, maybe it is broken. After I change the downloading from URLs to one url of the [Multiwoz project](https://github.com/budzianowski/multiwoz/archive/44f0f8479f11721831c5591b839ad78827da197b.zip) and use dirs to get separate files, the problems gone."
] | 1,632,354,300,000 | 1,647,360,422,000 | 1,647,360,422,000 | NONE | null | null | null | ## Describe the bug
The checksums of the downloaded MultiWOZ files don't match the checksums recorded for the source MultiWOZ dataset.
## Steps to reproduce the bug
Both of the below dataset versions yield the checksum error:
```python
from datasets import load_dataset
dataset = load_dataset('multi_woz_v22', 'v2.2')
dataset = load_dataset('multi_woz_v22', 'v2.2_active_only')
```
## Expected results
For the above calls to `load_dataset` to work.
## Actual results
NonMatchingChecksumError. Traceback:
```
Traceback (most recent call last):
  File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-15-4e91280e112e>", line 1, in <module>
    dataset = load_dataset('multi_woz_v22', 'v2.2')
  File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset
    builder_instance.download_and_prepare(
  File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare
    self._download_and_prepare(
  File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
    verify_checksums(
  File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']
```
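A possible stopgap until the recorded checksums are updated (sketch only, assuming the upstream data files themselves are fine) is to skip the verification step, or to force a fresh download when a stale cached file is the cause:
```python
from datasets import load_dataset

# Skip checksum/size verification (flag available in the 1.x releases);
# use this when the upstream files have genuinely changed
dataset = load_dataset('multi_woz_v22', 'v2.2', ignore_verifications=True)

# ...or force a fresh download when a stale/partial cached file is the cause
dataset = load_dataset('multi_woz_v22', 'v2.2', download_mode="force_redownload")
```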
## Environment info
- `datasets` version: 1.11.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2957/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2956/comments | https://api.github.com/repos/huggingface/datasets/issues/2956/events | https://github.com/huggingface/datasets/issues/2956 | 1,004,306,367 | I_kwDODunzps473H-_ | 2,956 | Cache problem in the `load_dataset` method for local compressed file(s) | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"The problem is still present. \r\nOne solution would be to add the `download_mode=\"force_redownload\"` argument to load_dataset. \r\nHowever, doing so may lead to a `DatasetGenerationError: An error occurred while generating the dataset`. To mitigate, just do:\r\n`rm -r ~/.cache/huggingface/datasets/*`"
] | 1,632,317,672,000 | 1,693,500,541,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
Cache problem in the `load_dataset` method: when a compressed file in a local folder is modified, `load_dataset` doesn't detect the change and loads the previous version.
## Steps to reproduce the bug
To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior.
For this example, I have created a toy dataset at: https://huggingface.co./datasets/SaulLu/toy_struc_dataset
This dataset is composed of two versions:
- v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz`
- v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz`
With a terminal, we can start to get the v1 version of the dataset
```bash
git lfs install
git clone https://huggingface.co./datasets/SaulLu/toy_struc_dataset
cd toy_struc_dataset
git checkout a6beb46
```
Then we can load it with python and look at the content:
```python
from datasets import load_dataset
path = "/content/toy_struc_dataset"
dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"})
print(dataset["train"][0])
```
Output
```
{'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1
```
With a terminal, we can now switch to the v2 version of the dataset
```bash
git checkout main
```
Then we can load it with python and look at the content:
```python
from datasets import load_dataset
path = "/content/toy_struc_dataset"
dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"})
print(dataset["train"][0])
```
Output
```
{'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2)
```
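A workaround for now (also mentioned in the issue comments) is to force re-preparation so the modified compressed file is read again instead of the stale cached Arrow file; a minimal sketch:
```python
from datasets import load_dataset

path = "/content/toy_struc_dataset"
# Force re-processing of the local files instead of reusing the stale cache
dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}, download_mode="force_redownload")
print(dataset["train"][0])
```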
## Expected results
The last output should have been
```
{"id":1, "value": "a", "attr": 1} # This is the example in v2
```
## Ideas
As discussed offline with Quentin, if the cache hash were sensitive to changes in a compressed file, we would probably not have this problem anymore.
This situation leads me to suggest 2 other features:
- to also have a `load_from_cache_file` argument in the `load_dataset` method
- to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon)
And thanks again for this great library :hugs:
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2956/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2956/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2955/comments | https://api.github.com/repos/huggingface/datasets/issues/2955/events | https://github.com/huggingface/datasets/pull/2955 | 1,003,999,469 | PR_kwDODunzps4sHuRu | 2,955 | Update legacy Python image for CI tests in Linux | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is an exception when running `pip install .[tests]`:\r\n```\r\nProcessing /home/circleci/datasets\r\nCollecting numpy>=1.17 (from datasets==1.12.2.dev0)\r\n Downloading https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl (13.4MB)\r\n 100% |ββββββββββββββββββββββββββββββββ| 13.4MB 3.9MB/s eta 0:00:011\r\n\r\n...\r\n\r\nCollecting faiss-cpu (from datasets==1.12.2.dev0)\r\n Downloading https://files.pythonhosted.org/packages/87/91/bf8ea0d42733cbb04f98d3bf27808e4919ceb5ec71102e21119398a97237/faiss-cpu-1.7.1.post2.tar.gz (41kB)\r\n 100% |ββββββββββββββββββββββββββββββββ| 51kB 30.9MB/s ta 0:00:01\r\n Complete output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 154, in save_modules\r\n yield saved\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 195, in setup_context\r\n yield\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 250, in run_setup\r\n _execfile(setup_script, ns)\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 45, in _execfile\r\n exec(code, globals, locals)\r\n File \"/tmp/easy_install-1pop4blm/numpy-1.21.2/setup.py\", line 34, in <module>\r\n method can be invoked.\r\n RuntimeError: Python version >= 3.7 required.\r\n```\r\n\r\nApparently, `numpy-1.21.2` tries to be installed in the temporary directory `/tmp/easy_install-1pop4blm` instead of the downloaded `numpy-1.19.5` (requirement of `datasets`).\r\n\r\nThis is caused because `pip` downloads the `.tar.gz` (instead of the `.whl`) and tries to build it in a tmp dir."
] | 1,632,299,127,000 | 1,632,479,765,000 | 1,632,479,765,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2955",
"html_url": "https://github.com/huggingface/datasets/pull/2955",
"diff_url": "https://github.com/huggingface/datasets/pull/2955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2955.patch",
"merged_at": 1632479765000
} | Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights:
- Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host.
- Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security and critical-bugs, leading to more stable and deterministic images.
More info: https://circleci.com/docs/2.0/circleci-images | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2955/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2954/comments | https://api.github.com/repos/huggingface/datasets/issues/2954/events | https://github.com/huggingface/datasets/pull/2954 | 1,003,904,803 | PR_kwDODunzps4sHa8O | 2,954 | Run tests in parallel | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```",
"There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`"
] | 1,632,294,044,000 | 1,632,812,151,000 | 1,632,812,151,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2954",
"html_url": "https://github.com/huggingface/datasets/pull/2954",
"diff_url": "https://github.com/huggingface/datasets/pull/2954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2954.patch",
"merged_at": 1632812151000
} | Run CI tests in parallel to speed up the test suite.
Speed up results:
- Linux: from `7m 30s` to `5m 32s`
- Windows: from `13m 52s` to `11m 10s`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2954/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2953/comments | https://api.github.com/repos/huggingface/datasets/issues/2953/events | https://github.com/huggingface/datasets/issues/2953 | 1,002,766,517 | I_kwDODunzps47xQC1 | 2,953 | Trying to get in touch regarding a security issue | {
"login": "JamieSlome",
"id": 55323451,
"node_id": "MDQ6VXNlcjU1MzIzNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/55323451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamieSlome",
"html_url": "https://github.com/JamieSlome",
"followers_url": "https://api.github.com/users/JamieSlome/followers",
"following_url": "https://api.github.com/users/JamieSlome/following{/other_user}",
"gists_url": "https://api.github.com/users/JamieSlome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamieSlome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamieSlome/subscriptions",
"organizations_url": "https://api.github.com/users/JamieSlome/orgs",
"repos_url": "https://api.github.com/users/JamieSlome/repos",
"events_url": "https://api.github.com/users/JamieSlome/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamieSlome/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @JamieSlome,\r\n\r\nThanks for reaching out. Yes, you are right: I'm opening a PR to add the `SECURITY.md` file and a contact method.\r\n\r\nIn the meantime, please feel free to report the security issue to: [email protected]"
] | 1,632,239,893,000 | 1,634,829,403,000 | 1,634,829,403,000 | NONE | null | null | null | Hey there!
I'd like to report a security issue but cannot find contact instructions on your repository.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2953/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2952/comments | https://api.github.com/repos/huggingface/datasets/issues/2952/events | https://github.com/huggingface/datasets/pull/2952 | 1,002,704,096 | PR_kwDODunzps4sDU8S | 2,952 | Fix missing conda deps | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,237,781,000 | 1,632,285,599,000 | 1,632,238,244,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2952",
"html_url": "https://github.com/huggingface/datasets/pull/2952",
"diff_url": "https://github.com/huggingface/datasets/pull/2952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2952.patch",
"merged_at": 1632238244000
} | `aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 conda builds to fail.
Fix #2932. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2952/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2951/comments | https://api.github.com/repos/huggingface/datasets/issues/2951/events | https://github.com/huggingface/datasets/pull/2951 | 1,001,267,888 | PR_kwDODunzps4r-lGs | 2,951 | Dummy labels no longer on by default in `to_tf_dataset` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.",
"Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"
] | 1,632,162,419,000 | 1,632,232,857,000 | 1,632,219,272,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2951",
"html_url": "https://github.com/huggingface/datasets/pull/2951",
"diff_url": "https://github.com/huggingface/datasets/pull/2951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2951.patch",
"merged_at": 1632219272000
} | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2951/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2950/comments | https://api.github.com/repos/huggingface/datasets/issues/2950/events | https://github.com/huggingface/datasets/pull/2950 | 1,001,085,353 | PR_kwDODunzps4r-AKu | 2,950 | Fix fn kwargs in filter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,150,626,000 | 1,632,154,979,000 | 1,632,151,681,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2950",
"html_url": "https://github.com/huggingface/datasets/pull/2950",
"diff_url": "https://github.com/huggingface/datasets/pull/2950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2950.patch",
"merged_at": 1632151681000
} | #2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927
I fixed that and added a test to make sure it doesn't happen again (for either map or filter)
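For reference, a minimal example of the behaviour the new test is meant to protect (illustrative only, not the actual test code):
```python
from datasets import Dataset

ds = Dataset.from_dict({"value": [1, 5, 10]})

# `fn_kwargs` are forwarded to the filtering function as keyword arguments
kept = ds.filter(lambda example, threshold: example["value"] > threshold, fn_kwargs={"threshold": 4})
print(len(kept))  # 2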
Fix #2927 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2950/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2950/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2949/comments | https://api.github.com/repos/huggingface/datasets/issues/2949/events | https://github.com/huggingface/datasets/pull/2949 | 1,001,026,680 | PR_kwDODunzps4r90Pt | 2,949 | Introduce web and wiki config in triviaqa dataset | {
"login": "shirte",
"id": 1706443,
"node_id": "MDQ6VXNlcjE3MDY0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1706443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shirte",
"html_url": "https://github.com/shirte",
"followers_url": "https://api.github.com/users/shirte/followers",
"following_url": "https://api.github.com/users/shirte/following{/other_user}",
"gists_url": "https://api.github.com/users/shirte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shirte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shirte/subscriptions",
"organizations_url": "https://api.github.com/users/shirte/orgs",
"repos_url": "https://api.github.com/users/shirte/repos",
"events_url": "https://api.github.com/users/shirte/events{/privacy}",
"received_events_url": "https://api.github.com/users/shirte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just made the dummy data smaller :)\r\nOnce github refreshes the change I think we can merge !",
"Thank you so much for reviewing and accepting my pull request!! :)\r\n\r\nI created these rather large dummy data sets to cover all different cases for the row structure. E.g. in the web configuration, it's possible that a row has evidence from wikipedia (\"EntityPages\") and the web (\"SearchResults\"). But it also might happen that either EntityPages or SearchResults is empty. Probably, I will add this thought to the dataset description in the future.",
"Ok I see ! Yes feel free to mention it in the dataset card, this can be useful.\r\n\r\nFor the dummy data though we can keep the small ones, as the tests are mainly about testing the parsing from the dataset script rather than the actual content of the dataset."
] | 1,632,147,443,000 | 1,633,440,052,000 | 1,633,102,769,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2949",
"html_url": "https://github.com/huggingface/datasets/pull/2949",
"diff_url": "https://github.com/huggingface/datasets/pull/2949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2949.patch",
"merged_at": 1633102769000
} | The TriviaQA paper suggests that the two subsets (Wikipedia and Web)
should be treated differently. There are also different leaderboards
for the two sets on CodaLab. For that reason, introduce additional
builder configs in the trivia_qa dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2949/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2948/comments | https://api.github.com/repos/huggingface/datasets/issues/2948/events | https://github.com/huggingface/datasets/pull/2948 | 1,000,844,077 | PR_kwDODunzps4r9PdV | 2,948 | Fix minor URL format in scitldr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,136,292,000 | 1,632,143,908,000 | 1,632,143,908,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2948",
"html_url": "https://github.com/huggingface/datasets/pull/2948",
"diff_url": "https://github.com/huggingface/datasets/pull/2948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2948.patch",
"merged_at": 1632143908000
} | While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2948/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2947/comments | https://api.github.com/repos/huggingface/datasets/issues/2947/events | https://github.com/huggingface/datasets/pull/2947 | 1,000,798,338 | PR_kwDODunzps4r9GIP | 2,947 | Don't use old, incompatible cache for the new `filter` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,133,139,000 | 1,632,155,109,000 | 1,632,145,382,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2947",
"html_url": "https://github.com/huggingface/datasets/pull/2947",
"diff_url": "https://github.com/huggingface/datasets/pull/2947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2947.patch",
"merged_at": 1632145381000
} | #2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation.
However the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into account).
This is an issue because anyone who updates `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result.
To fix this I added the notion of versioning for dataset transforms in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0.
This way the new `filter` outputs are now considered different from the old ones from the caching point of view.
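To illustrate the idea, here is a toy sketch (not the actual implementation): folding a per-transform version into the fingerprint means caches written by the old and the new `filter` can never collide.
```python
import hashlib

def fingerprint_after_transform(previous_fingerprint: str, transform: str, transform_version: str) -> str:
    # Toy sketch: because the version participates in the hash, results produced
    # by filter 1.x and filter 2.0.0 map to different cache files.
    payload = f"{previous_fingerprint}:{transform}:{transform_version}".encode()
    return hashlib.md5(payload).hexdigest()[:16]

print(fingerprint_after_transform("abc123", "filter", "1.0.0"))
print(fingerprint_after_transform("abc123", "filter", "2.0.0"))
```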
This should fix #2943
cc @anton-l | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2947/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2947/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2946/comments | https://api.github.com/repos/huggingface/datasets/issues/2946/events | https://github.com/huggingface/datasets/pull/2946 | 1,000,754,824 | PR_kwDODunzps4r89f8 | 2,946 | Update meteor score from nltk update | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,130,126,000 | 1,632,130,559,000 | 1,632,130,559,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2946",
"html_url": "https://github.com/huggingface/datasets/pull/2946",
"diff_url": "https://github.com/huggingface/datasets/pull/2946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2946.patch",
"merged_at": 1632130559000
It looks like there were issues in NLTK in the way the METEOR score was computed.
A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values.
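For anyone checking locally, the score can be recomputed like this (a sketch using the example sentences commonly shown in the metric docs; the exact value depends on the installed `nltk` version):
```python
from datasets import load_metric

meteor = load_metric("meteor")
results = meteor.compute(
    predictions=["It is a guide to action which ensures that the military always obeys the commands of the party"],
    references=["It is a guide to action that ensures that the military will forever heed Party commands"],
)
print(round(results["meteor"], 4))
```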
I updated the score of the example in the docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2946/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2945/comments | https://api.github.com/repos/huggingface/datasets/issues/2945/events | https://github.com/huggingface/datasets/issues/2945 | 1,000,624,883 | I_kwDODunzps47pFLz | 2,945 | Protect master branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Cool, I think we can do both :)",
"@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)."
] | 1,632,120,421,000 | 1,632,139,287,000 | 1,632,139,216,000 | MEMBER | null | null | null | After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this issue is to open a discussion about this problem and to agree on a solution. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2945/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2945/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2944/comments | https://api.github.com/repos/huggingface/datasets/issues/2944/events | https://github.com/huggingface/datasets/issues/2944 | 1,000,544,370 | I_kwDODunzps47oxhy | 2,944 | Add `remove_columns` to `IterableDataset ` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi ! Good idea :)\r\nIf you are interested in contributing, feel free to give it a try and open a Pull Request. Also let me know if I can help you with this or if you have questions"
] | 1,632,110,460,000 | 1,633,707,113,000 | 1,633,707,113,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
```python
from datasets import load_dataset
dataset = load_dataset("c4", 'realnewslike', streaming =True, split='train')
dataset = dataset.remove_columns('url')
```
```
AttributeError: 'IterableDataset' object has no attribute 'remove_columns'
```
**Describe the solution you'd like**
It would be nice to have `.remove_columns()` to match the `Dataset` API.
**Describe alternatives you've considered**
This can be done with a single call to `.map()`.
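A rough sketch of such a workaround (assuming `IterableDataset.map` yields the function's return value as the new example):
```python
from datasets import load_dataset

dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")

# Drop the "url" field by re-emitting each example without that key
dataset = dataset.map(lambda example: {k: v for k, v in example.items() if k != "url"})
print(next(iter(dataset)).keys())
```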
I can try to help add this. π€ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2944/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2943/comments | https://api.github.com/repos/huggingface/datasets/issues/2943/events | https://github.com/huggingface/datasets/issues/2943 | 1,000,355,115 | I_kwDODunzps47oDUr | 2,943 | Backwards compatibility broken for cached datasets that use `.filter()` | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?",
"If it's easy enough to implement, then yes please π But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.",
"Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR",
"I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available",
"Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !",
"Fixed by #2947."
] | 1,632,068,197,000 | 1,632,155,143,000 | 1,632,155,142,000 | MEMBER | null | null | null | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
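An alternative to deleting the whole directory, if you only want to drop the stale `cache-*.arrow` files produced by the old `filter` (a sketch; `cleanup_cache_files` removes cached transform results while keeping the dataset's main Arrow files):
```python
from datasets import load_dataset

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# Remove cached transform results (e.g. old filter outputs) for this dataset
ds.cleanup_cache_files()
```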
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2943/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2943/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2942/comments | https://api.github.com/repos/huggingface/datasets/issues/2942/events | https://github.com/huggingface/datasets/pull/2942 | 1,000,309,765 | PR_kwDODunzps4r7tY6 | 2,942 | Add SEDE dataset | {
"login": "Hazoom",
"id": 13545154,
"node_id": "MDQ6VXNlcjEzNTQ1MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/13545154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hazoom",
"html_url": "https://github.com/Hazoom",
"followers_url": "https://api.github.com/users/Hazoom/followers",
"following_url": "https://api.github.com/users/Hazoom/following{/other_user}",
"gists_url": "https://api.github.com/users/Hazoom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hazoom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hazoom/subscriptions",
"organizations_url": "https://api.github.com/users/Hazoom/orgs",
"repos_url": "https://api.github.com/users/Hazoom/repos",
"events_url": "https://api.github.com/users/Hazoom/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hazoom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.",
"Hi @Hazoom,\r\n\r\nYou were right: the non-passing test had nothing to do with this PR.\r\n\r\nUnfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n- your commits repeated two times\r\n- and commits which are not yours from the master branch\r\n\r\nIf you would like to clean your pull request, please make:\r\n```\r\ngit reset --hard 587b93a\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push --force origin sede\r\n```",
"> Hi @Hazoom,\r\n> \r\n> You were right: the non-passing test had nothing to do with this PR.\r\n> \r\n> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n> \r\n> * your commits repeated two times\r\n> * and commits which are not yours from the master branch\r\n> \r\n> If you would like to clean your pull request, please make:\r\n> \r\n> ```\r\n> git reset --hard 587b93a\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> git push --force origin sede\r\n> ```\r\n\r\nThanks @albertvillanova ",
"> Nice! Just one final request before approving your pull request:\r\n> \r\n> As you have updated the \"QuerySetId\" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:\r\n> \r\n> ```\r\n> rm datasets/sede/dataset_infos.json\r\n> datasets-cli test datasets/sede --save_infos --all_configs\r\n> ```\r\n\r\n@albertvillanova Good catch, just fixed it."
] | 1,632,057,084,000 | 1,632,479,995,000 | 1,632,479,994,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2942",
"html_url": "https://github.com/huggingface/datasets/pull/2942",
"diff_url": "https://github.com/huggingface/datasets/pull/2942.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2942.patch",
"merged_at": 1632479994000
} | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2942/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2941/comments | https://api.github.com/repos/huggingface/datasets/issues/2941/events | https://github.com/huggingface/datasets/issues/2941 | 1,000,000,711 | I_kwDODunzps47mszH | 2,941 | OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError | {
"login": "ayaka14732",
"id": 68557794,
"node_id": "MDQ6VXNlcjY4NTU3Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayaka14732",
"html_url": "https://github.com/ayaka14732",
"followers_url": "https://api.github.com/users/ayaka14732/followers",
"following_url": "https://api.github.com/users/ayaka14732/following{/other_user}",
"gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions",
"organizations_url": "https://api.github.com/users/ayaka14732/orgs",
"repos_url": "https://api.github.com/users/ayaka14732/repos",
"events_url": "https://api.github.com/users/ayaka14732/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayaka14732/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"I tried `unshuffled_original_da` and it is also not working"
] | 1,631,961,553,000 | 1,642,601,407,000 | null | NONE | null | null | null | ## Describe the bug
Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]
```
## Expected results
Loading is successful.
## Actual results
Loading throws above error.
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2941/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2940/comments | https://api.github.com/repos/huggingface/datasets/issues/2940/events | https://github.com/huggingface/datasets/pull/2940 | 999,680,796 | PR_kwDODunzps4r6EUF | 2,940 | add swedish_medical_ner dataset | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,908,985,000 | 1,633,436,014,000 | 1,633,436,013,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2940",
"html_url": "https://github.com/huggingface/datasets/pull/2940",
"diff_url": "https://github.com/huggingface/datasets/pull/2940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2940.patch",
"merged_at": 1633436013000
} | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2940/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2939/comments | https://api.github.com/repos/huggingface/datasets/issues/2939/events | https://github.com/huggingface/datasets/pull/2939 | 999,639,630 | PR_kwDODunzps4r58Gu | 2,939 | MENYO-20k repo has moved, updating URL | {
"login": "cdleong",
"id": 4109253,
"node_id": "MDQ6VXNlcjQxMDkyNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cdleong",
"html_url": "https://github.com/cdleong",
"followers_url": "https://api.github.com/users/cdleong/followers",
"following_url": "https://api.github.com/users/cdleong/following{/other_user}",
"gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cdleong/subscriptions",
"organizations_url": "https://api.github.com/users/cdleong/orgs",
"repos_url": "https://api.github.com/users/cdleong/repos",
"events_url": "https://api.github.com/users/cdleong/events{/privacy}",
"received_events_url": "https://api.github.com/users/cdleong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,905,314,000 | 1,632,238,297,000 | 1,632,238,296,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2939",
"html_url": "https://github.com/huggingface/datasets/pull/2939",
"diff_url": "https://github.com/huggingface/datasets/pull/2939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2939.patch",
"merged_at": 1632238296000
} | Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match.
https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2939/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2938/comments | https://api.github.com/repos/huggingface/datasets/issues/2938/events | https://github.com/huggingface/datasets/pull/2938 | 999,552,263 | PR_kwDODunzps4r5qwa | 2,938 | Take namespace into account in caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py. That way, all the datasets are one-level deep directories",
"IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\ncc @Pierrci ",
"> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\nout of curiosity: where is it enforced?",
"> where is it enforced?\r\n\r\nNowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future",
"Thanks for the trick, I'm doing the change :)\r\nWe can use\r\n`~/.cache/huggingface/datasets/username___dataset_name` for the data\r\n`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files",
"Merging, though it will have to be integrated again the refactor at #2986",
"@lhoestq we changed a bit the naming policy on the Hub, and the substring '--' is now forbidden, which makes it available for serializing the repo names (`namespace/repo` -> `namespace--repo`). See https://github.com/huggingface/moon-landing/pull/1657 and https://github.com/huggingface/huggingface_hub/pull/545"
] | 1,631,897,853,000 | 1,639,738,338,000 | 1,632,920,491,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2938",
"html_url": "https://github.com/huggingface/datasets/pull/2938",
"diff_url": "https://github.com/huggingface/datasets/pull/2938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2938.patch",
"merged_at": 1632920491000
} | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking into account only the dataset name and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
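A minimal illustrative sketch (not the actual implementation) of how a namespaced repo id could be flattened into a cache directory name, assuming the `___` separator described above; the helper name below is hypothetical:
```python
import os

# Hypothetical helper, for illustration only: flatten "username/dataset_name"
# into "username___dataset_name", since "___" cannot appear in Hub names.
def namespaced_cache_dir(repo_id: str, base: str = "~/.cache/huggingface/datasets") -> str:
    flat_name = repo_id.replace("/", "___")
    return os.path.join(os.path.expanduser(base), flat_name)

print(namespaced_cache_dir("username/dataset_name"))
# e.g. /home/user/.cache/huggingface/datasets/username___dataset_name
```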
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2938/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2937/comments | https://api.github.com/repos/huggingface/datasets/issues/2937/events | https://github.com/huggingface/datasets/issues/2937 | 999,548,277 | I_kwDODunzps47k-V1 | 2,937 | load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied | {
"login": "daqieq",
"id": 40532020,
"node_id": "MDQ6VXNlcjQwNTMyMDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/40532020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daqieq",
"html_url": "https://github.com/daqieq",
"followers_url": "https://api.github.com/users/daqieq/followers",
"following_url": "https://api.github.com/users/daqieq/following{/other_user}",
"gists_url": "https://api.github.com/users/daqieq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daqieq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daqieq/subscriptions",
"organizations_url": "https://api.github.com/users/daqieq/orgs",
"repos_url": "https://api.github.com/users/daqieq/repos",
"events_url": "https://api.github.com/users/daqieq/events{/privacy}",
"received_events_url": "https://api.github.com/users/daqieq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory",
"Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue.",
"Hi @albertvillanova, even I am facing the same issue on my work machine:\r\n\r\n`Downloading and preparing dataset json/c4-en-html-with-metadata to C:\\Users\\......\\.cache\\huggingface\\datasets\\json\\c4-en-html-with-metadata-4635c2fd9249f62d\\0.0.0\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde...\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 983.42it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 209.01it/s]\r\nTraceback (most recent call last):\r\n File \"bsmetadata/preprocessing_utils.py\", line 710, in <module>\r\n ds = load_dataset(\r\n File \"C:\\Users\\.......\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\........\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\builder.py\", line 603, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\..........\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 120, in __exit__\r\n next(self.gen)\r\n File \"C:\\Users\\.....\\AppData\\Roaming\\Python\\Python38\\site-packages\\datasets\\builder.py\", line 557, in incomplete_dir\r\n os.rename(tmp_dir, dirname)\r\nPermissionError: [WinError 5] Access is denied: 'C:\\\\Users\\\\.........\\\\.cache\\\\huggingface\\\\datasets\\\\json\\\\c4-en-html-with-metadata-4635c2fd9249f62d\\\\0.0.0\\\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde.incomplete' -> 'C:\\\\Users\\\\I355109\\\\.cache\\\\huggingface\\\\datasets\\\\json\\\\c4-en-html-with-metadata-4635c2fd9249f62d\\\\0.0.0\\\\c90812beea906fcffe0d5e3bb9eba909a80a998b5f88e9f8acbd320aa91acfde'`",
"I'm facing the same issue.\r\n\r\n## System Information\r\n\r\n- OS Edition: Windows 10 21H1\r\n- OS build: 19043.1826\r\n- Python version: 3.10.6 (installed using `choco install python`)\r\n- datasets: 2.4.0\r\n- PyArrow: 6.0.1\r\n\r\n## Troubleshooting steps\r\n\r\n- Restart the computer, unfortunately doesn't work! π\r\n- Checked the permissions of `~./cache/...`, looks fine.\r\n- Tested with a simple file operation using the `open()` function and writing a hello_world.txt, it works fine.\r\n- Tested with a different `cache_dir` value on the `load_dataset()`, e.g. \"./data\"\r\n- Tested different datasets: `conll2003`, `squad_v2`, and `wiki_bio`.\r\n- Downgraded datasets from `2.4.0` to `2.1.0`, issue persists.\r\n- Tested it on WSL (Ubuntu 20.04), and it works! \r\n- Python reinstallation, in the first time downloading `conll2003` works fine, but `squad` or `squad_v2` raises Access Denied.\r\n - After the system or VSCode restart, the issue comes back.\r\n\r\n## Resolution\r\n\r\nI fixed it by changing the following command:\r\n\r\nhttps://github.com/huggingface/datasets/blob/68cffe30917a9abed68d28caf54b40c10f977602/src/datasets/builder.py#L666\r\n\r\nfor\r\n\r\n```python\r\nshutil.move(tmp_dir, dirname)\r\n```"
] | 1,631,897,530,000 | 1,661,346,548,000 | 1,661,346,548,000 | NONE | null | null | null | ## Describe the bug
The standard process to download and load the wiki_bio dataset causes a PermissionError on Windows 10 and 11.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('wiki_bio')
```
## Expected results
It is expected that the dataset downloads without any errors.
## Actual results
PermissionError see trace below:
```
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare
self._save_info()
File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__
next(self.gen)
File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir
os.rename(tmp_dir, dirname)
PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'
```
By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines in my virtual environment, I was able to get the load process to complete, rename the directory manually, and then rerun `load_dataset('wiki_bio')` to get what I needed.
It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project, [Conan](https://github.com/conan-io/conan/issues/6560), with a similar os.rename() issue, in case it helps debug this one.
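To illustrate the retry idea discussed above, here is a hedged sketch (not the library's code) of wrapping the rename so that a transient lock held by antivirus or indexing software does not abort the whole download:
```python
import os
import time

def rename_with_retry(src: str, dst: str, retries: int = 5, delay: float = 0.5) -> None:
    # Retry the rename a few times in case another process (e.g. an antivirus
    # scanner) briefly holds a handle on the directory on Windows.
    for attempt in range(retries):
        try:
            os.rename(src, dst)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```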
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.22449-SP0
- Python version: 3.8.12
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2937/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2936/comments | https://api.github.com/repos/huggingface/datasets/issues/2936/events | https://github.com/huggingface/datasets/pull/2936 | 999,521,647 | PR_kwDODunzps4r5knb | 2,936 | Check that array is not Float as nan != nan | {
"login": "Iwontbecreative",
"id": 494951,
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iwontbecreative",
"html_url": "https://github.com/Iwontbecreative",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,895,401,000 | 1,632,217,145,000 | 1,632,217,144,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2936",
"html_url": "https://github.com/huggingface/datasets/pull/2936",
"diff_url": "https://github.com/huggingface/datasets/pull/2936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2936.patch",
"merged_at": 1632217144000
} | The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan as nan != nan.
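For context, a small illustration of the NaN quirk referenced here (standard IEEE 754 / Python behavior, not the library code):
```python
import math

x = float("nan")
print(x == x)         # False: NaN never compares equal to itself
print(math.isnan(x))  # True: the reliable way to test for NaN
```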
Pass on FloatArrays as we should not raise an Exception for them. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2936/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2935/comments | https://api.github.com/repos/huggingface/datasets/issues/2935/events | https://github.com/huggingface/datasets/pull/2935 | 999,518,469 | PR_kwDODunzps4r5j8B | 2,935 | Add Jigsaw unintended Bias | {
"login": "Iwontbecreative",
"id": 494951,
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iwontbecreative",
"html_url": "https://github.com/Iwontbecreative",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix",
"@lhoestq implemented your changes, I think this might be ready for another look.",
"Thanks @lhoestq, implemented the changes, let me know if anything else pops up."
] | 1,631,895,151,000 | 1,632,480,112,000 | 1,632,480,112,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2935",
"html_url": "https://github.com/huggingface/datasets/pull/2935",
"diff_url": "https://github.com/huggingface/datasets/pull/2935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2935.patch",
"merged_at": 1632480112000
} | Hi,
Here's a first attempt at this dataset. It would be great if it could be merged relatively quickly, as it is needed for BigScience-related work.
This requires a manual download, and I had some trouble generating dummy_data in this setting, so feedback there is welcome. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2935/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2934/comments | https://api.github.com/repos/huggingface/datasets/issues/2934/events | https://github.com/huggingface/datasets/issues/2934 | 999,477,413 | I_kwDODunzps47ktCl | 2,934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon!",
"Thanks a lot for investigating !"
] | 1,631,892,413,000 | 1,634,115,803,000 | 1,634,115,803,000 | MEMBER | null | null | null | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards.
Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.
cc @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2934/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2933/comments | https://api.github.com/repos/huggingface/datasets/issues/2933/events | https://github.com/huggingface/datasets/pull/2933 | 999,392,566 | PR_kwDODunzps4r5MHs | 2,933 | Replace script_version with revision | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm also fine with the removal in 1.15"
] | 1,631,887,479,000 | 1,632,131,530,000 | 1,632,131,530,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2933",
"html_url": "https://github.com/huggingface/datasets/pull/2933",
"diff_url": "https://github.com/huggingface/datasets/pull/2933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2933.patch",
"merged_at": 1632131530000
} | As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files).
This PR replaces the parameter name `script_version` with `revision`.
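A hedged usage sketch of the renamed parameter (the repo id below is hypothetical):
```python
from datasets import load_dataset

# After this change, pin a dataset repository to a specific git revision
# (branch name, tag, or commit sha) with `revision` instead of `script_version`.
ds = load_dataset("username/dataset_name", revision="main")
```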
This way, we are also aligned with:
- Transformers: `AutoTokenizer.from_pretrained(..., revision=...)`
- Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2933/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2933/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2932/comments | https://api.github.com/repos/huggingface/datasets/issues/2932/events | https://github.com/huggingface/datasets/issues/2932 | 999,317,750 | I_kwDODunzps47kGD2 | 2,932 | Conda build fails | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11",
"Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 "
] | 1,631,882,962,000 | 1,632,238,270,000 | 1,632,238,270,000 | MEMBER | null | null | null | ## Describe the bug
The current `datasets` version in conda is 1.9 instead of 1.12.
The build of the conda package fails.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2932/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2931/comments | https://api.github.com/repos/huggingface/datasets/issues/2931/events | https://github.com/huggingface/datasets/pull/2931 | 998,326,359 | PR_kwDODunzps4r1-JH | 2,931 | Fix bug in to_tf_dataset | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"
] | 1,631,804,883,000 | 1,631,811,698,000 | 1,631,811,697,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2931",
"html_url": "https://github.com/huggingface/datasets/pull/2931",
"diff_url": "https://github.com/huggingface/datasets/pull/2931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2931.patch",
"merged_at": 1631811697000
} | Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2931/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2930/comments | https://api.github.com/repos/huggingface/datasets/issues/2930/events | https://github.com/huggingface/datasets/issues/2930 | 998,154,311 | I_kwDODunzps47fqBH | 2,930 | Mutable columns argument breaks set_format | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Pushed a fix to my branch #2731 "
] | 1,631,795,242,000 | 1,631,800,253,000 | 1,631,800,253,000 | MEMBER | null | null | null | ## Describe the bug
If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
column_list = ["idx", "label"]
dataset.set_format("python", columns=column_list)
column_list[1] = "foo" # Change the list after we call `set_format`
dataset['train'][:4].keys()
```
## Expected results
```python
dict_keys(['idx', 'label'])
```
## Actual results
```python
dict_keys(['idx'])
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2930/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2929/comments | https://api.github.com/repos/huggingface/datasets/issues/2929/events | https://github.com/huggingface/datasets/pull/2929 | 997,960,024 | PR_kwDODunzps4r015C | 2,929 | Add regression test for null Sequence | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,782,713,000 | 1,631,867,039,000 | 1,631,867,039,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2929",
"html_url": "https://github.com/huggingface/datasets/pull/2929",
"diff_url": "https://github.com/huggingface/datasets/pull/2929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2929.patch",
"merged_at": 1631867039000
} | Relates to #2892 and #2900. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2929/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2928/comments | https://api.github.com/repos/huggingface/datasets/issues/2928/events | https://github.com/huggingface/datasets/pull/2928 | 997,941,506 | PR_kwDODunzps4r0yUb | 2,928 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,781,560,000 | 1,631,795,734,000 | 1,631,795,734,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2928",
"html_url": "https://github.com/huggingface/datasets/pull/2928",
"diff_url": "https://github.com/huggingface/datasets/pull/2928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2928.patch",
"merged_at": 1631795734000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2928/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2927/comments | https://api.github.com/repos/huggingface/datasets/issues/2927/events | https://github.com/huggingface/datasets/issues/2927 | 997,654,680 | I_kwDODunzps47dwCY | 2,927 | Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, I'm looking into it :)",
"Fixed by #2950."
] | 1,631,754,842,000 | 1,632,155,002,000 | 1,632,155,001,000 | NONE | null | null | null | ## Describe the bug
Upgrading to 1.12 caused the `dataset.filter` call to fail with
> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels
## Steps to reproduce the bug
```python
def filter_good_rows(
    ex: Dict,
    valid_rel_labels: Set[str],
    valid_ner_labels: Set[str],
    tokenizer: PreTrainedTokenizerFast,
) -> bool:
    """Get the good rows"""
    encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer)
    ex["encoding"] = encoding
    for relation in ex["relations"]:
        if not is_valid_relation(relation, valid_rel_labels):
            return False
    for span in ex["spans"]:
        if not is_valid_span(span, valid_ner_labels, encoding):
            return False
    return True


def get_dataset():
    loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
    ds = load_dataset(
        loader_path,
        name="prodigy-dataset",
        data_files=sorted(file_paths),
        cache_dir=cache_dir,
    )["train"]
    valid_ner_labels = set(vocab.ner_category)
    valid_relations = set(vocab.relation_types.keys())
    ds = ds.filter(
        filter_good_rows,
        fn_kwargs=dict(
            valid_rel_labels=valid_relations,
            valid_ner_labels=valid_ner_labels,
            tokenizer=vocab.tokenizer,
        ),
        keep_in_memory=True,
        num_proc=num_proc,
    )
```
`ds` is a `DatasetDict` produced by a jsonl dataset.
This runs fine on 1.11 but fails on 1.12.
**Stack Trace:** see Actual results below.
## Expected results
I expect `dataset.filter` in datasets 1.12 to filter the dataset without raising, as it does on 1.11.
## Actual results
```
tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl
ds = ds.filter(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter
indices = self.map(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map
return self._map_single(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single
batch = apply_function_on_filtered_inputs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}
indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0
def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):
"""Utility to apply the function on a selection of columns."""
nonlocal update_data
fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
if offset == 0:
effective_indices = indices
else:
effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
processed_inputs = (
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
)
E TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Mac
- Python version: 3.8.9
- PyArrow version: pyarrow==5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2927/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2926/comments | https://api.github.com/repos/huggingface/datasets/issues/2926/events | https://github.com/huggingface/datasets/issues/2926 | 997,463,277 | I_kwDODunzps47dBTt | 2,926 | Error when downloading datasets to non-traditional cache directories | {
"login": "dar-tau",
"id": 45885627,
"node_id": "MDQ6VXNlcjQ1ODg1NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dar-tau",
"html_url": "https://github.com/dar-tau",
"followers_url": "https://api.github.com/users/dar-tau/followers",
"following_url": "https://api.github.com/users/dar-tau/following{/other_user}",
"gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions",
"organizations_url": "https://api.github.com/users/dar-tau/orgs",
"repos_url": "https://api.github.com/users/dar-tau/repos",
"events_url": "https://api.github.com/users/dar-tau/events{/privacy}",
"received_events_url": "https://api.github.com/users/dar-tau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Same here !"
] | 1,631,735,986,000 | 1,637,790,151,000 | null | NONE | null | null | null | ## Describe the bug
When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails.
## Steps to reproduce the bug
```bash
ln -s /path/to/netapp/.cache ~/.cache
```
```python
load_dataset("imdb")
```
## Expected results
Successfully loading IMDB dataset
## Actual results
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835,
num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,
dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),
'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':
SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':
SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.1.2
- Platform: Ubuntu
- Python version: 3.8
## Extra notes
Stranger yet, while trying to debug this, I found the results vary a lot without a clear pattern:
- With `cache_dir="/path/to/netapp/.cache"` the same thing happens.
- However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"`, it does work.
- On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore.
While I could only test this with a NetApp device, the issue might affect other mounted filesystems as well.
Thanks :)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2926/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2925/comments | https://api.github.com/repos/huggingface/datasets/issues/2925/events | https://github.com/huggingface/datasets/pull/2925 | 997,407,034 | PR_kwDODunzps4rzJ9s | 2,925 | Add tutorial for no-code dataset upload | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```",
"Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet",
"I just added a mention of the login for private datasets. Don't hesitate to edit or comment.\r\n\r\nOtherwise I think it's all good, feel free to merge it @stevhliu if you don't have other changes to make :)"
] | 1,631,732,082,000 | 1,632,765,115,000 | 1,632,765,115,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2925",
"html_url": "https://github.com/huggingface/datasets/pull/2925",
"diff_url": "https://github.com/huggingface/datasets/pull/2925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2925.patch",
"merged_at": 1632765115000
} | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2925/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2924/comments | https://api.github.com/repos/huggingface/datasets/issues/2924/events | https://github.com/huggingface/datasets/issues/2924 | 997,378,113 | I_kwDODunzps47cshB | 2,924 | "File name too long" error for file locks | {
"login": "gar1t",
"id": 184949,
"node_id": "MDQ6VXNlcjE4NDk0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/184949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gar1t",
"html_url": "https://github.com/gar1t",
"followers_url": "https://api.github.com/users/gar1t/followers",
"following_url": "https://api.github.com/users/gar1t/following{/other_user}",
"gists_url": "https://api.github.com/users/gar1t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gar1t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gar1t/subscriptions",
"organizations_url": "https://api.github.com/users/gar1t/orgs",
"repos_url": "https://api.github.com/users/gar1t/repos",
"events_url": "https://api.github.com/users/gar1t/events{/privacy}",
"received_events_url": "https://api.github.com/users/gar1t/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135",
"Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.",
"Snap, encountered when trying to run [this example from PyTorch Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html):\r\n\r\n```py\r\nimport torch\r\n\r\nimport flash\r\nfrom flash.audio import SpeechRecognition, SpeechRecognitionData\r\nfrom flash.core.data.utils import download_data\r\n\r\n# 1. Create the DataModule\r\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/timit_data.zip\", \"./data\")\r\n\r\ndatamodule = SpeechRecognitionData.from_json(\r\n input_fields=\"file\",\r\n target_fields=\"text\",\r\n train_file=\"data/timit/train.json\",\r\n test_file=\"data/timit/test.json\",\r\n)\r\n```\r\n\r\nGave this traceback:\r\n\r\n```py\r\nTraceback (most recent call last):\r\n File \"lf_ft.py\", line 10, in <module>\r\n datamodule = SpeechRecognitionData.from_json(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 1005, in from_json\r\n return cls.from_data_source(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 571, in from_data_source\r\n train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 307, in to_datasets\r\n train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 344, in generate_dataset\r\n data = load_data(data, mock_dataset)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py\", line 103, in load_data\r\n dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1599, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1457, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py\", line 285, in __init__\r\n with FileLock(lock_path):\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'\r\n```\r\n\r\nMy home directory is encrypted, therefore the maximum length is 143 ([source 1](https://github.com/ray-project/ray/issues/1463#issuecomment-425674521), [source 2](https://stackoverflow.com/a/6571568/2668831))\r\n\r\nFrom what I've read I think the error is in reference to the file name (just the final part of the path) which is 145 chars long:\r\n\r\n```py\r\n>>> 
len(\"_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock\")\r\n145\r\n```\r\n\r\nI also have a file in this directory (i.e. whose length is not a problem):\r\n\r\n```py\r\n>>> len(\"_home_louis_.cache_huggingface_datasets_librispeech_asr_clean_2.1.0_468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1.lock\")\r\n137\r\n```",
"Perhaps this could be exposed as a config setting so you could change it manually?\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135\r\n\r\nRather than hard-code 255, default it to 255, and allow it to be changed, the same way is done for `datasets.config.IN_MEMORY_MAX_SIZE`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L171-L173\r\n\r\nIn fact there already appears to be an existing variable to do so:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L187\r\n\r\nIt's used here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/efe89edd36e4ffa562fc3eebaf07a5fec26e6dac/src/datasets/builder.py#L163-L165\r\n\r\nPerhaps it could be set based on a test (trying to create a 255 char length named lock file and seeing if it fails)",
"Just fixed it, sending a PR :smile:",
"Hi @lmmx @gar1t ,\r\n\r\nit would be helpful if you could run the following code and copy-paste the output here:\r\n```python\r\nimport datasets\r\nimport os\r\nos.statvfs(datasets.config.HF_DATASETS_CACHE)\r\n```",
"`os.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=240046344, f_bfree=96427610, f_bavail=84216487, f_files=61038592, f_ffree=58216027, f_favail=58216027, f_flag=4102, f_namemax=143)`",
"Hi @lmmx,\r\n\r\nThanks for providing the result of the command. I've opened a PR, and it would be great if you could verify that the fix works on your system. To install the version of the datasets with the fix, please run the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-2924\r\n```\r\n\r\nBtw, I saw your PR, and I appreciate your effort. However, my approach is a bit simpler for the end-user, so that's why I decided to fix the issue myself.",
"No problem Mario I didn't know that was where that value was recorded so I learnt something :smiley: I just wanted to get a local version working, of course you should implement whatever fix is best for HF. Yes can confirm this fixes it too. Thanks!",
"Hello @mariosasko \r\n\r\nHas this fix shown up in the 2.10.1 version of huggingface datasets?"
] | 1,631,729,810,000 | 1,679,986,218,000 | 1,635,500,544,000 | NONE | null | null | null | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2924/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2924/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2923/comments | https://api.github.com/repos/huggingface/datasets/issues/2923/events | https://github.com/huggingface/datasets/issues/2923 | 997,351,590 | I_kwDODunzps47cmCm | 2,923 | Loading an autonlp dataset raises in normal mode but not in streaming mode | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Closing since autonlp dataset are now supported"
] | 1,631,727,878,000 | 1,649,758,180,000 | 1,649,758,179,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False)
## raises an error
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True)
## does not raise an error
```
## Expected results
Both calls should raise the same error
## Actual results
Call with streaming=False:
```
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 5825.42it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5/5 [00:00<00:00, 15923.71it/s]
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5/5 [00:00<00:00, 3346.88it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split
writer.write_table(table)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "splits" does not exist in table schema'
```
Call with `streaming=True`:
```
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 6000.43it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5/5 [00:00<00:00, 46916.15it/s]
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5/5 [00:00<00:00, 148734.18it/s]
```
## Environment info
- `datasets` version: 1.12.1.dev0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2923/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2922/comments | https://api.github.com/repos/huggingface/datasets/issues/2922/events | https://github.com/huggingface/datasets/pull/2922 | 997,332,662 | PR_kwDODunzps4ry6-s | 2,922 | Fix conversion of multidim arrays in list to arrow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,726,496,000 | 1,631,726,572,000 | 1,631,726,505,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2922",
"html_url": "https://github.com/huggingface/datasets/pull/2922",
"diff_url": "https://github.com/huggingface/datasets/pull/2922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2922.patch",
"merged_at": 1631726505000
} | Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation.
However in #2361 we started to keep numpy arrays in order to keep their dtypes.
It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.
In this PR I added two strategies:
- one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (more common case)
- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but we can improve it in the future if needed. A rough sketch of this fallback idea is shown below.
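To make the second strategy concrete, here is a minimal, hypothetical sketch of the fallback idea (this is **not** the code added in this PR; the helper name `as_arrow_compatible` is made up for illustration):
```python
import numpy as np
import pyarrow as pa

def as_arrow_compatible(obj):
    # Recursively break multi-dim numpy arrays down into nested lists,
    # since Arrow can only convert 1-dimensional array values.
    if isinstance(obj, np.ndarray) and obj.ndim > 1:
        return [as_arrow_compatible(sub) for sub in obj]
    if isinstance(obj, (list, tuple)):
        return [as_arrow_compatible(x) for x in obj]
    if isinstance(obj, dict):
        return {k: as_arrow_compatible(v) for k, v in obj.items()}
    return obj

data = [np.zeros((2, 2)), np.ones((2, 2))]  # list of 2-dim arrays
# pa.array(data) fails with "Can only convert 1-dimensional array values",
# while the flattened structure can be converted:
print(pa.array(as_arrow_compatible(data)).type)  # list<item: list<item: double>>
```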
Fix https://github.com/huggingface/datasets/issues/2921 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2922/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2921/comments | https://api.github.com/repos/huggingface/datasets/issues/2921/events | https://github.com/huggingface/datasets/issues/2921 | 997,325,424 | I_kwDODunzps47cfpw | 2,921 | Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,725,931,000 | 1,631,726,505,000 | 1,631,726,505,000 | MEMBER | null | null | null | This error has been introduced in https://github.com/huggingface/datasets/pull/2361
To reproduce:
```python
import numpy as np
from datasets import Dataset
d = Dataset.from_dict({"a": [np.zeros((2, 2))]})
```
raises
```python
Traceback (most recent call last):
File "playground/ttest.py", line 5, in <module>
d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch")
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 223, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__
out = pa.array(self.data, type=type)
File "pyarrow/array.pxi", line 306, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2921/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2920/comments | https://api.github.com/repos/huggingface/datasets/issues/2920/events | https://github.com/huggingface/datasets/pull/2920 | 997,323,014 | PR_kwDODunzps4ry4_u | 2,920 | Fix unwanted tqdm bar when accessing examples | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,725,751,000 | 1,631,726,304,000 | 1,631,726,304,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2920",
"html_url": "https://github.com/huggingface/datasets/pull/2920",
"diff_url": "https://github.com/huggingface/datasets/pull/2920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2920.patch",
"merged_at": 1631726303000
} | A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default
Fix #2919 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2920/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2919/comments | https://api.github.com/repos/huggingface/datasets/issues/2919/events | https://github.com/huggingface/datasets/issues/2919 | 997,127,487 | I_kwDODunzps47bvU_ | 2,919 | Unwanted progress bars when accessing examples | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"doing a patch release now :)"
] | 1,631,714,710,000 | 1,631,726,509,000 | 1,631,726,303,000 | MEMBER | null | null | null | When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples:
```python
In [1]: import datasets as ds
In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")
In [3]: d[0]
100%|ββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 3172.70it/s]
Out[3]: {'a': tensor(0)}
```
This is because the pytorch formatter calls `map_nested`, which uses progress bars.
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2919/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2919/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2918/comments | https://api.github.com/repos/huggingface/datasets/issues/2918/events | https://github.com/huggingface/datasets/issues/2918 | 997,063,347 | I_kwDODunzps47bfqz | 2,918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...",
"Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```",
"Thanks for investigating @albertvillanova ! π€ "
] | 1,631,711,167,000 | 1,638,346,500,000 | 1,638,346,500,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2918/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2917/comments | https://api.github.com/repos/huggingface/datasets/issues/2917/events | https://github.com/huggingface/datasets/issues/2917 | 997,041,658 | I_kwDODunzps47baX6 | 2,917 | windows download abnormal | {
"login": "wei1826676931",
"id": 52347799,
"node_id": "MDQ6VXNlcjUyMzQ3Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wei1826676931",
"html_url": "https://github.com/wei1826676931",
"followers_url": "https://api.github.com/users/wei1826676931/followers",
"following_url": "https://api.github.com/users/wei1826676931/following{/other_user}",
"gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions",
"organizations_url": "https://api.github.com/users/wei1826676931/orgs",
"repos_url": "https://api.github.com/users/wei1826676931/repos",
"events_url": "https://api.github.com/users/wei1826676931/events{/privacy}",
"received_events_url": "https://api.github.com/users/wei1826676931/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used",
"It is indeed an agency problem, thank you very, very much",
"Let me know if you have other questions :)\r\n\r\nClosing this issue now"
] | 1,631,709,935,000 | 1,631,812,668,000 | 1,631,812,668,000 | NONE | null | null | null | ## Describe the bug
The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. When I tried again on Linux, it downloaded normally. Why??
## Steps to reproduce the bug
Python 3.7 + Windows:

![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)
## Expected results
It can be downloaded normally.
## Actual results
It can't be downloaded.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:1.11.0
- Platform:windows
- Python version:3.7
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2917/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2916/comments | https://api.github.com/repos/huggingface/datasets/issues/2916/events | https://github.com/huggingface/datasets/pull/2916 | 997,003,661 | PR_kwDODunzps4rx5ua | 2,916 | Add OpenAI's pass@k code evaluation metric | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co./docs/datasets/loading.html?highlight=rank#distributed-setup)\r\nYou can test to spawn several processes where each process would load the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process and on process 0 it should return the result on all the references/predictions\r\n\r\nLet me know if you have questions or if I can help",
"Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages.",
"Indeed it has an issue on windows.\r\nIn your example it's supposed to output\r\n```python\r\n{'pass@1': 0.5, 'pass@2': 1.0}\r\n```\r\nbut it gets\r\n```python\r\n{'pass@1': 0.0, 'pass@2': 0.0}\r\n```\r\n\r\nI'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want",
 I'm not on my windows">
"> I'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want\r\n\r\nThat would be great - unfortunately I have no access to a windows machine at the moment. I am quite sure it is an issue within execute.py because of multiprocessing.\r\n"
] | 1,631,707,543,000 | 1,636,726,791,000 | 1,636,726,790,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2916",
"html_url": "https://github.com/huggingface/datasets/pull/2916",
"diff_url": "https://github.com/huggingface/datasets/pull/2916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2916.patch",
"merged_at": 1636726790000
} | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
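As an illustration, here is a minimal usage sketch (the argument names follow the `predictions`/`references` convention described above and should be treated as an assumption rather than the final API):

```python
from datasets import load_metric

code_eval = load_metric("code_eval")

# One problem with two candidate completions; the "reference" is a test case, not a solution.
predictions = [["def add(a, b):\n    return a + b", "def add(a, b):\n    return a - b"]]
references = ["assert add(2, 3) == 5"]

results = code_eval.compute(predictions=predictions, references=references, k=[1, 2])
print(results)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```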
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming; however, the references are not reference solutions but unit tests that check the solutions. While reference solutions are also available, they are not used. Should the naming be adapted? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2916/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2916/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2915/comments | https://api.github.com/repos/huggingface/datasets/issues/2915/events | https://github.com/huggingface/datasets/pull/2915 | 996,870,071 | PR_kwDODunzps4rxfWb | 2,915 | Fix fsspec AbstractFileSystem access | {
"login": "pierre-godard",
"id": 3969168,
"node_id": "MDQ6VXNlcjM5NjkxNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pierre-godard",
"html_url": "https://github.com/pierre-godard",
"followers_url": "https://api.github.com/users/pierre-godard/followers",
"following_url": "https://api.github.com/users/pierre-godard/following{/other_user}",
"gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions",
"organizations_url": "https://api.github.com/users/pierre-godard/orgs",
"repos_url": "https://api.github.com/users/pierre-godard/repos",
"events_url": "https://api.github.com/users/pierre-godard/events{/privacy}",
"received_events_url": "https://api.github.com/users/pierre-godard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,698,760,000 | 1,631,705,724,000 | 1,631,705,724,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2915",
"html_url": "https://github.com/huggingface/datasets/pull/2915",
"diff_url": "https://github.com/huggingface/datasets/pull/2915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2915.patch",
"merged_at": 1631705724000
} | This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2915/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2914/comments | https://api.github.com/repos/huggingface/datasets/issues/2914/events | https://github.com/huggingface/datasets/issues/2914 | 996,770,168 | I_kwDODunzps47aYF4 | 2,914 | Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets | {
"login": "pierre-godard",
"id": 3969168,
"node_id": "MDQ6VXNlcjM5NjkxNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pierre-godard",
"html_url": "https://github.com/pierre-godard",
"followers_url": "https://api.github.com/users/pierre-godard/followers",
"following_url": "https://api.github.com/users/pierre-godard/following{/other_user}",
"gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions",
"organizations_url": "https://api.github.com/users/pierre-godard/orgs",
"repos_url": "https://api.github.com/users/pierre-godard/repos",
"events_url": "https://api.github.com/users/pierre-godard/events{/privacy}",
"received_events_url": "https://api.github.com/users/pierre-godard/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Closed by #2915."
] | 1,631,692,446,000 | 1,631,724,557,000 | 1,631,724,556,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
In one of my projects, I defined a custom fsspec filesystem with an entrypoint.
My guess is that, by doing so, a module-level variable named `spec` is created in the `fsspec` module (it is created when entering a for loop, since entrypoints are defined; see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)).
As a result, `fsspec.spec`, which previously referred to the `spec` submodule, now refers to that `spec` variable.
This makes the import of `datasets` fail, because it uses `fsspec.spec`.
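For illustration, a minimal sketch of the shadowing effect described here (based on the traceback further below, not taken from the original report):

```python
import fsspec

# Without any "fsspec.specs" entrypoints registered, fsspec.spec is the submodule
# and exposes AbstractFileSystem as expected.
print(type(fsspec.spec))               # <module 'fsspec.spec' ...>
print(fsspec.spec.AbstractFileSystem)  # <class 'fsspec.spec.AbstractFileSystem'>

# With an entrypoint registered (as in the pyproject.toml below), the for loop in
# fsspec/__init__.py leaves its loop variable `spec` bound at module level, so
# fsspec.spec then refers to an EntryPoint object instead of the submodule, and the
# second print above would raise:
# AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem'
```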
## Steps to reproduce the bug
I could reproduce the bug with a dummy poetry project.
Here is the pyproject.toml:
```toml
[tool.poetry]
name = "debug-datasets"
version = "0.1.0"
description = ""
authors = ["Pierre Godard"]
[tool.poetry.dependencies]
python = "^3.8"
datasets = "^1.11.0"
[tool.poetry.dev-dependencies]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.plugins."fsspec.specs"]
"file2" = "fsspec.implementations.local.LocalFileSystem"
```
The only other file being a `debug_datasets/__init__.py` empty file.
The overall structure of the project is as follows:
```
.
├── pyproject.toml
└── debug_datasets
    └── __init__.py
```
Then, within the project folder run:
```
poetry install
poetry run python
```
And in the python interpreter, try to import `datasets`:
```
import datasets
```
## Expected results
The import should run successfully.
## Actual results
Here is the trace of the error I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module>
from .filesystems import extract_path_from_uri, is_remote_filesystem
File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module>
def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:
AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem'
```
## Suggested fix
`datasets/filesystems/__init__.py`, line 30, replace:
```
def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:
```
by:
```
def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool:
```
I will come up with a PR soon if this effectively solves the issue.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: WSL2 (Ubuntu 20.04.1 LTS)
- Python version: 3.8.5
- PyArrow version: 5.0.0
- `fsspec` version: 2021.8.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2914/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2913/comments | https://api.github.com/repos/huggingface/datasets/issues/2913/events | https://github.com/huggingface/datasets/issues/2913 | 996,436,368 | I_kwDODunzps47ZGmQ | 2,913 | timit_asr dataset only includes one text phrase | {
"login": "margotwagner",
"id": 39107794,
"node_id": "MDQ6VXNlcjM5MTA3Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/39107794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/margotwagner",
"html_url": "https://github.com/margotwagner",
"followers_url": "https://api.github.com/users/margotwagner/followers",
"following_url": "https://api.github.com/users/margotwagner/following{/other_user}",
"gists_url": "https://api.github.com/users/margotwagner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/margotwagner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/margotwagner/subscriptions",
"organizations_url": "https://api.github.com/users/margotwagner/orgs",
"repos_url": "https://api.github.com/users/margotwagner/repos",
"events_url": "https://api.github.com/users/margotwagner/events{/privacy}",
"received_events_url": "https://api.github.com/users/margotwagner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)",
"Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1"
] | 1,631,653,567,000 | 1,631,693,119,000 | 1,631,693,118,000 | NONE | null | null | null | ## Describe the bug
The 'timit_asr' dataset only includes one text phrase: every example contains the transcription "Would such an act of refusal be useful?" rather than a different phrase per example.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co./blog/fine-tune-wav2vec2-english
1. Install the dataset and other packages
```python
!pip install datasets>=1.5.0
!pip install transformers==4.4.0
!pip install soundfile
!pip install jiwer
```
2. Load the dataset
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
```
3. Remove columns that we don't want
```python
timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"])
```
4. Write a short function to display some random samples of the dataset.
```python
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))
show_random_elements(timit["train"].remove_columns(["file"]))
```
## Expected results
10 random different transcription phrases.
## Actual results
10 of the same transcription phrase "Would such an act of refusal be useful?"
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: not listed
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2913/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2912/comments | https://api.github.com/repos/huggingface/datasets/issues/2912/events | https://github.com/huggingface/datasets/pull/2912 | 996,256,005 | PR_kwDODunzps4rvhgp | 2,912 | Update link to Blog in docs footer | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,640,194,000 | 1,631,692,763,000 | 1,631,692,763,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2912",
"html_url": "https://github.com/huggingface/datasets/pull/2912",
"diff_url": "https://github.com/huggingface/datasets/pull/2912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2912.patch",
"merged_at": 1631692763000
} | Update link. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2912/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2911/comments | https://api.github.com/repos/huggingface/datasets/issues/2911/events | https://github.com/huggingface/datasets/pull/2911 | 996,202,598 | PR_kwDODunzps4rvW7Y | 2,911 | Fix exception chaining | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,636,369,000 | 1,631,804,684,000 | 1,631,804,684,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2911",
"html_url": "https://github.com/huggingface/datasets/pull/2911",
"diff_url": "https://github.com/huggingface/datasets/pull/2911.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2911.patch",
"merged_at": 1631804684000
} | Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2911/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2910/comments | https://api.github.com/repos/huggingface/datasets/issues/2910/events | https://github.com/huggingface/datasets/pull/2910 | 996,149,632 | PR_kwDODunzps4rvL9N | 2,910 | feat: 🎸 pass additional arguments to get private configs + info | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Included in https://github.com/huggingface/datasets/pull/2906"
] | 1,631,633,059,000 | 1,631,722,749,000 | 1,631,722,746,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2910",
"html_url": "https://github.com/huggingface/datasets/pull/2910",
"diff_url": "https://github.com/huggingface/datasets/pull/2910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2910.patch",
"merged_at": null
} | `use_auth_token` can now be passed to the functions to get the configs
or infos of private datasets on the hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2910/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2909/comments | https://api.github.com/repos/huggingface/datasets/issues/2909/events | https://github.com/huggingface/datasets/pull/2909 | 996,002,180 | PR_kwDODunzps4rutdo | 2,909 | fix anli splits | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,625,035,000 | 1,634,124,469,000 | 1,634,124,469,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2909",
"html_url": "https://github.com/huggingface/datasets/pull/2909",
"diff_url": "https://github.com/huggingface/datasets/pull/2909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2909.patch",
"merged_at": null
} | I can't run the tests for the dummy data; I'm facing this error:
`ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'.
tests/conftest.py:10: in <module>
from datasets import config
E ImportError: cannot import name 'config' from 'datasets' (unknown location)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2909/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2908/comments | https://api.github.com/repos/huggingface/datasets/issues/2908/events | https://github.com/huggingface/datasets/pull/2908 | 995,970,612 | PR_kwDODunzps4rumwW | 2,908 | Update Zenodo metadata with creator names and affiliation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,623,177,000 | 1,631,629,765,000 | 1,631,629,765,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2908",
"html_url": "https://github.com/huggingface/datasets/pull/2908",
"diff_url": "https://github.com/huggingface/datasets/pull/2908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2908.patch",
"merged_at": 1631629765000
} | This PR helps in prefilling author data when automatically generating the DOI after each release. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2908/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2907/comments | https://api.github.com/repos/huggingface/datasets/issues/2907/events | https://github.com/huggingface/datasets/pull/2907 | 995,968,152 | PR_kwDODunzps4rumOy | 2,907 | add story_cloze dataset | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Will create a new one, this one seems to be missed up. "
] | 1,631,623,013,000 | 1,633,729,302,000 | 1,633,729,301,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2907",
"html_url": "https://github.com/huggingface/datasets/pull/2907",
"diff_url": "https://github.com/huggingface/datasets/pull/2907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2907.patch",
"merged_at": null
} | @lhoestq I have spent some time on this, but I still can't manage to correctly test the dummy_data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2907/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2907/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2906/comments | https://api.github.com/repos/huggingface/datasets/issues/2906/events | https://github.com/huggingface/datasets/pull/2906 | 995,962,905 | PR_kwDODunzps4rulH- | 2,906 | feat: 🎸 add a function to get a dataset config's split names | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is"
] | 1,631,622,682,000 | 1,633,341,338,000 | 1,633,341,337,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2906",
"html_url": "https://github.com/huggingface/datasets/pull/2906",
"diff_url": "https://github.com/huggingface/datasets/pull/2906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2906.patch",
"merged_at": 1633341337000
} | Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
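For context, here is a minimal usage sketch of the new helper (the exact function and argument names are assumptions based on this PR's title and description, not a confirmed API):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# Public dataset: list the configs, then the splits of one config.
configs = get_dataset_config_names("glue")        # e.g. ['cola', 'sst2', ...]
splits = get_dataset_split_names("glue", "cola")  # e.g. ['train', 'validation', 'test']
print(configs, splits)

# Private dataset on the Hub: pass an auth token through.
# get_dataset_split_names("user/private_dataset", use_auth_token=True)
```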
Questions:
- [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?
-> no: reverted
- [x] Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
-> yes: added | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2906/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2906/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2905/comments | https://api.github.com/repos/huggingface/datasets/issues/2905/events | https://github.com/huggingface/datasets/pull/2905 | 995,843,964 | PR_kwDODunzps4ruL5X | 2,905 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,614,577,000 | 1,631,622,337,000 | 1,631,622,337,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2905",
"html_url": "https://github.com/huggingface/datasets/pull/2905",
"diff_url": "https://github.com/huggingface/datasets/pull/2905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2905.patch",
"merged_at": 1631622337000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2905/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2904/comments | https://api.github.com/repos/huggingface/datasets/issues/2904/events | https://github.com/huggingface/datasets/issues/2904 | 995,814,222 | I_kwDODunzps47WutO | 2,904 | FORCE_REDOWNLOAD does not work | {
"login": "anoopkatti",
"id": 5278299,
"node_id": "MDQ6VXNlcjUyNzgyOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5278299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anoopkatti",
"html_url": "https://github.com/anoopkatti",
"followers_url": "https://api.github.com/users/anoopkatti/followers",
"following_url": "https://api.github.com/users/anoopkatti/following{/other_user}",
"gists_url": "https://api.github.com/users/anoopkatti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anoopkatti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anoopkatti/subscriptions",
"organizations_url": "https://api.github.com/users/anoopkatti/orgs",
"repos_url": "https://api.github.com/users/anoopkatti/repos",
"events_url": "https://api.github.com/users/anoopkatti/events{/privacy}",
"received_events_url": "https://api.github.com/users/anoopkatti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue.",
"Facing the same issue, is there any way to overtake this issue until it will be fixed? ",
"You can clear your extraction cache in the meantime (by default at `~/.cache/huggingface/datasets/downloads/extracted`)"
] | 1,631,612,726,000 | 1,633,513,039,000 | null | NONE | null | null | null | ## Describe the bug
With GenerateMode.FORCE_REDOWNLOAD, the documentation says
+------------------------------------+-----------+---------+
| | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD` | Fresh | Fresh |
+------------------------------------+-----------+---------+
However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset, GenerateMode
pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False)
ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)
print(ee)
```
## Expected results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numerals'],
num_rows: 10
})
## Actual results
Dataset({
features: ['numbers'],
num_rows: 5
})
Dataset({
features: ['numbers'],
num_rows: 5
})
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2904/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2903/comments | https://api.github.com/repos/huggingface/datasets/issues/2903/events | https://github.com/huggingface/datasets/pull/2903 | 995,715,191 | PR_kwDODunzps4rtxxV | 2,903 | Fix xpathopen to accept positional arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"thanks!"
] | 1,631,606,570,000 | 1,631,609,481,000 | 1,631,608,847,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2903",
"html_url": "https://github.com/huggingface/datasets/pull/2903",
"diff_url": "https://github.com/huggingface/datasets/pull/2903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2903.patch",
"merged_at": 1631608847000
} | Fix `xpathopen()` so that it also accepts positional arguments.
Fix #2901. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2903/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2902/comments | https://api.github.com/repos/huggingface/datasets/issues/2902/events | https://github.com/huggingface/datasets/issues/2902 | 995,254,216 | MDU6SXNzdWU5OTUyNTQyMTY= | 2,902 | Add WIT Dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"@hassiahk is working on it #2810 ",
"WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/",
 @hassiahk is working">
"> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress. Just linked the PR as well for visibility. ",
"Hey folks, we are now hosting the merged pixel values + embeddings + metadata ourselves. I gave it a try - [nateraw/wit](https://huggingface.co./datasets/nateraw/wit)\r\n\r\n**⚠️ - Make sure you add `streaming=True` unless you're prepared to download 400GB of data!**\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/wit', streaming=True)\r\nexample = next(iter(ds))\r\n```",
"Hi! `datasets` now hosts two versions of the WIT dataset:\r\n* [`google/wit`](https://huggingface.co./datasets/google/wit): Google's version with the image URLs\r\n* [`wikimedia/wit_base`](https://huggingface.co./datasets/wikimedia/wit_base): Wikimedia's version with the images + ResNet embeddings, but with less data than Google's version"
] | 1,631,561,929,000 | 1,654,104,520,000 | 1,654,104,520,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *WIT*
- **Description:** *Wikipedia-based Image Text Dataset*
- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
](https://arxiv.org/abs/2103.01913)*
- **Data:** *https://github.com/google-research-datasets/wit*
- **Motivation:** (excerpt from their Github README.md)
> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.
> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.
> - A collection of diverse set of concepts and real world entities.
> - Brings forth challenging real-world test sets.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2902/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2901/comments | https://api.github.com/repos/huggingface/datasets/issues/2901/events | https://github.com/huggingface/datasets/issues/2901 | 995,232,844 | MDU6SXNzdWU5OTUyMzI4NDQ= | 2,901 | Incompatibility with pytest | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!"
] | 1,631,560,337,000 | 1,631,608,847,000 | 1,631,608,847,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
pytest complains about xpathopen / path.open("w")
## Steps to reproduce the bug
Create a test file, `test.py`:
```python
import datasets as ds
def load_dataset():
    ds.load_dataset("counter", split="train", streaming=True)
```
And launch it with pytest:
```bash
python -m pytest test.py
```
## Expected results
It should give something like:
```
collected 1 item
test.py . [100%]
======= 1 passed in 3.15s =======
```
## Actual results
```
============================================================================================================================= test session starts ==============================================================================================================================
platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml
plugins: anyio-3.3.1
collected 1 item
tests/queries/test_rows.py . [100%]Traceback (most recent call last):
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
return outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session
config.hook.pytest_sessionfinish(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
gen.send(outcome)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish
outcome.get_result()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set
f = path.open("w")
TypeError: xpathopen() takes 1 positional argument but 2 were given
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2901/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2900/comments | https://api.github.com/repos/huggingface/datasets/issues/2900/events | https://github.com/huggingface/datasets/pull/2900 | 994,922,580 | MDExOlB1bGxSZXF1ZXN0NzMyNzczNDkw | 2,900 | Fix null sequence encoding | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,541,308,000 | 1,631,542,663,000 | 1,631,542,662,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2900",
"html_url": "https://github.com/huggingface/datasets/pull/2900",
"diff_url": "https://github.com/huggingface/datasets/pull/2900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2900.patch",
"merged_at": 1631542662000
} | The Sequence feature encoding was failing when a `None` sequence was used in a dataset.
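For reference, this is the kind of construction that used to raise a `TypeError` (mirroring the example in the linked issue) and that should now encode correctly; a quick sanity check, not an exhaustive test:
```python
from datasets import Dataset, Features, Sequence, Value

features = Features({"a": Sequence(Value("int32"))})
data = {"a": [[0], None]}  # the None sequence previously broke encoding
dataset = Dataset.from_dict(data, features=features)  # should no longer raise after this fix
print(dataset.features)
```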
Fix https://github.com/huggingface/datasets/issues/2892 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2900/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2899/comments | https://api.github.com/repos/huggingface/datasets/issues/2899/events | https://github.com/huggingface/datasets/issues/2899 | 994,082,432 | MDU6SXNzdWU5OTQwODI0MzI= | 2,899 | Dataset | {
"login": "rcacho172",
"id": 90449239,
"node_id": "MDQ6VXNlcjkwNDQ5MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcacho172",
"html_url": "https://github.com/rcacho172",
"followers_url": "https://api.github.com/users/rcacho172/followers",
"following_url": "https://api.github.com/users/rcacho172/following{/other_user}",
"gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions",
"organizations_url": "https://api.github.com/users/rcacho172/orgs",
"repos_url": "https://api.github.com/users/rcacho172/repos",
"events_url": "https://api.github.com/users/rcacho172/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcacho172/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,631,432,333,000 | 1,631,463,135,000 | 1,631,463,135,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2899/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2898/comments | https://api.github.com/repos/huggingface/datasets/issues/2898/events | https://github.com/huggingface/datasets/issues/2898 | 994,032,814 | MDU6SXNzdWU5OTQwMzI4MTQ= | 2,898 | Hug emoji | {
"login": "Jackg-08",
"id": 90539794,
"node_id": "MDQ6VXNlcjkwNTM5Nzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/90539794?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jackg-08",
"html_url": "https://github.com/Jackg-08",
"followers_url": "https://api.github.com/users/Jackg-08/followers",
"following_url": "https://api.github.com/users/Jackg-08/following{/other_user}",
"gists_url": "https://api.github.com/users/Jackg-08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jackg-08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jackg-08/subscriptions",
"organizations_url": "https://api.github.com/users/Jackg-08/orgs",
"repos_url": "https://api.github.com/users/Jackg-08/repos",
"events_url": "https://api.github.com/users/Jackg-08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jackg-08/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,631,417,271,000 | 1,631,463,193,000 | 1,631,463,193,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2898/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2897/comments | https://api.github.com/repos/huggingface/datasets/issues/2897/events | https://github.com/huggingface/datasets/pull/2897 | 993,798,386 | MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4 | 2,897 | Add OpenAI's HumanEval dataset | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)"
] | 1,631,353,067,000 | 1,631,804,531,000 | 1,631,804,531,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2897",
"html_url": "https://github.com/huggingface/datasets/pull/2897",
"diff_url": "https://github.com/huggingface/datasets/pull/2897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2897.patch",
"merged_at": 1631804531000
} | This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unit tests to verify the solutions. This dataset is useful for evaluating code generation models. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2897/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2896/comments | https://api.github.com/repos/huggingface/datasets/issues/2896/events | https://github.com/huggingface/datasets/pull/2896 | 993,613,113 | MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3 | 2,896 | add multi-proc in `to_csv` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good",
"Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also required! "
] | 1,631,309,709,000 | 1,635,400,053,000 | 1,635,264,042,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2896",
"html_url": "https://github.com/huggingface/datasets/pull/2896",
"diff_url": "https://github.com/huggingface/datasets/pull/2896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2896.patch",
"merged_at": 1635264041000
} | This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well.
Results on my machine after benchmarking on the `ascent_kb` dataset (a ~45% improvement compared to num_proc = 1):
```
Time taken on 1 num_proc, 10000 batch_size 674.2055702209473
Time taken on 4 num_proc, 10000 batch_size 425.6553490161896
Time taken on 1 num_proc, 50000 batch_size 623.5897650718689
Time taken on 4 num_proc, 50000 batch_size 380.0402421951294
Time taken on 4 num_proc, 100000 batch_size 361.7168130874634
```
This is a WIP as writing tests is pending for this PR.
I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) approach for which I'm using `pyarrow-5.0.0`.
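For context, a rough usage sketch of the multi-proc export (the keyword arguments follow the `to_json` precedent from #2747 and may differ in the final API):
```python
from datasets import load_dataset

ds = load_dataset("ascent_kb", split="train")
ds.to_csv("ascent_kb.csv")  # single-process export
ds.to_csv("ascent_kb_fast.csv", num_proc=4, batch_size=50_000)  # multi-proc path added by this PR (sketch)
```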
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2896/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2895/comments | https://api.github.com/repos/huggingface/datasets/issues/2895/events | https://github.com/huggingface/datasets/pull/2895 | 993,462,274 | MDExOlB1bGxSZXF1ZXN0NzMxNjQ0NTY2 | 2,895 | Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast | {
"login": "arsarabi",
"id": 12345848,
"node_id": "MDQ6VXNlcjEyMzQ1ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12345848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arsarabi",
"html_url": "https://github.com/arsarabi",
"followers_url": "https://api.github.com/users/arsarabi/followers",
"following_url": "https://api.github.com/users/arsarabi/following{/other_user}",
"gists_url": "https://api.github.com/users/arsarabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arsarabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arsarabi/subscriptions",
"organizations_url": "https://api.github.com/users/arsarabi/orgs",
"repos_url": "https://api.github.com/users/arsarabi/repos",
"events_url": "https://api.github.com/users/arsarabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/arsarabi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,296,617,000 | 1,632,264,601,000 | 1,632,212,315,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2895",
"html_url": "https://github.com/huggingface/datasets/pull/2895",
"diff_url": "https://github.com/huggingface/datasets/pull/2895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2895.patch",
"merged_at": 1632212315000
} | This PR partially addresses #2252.
``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2895/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2895/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2894/comments | https://api.github.com/repos/huggingface/datasets/issues/2894/events | https://github.com/huggingface/datasets/pull/2894 | 993,375,654 | MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5 | 2,894 | Fix COUNTER dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,290,049,000 | 1,631,291,265,000 | 1,631,291,264,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2894",
"html_url": "https://github.com/huggingface/datasets/pull/2894",
"diff_url": "https://github.com/huggingface/datasets/pull/2894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2894.patch",
"merged_at": 1631291264000
} | Fix the filename that was generating a `FileNotFoundError`.
Related to #2866.
CC: @severo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2894/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2893/comments | https://api.github.com/repos/huggingface/datasets/issues/2893/events | https://github.com/huggingface/datasets/pull/2893 | 993,342,781 | MDExOlB1bGxSZXF1ZXN0NzMxNTQ0NDQz | 2,893 | add mbpp dataset | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think it's fine to have the original schema"
] | 1,631,287,650,000 | 1,631,784,942,000 | 1,631,784,942,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2893",
"html_url": "https://github.com/huggingface/datasets/pull/2893",
"diff_url": "https://github.com/huggingface/datasets/pull/2893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2893.patch",
"merged_at": 1631784942000
} | This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp) as mentioned in #2816.
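Once merged, loading should look roughly like this (a sketch; the configuration name for the sanitized version described below is an assumption, not confirmed by this PR):
```python
from datasets import load_dataset

full = load_dataset("mbpp")                    # full version
sanitized = load_dataset("mbpp", "sanitized")  # assumed config name for the sanitized version
print(full)
print(sanitized)
```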
The dataset contains two versions: a full and a sanitized one. They have slightly different schemas, and in its current state the loading preserves the original schema of each. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the originals. Since not all fields overlap, the schemas will not be exactly the same. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2893/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2892/comments | https://api.github.com/repos/huggingface/datasets/issues/2892/events | https://github.com/huggingface/datasets/issues/2892 | 993,274,572 | MDU6SXNzdWU5OTMyNzQ1NzI= | 2,892 | Error when encoding a dataset with None objects with a Sequence feature | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)"
] | 1,631,283,103,000 | 1,631,542,693,000 | 1,631,542,662,000 | MEMBER | null | null | null | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
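For comparison, the same data does encode when the schema is inferred rather than passed explicitly; a quick sanity check (sketch):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"a": [[0], None]})  # no explicit features: this works
print(dataset.features)
```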
The first snippet should instead run without error, behaving as if the `features` were not passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2892/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2891/comments | https://api.github.com/repos/huggingface/datasets/issues/2891/events | https://github.com/huggingface/datasets/pull/2891 | 993,161,984 | MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2 | 2,891 | Allow dynamic first dimension for ArrayXD | {
"login": "rpowalski",
"id": 10357417,
"node_id": "MDQ6VXNlcjEwMzU3NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10357417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpowalski",
"html_url": "https://github.com/rpowalski",
"followers_url": "https://api.github.com/users/rpowalski/followers",
"following_url": "https://api.github.com/users/rpowalski/following{/other_user}",
"gists_url": "https://api.github.com/users/rpowalski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rpowalski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rpowalski/subscriptions",
"organizations_url": "https://api.github.com/users/rpowalski/orgs",
"repos_url": "https://api.github.com/users/rpowalski/repos",
"events_url": "https://api.github.com/users/rpowalski/events{/privacy}",
"received_events_url": "https://api.github.com/users/rpowalski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas()` functionality and I found it quite difficult to implement. `PandasArrayExtensionArray` takes single np.array as an argument. It might be a bit of changes to make it work with the list of arrays. Do you mind if we exclude this work from this PR. I added an error message for the case if somebody tries to use dynamic arrays with `to_pandas`",
"@lhoestq, I just fixed all the tests. Let me know if there is anything else to add.",
"@lhoestq, any chance you had some time to check out this PR?\r\n",
"Hi ! Sorry for the delay\r\n\r\nIt looks good to me ! I think the only thing missing is the support for passing a list of numpy arrays to `map` when the first dimension is dynamic.\r\n\r\nCurrently it raises an error:\r\n```python\r\nfrom datasets import *\r\nimport numpy as np\r\n\r\nfeatures= Features({\"a\": Array3D(shape=(None, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((5,5,2)), np.zeros((2,5,2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.concatenate([a]*2)}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\nraises\r\n```python\r\nTraceback (most recent call last):\r\n File \"playground/ttest.py\", line 6, in <module>\r\n d = d.map(lambda x: {\"a\": np.concatenate([x]*2)}, input_columns=\"a\")\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 1932, in map\r\n return self._map_single(\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 426, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/fingerprint.py\", line 406, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 2317, in _map_single\r\n writer.finalize()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 443, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 312, in write_examples_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 108, in __arrow_array__\r\n storage = pa.array(self.data, type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values\r\n```\r\n\r\nI think the issue is that here we don't cover the case where self.data is a list of numpy arrays:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L104-L109\r\n\r\nWe should remove the `isinstance(self.data[0], np.ndarray)` part and add these lines to cover this case:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L112-L113",
"@lhoestq, thanks, good catch!\r\nAre you able to run this check with fixed dimension ArrayXD?\r\nfor below example\r\n```\r\nimport numpy as np\r\nfrom datasets import *\r\n\r\nfeatures = Features({\"a\": Array3D(shape=(2, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((2, 5, 2)), np.zeros((2, 5, 2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.array(a) + 1}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\n\r\nI am getting:\r\n```\r\n File \"/home/ib/datasets/src/datasets/arrow_writer.py\", line 116, in __arrow_array__\r\n if trying_type and out[0].as_py() != self.data[0]:\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```",
"Nevertheless, I tried to fix that. Let me know if that works.",
"@lhoestq, just resolved the conflicts. Let me know if there is anything left to do with this PR",
"Hi, thanks a lot for your comments.\r\nAgree, happy to contribute to this topic in future PRs",
"Hi @rpowalski, thanks for adding this feature! \r\n\r\nI wanted to check if you are still interested in documenting this, otherwise I'd be happy to help with it :)"
] | 1,631,274,772,000 | 1,637,681,593,000 | 1,635,500,237,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2891",
"html_url": "https://github.com/huggingface/datasets/pull/2891",
"diff_url": "https://github.com/huggingface/datasets/pull/2891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2891.patch",
"merged_at": 1635500237000
} | Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887).
The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays whose first dimension can vary.
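A minimal sketch of the intended behaviour, adapted from the example discussed in the comments (dynamic first dimension, fixed remaining dimensions):
```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features({"a": Array3D(shape=(None, 5, 2), dtype="int32")})
d = Dataset.from_dict({"a": [np.zeros((5, 5, 2)), np.zeros((2, 5, 2))]}, features=features)
print([np.asarray(x).shape for x in d["a"]])  # first dimensions 5 and 2
```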
@lhoestq Could you suggest how you want to extend the test suite? For now I added only very limited testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2891/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2890/comments | https://api.github.com/repos/huggingface/datasets/issues/2890/events | https://github.com/huggingface/datasets/issues/2890 | 993,074,102 | MDU6SXNzdWU5OTMwNzQxMDI= | 2,890 | 0x290B112ED1280537B24Ee6C268a004994a16e6CE | {
"login": "rcacho172",
"id": 90449239,
"node_id": "MDQ6VXNlcjkwNDQ5MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcacho172",
"html_url": "https://github.com/rcacho172",
"followers_url": "https://api.github.com/users/rcacho172/followers",
"following_url": "https://api.github.com/users/rcacho172/following{/other_user}",
"gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions",
"organizations_url": "https://api.github.com/users/rcacho172/orgs",
"repos_url": "https://api.github.com/users/rcacho172/repos",
"events_url": "https://api.github.com/users/rcacho172/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcacho172/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,631,267,477,000 | 1,631,274,329,000 | 1,631,274,329,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2890/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2889/comments | https://api.github.com/repos/huggingface/datasets/issues/2889/events | https://github.com/huggingface/datasets/issues/2889 | 992,968,382 | MDU6SXNzdWU5OTI5NjgzODI= | 2,889 | Coc | {
"login": "Bwiggity",
"id": 90444264,
"node_id": "MDQ6VXNlcjkwNDQ0MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/90444264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bwiggity",
"html_url": "https://github.com/Bwiggity",
"followers_url": "https://api.github.com/users/Bwiggity/followers",
"following_url": "https://api.github.com/users/Bwiggity/following{/other_user}",
"gists_url": "https://api.github.com/users/Bwiggity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bwiggity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bwiggity/subscriptions",
"organizations_url": "https://api.github.com/users/Bwiggity/orgs",
"repos_url": "https://api.github.com/users/Bwiggity/repos",
"events_url": "https://api.github.com/users/Bwiggity/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bwiggity/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,631,259,127,000 | 1,631,274,354,000 | 1,631,274,354,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2889/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2888/comments | https://api.github.com/repos/huggingface/datasets/issues/2888/events | https://github.com/huggingface/datasets/issues/2888 | 992,676,535 | MDU6SXNzdWU5OTI2NzY1MzU= | 2,888 | v1.11.1 release date | {
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi ! Probably 1.12 on monday :)\r\n",
"@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)"
] | 1,631,224,395,000 | 1,631,477,915,000 | 1,631,463,339,000 | NONE | null | null | null | Hello, i need to use latest features in one of my packages but there have been no new datasets release since 2 months ago.
When do you plan to publish the v1.11.1 release? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2888/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/2888/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2887/comments | https://api.github.com/repos/huggingface/datasets/issues/2887/events | https://github.com/huggingface/datasets/pull/2887 | 992,576,305 | MDExOlB1bGxSZXF1ZXN0NzMwODg4MTU3 | 2,887 | #2837 Use cache folder for lockfile | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI fail about the meteor metric is unrelated to this PR "
] | 1,631,217,356,000 | 1,633,456,702,000 | 1,633,456,702,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2887",
"html_url": "https://github.com/huggingface/datasets/pull/2887",
"diff_url": "https://github.com/huggingface/datasets/pull/2887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2887.patch",
"merged_at": 1633456702000
} | Fixes #2837
Use a cache directory to store the FileLock.
The issue was that the lock file was in a readonly folder.
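Roughly, the idea is to derive the lock path from a writable cache location instead of the (possibly read-only) data directory. A hypothetical sketch, not the actual patch:
```python
import os
from filelock import FileLock

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")  # assumed cache location
os.makedirs(cache_dir, exist_ok=True)
lock_path = os.path.join(cache_dir, "some_resource.lock")  # hypothetical lock file name
with FileLock(lock_path):
    pass  # guarded work goes here
```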
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2887/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2886/comments | https://api.github.com/repos/huggingface/datasets/issues/2886/events | https://github.com/huggingface/datasets/issues/2886 | 992,534,632 | MDU6SXNzdWU5OTI1MzQ2MzI= | 2,886 | Hj | {
"login": "Noorasri",
"id": 90416328,
"node_id": "MDQ6VXNlcjkwNDE2MzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/90416328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Noorasri",
"html_url": "https://github.com/Noorasri",
"followers_url": "https://api.github.com/users/Noorasri/followers",
"following_url": "https://api.github.com/users/Noorasri/following{/other_user}",
"gists_url": "https://api.github.com/users/Noorasri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Noorasri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Noorasri/subscriptions",
"organizations_url": "https://api.github.com/users/Noorasri/orgs",
"repos_url": "https://api.github.com/users/Noorasri/repos",
"events_url": "https://api.github.com/users/Noorasri/events{/privacy}",
"received_events_url": "https://api.github.com/users/Noorasri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,213,932,000 | 1,631,274,389,000 | 1,631,274,389,000 | NONE | null | null | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2886/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2885/comments | https://api.github.com/repos/huggingface/datasets/issues/2885/events | https://github.com/huggingface/datasets/issues/2885 | 992,160,544 | MDU6SXNzdWU5OTIxNjA1NDQ= | 2,885 | Adding an Elastic Search index to a Dataset | {
"login": "MotzWanted",
"id": 36195371,
"node_id": "MDQ6VXNlcjM2MTk1Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/36195371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MotzWanted",
"html_url": "https://github.com/MotzWanted",
"followers_url": "https://api.github.com/users/MotzWanted/followers",
"following_url": "https://api.github.com/users/MotzWanted/following{/other_user}",
"gists_url": "https://api.github.com/users/MotzWanted/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MotzWanted/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MotzWanted/subscriptions",
"organizations_url": "https://api.github.com/users/MotzWanted/orgs",
"repos_url": "https://api.github.com/users/MotzWanted/repos",
"events_url": "https://api.github.com/users/MotzWanted/events{/privacy}",
"received_events_url": "https://api.github.com/users/MotzWanted/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env",
"I face similar issue with oscar dataset on remote ealsticsearch instance. It was mainly due to timeout of batch indexing requests and I solve these by adding large request_timeout param in `search.py`\r\n\r\n```\r\n for ok, action in es.helpers.streaming_bulk(\r\n client=self.es_client,\r\n index=index_name,\r\n actions=passage_generator(),\r\n request_timeout=3600,\r\n )\r\n ```",
"Hi @MotzWanted - are there any errors in the Elasticsearch cluster logs? Since it works in your local environment and the cluster versions are different between your poetry env and your local env, it is possible that it is some difference in the cluster - either settings or the cluster being under a different load etc that has this effect, so it would be useful to see if any errors are thrown in the cluster's logs when you try to ingest. \r\nWhich elasticsearch client method is the function `add_elasticsearch_index` from your code using under the hood? Is it `helpers.bulk` or is the indexing performed using something else? You can try adding a timeout to the indexing method to see if this helps. Also, you mention that it stops at around 90% - do you know if the timeout/hanging happens always when a particular document is being indexed or does it happen randomly at around 90% completeness but on different documents?"
] | 1,631,190,099,000 | 1,634,756,231,000 | null | NONE | null | null | null | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████████████████████ | 9501/10570 [00:01<00:00, 6335.61docs/s]
No error is thrown, but the indexing breaks at ~90%.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
from elasticsearch import Elasticsearch
es = Elasticsearch()
squad = load_dataset('squad', split='validation')
index_name = "corpus"
es_config = {
"settings": {
"number_of_shards": 1,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {
"properties": {
"idx" : {"type" : "keyword"},
"title" : {"type" : "keyword"},
"text": {
"type": "text",
"analyzer": "standard",
"similarity": "BM25"
},
}
},
}
class IndexBuilder:
"""
Elastic search indexing of a corpus
"""
def __init__(
self,
*args,
#corpus : None,
dataset : squad,
index_name = str,
query = str,
config = dict,
**kwargs,
):
#instantiate HuggingFace dataset
self.dataset = dataset
#instantiate ElasticSearch config
self.config = config
self.es = Elasticsearch()
self.index_name = index_name
self.query = query
def elastic_index(self):
print(self.es.info)
self.es.indices.delete(index=self.index_name, ignore=[400, 404])
search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)
return search_index
def exact_match_method(self, index):
scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)
return scores, retrieved_examples
if __name__ == "__main__":
print(type(squad))
Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)
search_index = Index.elastic_index()
scores, examples = Index.exact_match_method(search_index)
print(scores, examples)
for name in squad.column_names:
print(type(squad[name]))
```
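One possible mitigation, adapted from the `request_timeout` suggestion in the comments (a sketch against the plain `elasticsearch` client, not the `datasets` API; values are illustrative):
```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(timeout=3600)  # raise the client-level timeout
docs = ({"_index": "corpus", "_source": {"text": t}} for t in ["some", "documents"])
helpers.bulk(es, docs, request_timeout=3600)  # per-request timeout for large bulk loads
```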
## Environment info
We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.
Poetry:
- Python version: 3.8
- PyArrow: 4.0.1
- Elasticsearch: 7.13.4
- datasets: 1.10.2
Local:
- Python version: 3.8
- PyArrow: 3.0.0
- Elasticsearch: 7.7.1
- datasets: 1.7.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2885/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2884/comments | https://api.github.com/repos/huggingface/datasets/issues/2884/events | https://github.com/huggingface/datasets/pull/2884 | 992,135,698 | MDExOlB1bGxSZXF1ZXN0NzMwNTA4MTE1 | 2,884 | Add IC, SI, ER tasks to SUPERB | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ",
"Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co./datasets/anton-l/superb_source_data_dumps/tree/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?",
"@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(",
"> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?"
] | 1,631,188,563,000 | 1,632,129,478,000 | 1,632,128,449,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2884",
"html_url": "https://github.com/huggingface/datasets/pull/2884",
"diff_url": "https://github.com/huggingface/datasets/pull/2884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2884.patch",
"merged_at": 1632128449000
} | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process, see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
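Once merged, the new configurations should be loadable roughly as follows (config names are assumed from the PR title; `si` and `er` presumably need the manual downloads described above, passed via `data_dir`):
```python
from datasets import load_dataset

ic = load_dataset("superb", "ic")
si = load_dataset("superb", "si", data_dir="path/to/VoxCeleb1")  # assumed manual-download layout
er = load_dataset("superb", "er", data_dir="path/to/IEMOCAP")    # assumed manual-download layout
```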
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co./datasets/anton-l/superb_source_data_dumps/tree/main | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2884/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2884/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2883/comments | https://api.github.com/repos/huggingface/datasets/issues/2883/events | https://github.com/huggingface/datasets/pull/2883 | 991,969,875 | MDExOlB1bGxSZXF1ZXN0NzMwMzYzNTQz | 2,883 | Fix data URLs and metadata in DocRED dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,177,734,000 | 1,631,532,271,000 | 1,631,532,271,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2883",
"html_url": "https://github.com/huggingface/datasets/pull/2883",
"diff_url": "https://github.com/huggingface/datasets/pull/2883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2883.patch",
"merged_at": 1631532270000
} | The host of the `docred` dataset has updated the `dev` data file. This PR:
- Updates the dev URL
- Updates dataset metadata
This PR also fixes the URL of the `train_distant` split, which was wrong.
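Users who already have the old files cached may need to force a fresh download once this lands, e.g. with the standard `datasets` option (not part of this PR):
```python
from datasets import load_dataset

dataset = load_dataset("docred", download_mode="force_redownload")
```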
Fix #2882. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2883/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2882/comments | https://api.github.com/repos/huggingface/datasets/issues/2882/events | https://github.com/huggingface/datasets/issues/2882 | 991,800,141 | MDU6SXNzdWU5OTE4MDAxNDE= | 2,882 | `load_dataset('docred')` results in a `NonMatchingChecksumError` | {
"login": "tmpr",
"id": 51313597,
"node_id": "MDQ6VXNlcjUxMzEzNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/51313597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tmpr",
"html_url": "https://github.com/tmpr",
"followers_url": "https://api.github.com/users/tmpr/followers",
"following_url": "https://api.github.com/users/tmpr/following{/other_user}",
"gists_url": "https://api.github.com/users/tmpr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tmpr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tmpr/subscriptions",
"organizations_url": "https://api.github.com/users/tmpr/orgs",
"repos_url": "https://api.github.com/users/tmpr/repos",
"events_url": "https://api.github.com/users/tmpr/events{/privacy}",
"received_events_url": "https://api.github.com/users/tmpr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n"
] | 1,631,166,902,000 | 1,631,532,270,000 | 1,631,532,270,000 | NONE | null | null | null | ## Describe the bug
I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.
## Steps to reproduce the bug
It is essentially just this code:
```python
import datasets
data = datasets.load_dataset('docred')
```
## Expected results
The DocRED dataset should be loaded without any problems.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-4-b1b83f25a16c> in <module>
----> 1 d = datasets.load_dataset('docred')
~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
845
846 # Download and prepare data
--> 847 builder_instance.download_and_prepare(
848 download_config=download_config,
849 download_mode=download_mode,
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
613 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
614 if not downloaded_from_gcs:
--> 615 self._download_and_prepare(
616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
617 )
~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
673 # Checksums verification
674 if verify_infos:
--> 675 verify_checksums(
676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
677 )
~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']
```
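For reference, the reply above suggests passing `ignore_verifications=True` as the usual stopgap for checksum mismatches, while noting that it is not enough here because the old Google Drive URL no longer exists; a minimal sketch of that workaround:
```python
import datasets

# Stopgap from the reply above: skip checksum/size verification.
# For this particular issue the stale Google Drive URL still has to be fixed
# in the dataset script, so this flag alone does not resolve the error.
data = datasets.load_dataset("docred", ignore_verifications=True)
```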
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 5.0.0
This error also happened on my Windows partition, after freshly installing Python 3.9 and `datasets`.
## Remarks
- I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache.
- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2882/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2881/comments | https://api.github.com/repos/huggingface/datasets/issues/2881/events | https://github.com/huggingface/datasets/pull/2881 | 991,639,142 | MDExOlB1bGxSZXF1ZXN0NzMwMDc1OTAy | 2,881 | Add BIOSSES dataset | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,147,736,000 | 1,631,542,840,000 | 1,631,542,840,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2881",
"html_url": "https://github.com/huggingface/datasets/pull/2881",
"diff_url": "https://github.com/huggingface/datasets/pull/2881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2881.patch",
"merged_at": 1631542840000
} | Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2881/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2880/comments | https://api.github.com/repos/huggingface/datasets/issues/2880/events | https://github.com/huggingface/datasets/pull/2880 | 990,877,940 | MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy | 2,880 | Extend support for streaming datasets that use pathlib.Path stem/suffix | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,090,563,000 | 1,631,193,209,000 | 1,631,193,209,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2880",
"html_url": "https://github.com/huggingface/datasets/pull/2880",
"diff_url": "https://github.com/huggingface/datasets/pull/2880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2880.patch",
"merged_at": 1631193209000
} | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`.
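A rough illustration of the idea (not the actual implementation, and the helper names below are made up): in streaming mode a "path" can be a remote URL, so stem/suffix-style helpers have to work on plain URL strings as well as on local paths:
```python
import os
from urllib.parse import urlparse

def xpath_stem(path: str) -> str:
    """Stem of a local path or URL, e.g. 'train.jsonl' for '.../train.jsonl.gz'."""
    return os.path.splitext(os.path.basename(urlparse(path).path))[0]

def xpath_suffix(path: str) -> str:
    """Suffix of a local path or URL, e.g. '.gz' for '.../train.jsonl.gz'."""
    return os.path.splitext(urlparse(path).path)[1]

assert xpath_stem("https://host/data/train.jsonl.gz") == "train.jsonl"
assert xpath_suffix("https://host/data/train.jsonl.gz") == ".gz"
```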
Related to #2876, #2874, #2866.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2880/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2879/comments | https://api.github.com/repos/huggingface/datasets/issues/2879/events | https://github.com/huggingface/datasets/issues/2879 | 990,257,404 | MDU6SXNzdWU5OTAyNTc0MDQ= | 2,879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | {
"login": "rcgale",
"id": 2279700,
"node_id": "MDQ6VXNlcjIyNzk3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2279700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcgale",
"html_url": "https://github.com/rcgale",
"followers_url": "https://api.github.com/users/rcgale/followers",
"following_url": "https://api.github.com/users/rcgale/following{/other_user}",
"gists_url": "https://api.github.com/users/rcgale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcgale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcgale/subscriptions",
"organizations_url": "https://api.github.com/users/rcgale/orgs",
"repos_url": "https://api.github.com/users/rcgale/repos",
"events_url": "https://api.github.com/users/rcgale/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcgale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that should work.\r\n\r\nOn the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co./blog/fine-tune-wav2vec2-english) with the correct version of `datasets`?",
"I just proposed a change in the blog post.\r\n\r\nI had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.\r\n\r\nI still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem (\"Error: the requested data set requires `datasets>=1.5.0`.\"). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data.",
"Also, thank you for a quick and helpful reply!"
] | 1,631,040,825,000 | 1,631,120,119,000 | 1,631,092,348,000 | NONE | null | null | null | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co./blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
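For reference, the reply above notes that the loader bug was fixed in `datasets` 1.5.0; a sketch of re-checking after upgrading (using a fresh cache directory is an assumption here, so the previously mis-parsed copy is not reused):
```python
# pip install --upgrade "datasets>=1.5.0"  # per the reply above, the fix landed in 1.5.0
from datasets import load_dataset

timit = load_dataset("timit_asr", cache_dir="./temp_fixed")  # fresh cache dir (hypothetical name)
assert len(set(timit["train"]["text"])) > 1  # transcripts should now be distinct
```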
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2879/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2878/comments | https://api.github.com/repos/huggingface/datasets/issues/2878/events | https://github.com/huggingface/datasets/issues/2878 | 990,093,316 | MDU6SXNzdWU5OTAwOTMzMTY= | 2,878 | NotADirectoryError: [WinError 267] During load_from_disk | {
"login": "Grassycup",
"id": 1875064,
"node_id": "MDQ6VXNlcjE4NzUwNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1875064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Grassycup",
"html_url": "https://github.com/Grassycup",
"followers_url": "https://api.github.com/users/Grassycup/followers",
"following_url": "https://api.github.com/users/Grassycup/following{/other_user}",
"gists_url": "https://api.github.com/users/Grassycup/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Grassycup/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Grassycup/subscriptions",
"organizations_url": "https://api.github.com/users/Grassycup/orgs",
"repos_url": "https://api.github.com/users/Grassycup/repos",
"events_url": "https://api.github.com/users/Grassycup/events{/privacy}",
"received_events_url": "https://api.github.com/users/Grassycup/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,631,027,705,000 | 1,631,027,705,000 | null | NONE | null | null | null | ## Describe the bug
Trying to load a saved dataset or dataset directory from Amazon S3 on a Windows machine fails.
Performing the same operation succeeds in a non-Windows environment (AWS SageMaker).
## Steps to reproduce the bug
```python
# Followed https://huggingface.co./docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3
from datasets import load_from_disk
from datasets.filesystems import S3FileSystem
s3_file = "output of save_to_disk"
s3_filesystem = S3FileSystem()
load_from_disk(s3_file, fs=s3_filesystem)
```
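For context, a sketch of the save/load round trip from the linked docs page that produces the S3 path used above (the bucket and key names are placeholders, and credentials are assumed to come from the standard AWS configuration):
```python
from datasets import load_dataset, load_from_disk
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem()  # credentials resolved from the environment / AWS config (assumed)

# Save a processed dataset to S3 ...
dataset = load_dataset("imdb", split="train")
dataset.save_to_disk("s3://my-bucket/output/test_output/train", fs=s3)

# ... then load it back; this is the step that fails on Windows above.
reloaded = load_from_disk("s3://my-bucket/output/test_output/train", fs=s3)
```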
## Expected results
load_from_disk succeeds without error
## Actual results
It seems to succeed in pulling the file into a Windows temp directory (the file exists on my system), but it fails to process it.
```
Exception ignored in: <finalize object at 0x26409231ce0; dead>
Traceback (most recent call last):
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
cls._rmtree(name)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
[Previous line repeated 2 more times]
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
cls._rmtree(path)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
onerror(os.scandir, path, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
Exception ignored in: <finalize object at 0x264091c7880; dead>
Traceback (most recent call last):
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup
cls._rmtree(name)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
[Previous line repeated 2 more times]
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror
cls._rmtree(path)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe
onerror(os.scandir, path, sys.exc_info())
File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe
with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] The directory name is invalid:
'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow'
```
## Environment info
- `datasets` version: 1.11.0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2878/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2877/comments | https://api.github.com/repos/huggingface/datasets/issues/2877/events | https://github.com/huggingface/datasets/issues/2877 | 990,027,249 | MDU6SXNzdWU5OTAwMjcyNDk= | 2,877 | Don't keep the dummy data folder or dataset_infos.json when resolving data files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @lhoestq I am new to huggingface datasets, I would like to work on this issue!\r\n",
"Thanks for the help :) \r\n\r\nAs mentioned in the PR, excluding files named \"dummy_data.zip\" is actually more general than excluding the files inside a \"dummy\" folder. I just did the change in the PR, I think we can merge it now"
] | 1,631,023,744,000 | 1,632,906,338,000 | 1,632,906,338,000 | MEMBER | null | null | null | When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files.
There are already a few exceptions:
- files starting with "." are ignored
- the dataset card "README.md" is ignored
- any file named "config.json" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure)
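A rough sketch (not the actual implementation) of a filename filter that applies the current exceptions above; the exclusions proposed just below could be added to the same check:
```python
from pathlib import PurePath

def is_resolvable_data_file(filepath: str) -> bool:
    """Sketch of the current resolution rules listed above."""
    path = PurePath(filepath)
    if path.name.startswith("."):  # files starting with "." are ignored
        return False
    if path.name in ("README.md", "config.json"):  # dataset card and reserved config file
        return False
    # The proposal below would also exclude anything under a "dummy" folder
    # and any "dataset_infos.json" file.
    return True
```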
However, any data files in a folder named "dummy" should be ignored as well, since they are only used to test the dataset.
Same for "dataset_infos.json", which should only be used to get the `dataset.info`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2877/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2876/comments | https://api.github.com/repos/huggingface/datasets/issues/2876/events | https://github.com/huggingface/datasets/pull/2876 | 990,001,079 | MDExOlB1bGxSZXF1ZXN0NzI4NjU3MDc2 | 2,876 | Extend support for streaming datasets that use pathlib.Path.glob | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I am thinking that ideally we should call `fs.glob()` instead...",
"Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs."
] | 1,631,022,225,000 | 1,631,267,449,000 | 1,631,267,448,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2876",
"html_url": "https://github.com/huggingface/datasets/pull/2876",
"diff_url": "https://github.com/huggingface/datasets/pull/2876.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2876.patch",
"merged_at": 1631267448000
} | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.
Related to #2874, #2866.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2876/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2876/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2875/comments | https://api.github.com/repos/huggingface/datasets/issues/2875/events | https://github.com/huggingface/datasets/issues/2875 | 989,919,398 | MDU6SXNzdWU5ODk5MTkzOTg= | 2,875 | Add Congolese Swahili speech datasets | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 1,631,016,830,000 | 1,631,016,830,000 | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Congolese Swahili speech corpora
- **Data:** https://gamayun.translatorswb.org/data/
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Also related: https://mobile.twitter.com/OktemAlp/status/1435196393631764482 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2875/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2874/comments | https://api.github.com/repos/huggingface/datasets/issues/2874/events | https://github.com/huggingface/datasets/pull/2874 | 989,685,328 | MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4 | 2,874 | Support streaming datasets that use pathlib | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```",
"@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... π
",
"No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"
] | 1,631,000,149,000 | 1,631,039,122,000 | 1,631,014,875,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2874",
"html_url": "https://github.com/huggingface/datasets/pull/2874",
"diff_url": "https://github.com/huggingface/datasets/pull/2874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2874.patch",
"merged_at": 1631014875000
} | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2874/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2874/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2873/comments | https://api.github.com/repos/huggingface/datasets/issues/2873/events | https://github.com/huggingface/datasets/pull/2873 | 989,587,695 | MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw | 2,873 | adding swedish_medical_ner | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?",
"Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset"
] | 1,630,989,893,000 | 1,631,911,657,000 | 1,631,911,657,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2873",
"html_url": "https://github.com/huggingface/datasets/pull/2873",
"diff_url": "https://github.com/huggingface/datasets/pull/2873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2873.patch",
"merged_at": null
} | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2873/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2872/comments | https://api.github.com/repos/huggingface/datasets/issues/2872/events | https://github.com/huggingface/datasets/pull/2872 | 989,453,069 | MDExOlB1bGxSZXF1ZXN0NzI4MTkzMjkz | 2,872 | adding swedish_medical_ner | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,630,965,652,000 | 1,630,989,392,000 | 1,630,989,392,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2872",
"html_url": "https://github.com/huggingface/datasets/pull/2872",
"diff_url": "https://github.com/huggingface/datasets/pull/2872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2872.patch",
"merged_at": null
} | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2872/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2871/comments | https://api.github.com/repos/huggingface/datasets/issues/2871/events | https://github.com/huggingface/datasets/issues/2871 | 989,436,088 | MDU6SXNzdWU5ODk0MzYwODg= | 2,871 | datasets.config.PYARROW_VERSION has no attribute 'major' | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.",
"Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n",
"Sorted. Thanks!",
"Reopening this. Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps://github.com/huggingface/datasets/pull/2873",
"Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci/circleci: run_dataset_script_tests_pyarrow_1\" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue."
] | 1,630,962,417,000 | 1,631,091,112,000 | 1,631,091,112,000 | CONTRIBUTOR | null | null | null | In the test_dataset_common.py script, line 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
    packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this with both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using macOS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
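A version check that tolerates both representations (the plain string returned by `datasets` 1.9/1.11 and the parsed version object used on master after the commit mentioned in the reply above), assuming the `packaging` package is available:
```python
from packaging import version

import datasets

pyarrow_version = datasets.config.PYARROW_VERSION
if not hasattr(pyarrow_version, "major"):  # plain string in datasets 1.9 / 1.11
    pyarrow_version = version.parse(str(pyarrow_version))
print(pyarrow_version.major < 3)  # works with either representation
```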
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2871/timeline | null | completed | false |