url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 49-51) | id (int64 1.2B-1.82B) | node_id (stringlengths 18-19) | number (int64 4.13k-6.08k) | title (stringlengths 1-290) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool 2 classes) | pull_request (dict) | body (stringlengths 2-33.9k ⌀) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4435/comments | https://api.github.com/repos/huggingface/datasets/issues/4435/events | https://github.com/huggingface/datasets/issues/4435 | 1,257,496,552 | I_kwDODunzps5K89_o | 4,435 | Load a local cached dataset that has been modified | {
"login": "mihail911",
"id": 2789441,
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihail911",
"html_url": "https://github.com/mihail911",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"repos_url": "https://api.github.com/users/mihail911/repos",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 2022-06-02T01:51:49 | 2022-06-02T23:59:26 | 2022-06-02T23:59:18 | NONE | null | null | null | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the original modified local cache copy of the dataset?
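For reference, the maintainers' reply quoted above suggests loading the cached Arrow file directly; a minimal sketch of that workaround (using the placeholder cache path from this report) would be:
```python
from datasets import Dataset

# Load the modified dataset straight from its cached Arrow file,
# i.e. the same path that was passed as `cache_file_name` to `map`
modified = Dataset.from_file("/path/to/cache/modified_dataset")
```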
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4435/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4434/comments | https://api.github.com/repos/huggingface/datasets/issues/4434/events | https://github.com/huggingface/datasets/pull/4434 | 1,256,207,321 | PR_kwDODunzps443mAr | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-06-01T14:53:15 | 2022-06-07T12:08:28 | 2022-06-07T09:24:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4434",
"html_url": "https://github.com/huggingface/datasets/pull/4434",
"diff_url": "https://github.com/huggingface/datasets/pull/4434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4434.patch",
"merged_at": "2022-06-07T09:24:09"
} | It seems that when a user specifies a nested _URLs structure in their dataset script, an error is raised when generating the dummy dataset.
I think the type of every element in `dummy_data_dict.values()` should be checked, because the elements may have different types.
Linked to issue #4428
PS: I am not sure whether my code fixes this issue in a proper way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4434/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added back the `[:]` and a comment to explain why this is needed. "
] | 2022-06-01T12:09:56 | 2022-06-09T10:34:54 | 2022-06-09T10:26:07 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"merged_at": "2022-06-09T10:26:06"
} | Fix #4348 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-01T09:45:30 | 2022-06-02T17:43:47 | 2022-06-02T17:35:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"merged_at": "2022-06-02T17:35:15"
} | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co./docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co./datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co./datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co./datasets/silver/personal_dialog as well"
] | 2022-06-01T01:20:40 | 2022-06-11T12:40:23 | 2022-06-11T12:31:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"merged_at": null
} | It seems that all tests pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4430/comments | https://api.github.com/repos/huggingface/datasets/issues/4430/events | https://github.com/huggingface/datasets/issues/4430 | 1,254,412,591 | I_kwDODunzps5KxNEv | 4,430 | Add ability to load newer, cleaner version of Multi-News | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.",
"@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?",
"Yes! Let me help you with more detailed instructions.\r\n\r\nIn the first step, we need to update the URLs. One of the possible dictionary structures is as follows:\r\n```python\r\n_URLs = {\r\n \"train\": {\"src\": \"https://drive.google.com/uc?export=download&id=1wHAWDOwOoQWSj7HYpyJ3Aeud8WhhaJ7P\", \"tgt\": \"https://drive.google.com/uc?export=download&id=1QVgswwhVTkd3VLCzajK6eVkcrSWEK6kq\"}\r\n \"val\": ...\r\n \"test\": ...\r\n}\r\n```\r\n\r\n(You can use this page to generate direct download links: https://sites.google.com/site/gdocs2direct/)\r\n\r\nThen we move to the `split_generators` method:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n files = dl_manager.download(_URLs)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\"src_file\": files[\"train\"][\"src\"], \"tgt_file\": files[\"train\"][\"tgt\"]},\r\n ),\r\n ... # same for val and test\r\n ]\r\n```\r\nFinally, we adjust the signature of `_generate_examples`:\r\n```python\r\ndef _generate_examples(self, src_file, tgt_file):\r\n \"\"\"Yields examples.\"\"\"\r\n with open(src_file, encoding=\"utf-8\") as src_f, open(\r\n tgt_file, encoding=\"utf-8\"\r\n ) as tgt_f:\r\n ... # the rest is the same\r\n```\r\n\r\nAnd that's it!\r\n\r\nPS: Let me know if you need help updating the dummy data and regenerating the metadata file.",
"Awesome! Thanks for the detailed help, that was straightforward with your instruction. However, I think I am being blocked by this issue: https://github.com/huggingface/datasets/issues/4428",
"Feel free to open a PR, and I can fix this manually.",
"Awsome, done in #4451!"
] | 2022-05-31T21:00:44 | 2022-06-07T17:14:44 | 2022-06-07T17:14:44 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).
Unfortunately, I don't think you can just replace the old URL with the new one, as this could lead to issues with reproducibility.
**Describe the solution you'd like**
Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.
**Describe alternatives you've considered**
Replace the current URL to the original version to the dataset with the URL to the version with fixes.
**Additional context**
I would be happy to make a PR for this. Could someone maybe point me to another dataloader that has multiple versions, so I can see how this is handled in `datasets`?
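For context, the first comment above explains that versioning is handled through Git revisions via the `revision` parameter of `load_dataset`; a minimal sketch (the revision value here is purely illustrative) would be:
```python
from datasets import load_dataset

# Pin the Multi-News loading script to a specific Git revision of the dataset repository;
# the revision string below is only an illustration, not a real tag
dataset = load_dataset("multi_news", split="train", revision="main")
```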
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4430/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 is investigating why deprecated/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder",
"@albertvillanova please check now 👍 \r\nhttps://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder\r\n\r\n<img width=\"500\" alt=\"Screenshot 2022-06-06 at 10 20 34\" src=\"https://user-images.githubusercontent.com/11827707/172123471-fab97138-c903-4a71-ab7f-c90e5e43c58f.png\">\r\n",
"Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?",
"> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected 😊 because the depreacted box is being bounded by its parent box (the box for `name` argument in the screenshot above)"
] | 2022-05-31T17:37:25 | 2022-06-08T11:40:18 | 2022-06-08T11:31:45 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"merged_at": "2022-06-08T11:31:45"
} | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4428/comments | https://api.github.com/repos/huggingface/datasets/issues/4428/events | https://github.com/huggingface/datasets/issues/4428 | 1,254,092,818 | I_kwDODunzps5Kv_AS | 4,428 | Errors when building dummy data if you use nested _URLS | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-31T16:10:57 | 2022-06-07T09:24:09 | 2022-06-07T09:24:09 | CONTRIBUTOR | null | null | null | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool,
an error is raised if you use a nested _URLS structure in your dataset script.
Traceback (most recent call last):
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
main()
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
self._autogenerate_dummy_data(
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
dataset_builder._split_generators(dl_manager)
File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
data_dir = dl_manager.download_and_extract(urls)
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
dummy_output = self.mock_download_manager.download(url_or_urls)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
return self.download_and_extract(data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
return self.create_dummy_data_dict(dummy_file, data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
## Steps to reproduce the bug
You can use my dataset script implemented here:
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py
```python
datasets_cli dummy_data datasets/personal_dialog --auto_generate
```
You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54
to
```
"train": "https://huggingface.co./datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```
before running the above script to avoid downloading the large training data.
## Expected results
The dummy data should be generated
## Actual results
An error is raised.
It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165
we only check whether the first item of dummy_data_dict.values() is a str.
However, the values in dummy_data_dict.values() may have mixed types, e.g. [str, list, list].
A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to
```python
if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```
But I don't know whether this kind of change may bring any side effects, since I am not sure about the detailed logic here.
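For illustration, a minimal standalone snippet (with made-up dummy paths) that reproduces the underlying error outside of `datasets` would be:
```python
# The dummy data dict can mix string and list values, and lists are not hashable,
# so the set() call in the check above raises the TypeError shown in the traceback.
dummy_data_dict = {
    "dev": "dummy/dev_random.jsonl.gz",
    "train": ["dummy/train_0.jsonl.gz", "dummy/train_1.jsonl.gz"],
}
first_value = next(iter(dummy_data_dict.values()))
try:
    if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
        print("all values are unique strings")
except TypeError as err:
    print(err)  # -> unhashable type: 'list'
```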
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4428/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4427/comments | https://api.github.com/repos/huggingface/datasets/issues/4427/events | https://github.com/huggingface/datasets/pull/4427 | 1,253,959,313 | PR_kwDODunzps44vyGg | 4,427 | Add HF.co for PRs/Issues for specific datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T14:31:21 | 2022-06-01T12:37:42 | 2022-06-01T12:29:02 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4427",
"html_url": "https://github.com/huggingface/datasets/pull/4427",
"diff_url": "https://github.com/huggingface/datasets/pull/4427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4427.patch",
"merged_at": "2022-06-01T12:29:02"
} | As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4427/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] | 2022-05-31T13:40:16 | 2022-06-03T16:25:25 | 2022-06-03T16:25:25 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` has different sets of columns for the different splits: the test/valid splits have an additional data column, `label_candidates`, which the train split doesn't have.
When loading such data, an exception occurs in `table.py:cast_table_to_schema` because of the mismatched columns. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T10:10:04 | 2022-06-01T14:15:30 | 2022-06-01T14:06:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"merged_at": "2022-06-01T14:06:51"
} | Related to #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4424/comments | https://api.github.com/repos/huggingface/datasets/issues/4424/events | https://github.com/huggingface/datasets/pull/4424 | 1,253,542,488 | PR_kwDODunzps44uZBD | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T08:47:45 | 2022-05-31T13:50:50 | 2022-05-31T13:42:31 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4424",
"html_url": "https://github.com/huggingface/datasets/pull/4424",
"diff_url": "https://github.com/huggingface/datasets/pull/4424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4424.patch",
"merged_at": "2022-05-31T13:42:31"
} | Fix #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4424/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! As for https://github.com/huggingface/datasets/pull/4431 please also update the licensing section in https://huggingface.co./datasets/silver/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR"
] | 2022-05-31T04:45:07 | 2022-06-11T12:40:52 | 2022-06-11T12:31:42 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"merged_at": null
} | Hi, I am adding a new dataset, MMChat.
It seems that all tests pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4422/comments | https://api.github.com/repos/huggingface/datasets/issues/4422/events | https://github.com/huggingface/datasets/issues/4422 | 1,253,146,511 | I_kwDODunzps5KsX-P | 4,422 | Cannot load timit_asr data set | {
"login": "bhaddow",
"id": 992795,
"node_id": "MDQ6VXNlcjk5Mjc5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaddow",
"html_url": "https://github.com/bhaddow",
"followers_url": "https://api.github.com/users/bhaddow/followers",
"following_url": "https://api.github.com/users/bhaddow/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions",
"organizations_url": "https://api.github.com/users/bhaddow/orgs",
"repos_url": "https://api.github.com/users/bhaddow/repos",
"events_url": "https://api.github.com/users/bhaddow/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaddow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.",
"Thanks for the quick fix!",
"@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ",
"Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.",
"Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!",
"Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names."
] | 2022-05-30T22:00:22 | 2022-06-02T06:34:05 | 2022-05-31T13:42:31 | NONE | null | null | null | ## Describe the bug
I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all.
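For context, a rough sketch of the manual extension upper-casing mentioned above (the dataset root below is a placeholder) could look like:
```python
import os

# Walk a local TIMIT copy and upper-case lowercase file extensions in place
root_dir = "/path/to/dataset"
for dirpath, _dirnames, filenames in os.walk(root_dir, topdown=False):
    for filename in filenames:
        base, ext = os.path.splitext(filename)
        if ext and ext != ext.upper():
            os.rename(
                os.path.join(dirpath, filename),
                os.path.join(dirpath, base + ext.upper()),
            )
```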
## Steps to reproduce the bug
```python
timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset")
# Sample code to reproduce the bug
```
## Expected results
The data set should load without error. It worked for me before the LDC url change.
## Actual results
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: SA1
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4422/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"login": "asivokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asivokon",
"html_url": "https://github.com/asivokon",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"repos_url": "https://api.github.com/users/asivokon/repos",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-30T19:19:40 | 2022-06-06T15:22:50 | 2022-06-06T15:22:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"merged_at": "2022-06-06T15:22:49"
} | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4420/comments | https://api.github.com/repos/huggingface/datasets/issues/4420/events | https://github.com/huggingface/datasets/issues/4420 | 1,252,739,239 | I_kwDODunzps5Kq0in | 4,420 | Metric evaluation problems in multi-node, shared file system | {
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n",
"We've finally been able to isolate the problem, it wasn't a timing problem, but rather a file locking one. \r\nThe locks produced by calling `flock` where not visible between nodes (so the master node couldn't check other node's locks nor the other way around). \r\n\r\nWe are now having issues with the pre-processing in our runner script, but are not related with the rendezvous process during the evaluation phase. We will let you know about it once we address it. \r\n\r\nOur solution to the rendezvous is as follows:\r\n- We solved the problem by calling `lockf` instead of `flock`.\r\n- We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one). \r\n\r\nWe use a shared file system between nodes using GPFS in our cluster setup. Maybe the difference between the behavior we see with respect to your usage in multi-node executions comes from that fact. Which file system scheme do you use for the multi-node executions? \r\n\r\n`lockf` seems to work in more settings than `flock`, so maybe we could write a PR so you could test it in your environment. ",
"Cool, I'm glad you managed to make evaluation work :)\r\n\r\nI'm not completely aware of the differences between lockf and flock, but I've read somewhere that flock is preferable over lockf in multithreading and multiprocessing situations. Here we definitely are in such a situation so unless it is super important I don't think we will switch to lockf",
"> * We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one).\r\n\r\nHi @panserbjorn , Can you share your `_check_all_processes_locks` function? thanks!",
"```\r\ndef _check_all_processes_locks(self):\r\n expected_lock_file_names = [\r\n os.path.join(self.data_dir, f\"{self.experiment_id}-{self.num_process}-{process_id}.arrow.lock\")\r\n for process_id in range(self.num_process)\r\n ]\r\n #for expected_lock_file_name in expected_lock_file_names: # OUR CHANGE process 0 shouldn't check its own lock\r\n for expected_lock_file_name in expected_lock_file_names[1:]:\r\n nofilelock = FileFreeLock(expected_lock_file_name)\r\n try:\r\n nofilelock.acquire(timeout=self.timeout)\r\n except Timeout:\r\n raise ValueError(\r\n f\"Expected to find locked file {expected_lock_file_name} from process {self.process_id} but it doesn't exist.\"\r\n )\r\n else:\r\n nofilelock.release()\r\n```\r\n\r\n### Changed files:\r\n- metric.py file in the datasets library \r\n- filelock.py file in the datasets/utils library. \r\n\r\n\r\nChanges we made:\r\n\r\n1. We changed the flock for lockf \r\n flock and lockf both perform a lock over a file (like the lock for writing). \r\n The difference is that flock only works in local file systems, but if you have a shared file system (like what we have in the clusters) the flock fails to “see” the lock of another node. The only disadvantage we had was that a single process couldn’t detect it’s own lock so we did the second change.\r\n2. We prevented the process 0 (which is the one that coordinates the rendezvous) from checking its own lock on its arrow because it didn't work with lockf (as stated in the previous change). \r\n3. We made a second rendezvous so that all the process had the results of the metrics (other than the loss) and not only the process 0.\r\n What happened was that only process 0 computed the metric and that didn’t present any problem if you are using the loss. However, if you are using another metric, the only process which had the information to choose the best checkpoint at evaluation time was the process 0. But since the evaluation was performed over all processes, every process except the process 0 chose a bad check point (bad meaning it wasn’t the best one) because they didn’t have the information of the metric of the best checkpoint. \r\n The consequence was that the evaluation was different from what would result if using only the best checkpoint, because each process chose a different checkpoint to run the evaluation and thus the numbers were often worse than the numbers that would be obtained if all processes choose the best checkpoint (correct one) to perform the evaluation of their samples. \r\n We performed a second rendezvous so that all processes had the same best_metric and best_model as process 0 after the evaluation cycle. \r\n",
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] | 2022-05-30T13:24:05 | 2023-07-11T09:33:18 | 2023-07-11T09:33:17 | NONE | null | null | null | ## Describe the bug
Metric evaluation fails in a multi-node setup with a shared file system, because the master process cannot find the lock files from the other nodes. (This issue was originally mentioned in the transformers repo: https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. clone [this huggingface model](https://huggingface.co./PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
Specifically for `datasets`, in the distributed setup `load_metric` is called as:
```
process_id=int(os.environ["RANK"])
num_process=int(os.environ["WORLD_SIZE"])
eval_metrics = {metric: load_metric(metric,
process_id=process_id,
num_process=num_process,
experiment_id="slurm")
for metric in data_args.eval_metrics}
```
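For reference, a minimal, self-contained sketch of the same distributed usage (the metric name, environment defaults and prediction strings below are placeholder values, not taken from the actual run): every process adds its own shard of predictions, and `compute()` performs the file-based rendezvous, returning the aggregated score on process 0 and `None` on the other processes.
```python
import os

from datasets import load_metric

# placeholder defaults; in the real run these come from the launcher (e.g. SLURM/torchrun)
process_id = int(os.environ.get("RANK", 0))
num_process = int(os.environ.get("WORLD_SIZE", 1))

wer_metric = load_metric(
    "wer",
    process_id=process_id,
    num_process=num_process,
    experiment_id="slurm",
)

# each process adds its own predictions/references
wer_metric.add_batch(predictions=["hello world"], references=["hello word"])

# compute() waits for the other processes via the .lock files in the cache dir;
# only process 0 returns the aggregated score, the others return None
score = wer_metric.compute()
print(score)
```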
## Expected results
The training should not fail due to a failure in the `Metric.compute()` step.
## Actual results
For the test I am executing, the world size is 4, with 2 GPUs on each of the 2 nodes. However, the process does not find the necessary lock files:
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
self._check_rendez_vous() # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have been resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is a problem with how I am calling `load_metric` or whether I need to make changes to the `.compute()` steps.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4420/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual",
"I thought we were supposed to move gradually from `unittest` to `pytest`..."
] | 2022-05-30T12:13:18 | 2022-09-30T16:01:37 | 2022-09-30T16:01:37 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
So this is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` on tuples rather than `assertEqual`? `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
An example of an `assertEqual` over a tuple in the 🤗 `datasets` unit tests for an `ArrowDataset` can be found at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Gradually replace the `assertEqual` statements with `assertTupleEqual` whenever the assertion is done over a Python tuple, as is already done for Python lists with `assertListEqual` rather than `assertEqual`.
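As a quick illustration (the tuple value below is made up), the proposed change in a `unittest`-style test would look like this:
```python
import unittest


class ShapeAssertionExample(unittest.TestCase):
    def test_shape(self):
        shape = (4, 3)  # e.g. the value returned by `dset.shape`
        # current style used in the test suite
        self.assertEqual(shape, (4, 3))
        # proposed tuple-specific assertion (also called internally by assertEqual)
        self.assertTupleEqual(shape, (4, 3))


if __name__ == "__main__":
    unittest.main()
```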
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertTupleEqual`, feel free to close this issue! Thanks 🤗
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4418/comments | https://api.github.com/repos/huggingface/datasets/issues/4418/events | https://github.com/huggingface/datasets/pull/4418 | 1,252,506,268 | PR_kwDODunzps44q9pG | 4,418 | Add dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-30T10:10:40 | 2022-05-30T14:58:18 | 2022-05-30T14:58:18 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4418",
"html_url": "https://github.com/huggingface/datasets/pull/4418",
"diff_url": "https://github.com/huggingface/datasets/pull/4418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4418.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4418/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4417/comments | https://api.github.com/repos/huggingface/datasets/issues/4417/events | https://github.com/huggingface/datasets/issues/4417 | 1,251,933,091 | I_kwDODunzps5Knvuj | 4,417 | how to convert a dict generator into a huggingface dataset. | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@albertvillanova @lhoestq , could you please help me on this issue. ",
"Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co./docs/datasets/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`",
"@lhoestq , hey i did as you instructed, but sadly i cannot get pass through the download_manager, as i dont have anything to download. i was skipping the ` def _split_generators(self, dl_manager):` function. but i cannot get around it. I get a `NotImplementedError: `\r\n\r\nthe following is my code for the same: \r\n\r\n\r\n\r\n```\r\nimport datasets \r\nimport functools\r\nimport glob \r\nfrom datasets import load_from_disk\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_dataset\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\[email protected]_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\n\r\n_CITATION = \"Not ready yet\"\r\n_DESCRIPTION = \"a custom seqio based mixed samples on a given temperature value, that again returns a dataset in HF dataset format well samples on the Mixture temperature\"\r\n_HOMEPAGE = \"ldcil.org\"\r\n\r\nclass CustomSeqio(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n homepage=\"https://ldcil.org\",\r\n citation=_CITATION,)\r\n\r\ndef generate_examples(self):\r\n seqio_train_list = []\r\n for lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n \r\n lang_name_list = []\r\n for lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\n seqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7)\r\n \r\n seqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n 
sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\n for id, ex in enumerate(seqio_mixture_dataset):\r\n yield id, {\"text\": ex[\"targets\"].numpy().decode()}\r\n```\r\n\r\nand i load it by:\r\n\r\n`seqio_mixture = load_dataset(\"seqio_loader\")`",
"@lhoestq , just to make things clear ... \r\n\r\nthe following is my original code, thats not in the HF dataset loading script: \r\n\r\n```\r\nimport functools\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_from_disk\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\nimport glob \r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n\r\[email protected]_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\nseqio_train_list = []\r\n\r\nfor lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n\r\nlang_name_list = []\r\nfor lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\nseqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7\r\n)\r\n\r\nseqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\nfor _, ex in zip(range(15), seqio_mixture_dataset):\r\n print(ex[\"targets\"].numpy().decode())\r\n```\r\n\r\nwhere the seqio_mixture_dataset is the generator that i wanted to be wrapped in HF dataset. \r\n\r\nalso additionally, could you please tell me how do i set the `default_rate=0.7` args where `seqio_mixture` is defined to be made as a custom option in the HF load_dataset() method,\r\n\r\nmaybe like this: \r\n`seqio_mixture_dataset = datasets.load_dataset(\"seqio_loader\",temperature=0.5)`",
"I like the idea of having `Dataset.from_iterable(iterable)` in the API. The only problem is that we also want to make this part cachable, which is tricky if `iterable` is a generator. \r\n\r\nSome resources on this issue:\r\n* https://github.com/uqfoundation/dill/issues/311\r\n* https://stackoverflow.com/questions/7180212/why-cant-generators-be-pickled\r\n* https://github.com/tonyroberts/generator_tools - python package for pickling generators; pickles bytecode, so it creates version-specific dumps",
"For the caching maybe we can have `Dataset.from_generator` as TF and pickle+hash the generator function (not the generator object itself) ?\r\n\r\nAnd then keep `Dataset.from_iterable` fo pickable objects like lists",
"@lhoestq, @mariosasko do you too have any examples where the dataset is a generator and needs to be wrapped into hf dataset ? ",
"@lhoestq, following to my previous question ... what possibly could be done in this [link1](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627404) [link2](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627593) case? do you have any ideas? ",
"@lhoestq +1 for the `Dataset.from_generator` idea.\r\n\r\nHaving thought about it, let's avoid adding `Dataset.from_iterable` to the API since dictionaries are technically iteralbles (\"iterable\" is a broad term in Python), and we already provide `Dataset.from_dict`. And for lists maybe we can add `Dataset.from_list` similar to `pa.Table.from_pylist`. WDYT?\r\n",
"Hi @StephennFernandes!\r\n\r\nTo fix the issues in the copied code, rename `generate_examples` to` _generate_examples` and add one level of indentation as this is a method of `GeneratorBasedBuilder` and define `_split_generators` as follows (again as a method of `GeneratorBasedBuilder):\r\n```python\r\n def _split_generators(self, dl_manager):\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={},\r\n ),\r\n ]\r\n```\r\n\r\nAnd if you are feeling extra adventurous, you can try to use ArrowWriter to directly create a cache file:\r\n```python\r\nfrom datasets import Dataset\r\nfrom datasets.arrow_writer import ArrowWriter\r\n\r\nwriter = ArrowWriter(path=\"path/to/cache_file.arrow\", writer_batch_size=1000)\r\n\r\nwith writer:\r\n for ex in generator:\r\n writer.write(ex) \r\n writer.finalize()\r\n\r\ndset = Dataset.from_file(\"path/to/cache_file.arrow\")\r\n```\r\n\r\n",
"I have a problem which I think is very similar: I would like to \"stream\" data to a HF Array (memory-mapped) Dataset, where the final size of the dataset is unknown, but could be much larger than what fits into memory.\r\nWhat I want to end up with is an Array Dataset which I can open using `Dataset.load_from_disk(dataset_path=\"somename\")` and use e.g. as the training set. \r\n\r\nFor this I would have thought there should be an API which allows me to open/create the dataset (and define the features etc), then write examples to the dataset, but I could not find a way to do this. \r\n\r\nI tried doing this and it looks like it works, but it feels very hacky and I am not sure if this might fail to update some of the fields in the json files which may turn out to be important:\r\n```\r\nfrom datasets import Dataset, Features, ClassLabel, Sequence, Value\r\nfrom datasets.arrow_writer import ArrowWriter \r\n# 1) define the features\r\nfeatures = Features(dict(\r\n id=Value(dtype=\"string\"),\r\n tokens=Sequence(feature=Value(dtype=\"string\")),\r\n ner_tags=Sequence(feature=ClassLabel(names=['O', 'B-corporation', 'I-corporation', 'B-creative-work', 'I-creative-work', 'B-group', 'I-group', 'B-location', 'I-location', 'B-person', 'I-person', 'B-product', 'I-product'])),\r\n))\r\n# 2) create empty dataset for examples with these features and store to disk\r\nempty = dict(\r\n id = [],\r\n tokens = [],\r\n ner_tags = [],\r\n)\r\nds = Dataset.from_dict(empty, features=features)\r\nds.save_to_disk(dataset_path=\"debug_ds1\")\r\n\r\n# 3) directly write all the examples to the arrow dataset \r\nwith ArrowWriter(path=\"debug_ds1/dataset.arrow\") as writer: \r\n writer.write(dict(id=0, tokens=[\"a\", \"b\"], ner_tags=[0, 0])) \r\n writer.write(dict(id=1, tokens=[\"x\", \"y\"], ner_tags=[1, 0])) \r\n writer.finalize() \r\n \r\nds2 = Dataset.load_from_disk(dataset_path=\"debug_ds1\")\r\nlen(ds2)\r\n```\r\nIs there a cleaner/proper way to do this?\r\n\r\nI like the sound of `Dataset.from_iterable` or `Dataset.from_generator` (should not from iterable be able to handle from generator too as all generators are iterables?) but how would I define the features for me examples there? ",
"Hi @johann-petrak! You can pass the features directly to ArrowWriter's initializer like so `ArrowWriter(..., features=features)`.\r\n\r\nAnd the reason why I prefer `Dataset.from_generator` over `Dataset.from_iterable` is mentioned in one of my previous comments.",
"@mariosasko so at the moment we still have to create a fake `Dataset` first and then use `ArrowWriter` to write an actual dataset? I'm using the latest version of `datasets` on pypi but my final file is always empty. Is there anything wrong with the code below?\r\n\r\n```python\r\n total = 0\r\n with ArrowWriter(path=str(final_data_path), features=features) as writer:\r\n for batch in loader:\r\n for traj in batch:\r\n for generator in question_generators:\r\n for xi in generator(traj):\r\n # print(f\"Question: {xi.question}, answer: {xi.answer}\")\r\n total += 1\r\n writer.write(\r\n {\r\n \"id\": f\"qa_{total}\",\r\n \"question\": xi.question,\r\n \"answer\": xi.answer,\r\n }\r\n )\r\n writer.finalize()\r\n print(f\"Total #questions = {total}\") # this prints 402\r\n```",
"This works for me if I then (actually I also close the writer: `writer.close()`) open the Arrow file as a dataset using `ds=Dataset.from_file(final_data_path)` then `ds.save_to_disk(somedir)`. The Dataset created that way contains the expected examples.",
"Oh thanks. That did the trick I believe. Shouldn't ArrowWriter have a context manager that does these operations?",
"You can just use `Dataset.from_file` to get your dataset, no need to do an extra `save_to_disk` somewhere else ;)",
"I was thinking that `save_to_disk` is necessary when one wants to re-use that dataset as a proper HF dataset later, no?\r\nAt least what I wanted to achieve is create a dataset that can be opened like any other local or remote dataset. ",
"`save_to_disk`/`load_from_disk` is indeed more general, e.g. it supports datasets that consist in several files, and saves some extra info in a dataset_info.json file (description, citation, split sizes, etc.)\r\n\r\nIf you have one single file it's fine to simply do `.from_file()`"
] | 2022-05-29T16:28:27 | 2022-09-16T14:44:19 | 2022-09-16T14:44:19 | NONE | null | null | null | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resultant output from seqio is a Python generator of dicts, which I cannot turn back into a Hugging Face dataset.
The generator yields all the samples needed for training the model, but I cannot convert it into a Hugging Face dataset.
The code looks like this:
```
for ex in seqio_data:
    print(ex["text"])
```
I need to convert `seqio_data` (a generator) into a Hugging Face dataset.
The complete seqio code is as follows:
```
import functools
import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils
TaskRegistry = seqio.TaskRegistry
def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
dataset = load_dataset(**dataset_params)
if shuffle:
if seed:
dataset = dataset.shuffle(seed=seed)
else:
dataset = dataset.shuffle()
while True:
for item in dataset[str(split)]:
yield item[column]
def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
return tf.data.Dataset.from_generator(
functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
)
@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
"""Assign the value from the dataset to target_key in key_map"""
return {**key_map, target_key: x}
dataset_name = 'oscar-corpus/OSCAR-2109'
subset= 'mr'
dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True}
dataset_shapes = None
TaskRegistry.add(
"oscar_marathi_corpus",
source=seqio.FunctionDataSource(
dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
splits=("train", "validation"),
caching_permitted=False,
num_input_examples=dataset_shapes,
),
preprocessors=[
functools.partial(
target_to_key, key_map={
"targets": None,
}, target_key="targets")],
output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
metric_fns=[]
)
dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
sequence_length=None,
split="train",
shuffle=True,
num_epochs=1,
shard_info=seqio.ShardInfo(index=0, num_shards=10),
use_cached=False,
seed=42
)
for _, ex in zip(range(5), dataset):
print(ex['targets'].numpy().decode())
```
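For reference, a minimal sketch of wrapping the mixture generator above directly (this assumes a `datasets` version that provides `Dataset.from_generator`, which was added after this issue was opened, and reuses the `dataset` variable defined above):
```python
from itertools import islice

from datasets import Dataset


def gen(n=1000):
    # the seqio source above loops forever (`while True`), so take a bounded number
    # of examples and decode the tf.string tensors into plain Python strings
    for ex in islice(iter(dataset), n):
        yield {"text": ex["targets"].numpy().decode()}


hf_dataset = Dataset.from_generator(gen)
print(hf_dataset)
```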
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4417/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4416/comments | https://api.github.com/repos/huggingface/datasets/issues/4416/events | https://github.com/huggingface/datasets/pull/4416 | 1,251,875,763 | PR_kwDODunzps44o7sF | 4,416 | Add LCCC dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)",
"@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes",
"Hi ! The dataset seems to be a duplicate of https://huggingface.co./datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)",
"> Hi ! The dataset seems to be a duplicate of https://huggingface.co./datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)\r\n\r\nOK, sorry for the inconvenience. I have closed another two PRs since these datasets are already available on huggingface.co",
"It's fine, thanks @silverriver for adding these datasets !"
] | 2022-05-29T12:27:19 | 2022-06-15T10:28:59 | 2022-06-02T09:13:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4416",
"html_url": "https://github.com/huggingface/datasets/pull/4416",
"diff_url": "https://github.com/huggingface/datasets/pull/4416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4416.patch",
"merged_at": "2022-06-02T09:13:46"
} | Hi, I am trying to add a new dataset lccc.
All tests pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4416/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4415/comments | https://api.github.com/repos/huggingface/datasets/issues/4415/events | https://github.com/huggingface/datasets/pull/4415 | 1,251,002,981 | PR_kwDODunzps44mIJk | 4,415 | Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-27T17:03:42 | 2022-06-07T12:42:25 | 2022-06-07T12:33:52 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4415",
"html_url": "https://github.com/huggingface/datasets/pull/4415",
"diff_url": "https://github.com/huggingface/datasets/pull/4415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4415.patch",
"merged_at": "2022-06-07T12:33:52"
} | Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error.
TODO:
~~- [ ] handle token + `{Audio, Image}.embed_storage`~~
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4415/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4414/comments | https://api.github.com/repos/huggingface/datasets/issues/4414/events | https://github.com/huggingface/datasets/pull/4414 | 1,250,546,888 | PR_kwDODunzps44klhY | 4,414 | Rename DatasetBuilder config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-27T09:28:02 | 2022-05-31T15:07:21 | 2022-05-31T14:58:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4414",
"html_url": "https://github.com/huggingface/datasets/pull/4414",
"diff_url": "https://github.com/huggingface/datasets/pull/4414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4414.patch",
"merged_at": "2022-05-31T14:58:51"
} | This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:
- it avoids confusion with the attribute `DatasetBuilder.name`, which is different
- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`
Another, simpler possibility could be to rename it to just `config` instead.
Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but private or related to the internals of our library.
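For illustration, a small sketch of the current naming being discussed ("glue"/"mrpc" are just example values): the keyword argument `name` selects a configuration, while the attribute `name` is the builder/dataset name, which is exactly the confusion described above.
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("glue", name="mrpc")
print(builder.name)         # "glue"  -> the builder/dataset name
print(builder.config.name)  # "mrpc"  -> the selected configuration
```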
It would have a major impact to rename it also in:
- load_dataset
- load_dataset_builder: although this could also be considered internal...
- in our CLI commands
Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, they should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release):
```
load_dataset(dataset, config,...
```
instead of
```
load_dataset(path, name,...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4414/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"login": "dgcnz",
"id": 24966039,
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgcnz",
"html_url": "https://github.com/dgcnz",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co./datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint, without success.\r\n\r\nCC: @severo ",
"Fixed!\r\n\r\nhttps://huggingface.co./datasets/ett/viewer/h1/test\r\n\r\n<img width=\"982\" alt=\"Capture d’écran 2022-06-15 à 09 30 22\" src=\"https://user-images.githubusercontent.com/1676121/173769035-a075d753-ecfc-4a43-b54b-973105d464d3.png\">\r\n"
] | 2022-05-27T02:12:35 | 2022-06-15T07:30:46 | 2022-06-15T07:30:46 | NONE | null | null | null | ### Link
https://huggingface.co./datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
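As a rough illustration of the underlying error (the timestamp value below is made up), a pandas `Timestamp` cannot be passed to `json.dumps` as-is and has to be converted first, e.g. to an ISO string:
```python
import json

import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")
# json.dumps({"start": ts})  # raises TypeError: Object of type Timestamp is not JSON serializable
print(json.dumps({"start": ts.isoformat()}))  # works once converted to a string
```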
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4412/comments | https://api.github.com/repos/huggingface/datasets/issues/4412/events | https://github.com/huggingface/datasets/pull/4412 | 1,249,490,179 | PR_kwDODunzps44hFvq | 4,412 | Skip hidden files/directories in data files resolution and `iter_files` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR (via new release) broke many transformers tests.\r\n\r\nI will try to post a summary shortly.\r\n\r\ncc: @ydshieh ",
"So now it can't handle a local path via: `--train_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json` even though it's there. it works just fine if I change the path to not have `..`\r\n\r\nYou can reproduce the original problem with:\r\n\r\n```\r\n$ cd transformers \r\n$ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --validation_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmp5o5to4k0 --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 1 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix \"translate English to Romanian: \" --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3\r\n[...]\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 656, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 346, in main\r\n raw_datasets = load_dataset(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1097, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 743, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 588, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 556, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 194, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 144, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/mnt/nvme0/code/huggingface/transformers-master/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json' at /mnt/nvme0/code/huggingface/transformers-master\r\n```",
"will apply a workaround to `transformers` tests here https://github.com/huggingface/transformers/pull/17721\r\n",
"This has been fixed with https://github.com/huggingface/datasets/pull/4505, will do a patch release tomorrow for `datasets` ;)",
"Thank you for the quick fix, @lhoestq "
] | 2022-05-26T12:10:28 | 2022-06-15T17:11:25 | 2022-06-01T13:04:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4412",
"html_url": "https://github.com/huggingface/datasets/pull/4412",
"diff_url": "https://github.com/huggingface/datasets/pull/4412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4412.patch",
"merged_at": "2022-06-01T13:04:16"
} | Fix #4115 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4412/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"🤗 This PR closes https://github.com/huggingface/datasets/issues/4398",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine 🤗 ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.",
"Hi again @albertvillanova sorry I was on leave! I'll do that ASAP :hugs:",
"@albertvillanova, does it make sense to add regression tests for `DatasetDict`? As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split which is an `ArrowDataset`.\r\n\r\nSo on, we can either:\r\n- Update first the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above), let me know if there's a better way to do that 👍🏻 ",
"I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?",
"> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?\r\n\r\nHi @lhoestq I can check now!",
"It worked indeed @lhoestq, thanks for the proposal and the review! 🤗 ",
"Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).",
"Sure @mariosasko I can prepare another PR to add those too, thanks! "
] | 2022-05-26T11:40:06 | 2022-06-14T19:05:37 | 2022-06-14T16:01:56 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"merged_at": "2022-06-14T16:01:55"
} | As explained in #4398, calling `dataset.add_faiss_index` after a sequence of `cast_column`, `map`, and `remove_columns` operations fails under certain conditions, because it looks for columns that have already been removed.
After testing some possible fixes, setting the dataset format right after removing the columns works fine, so I added a call to `.set_format` inside the `remove_columns` function.
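For illustration, a minimal sketch of the kind of call sequence involved (toy data and illustrative column names, assuming `faiss` is installed; this is not the exact reproduction from #4398):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "embeddings": [[0.0, 1.0], [1.0, 0.0]]})
# Restrict the output format to specific columns.
ds = ds.with_format("numpy", columns=["text", "embeddings"])
# Before this fix, the stored format columns were not refreshed here,
# so later operations could still reference the dropped "text" column.
ds = ds.remove_columns("text")
ds.add_faiss_index(column="embeddings")
```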
Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4410/comments | https://api.github.com/repos/huggingface/datasets/issues/4410/events | https://github.com/huggingface/datasets/pull/4410 | 1,249,148,457 | PR_kwDODunzps44f_Td | 4,410 | Remove Google Drive URL in spider dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-26T06:17:35 | 2022-05-26T06:48:42 | 2022-05-26T06:40:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4410",
"html_url": "https://github.com/huggingface/datasets/pull/4410",
"diff_url": "https://github.com/huggingface/datasets/pull/4410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4410.patch",
"merged_at": "2022-05-26T06:40:12"
} | The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
Fix #4401. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4410/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio META information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n",
"There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests/features/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests/features/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example",
"@lhoestq\r\nmaybe, if i write test code, i have to commit test_audio.py and send pr?\r\nbecause, we need to keep `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` method after my pr merged?",
"You can add your tests in this PR with the other changes you did",
"@lhoestq \r\ntest complete & commit my test_audio.py\r\n\r\nAND, some change in my code.\r\n\r\naudio.py\r\ni think \"sampling_rate\" is already Audio object initial variable. so, we don`t have to use input parameter.\r\n\r\ntest_audio.py\r\nwe can check \"PCM\" file to path (exactly, extenstion)\r\nso, test case has to know `path`. if only have `bytes`, we don`t know that is \"PCM\" or not",
"@lhoestq\r\nand, why circleci raised exception?\r\nmaybe, [repo](https://huggingface.co./api/datasets/lhoestq/_dummy?full=true) url is not found!\r\nPLZ, CHK!",
"@lhoestq\r\nhello????",
"@lhoestq \r\ntest_audio.py\r\nif we don`t use path in pcm, test-case need to be changed\r\nso, we check path just None",
"i'm merge branch already and `multiprocess` in `setup.py` but circleci error only win version\r\n![image](https://user-images.githubusercontent.com/34292279/175461714-c7d2e741-3b7b-40a3-bba9-13ce2af0055c.png)\r\nhow can i fixed it?",
"@lhoestq thx for comment!\r\ntest_audio.py test complete. it runs sucessfully\r\nand, self.get(\"sampling_rate\") -> value.get(\"sampling_rate\") changed\r\n\r\nand, some comment is not agreed to me, plz check my sub comment!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-26T04:26:36 | 2022-07-07T13:27:29 | 2022-07-07T13:16:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"merged_at": "2022-07-07T13:16:08"
} | First of all, please see #4323.
Why I cannot use {"path", "array", "sampling_rate"}: because `sf.write(format="wav")` followed by `sf.read(BytesIO)` changes my PCM data values. I think this is because WAV has a header while PCM does not.
Regarding variable naming, PCM data is of "byte" type, so the name "array" does not really fit.
So I use the scipy library and numpy (which are already Hugging Face dependencies), and, following @lhoestq's answer:
1. encode -> use the sampling_rate and the PCM bytes to build WAV-style bytes (`scipy.io.wavfile.write` to bytes)
2. byte conversion follows the fairseq-style PCM audio read in [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read with `wavfile.read`
This way does not corrupt my PCM bytes when converting to float data, and other audio types (WAV) keep working as before.
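A rough sketch of the encode step described above, assuming 16-bit little-endian PCM and a known sampling rate (the helper name is illustrative, not the actual implementation in this PR):

```python
import io

import numpy as np
from scipy.io import wavfile


def pcm_bytes_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int) -> bytes:
    # Raw PCM has no header, so the payload is interpreted directly as samples.
    samples = np.frombuffer(pcm_bytes, dtype=np.int16)
    buffer = io.BytesIO()
    # wavfile.write adds the WAV header, so standard decoders can read the result.
    wavfile.write(buffer, sampling_rate, samples)
    return buffer.getvalue()
```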
please check! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T20:32:19 | 2022-05-25T20:45:11 | 2022-05-25T20:36:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": "2022-05-25T20:36:47"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for conll2012_ontonotesv5 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ",
"I've just sent the forcing of the refresh of the preview to the new endpoint.",
"Fixed, thanks for the patience. The issue was the amount of RAM allowed to extract the first rows of the dataset was not sufficient."
] | 2022-05-25T20:18:33 | 2022-06-07T18:39:16 | 2022-06-07T18:39:16 | NONE | null | null | null | ### Link
https://huggingface.co./datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4406/comments | https://api.github.com/repos/huggingface/datasets/issues/4406/events | https://github.com/huggingface/datasets/pull/4406 | 1,248,626,622 | PR_kwDODunzps44ePLU | 4,406 | Improve language tag for PIAF dataset | {
"login": "lbourdois",
"id": 58078086,
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbourdois",
"html_url": "https://github.com/lbourdois",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-25T19:41:55 | 2022-05-27T14:51:23 | 2022-05-27T14:51:23 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4406",
"html_url": "https://github.com/huggingface/datasets/pull/4406",
"diff_url": "https://github.com/huggingface/datasets/pull/4406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4406.patch",
"merged_at": null
} | Hi,
As pointed out by @lhoestq in this discussion (https://huggingface.co./datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub.
This modification should allow better referencing since only the xx language tags are currently taken into account and not the xx-xx. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4406/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4405/comments | https://api.github.com/repos/huggingface/datasets/issues/4405/events | https://github.com/huggingface/datasets/issues/4405 | 1,248,574,087 | I_kwDODunzps5Ka7qH | 4,405 | [TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"
] | 2022-05-25T18:56:43 | 2022-06-07T14:27:20 | 2022-06-07T14:27:20 | NONE | null | null | null | ## Describe the bug
I am trying to process the [conll2012_ontonotesv5](https://huggingface.co./datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.
## Steps to reproduce the bug
```python
import os
from typing import (
    List,
    Dict,
)
from collections import (
    defaultdict,
)
from dataclasses import (
    dataclass,
)
from datasets import (
    load_dataset,
)


@dataclass
class ConllConverter:
    path: str
    name: str
    cache_dir: str

    def __post_init__(
        self,
    ):
        self.dataset = load_dataset(
            path=self.path,
            name=self.name,
            cache_dir=self.cache_dir,
        )

    def convert(
        self,
    ):
        class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature
        # label_set = list(set([
        #     label.split("-")[1] if label != "O" else label for label in class_label.names
        # ]))

        def prepare_chunk(token, entity):
            assert len(token) == len(entity)
            # Sequence length
            length = len(token)
            # Variable used
            entity_chunk = defaultdict(list)
            idx = flag = 0
            # While loop
            while idx < length:
                if entity[idx] == "O":
                    flag += 1
                    idx += 1
                else:
                    iob_tp, lab_tp = entity[idx].split("-")
                    assert iob_tp == "B"
                    idx += 1
                    while idx < length and entity[idx].startswith("I-"):
                        idx += 1
                    entity_chunk[lab_tp].append(token[flag: idx])
                    flag = idx
            entity_chunk = dict(entity_chunk)
            # for label in label_set:
            #     if label != "O" and label not in entity_chunk.keys():
            #         entity_chunk[label] = None
            return entity_chunk

        def prepare_features(
            batch: Dict[str, List],
        ) -> Dict[str, List]:
            sentence = [
                sent for doc_sent in batch["sentences"] for sent in doc_sent
            ]
            feature = {
                "sentence": list(),
            }
            for sent in sentence:
                token = sent["words"]
                entity = class_label.int2str(sent["named_entities"])
                entity_chunk = prepare_chunk(token, entity)
                sent_feat = {
                    "token": token,
                    "entity": entity,
                    "entity_chunk": entity_chunk,
                }
                feature["sentence"].append(sent_feat)
            return feature

        column_names = self.dataset.column_names["train"]
        dataset = self.dataset.map(
            function=prepare_features,
            with_indices=False,
            batched=True,
            batch_size=3,
            remove_columns=column_names,
            num_proc=1,
        )
        dataset.save_to_disk(
            dataset_dict_path=os.path.join("data", self.path, self.name)
        )


if __name__ == "__main__":
    converter = ConllConverter(
        path="conll2012_ontonotesv5",
        name="english_v4",
        cache_dir="cache",
    )
    converter.convert()
```
## Expected results
I want to use the dataset to perform an NER task and to change the label list into a {Entity Type: list of spans} format.
## Actual results
<details>
<summary>Traceback</summary>
```python
Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module>
converter.convert()
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert
dataset = self.dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map
transformed_shards[index] = async_result.get()
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
```
</details>
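A possible workaround, assuming a fixed label set as hinted by the commented-out `label_set` code above, is to make every example emit the same set of keys so all batches map to the same Arrow struct type; a small illustrative sketch (the label names are a made-up subset):

```python
# Illustrative subset; the real label set would come from class_label.names.
LABEL_SET = ["CARDINAL", "DATE", "PERSON"]


def normalize_chunks(entity_chunk):
    # Ensure every label key exists in every example, with an empty list as default.
    return {label: entity_chunk.get(label, []) for label in LABEL_SET}


print(normalize_chunks({"DATE": [["May", "25"]]}))
# {'CARDINAL': [], 'DATE': [['May', '25']], 'PERSON': []}
```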
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Ubuntu 18.04
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4405/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op.",
"@mariosasko Can we make ._fingerprint not private? seems a critical component for tracking how a model was generated to ensure reproducibility."
] | 2022-05-25T18:56:08 | 2022-09-13T15:09:30 | 2022-06-16T10:47:53 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The DatasetInfo class should have a `name` field which is the name of a dataset. Then, for a given dataset that evolves over time, the `version` can be updated while its different versions remain versions of the same dataset, identified by a unique `name`. The name could then be accessed via `dataset.name`.
**Describe alternatives you've considered**
For my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field.
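A minimal sketch of that alternative (the wrapper is hypothetical and not part of the `datasets` API; shown as a wrapper rather than a subclass purely for brevity):

```python
from dataclasses import dataclass

from datasets import Dataset, load_dataset


@dataclass
class NamedDataset:
    name: str
    dataset: Dataset


squad = NamedDataset(name="squad", dataset=load_dataset("squad", split="validation"))
print(f"Evaluating on {squad.name}")
```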
**Additional context**
My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not really needed. This has surprised me, though, as one of the advantages of a standard dataset interface is to be able to build pipelines which can be passed a dataset and separate the responsibilities of dataset loading from the train or eval pipeline.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T16:46:15 | 2022-05-31T08:33:36 | 2022-05-31T08:25:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"merged_at": "2022-05-31T08:25:02"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4402/comments | https://api.github.com/repos/huggingface/datasets/issues/4402/events | https://github.com/huggingface/datasets/pull/4402 | 1,248,078,067 | PR_kwDODunzps44cdR5 | 4,402 | Skip identical files in `push_to_hub` instead of overwriting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T13:12:51 | 2022-05-25T15:16:36 | 2022-05-25T15:08:03 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4402",
"html_url": "https://github.com/huggingface/datasets/pull/4402",
"diff_url": "https://github.com/huggingface/datasets/pull/4402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4402.patch",
"merged_at": "2022-05-25T15:08:03"
} | Skip identical files instead of overwriting them, to save bandwidth and to circumvent (user-side/server-side) errors that can arise when working with large datasets due to long-lasting HTTP connections, by letting repeated calls to `push_to_hub` resume an upload.
To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet
```
to:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet
```
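In practice the intended usage pattern might look like this sketch (the repository id is illustrative): an interrupted upload can be resumed by calling `push_to_hub` again, and shards whose fingerprinted files already exist on the Hub are skipped.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# If this call fails partway through, re-running it after this change
# skips the Parquet shards that were already uploaded.
ds.push_to_hub("my-username/imdb-copy")
```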
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4402/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4401/comments | https://api.github.com/repos/huggingface/datasets/issues/4401/events | https://github.com/huggingface/datasets/issues/4401 | 1,247,695,921 | I_kwDODunzps5KXlQx | 4,401 | "NonMatchingChecksumError" when importing 'spider' dataset | {
"login": "OmarAlaaeldein",
"id": 81417777,
"node_id": "MDQ6VXNlcjgxNDE3Nzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OmarAlaaeldein",
"html_url": "https://github.com/OmarAlaaeldein",
"followers_url": "https://api.github.com/users/OmarAlaaeldein/followers",
"following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}",
"gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions",
"organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs",
"repos_url": "https://api.github.com/users/OmarAlaaeldein/repos",
"events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}",
"received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.",
"We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 2022-05-25T07:45:07 | 2022-05-26T06:40:12 | 2022-05-26T06:40:12 | NONE | null | null | null | ## Describe the bug
When importing the 'spider' dataset [https://huggingface.co./datasets/spider], an error occurs.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('spider')
```
## Expected results
Dataset object
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Environment info
- `datasets` version: 2.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4401/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4400/comments | https://api.github.com/repos/huggingface/datasets/issues/4400/events | https://github.com/huggingface/datasets/issues/4400 | 1,247,404,237 | I_kwDODunzps5KWeDN | 4,400 | load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py. | {
"login": "cailun01",
"id": 20658907,
"node_id": "MDQ6VXNlcjIwNjU4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cailun01",
"html_url": "https://github.com/cailun01",
"followers_url": "https://api.github.com/users/cailun01/followers",
"following_url": "https://api.github.com/users/cailun01/following{/other_user}",
"gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cailun01/subscriptions",
"organizations_url": "https://api.github.com/users/cailun01/orgs",
"repos_url": "https://api.github.com/users/cailun01/repos",
"events_url": "https://api.github.com/users/cailun01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cailun01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I tried in this way.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(path=\"wikitext\", name=\"wikitext-103-v1\", split=\"train\")\r\n```"
] | 2022-05-25T03:10:44 | 2022-10-24T06:10:27 | 2022-05-25T03:26:36 | NONE | null | null | null | ## Describe the bug
Could not reach wikitext-2-raw-v1.py
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikitext-2-raw-v1")
```
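(For reference, `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a standalone dataset name, so the usual call looks like the sketch below; this does not remove the connection error reported here, it only shows the expected invocation.)

```python
from datasets import load_dataset

# "wikitext" is the dataset name; "wikitext-2-raw-v1" is one of its configurations.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
```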
## Expected results
Download `wikitext-2-raw-v1` dataset successfully.
## Actual results
```
File "load_datasets.py", line 13, in <module>
load_dataset("wikitext-2-raw-v1")
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset
**config_kwargs,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder
data_files=data_files,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory
dynamic_modules_path=dynamic_modules_path,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module
local_path = self.download_loading_script(revision)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path
download_desc=download_config.download_desc,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),))
```
I tried to download wikitext-2-raw-v1.py with Chrome and got:
![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: CentOS 7
- Python version: 3.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4400/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4399/comments | https://api.github.com/repos/huggingface/datasets/issues/4399/events | https://github.com/huggingface/datasets/issues/4399 | 1,246,948,299 | I_kwDODunzps5KUuvL | 4,399 | LocalDatasetModuleFactoryWithoutScript extracts invalid builder name | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n",
"The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n```",
"I came through the same issue , just removing the last slash in the dataset path fixed it for me, may be this repo moderators could accept this as an accepted answer atleast if this could not be integrated\r\n\r\n> The fix is:\r\n> \r\n> ```\r\n> \"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n> ```\r\n\r\n@apohllo consider making a pull request on this \r\n\r\nThanks for the amazing contributions from huggingface people !!\r\n",
"@apohllo Would you be interested in submitting a PR with the fix?",
"@mariosasko here we go:\r\n\r\nhttps://github.com/huggingface/datasets/pull/4967\r\n\r\nTBH I haven't tested it yet, but should work, since this is a basic change."
] | 2022-05-24T18:03:01 | 2022-09-12T15:30:43 | 2022-09-12T15:30:43 | CONTRIBUTOR | null | null | null | ## Describe the bug
Trying to load a local dataset raises an error indicating that the builder config has to have a name.
No error should be reported, since the call is completely valid.
## Steps to reproduce the bug
```python
load_dataset("./data/some-dataset/", name="some-name")
```
## Expected results
The dataset should be loaded.
## Actual results
```
Traceback (most recent call last):
File "train_lquad.py", line 19, in <module>
load(tokenize_target_function, tokenize_target_function, {}, tokenizer)
File "train_lquad.py", line 14, in load
dataset = load_dataset("./data/lquad/", name="lquad")
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset
builder_instance = load_dataset_builder(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__
self.config, self.config_id = self._create_builder_config(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config
raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}")
ValueError: BuilderConfig must have a name, got
```
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
The error is probably in line 795 in load.py:
```
builder_kwargs = {
"hash": hash,
"data_files": data_files,
"name": os.path.basename(self.path),
"base_path": self.path,
**builder_kwargs,
}
```
`os.path.basename` returns an empty string when the directory path ends with a trailing slash, rather than the name of the directory, as illustrated below.
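For illustration, a minimal sketch of the behaviour and of a possible normalization (the helper below is hypothetical, not the actual `datasets` code):
```python
import os

# A trailing slash makes basename return an empty string instead of the
# directory name.
assert os.path.basename("/data/some-dataset") == "some-dataset"
assert os.path.basename("/data/some-dataset/") == ""

# Hypothetical fix: normalize the path before taking the basename.
def dataset_name_from_path(path: str) -> str:
    return os.path.basename(os.path.normpath(path))

assert dataset_name_from_path("/data/some-dataset/") == "some-dataset"
```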
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4399/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4398/comments | https://api.github.com/repos/huggingface/datasets/issues/4398/events | https://github.com/huggingface/datasets/issues/4398 | 1,246,666,749 | I_kwDODunzps5KTp_9 | 4,398 | Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the issue, so the root cause of it may be around that. Sorry I cannot provide you more insights, in case I get to fix it I'll submit a PR, in the meanwhile the code that I'm using as a workaround is the following:\r\n\r\n```python\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\"))\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\nds.add_faiss_index(column=\"embeddings\")\r\n```",
"FYI the main reason I want to use `dataset.remove_columns` rather than the function inside `dataset.map` is because according to the 🤗 Datasets documentation, it's faster.\r\n\r\n\"🤗 Datasets also has a [Dataset.remove_columns()](https://huggingface.co./docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.remove_columns) method that is functionally identical, but faster, because it doesn’t copy the data of the remaining columns.\"\r\n\r\nMore information at https://huggingface.co./docs/datasets/process#map",
"Here I'm presenting all the scenarios so that you can further investigate the issue:\r\n\r\n- ✅ `cast_column` -> `map` with `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` with `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `cast_column` -> `map` -> `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n 
torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```",
"So on, I've created #4411 so as to fix the bug with `remove_columns` under certain conditions before `add_faiss_index`, which means that the scenarios not working above are already working fine."
] | 2022-05-24T14:41:34 | 2022-06-14T16:01:56 | 2022-06-14T16:01:56 | CONTRIBUTOR | null | null | null | First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue.
## Describe the bug
Calling a certain combination of operations on a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back for a previously removed column. But this only happens under certain conditions... I'll present some scenarios below!
## Steps to reproduce the bug
Assuming the following dataset named `sample.csv` with some IMDb data:
```csv
id,title,summary
1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement."
9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others."
11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder."
1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him."
```
We'll be able to reproduce the bug using the following piece of code:
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset, Value
ds = load_dataset("csv", data_files=["sample.csv"], split="train")
ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32`
ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])})
ds = ds.remove_columns(["title", "summary"])
def generate_embeddings(x):
return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()}
ds = ds.map(generate_embeddings)
ds = ds.remove_columns("inputs")
ds.add_faiss_index(column="embeddings") # It fails here!
```
The code above is an adaptation of https://huggingface.co./docs/datasets/faiss_es, for the sake of presenting the bug with a simple example.
## Expected results
Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered.
## Actual results
But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']` is raised, which makes no sense as that column was previously dropped.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4398/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4397/comments | https://api.github.com/repos/huggingface/datasets/issues/4397/events | https://github.com/huggingface/datasets/pull/4397 | 1,246,597,632 | PR_kwDODunzps44XcG3 | 4,397 | Fix dependency on dill version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-24T13:54:23 | 2022-10-26T08:45:37 | 2022-05-25T13:54:08 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4397",
"html_url": "https://github.com/huggingface/datasets/pull/4397",
"diff_url": "https://github.com/huggingface/datasets/pull/4397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4397.patch",
"merged_at": "2022-05-25T13:54:08"
} | We had to make a hotfix by pinning dill:
- #4380
because from version 0.3.5, our custom `save_function` pickling function was raising an exception:
- #4379
This PR fixes this by implementing our custom `save_function` differently depending on the installed version of dill, as sketched below.
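A minimal sketch of the version-gated idea (the function bodies are placeholders, not the actual implementation):
```python
import dill
from packaging import version

# Choose the implementation to register based on the installed dill version.
if version.parse(dill.__version__) >= version.parse("0.3.5"):
    def save_function(pickler, obj):
        ...  # logic adapted to dill >= 0.3.5 internals (placeholder)
else:
    def save_function(pickler, obj):
        ...  # original logic for dill < 0.3.5 (placeholder)
```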
CC: @anivegesana
This PR needs the following to be merged first:
- [x] #4384
- so that a circular import is fixed
It is also convenient to merge first:
- [x] #4385 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4397/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4396/comments | https://api.github.com/repos/huggingface/datasets/issues/4396/events | https://github.com/huggingface/datasets/pull/4396 | 1,245,479,399 | PR_kwDODunzps44T0Di | 4,396 | Fix URL in gem dataset for totto config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T17:16:12 | 2022-05-24T05:49:11 | 2022-05-24T05:41:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4396",
"html_url": "https://github.com/huggingface/datasets/pull/4396",
"diff_url": "https://github.com/huggingface/datasets/pull/4396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4396.patch",
"merged_at": "2022-05-24T05:40:59"
} | As commented in:
- https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372
CC: @StevenTang1998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4396/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4395/comments | https://api.github.com/repos/huggingface/datasets/issues/4395/events | https://github.com/huggingface/datasets/pull/4395 | 1,245,436,486 | PR_kwDODunzps44TrBA | 4,395 | Add Pascal VOC dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some CI fails are unrelated to your PR and fixed on master, feel free to merge master into your branch :)",
"Thanks @nateraw for the addition of this dataset.\r\n\r\nI would suggest to transfer it to the Hugging Face Hub, under a \"pascal\" organization namespace: \"pascal/voc\".\r\n\r\nWhat do you think?",
"FYI I think this dataset is also available at (internal) https://huggingface.co./datasets/HuggingFaceM4/pascal_voc",
"@lhoestq @albertvillanova what do you think best path forward is? No idea when I'll get to looking at this again, but would be nice to know plan so when I find time I can just get it done in one sitting. ",
"My (not strong) opinion on this:\r\n- as we are removing dataset scripts from GitHub, this dataset should be created directly on the Hub\r\n- I proposed doing it under some kind of \"official\" org namespace, like pascal or pascal2; other suggestions are welcome\r\n- the link given by @lhoestq might serve as inspiration for your implementation (I think yours misses data about action classification): their implementation comprises tasks: classification/detection, segmentation, action classification, person layout; it misses other tasks though\r\n\r\nWhat do you think?"
] | 2022-05-23T16:34:05 | 2022-10-03T09:39:08 | 2022-10-03T09:36:56 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4395",
"html_url": "https://github.com/huggingface/datasets/pull/4395",
"diff_url": "https://github.com/huggingface/datasets/pull/4395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4395.patch",
"merged_at": null
} | This PR adds the Pascal VOC dataset in the same way TFDS provides it. I believe we can iterate on this dataset and include more data, such as segmentation masks, in future versions, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4395/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4394/comments | https://api.github.com/repos/huggingface/datasets/issues/4394/events | https://github.com/huggingface/datasets/issues/4394 | 1,245,221,657 | I_kwDODunzps5KOJMZ | 4,394 | trainer became extremely slow after reload dataset by `load_from_disk` | {
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!",
"Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.",
"I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. It's seems that the problem is on the os's speed limit.",
"Hi ! Currently `save_to_disk` saves one big Arrow file, which causes some slow downs. This has been discussed in #3735 and we'll implement sharding pretty soon to solve this\r\n\r\nFor now you can try splitting and saving your dataset in several Arrow files. Then you can load them one by one and use `concatenate_datasets` to have your big dataset again and hopefully with a better speed"
] | 2022-05-23T14:04:37 | 2022-06-06T16:08:01 | null | NONE | null | null | null | ## Describe the bug
Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run the training script. However, after I reload the data with `load_from_disk` and start training, the speed is extremely slow: it says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card.
Since I am trying to pre-train a BERT, **my dataset is very large (29058165 rows)**
## Steps to reproduce the bug
```python
tokenized_datasets.save_to_disk(
"/pathto/dataset"
)
tokenized_datasets = load_from_disk(
"/pathto/dataset"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
eval_dataset=tokenized_datasets["validation"]
if training_args.do_eval
else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
## Expected results
Without the save and reload process, I only need about one day to run the whole script with one A100 card.
## Actual results
```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```
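(For reference, a minimal sketch of the workaround suggested in the discussion above — saving the dataset as several smaller Arrow files and reassembling them with `concatenate_datasets`; the shard count and paths are illustrative assumptions.)
```python
from datasets import load_from_disk, concatenate_datasets

num_shards = 8  # illustrative assumption

# Save the big split as several smaller Arrow files
for i in range(num_shards):
    tokenized_datasets["train"].shard(num_shards, i).save_to_disk(f"/pathto/dataset/shard_{i}")

# Reload the shards and reassemble them into a single dataset
train_dataset = concatenate_datasets(
    [load_from_disk(f"/pathto/dataset/shard_{i}") for i in range(num_shards)]
)
```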
## Environment info
```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4394/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4393/comments | https://api.github.com/repos/huggingface/datasets/issues/4393/events | https://github.com/huggingface/datasets/pull/4393 | 1,244,876,662 | PR_kwDODunzps44RxWN | 4,393 | Update CI deprecated legacy image | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:35:42 | 2022-05-23T10:08:28 | 2022-05-23T09:59:55 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4393",
"html_url": "https://github.com/huggingface/datasets/pull/4393",
"diff_url": "https://github.com/huggingface/datasets/pull/4393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4393.patch",
"merged_at": "2022-05-23T09:59:55"
} | Currently, our CI still uses a deprecated legacy Docker image:
> You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image.
This PR updates the CI to a next-generation convenience image.
Related to:
- #2955 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4393/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4392/comments | https://api.github.com/repos/huggingface/datasets/issues/4392/events | https://github.com/huggingface/datasets/pull/4392 | 1,244,859,971 | PR_kwDODunzps44RtsX | 4,392 | remove int documentation from logging docs | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:24:55 | 2022-05-23T15:16:55 | 2022-05-23T15:08:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4392",
"html_url": "https://github.com/huggingface/datasets/pull/4392",
"diff_url": "https://github.com/huggingface/datasets/pull/4392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4392.patch",
"merged_at": "2022-05-23T15:08:32"
} | Removes the `int` documentation from the [logging section](https://huggingface.co./docs/datasets/package_reference/logging_methods#levels) of the docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4392/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4391/comments | https://api.github.com/repos/huggingface/datasets/issues/4391/events | https://github.com/huggingface/datasets/pull/4391 | 1,244,839,185 | PR_kwDODunzps44RpGv | 4,391 | Refactor column mappings for question answering datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)",
"Does that mean that we need to change the metadata?",
"> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)",
"Oh good! thanks for the heads up!"
] | 2022-05-23T09:13:14 | 2022-05-24T12:57:00 | 2022-05-24T12:48:48 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4391",
"html_url": "https://github.com/huggingface/datasets/pull/4391",
"diff_url": "https://github.com/huggingface/datasets/pull/4391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4391.patch",
"merged_at": "2022-05-24T12:48:48"
} | This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.
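As a rough illustration (the exact key format is an assumption for illustration, not necessarily the one AutoTrain uses), a flattened key can be mapped back to the nested column name like this:
```python
# Hypothetical sketch: rebuild nested column names such as "answers.text"
# from flattened, underscore-separated metadata keys.
def unflatten_qa_key(key: str) -> str:
    if key.startswith("answers_"):
        return "answers." + key[len("answers_"):]
    return key

assert unflatten_qa_key("answers_text") == "answers.text"
assert unflatten_qa_key("answers_answer_start") == "answers.answer_start"
assert unflatten_qa_key("context") == "context"
```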
cc @sashavor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4391/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:11:20 | 2022-06-01T09:27:52 | 2022-06-01T09:19:25 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"merged_at": "2022-06-01T09:19:25"
} | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
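A minimal backward-compatible sketch of the idea (a generic fallback helper, not necessarily the exact code in this PR):
```python
import typing

def get_type_args(tp):
    # Python >= 3.8: use the official helper
    if hasattr(typing, "get_args"):
        return typing.get_args(tp)
    # Older Python versions: fall back to the private attribute
    return getattr(tp, "__args__", ())

print(get_type_args(typing.List[str]))  # (<class 'str'>,)
```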
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4389/comments | https://api.github.com/repos/huggingface/datasets/issues/4389/events | https://github.com/huggingface/datasets/pull/4389 | 1,244,693,690 | PR_kwDODunzps44RKMn | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T07:19:49 | 2022-05-23T10:38:26 | 2022-05-23T10:29:55 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"merged_at": "2022-05-23T10:29:55"
} | This PR fixes some URLs.
Fix #4386. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4389/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4388/comments | https://api.github.com/repos/huggingface/datasets/issues/4388/events | https://github.com/huggingface/datasets/pull/4388 | 1,244,645,158 | PR_kwDODunzps44RAG1 | 4,388 | Set builder name from module instead of class | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T06:26:35 | 2022-05-25T05:24:43 | 2022-05-25T05:16:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"merged_at": "2022-05-25T05:16:15"
} | Now the builder name attribute is set from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/directory/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
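A rough illustration of the difference (hypothetical helpers, not the actual `datasets` implementation):
```python
# Hypothetical helpers for illustration only.
def builder_name_from_class(builder_cls) -> str:
    # derived from the class name, e.g. "MyDatasetBuilder" -> "mydatasetbuilder"
    return builder_cls.__name__.lower()

def builder_name_from_module(builder_cls) -> str:
    # derived from the module (loading script) name, e.g. a builder defined in
    # my_dataset/my_dataset.py -> "my_dataset", matching the dataset directory
    return builder_cls.__module__.split(".")[-1]
```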
Fix #4381. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4388/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4387/comments | https://api.github.com/repos/huggingface/datasets/issues/4387/events | https://github.com/huggingface/datasets/issues/4387 | 1,244,147,817 | I_kwDODunzps5KKDBp | 4,387 | device/google/accessory/adk2012 - Git at Google | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-22T04:57:19 | 2022-05-23T06:36:27 | 2022-05-23T06:36:27 | NONE | null | null | null | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4387/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4386/comments | https://api.github.com/repos/huggingface/datasets/issues/4386/events | https://github.com/huggingface/datasets/issues/4386 | 1,243,965,532 | I_kwDODunzps5KJWhc | 4,386 | Bug for wiki_auto_asset_turk from GEM | {
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).",
"Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```",
"Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n",
"@StevenTang1998 fixed in:\r\n- #4396",
"Thanks!!"
] | 2022-05-21T12:31:30 | 2022-05-24T05:55:52 | 2022-05-23T10:29:55 | NONE | null | null | null | ## Describe the bug
The loading script for the wiki_auto_asset_turk config of GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare
self._download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators
dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download
downloaded_path_or_paths = map_nested(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested
mapped = [
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested
return function(data_struct)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path
output_path = get_from_cache(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4386/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4385/comments | https://api.github.com/repos/huggingface/datasets/issues/4385/events | https://github.com/huggingface/datasets/pull/4385 | 1,243,921,287 | PR_kwDODunzps44OwXF | 4,385 | Test dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.",
"Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_<filename>.py` for each `<filename>.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https://github.com/huggingface/datasets/issues/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.",
"Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)"
] | 2022-05-21T08:57:43 | 2022-05-25T08:30:13 | 2022-05-25T08:21:48 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4385",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"merged_at": "2022-05-25T08:21:48"
} | Regression test for future releases of `dill`.
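A minimal sketch of what such a regression test can look like (hedged: the exact test added by this PR lives in the test suite, and the names below are illustrative assumptions):
```python
# Hypothetical sketch of a dill regression test, not the literal test added by this PR.
from datasets.fingerprint import Hasher

def test_hashing_a_function_is_deterministic():
    func = lambda x: x + 1
    # With an incompatible dill release this kind of call raised
    # `AttributeError: module 'dill._dill' has no attribute 'stack'`.
    assert Hasher.hash(func) == Hasher.hash(func)
```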
Related to #4379. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4385/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4384/comments | https://api.github.com/repos/huggingface/datasets/issues/4384/events | https://github.com/huggingface/datasets/pull/4384 | 1,243,919,748 | PR_kwDODunzps44OwFr | 4,384 | Refactor download | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?",
"The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might be useful:\n\nhttps://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING",
"> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. "
] | 2022-05-21T08:49:24 | 2022-05-25T10:52:02 | 2022-05-25T10:43:43 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4384",
"html_url": "https://github.com/huggingface/datasets/pull/4384",
"diff_url": "https://github.com/huggingface/datasets/pull/4384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4384.patch",
"merged_at": "2022-05-25T10:43:43"
} | This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as that of "utils" (lower); putting different levels of abstraction together makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture
Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making it more and more difficult to implement enhancements.
As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860
- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:
```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
if config.DILL_VERSION < version.parse("0.3.5"):
E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```
Imports:
- datasets
- Dataset: lower level than datasets
- config: lower level than Dataset
- logger: lower level than config
- DownloadManager: !!! HIGHER level of abstraction than logger!!
Why does importing the logger require importing DownloadManager?!
- Logically, it does not make sense
- This is due to an error in the design/architecture of our library:
- To import the logger, we need to import it from `.utils.logging`
- To import `.utils.logging` we need to import `.utils`
- The import of `.utils` requires the import of all its submodules defined in `utils.__init__.py`, among them `.utils.download_manager` (see the sketch below)!
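A self-contained sketch of that coupling (hypothetical package and module names, not the actual `datasets` sources):
```python
# Build a tiny throwaway package whose utils/__init__.py imports both a low-level module (logging)
# and a high-level one (download_manager), then show that importing only the logger still runs the
# download manager as a side effect.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mylib", "utils"))
open(os.path.join(root, "mylib", "__init__.py"), "w").close()
with open(os.path.join(root, "mylib", "utils", "__init__.py"), "w") as f:
    f.write("from .download_manager import DownloadManager\nfrom .logging import get_logger\n")
with open(os.path.join(root, "mylib", "utils", "logging.py"), "w") as f:
    f.write("def get_logger(): return 'logger'\n")
with open(os.path.join(root, "mylib", "utils", "download_manager.py"), "w") as f:
    f.write("print('download_manager imported as a side effect')\nclass DownloadManager: pass\n")

sys.path.insert(0, root)
from mylib.utils.logging import get_logger  # prints the download_manager message first
```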
When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we need to import a higher-level module). Additionally, it clearly makes no sense that importing `logging` should require importing `download_manager` first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4384/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4383/comments | https://api.github.com/repos/huggingface/datasets/issues/4383/events | https://github.com/huggingface/datasets/issues/4383 | 1,243,856,981 | I_kwDODunzps5KI8BV | 4,383 | L | {
"login": "AronCodes21",
"id": 99847861,
"node_id": "U_kgDOBfOOtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AronCodes21",
"html_url": "https://github.com/AronCodes21",
"followers_url": "https://api.github.com/users/AronCodes21/followers",
"following_url": "https://api.github.com/users/AronCodes21/following{/other_user}",
"gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions",
"organizations_url": "https://api.github.com/users/AronCodes21/orgs",
"repos_url": "https://api.github.com/users/AronCodes21/repos",
"events_url": "https://api.github.com/users/AronCodes21/events{/privacy}",
"received_events_url": "https://api.github.com/users/AronCodes21/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-21T03:47:58 | 2022-05-21T19:20:13 | 2022-05-21T19:20:13 | NONE | null | null | null | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4383/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4382/comments | https://api.github.com/repos/huggingface/datasets/issues/4382/events | https://github.com/huggingface/datasets/issues/4382 | 1,243,839,783 | I_kwDODunzps5KI30n | 4,382 | First time trying | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 2022-05-21T02:15:18 | 2022-05-21T19:20:44 | 2022-05-21T19:20:44 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4382/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4381/comments | https://api.github.com/repos/huggingface/datasets/issues/4381/events | https://github.com/huggingface/datasets/issues/4381 | 1,243,478,863 | I_kwDODunzps5KHftP | 4,381 | Bug in caching 2 datasets both with the same builder class name | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`",
"Hi @NouamaneTazi, please note that after our fix:\r\n- #4388\r\n\r\nwe do not consider the class name anymore, but the name of the file where the loading builder class is implemented. "
] | 2022-05-20T18:18:03 | 2022-06-02T08:18:37 | 2022-05-25T05:16:15 | MEMBER | null | null | null | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datasets, you will get the opposite dataset loaded (the difference here is in terms of the label and label_text).
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("mteb/mtop_intent", "en")
print(dataset['train'][0])
dataset = datasets.load_dataset("mteb/mtop_domain", "en")
print(dataset['train'][0])
```
## Expected results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}
```
## Actual results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: macOS-12.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
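For reference, a hedged sketch of the workaround suggested in the comments above (class names are illustrative assumptions): give each loading script its own builder class name so the two datasets stop sharing a cache directory.
```python
# Hypothetical sketch: distinct builder class names instead of a single shared `MTOP` class.
import datasets

class MtopIntent(datasets.GeneratorBasedBuilder):  # in the mteb/mtop_intent script
    ...

class MtopDomain(datasets.GeneratorBasedBuilder):  # in the mteb/mtop_domain script
    ...
```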
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4381/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4380/comments | https://api.github.com/repos/huggingface/datasets/issues/4380/events | https://github.com/huggingface/datasets/pull/4380 | 1,243,183,054 | PR_kwDODunzps44MUz0 | 4,380 | Pin dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T13:54:19 | 2022-06-13T10:03:52 | 2022-05-20T16:33:04 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4380",
"html_url": "https://github.com/huggingface/datasets/pull/4380",
"diff_url": "https://github.com/huggingface/datasets/pull/4380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4380.patch",
"merged_at": "2022-05-20T16:33:04"
} | Hotfix #4379.
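For context, a hedged sketch of the kind of dependency pin this hotfix adds in `setup.py` (the exact constraint string is an assumption):
```python
# Hypothetical excerpt from setup.py: temporarily cap dill until the new save_function is supported.
REQUIRED_PKGS = [
    "dill<0.3.5",
]
```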
CC: @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4380/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4379/comments | https://api.github.com/repos/huggingface/datasets/issues/4379/events | https://github.com/huggingface/datasets/issues/4379 | 1,243,175,854 | I_kwDODunzps5KGVuu | 4,379 | Latest dill release raises exception | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed by:\r\n- #4380 ",
"Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```",
"@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?",
"Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.",
"Hi @anivegesana, that test is in `transformers` library:\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L449\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py ",
"@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue and it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Since datasets copies the `save_function` directly from the dill API, it stops working with the new dill version since `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps://github.com/huggingface/datasets/blob/95193ae61e92aa537d0c65d37a1fd9d2393aae89/src/datasets/utils/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\nif dill.__version__.split('.') < ['0', '3', '5']:\r\n # current save_function code inside here\r\nelse:\r\n # new save_function code inside here with the following line inserted after creating the globals\r\n globs = {k: globs[k] for k in sorted(globs.keys())} \r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).",
"Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copied-pasted the old `save_function` function from `dill`, just adding a line to make deterministic the order of global variables `globs`. \r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ",
"I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation/dill#19"
] | 2022-05-20T13:48:36 | 2022-05-21T15:53:26 | 2022-05-20T17:06:27 | MEMBER | null | null | null | ## Describe the bug
As reported by @sgugger, the latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
self.wait(timeout)
if not self.ready():
raise TimeoutError
if self._success:
return self._value
else:
> raise self._value
E TypeError: '>' not supported between instances of 'NoneType' and 'float'
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4379/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4378/comments | https://api.github.com/repos/huggingface/datasets/issues/4378/events | https://github.com/huggingface/datasets/pull/4378 | 1,242,935,373 | PR_kwDODunzps44Lf2R | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"& thank you!"
] | 2022-05-20T10:16:12 | 2022-05-24T13:50:23 | 2022-05-24T13:10:27 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4378",
"html_url": "https://github.com/huggingface/datasets/pull/4378",
"diff_url": "https://github.com/huggingface/datasets/pull/4378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4378.patch",
"merged_at": "2022-05-24T13:10:27"
} | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4378/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4377/comments | https://api.github.com/repos/huggingface/datasets/issues/4377/events | https://github.com/huggingface/datasets/pull/4377 | 1,242,746,186 | PR_kwDODunzps44K4OY | 4,377 | Fix checksum and bug in irc_disentangle dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T07:29:28 | 2022-05-20T09:34:36 | 2022-05-20T09:26:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4377",
"html_url": "https://github.com/huggingface/datasets/pull/4377",
"diff_url": "https://github.com/huggingface/datasets/pull/4377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4377.patch",
"merged_at": "2022-05-20T09:26:32"
} | There was a bug in the filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Fix partially #4376.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4377/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4376/comments | https://api.github.com/repos/huggingface/datasets/issues/4376/events | https://github.com/huggingface/datasets/issues/4376 | 1,242,218,144 | I_kwDODunzps5KCr6g | 4,376 | irc_disentagle viewer error | {
"login": "labouz",
"id": 25671683,
"node_id": "MDQ6VXNlcjI1NjcxNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/labouz",
"html_url": "https://github.com/labouz",
"followers_url": "https://api.github.com/users/labouz/followers",
"following_url": "https://api.github.com/users/labouz/following{/other_user}",
"gists_url": "https://api.github.com/users/labouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/labouz/subscriptions",
"organizations_url": "https://api.github.com/users/labouz/orgs",
"repos_url": "https://api.github.com/users/labouz/repos",
"events_url": "https://api.github.com/users/labouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/labouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks for reporting, @labouz. I'm addressing it. ",
"The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.",
"parfait!\r\nit works now, thank you 🙏 ",
"Hi there, \r\nI see this issue is closed, but I am wondering if there is any chance the source files have been moved since this fix? I am stumbling into the same NonMatchingChecksumError noted by lebouz's second post once 118MB of data has been downloaded, and have tried the solutions noted in the various fix checksum posts linked here and in other posts regarding passing in \"reuse_cache_if_exists\" to download_mode. Any suggestions? Thank you!\r\n\r\n"
] | 2022-05-19T19:15:16 | 2023-01-12T16:56:13 | 2022-06-02T08:20:00 | NONE | null | null | null | the dataviewer shows this message for "ubuntu" - "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
I get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4376/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4375/comments | https://api.github.com/repos/huggingface/datasets/issues/4375/events | https://github.com/huggingface/datasets/pull/4375 | 1,241,921,147 | PR_kwDODunzps44IMCS | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is finally ready for review ! It's quite long I'm sorry, but it's not easy to disentangle everything ^^'\r\n\r\nThe main additions are in\r\n- src/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py\r\n- src/datasets/iterable_dataset.py\r\n- src/datasets/utils/patching.py",
"Added some comments and an error when lists have different lengths for sharding :)",
"Let's resolve the merge conflict and the CI error (if it's related to the changes), and I can review the PR again.",
"Feel free to review again :) The CI fail is unrelated to this PR and will be fixed by https://github.com/huggingface/datasets/pull/4472 (the hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos)",
"CI failures are unrelated to this PR - merging :)\r\n\r\n(CI fails are a mix of pip install fails and Hub fails)",
"@lhoestq you're our hero :)"
] | 2022-05-19T15:00:31 | 2022-07-04T16:05:14 | 2022-06-10T20:47:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4375",
"html_url": "https://github.com/huggingface/datasets/pull/4375",
"diff_url": "https://github.com/huggingface/datasets/pull/4375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4375.patch",
"merged_at": "2022-06-10T20:47:26"
} | ### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingface/datasets/issues/3951
- `fsspec` doesn't work out of the box in subprocesses
### Solution in this PR
I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`.
I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method
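For illustration, a hedged sketch of the end-user pattern this PR enables (the dataset name and parameters are arbitrary examples):
```python
# Hypothetical usage: a streaming dataset consumed by a multi-worker PyTorch DataLoader,
# where each worker now iterates over its own shard instead of duplicating the data.
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("c4", "en", split="train", streaming=True).with_format("torch")
dataloader = DataLoader(ds, batch_size=32, num_workers=4)
for batch in dataloader:
    break
```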
I also had to make a few changes to the patching that enables streaming in dataset scripts:
- the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated
- I improved it to also check for renamed modules or attributes (ex: pandas vs pd)
- I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stays unchanged - otherwise I didn't change the content of the extended Path methods for streaming
- I fixed a bug with the `pd.read_csv` patch: opening the file in "rb" mode was missing (causing some datasets to not work in streaming mode), and compression inference was missing
### A few details regarding `fsspec` in multiprocessing
From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 :
> Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test!
> If any async instance has been created, the newly forked processes must:
> 1. discard references to locks, threads and event loops and make new ones
> 2. not use any async fsspec instances from the parent process
> 3. clear all class instance caches
Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process.
Fix https://github.com/huggingface/datasets/issues/3950
Fix https://github.com/huggingface/datasets/issues/3951
TODO:
- [x] fix tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4375/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4374/comments | https://api.github.com/repos/huggingface/datasets/issues/4374/events | https://github.com/huggingface/datasets/issues/4374 | 1,241,860,535 | I_kwDODunzps5KBUm3 | 4,374 | extremely slow processing when using a custom dataset | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"train\"])` and `lang_dataset[\"train\"].data.nbytes` of both datasets please ? It can also be helpful to check the distribution of lengths of each examples in your dataset.",
"Closing due to inactivity"
] | 2022-05-19T14:18:05 | 2023-07-25T15:07:17 | 2023-07-25T15:07:16 | NONE | null | null | null | ## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Further, I use a pre-processing function to clean the dataset:
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)`
This processing takes an astronomical amount of time, while hogging all the RAM.
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, even though it runs the same processing function on the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
The hours predicted to preprocess are as follows:
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
Note: both datasets are essentially the same, just provided by different sources with a few samples more or less; only one is hosted on the HF Hub while the other is downloaded in text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc
def remove_non_indic_sentences(example):
tmp_ls = []
eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*'
for e in listify(example['text']):
matches = re.findall(eng_regex, e)
for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]):
if len(list(match.split(" "))) > 2:
e = re.sub(match," ",e,count=1)
tmp_ls.append(e)
gc.collect()
example['clean_text'] = tmp_ls
return example
lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
## the same thing works much faster when loading a similar dataset from the hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
```
## Actual results
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, running the same processing function on the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
**The hours predicted to preprocess are as follows:**
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
**I even tried the following:**
- sharding the large 22 GB text file into smaller files and loading those
- saving the file to disk and then loading it
- using a lower num_proc
- using a smaller batch size
- processing without batches, i.e. without `batched=True`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version:8.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4374/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4373/comments | https://api.github.com/repos/huggingface/datasets/issues/4373/events | https://github.com/huggingface/datasets/pull/4373 | 1,241,769,310 | PR_kwDODunzps44HsaY | 4,373 | Remove links in docs to old dataset viewer | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T13:24:39 | 2022-05-20T15:24:28 | 2022-05-20T15:16:05 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4373",
"html_url": "https://github.com/huggingface/datasets/pull/4373",
"diff_url": "https://github.com/huggingface/datasets/pull/4373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4373.patch",
"merged_at": "2022-05-20T15:16:05"
} | Remove the links in the docs to the no longer maintained dataset viewer. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4373/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4372/comments | https://api.github.com/repos/huggingface/datasets/issues/4372/events | https://github.com/huggingface/datasets/pull/4372 | 1,241,703,826 | PR_kwDODunzps44HeYC | 4,372 | Check if dataset features match before push in `DatasetDict.push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T12:32:30 | 2022-05-20T15:23:36 | 2022-05-20T15:15:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4372",
"html_url": "https://github.com/huggingface/datasets/pull/4372",
"diff_url": "https://github.com/huggingface/datasets/pull/4372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4372.patch",
"merged_at": "2022-05-20T15:15:30"
} | Fix #4211 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4372/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4371/comments | https://api.github.com/repos/huggingface/datasets/issues/4371/events | https://github.com/huggingface/datasets/pull/4371 | 1,241,500,906 | PR_kwDODunzps44GzSZ | 4,371 | Add missing language tags for udhr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T09:34:10 | 2022-06-08T12:03:24 | 2022-05-20T09:43:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4371",
"html_url": "https://github.com/huggingface/datasets/pull/4371",
"diff_url": "https://github.com/huggingface/datasets/pull/4371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4371.patch",
"merged_at": "2022-05-20T09:43:10"
} | Related to #4362. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4371/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4369/comments | https://api.github.com/repos/huggingface/datasets/issues/4369/events | https://github.com/huggingface/datasets/pull/4369 | 1,240,245,642 | PR_kwDODunzps44CpCe | 4,369 | Add redirect to dataset script in the repo structure page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-18T17:05:33 | 2022-05-19T08:19:01 | 2022-05-19T08:10:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"merged_at": "2022-05-19T08:10:51"
} | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4369/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4368/comments | https://api.github.com/repos/huggingface/datasets/issues/4368/events | https://github.com/huggingface/datasets/pull/4368 | 1,240,064,860 | PR_kwDODunzps44CDFk | 4,368 | Add long answer candidates to natural questions dataset | {
"login": "seirasto",
"id": 4257308,
"node_id": "MDQ6VXNlcjQyNTczMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4257308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seirasto",
"html_url": "https://github.com/seirasto",
"followers_url": "https://api.github.com/users/seirasto/followers",
"following_url": "https://api.github.com/users/seirasto/following{/other_user}",
"gists_url": "https://api.github.com/users/seirasto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seirasto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seirasto/subscriptions",
"organizations_url": "https://api.github.com/users/seirasto/orgs",
"repos_url": "https://api.github.com/users/seirasto/repos",
"events_url": "https://api.github.com/users/seirasto/events{/privacy}",
"received_events_url": "https://api.github.com/users/seirasto/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Once we have added `long_answer_candidates` maybe it would be worth to also add the missing `candidate_index` (inside `long_answer`). What do you think, @seirasto ?",
"Also note the \"Data Fields\" section in the README is missing the `long_answer` field.\r\n\r\nMoreover, there is no instance example in \"Data Instances\" section.",
"We could either make these fixes in this PR or in a subsequent PR.",
"@albertvillanova I've added the missing fields and updated the README to include a data instance and some other things. ",
"Great! I've made the updates to align the README. Please let me know if I missed anything.",
"As there were many minor little fixes, I thought it would be easier to fix them directly.",
"I think the loading script is OK now. If it is also validated by another datasets maintainer, I could run the generation of the pre-processed data and then merge this PR into master (once all the tests are green).\r\n\r\nCC: @lhoestq ",
"It looks good to me, thanks @seirasto !",
"I have merged the master branch, so that we include all the fixes on Apache Beam + Google Dataflow.",
"Pre-processing is running!\r\n\r\nAlready finished for \"dev\" config:\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/natural_questions\", \"dev\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['id', 'document', 'question', 'long_answer_candidates', 'annotations'],\r\n num_rows: 7830\r\n })\r\n})\r\n```",
"There is an issue while running the preprocessing for the \"default\" (train+dev) config. Train data files are larger than than dev ones and workers run out of memory.\r\n\r\nI'm opening a separate issue to handle this problem: #4525",
"@seirasto is proposing uploading their preprocessed data files to our Datasets bucket.\r\n\r\nI think @lhoestq can give a more informed answer about authentication requirements.",
"Now that the data fiels are uploaded, can you merge the `main` branch into yours to re-trigger the CI @seirasto please ? :) Then I think we can merge if it's good for you @albertvillanova ",
"Merge is done! I think someone needs to approve the CI to run :) ",
"Can you run `make style` to fix the code formatting required by the CI please ?",
"Thanks @albertvillanova! I've committed all your suggestions.",
"The CI is green. I'm merging this PR."
] | 2022-05-18T14:35:42 | 2022-07-26T20:30:41 | 2022-07-26T20:18:42 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4368",
"html_url": "https://github.com/huggingface/datasets/pull/4368",
"diff_url": "https://github.com/huggingface/datasets/pull/4368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4368.patch",
"merged_at": "2022-07-26T20:18:42"
} | This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format . @lhoestq @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4367/comments | https://api.github.com/repos/huggingface/datasets/issues/4367/events | https://github.com/huggingface/datasets/pull/4367 | 1,240,011,602 | PR_kwDODunzps44B340 | 4,367 | Remove config names as yaml keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I included the change from https://github.com/huggingface/datasets/pull/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright it's ready now :)\r\n\r\nHere is an example for the `ade_corpus_v2` dataset card. Notice the new `configs` key:\r\n\r\nhttps://github.com/huggingface/datasets/blob/76d9a141740a03f6836feb251f6059894b8d8046/datasets/ade_corpus_v2/README.md#L1-L78\r\n\r\nCI failures are only related to dataset cards missing some content."
] | 2022-05-18T13:59:24 | 2022-05-20T09:35:26 | 2022-05-20T09:27:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4367",
"html_url": "https://github.com/huggingface/datasets/pull/4367",
"diff_url": "https://github.com/huggingface/datasets/pull/4367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4367.patch",
"merged_at": "2022-05-20T09:27:19"
} | Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys.
To fix this, I removed the per-config-name tag separation completely and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the YAML keys would allow us to do as in https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test to the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like languages or task_ids | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4367/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4366/comments | https://api.github.com/repos/huggingface/datasets/issues/4366/events | https://github.com/huggingface/datasets/issues/4366 | 1,239,534,165 | I_kwDODunzps5J4cpV | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "jffgitt",
"id": 99231535,
"node_id": "U_kgDOBeonLw",
"avatar_url": "https://avatars.githubusercontent.com/u/99231535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jffgitt",
"html_url": "https://github.com/jffgitt",
"followers_url": "https://api.github.com/users/jffgitt/followers",
"following_url": "https://api.github.com/users/jffgitt/following{/other_user}",
"gists_url": "https://api.github.com/users/jffgitt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jffgitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jffgitt/subscriptions",
"organizations_url": "https://api.github.com/users/jffgitt/orgs",
"repos_url": "https://api.github.com/users/jffgitt/repos",
"events_url": "https://api.github.com/users/jffgitt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jffgitt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] | 2022-05-18T07:17:29 | 2022-05-18T16:36:22 | 2022-05-18T16:36:21 | NONE | null | null | null | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+ Exit 1 nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
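For reference, the missing `scheme` argument usually means the installed `elasticsearch` client expects each host to include a scheme; a hedged sketch of a host definition that satisfies this (the host and port are assumptions):

```python
from elasticsearch import Elasticsearch

# elasticsearch-py 8.x requires the scheme as part of the host definition
es = Elasticsearch("http://localhost:9200")
```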
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4366/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4365/comments | https://api.github.com/repos/huggingface/datasets/issues/4365/events | https://github.com/huggingface/datasets/pull/4365 | 1,239,109,943 | PR_kwDODunzps43-4fC | 4,365 | Remove dots in config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing in favor of https://github.com/huggingface/datasets/pull/4367"
] | 2022-05-17T20:12:57 | 2022-05-18T14:07:52 | 2022-05-18T13:59:41 | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4365",
"html_url": "https://github.com/huggingface/datasets/pull/4365",
"diff_url": "https://github.com/huggingface/datasets/pull/4365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4365.patch",
"merged_at": null
} | 20+ datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test to the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like `languages` or `task_ids`
- they contain valid config names (no invalid characters `<>:/\|?*.`) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4365/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4364/comments | https://api.github.com/repos/huggingface/datasets/issues/4364/events | https://github.com/huggingface/datasets/pull/4364 | 1,238,976,106 | PR_kwDODunzps43-bmq | 4,364 | Support complex feature types as `features` in packaged loaders | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-17T17:53:23 | 2022-05-31T12:26:23 | 2022-05-31T12:16:32 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4364",
"html_url": "https://github.com/huggingface/datasets/pull/4364",
"diff_url": "https://github.com/huggingface/datasets/pull/4364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4364.patch",
"merged_at": "2022-05-31T12:16:31"
} | This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range.
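For example, this makes patterns like the following work with the packaged loaders (the file name and label names below are just for illustration):

```python
from datasets import ClassLabel, Features, Value, load_dataset

# A CSV with a string "label" column can now be cast to ClassLabel at load time,
# converting the string labels to their integer ids.
features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})
ds = load_dataset("csv", data_files="train.csv", features=features)
```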
Fix https://github.com/huggingface/datasets/issues/4210
This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2
TODO:
* [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4364/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4364/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4363/comments | https://api.github.com/repos/huggingface/datasets/issues/4363/events | https://github.com/huggingface/datasets/issues/4363 | 1,238,897,652 | I_kwDODunzps5J2BP0 | 4,363 | The dataset preview is not available for this split. | {
"login": "roholazandie",
"id": 7584674,
"node_id": "MDQ6VXNlcjc1ODQ2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roholazandie",
"html_url": "https://github.com/roholazandie",
"followers_url": "https://api.github.com/users/roholazandie/followers",
"following_url": "https://api.github.com/users/roholazandie/following{/other_user}",
"gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions",
"organizations_url": "https://api.github.com/users/roholazandie/orgs",
"repos_url": "https://api.github.com/users/roholazandie/repos",
"events_url": "https://api.github.com/users/roholazandie/events{/privacy}",
"received_events_url": "https://api.github.com/users/roholazandie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n",
"Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/lib/python3.9/site-packages/librosa/util/utils.py'\r\n```\r\n\r\nso possibly it's related to the libraries versions?\r\n",
"Maybe this SO thread can help: https://stackoverflow.com/questions/59290386/runtimeerror-at-cannot-cache-function-shear-dense-no-locator-available-fo",
"Same error for https://huggingface.co./datasets/LIUM/tedlium/viewer/release1/test. cc @sanchit-gandhi . I'm on it",
"Fixed in the datasets viewer, by setting the `NUMBA_CACHE_DIR` env var to a writable directory.",
"https://huggingface.co./datasets/Roh/ryanspeech/viewer/male/train\r\n\r\n<img width=\"1538\" alt=\"Capture d’écran 2022-06-08 à 11 30 08\" src=\"https://user-images.githubusercontent.com/1676121/172583285-4cd49a0f-5715-423b-95dd-5f6ace3b2416.png\">\r\n",
"https://huggingface.co./datasets/LIUM/tedlium/viewer/\r\n\r\n<img width=\"1538\" alt=\"Capture d’écran 2022-06-08 à 14 31 52\" src=\"https://user-images.githubusercontent.com/1676121/172616897-fbcb7df7-0308-4d09-a17d-48826bc91374.png\">\r\n"
] | 2022-05-17T16:34:43 | 2022-06-08T12:32:10 | 2022-06-08T09:26:56 | NONE | null | null | null | I have uploaded the speech corpus developed by our lab to Hugging Face [datasets](https://huggingface.co./datasets/Roh/ryanspeech). You can read the companion paper, accepted at Interspeech 2021, [here](https://arxiv.org/abs/2106.08468). The dataset works fine, but I can't get the dataset preview to work. It gives me the following error that I don't understand. Can you help me start debugging it?
```
Status code: 400
Exception: AttributeError
Message: 'NoneType' object has no attribute 'split'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4363/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4362/comments | https://api.github.com/repos/huggingface/datasets/issues/4362/events | https://github.com/huggingface/datasets/pull/4362 | 1,238,680,112 | PR_kwDODunzps439bkf | 4,362 | Update dataset_infos for UDHN/udhr dataset | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version number should also be increased, so that users who had previously cached it, get a new dataset download (with the additional languages)",
"Yep! All done (also fixed the language tags in the README which were iso639-3 instead of the expected bcp47)",
"I guess the language code CI failure is due to languages.json being a subset of bcp47 (see issue #4304), happy to contribute a solution here, e.g. autogeneration of the lang list from the relevant isos and the ietf bcp47 subtag register or full code for validation",
"> Thanks again for your contribution, @leondz.\r\n> \r\n> Yes, I think it is OK to set version 1.0.0 (as previous was 0.0.0).\r\n> \r\n> One of the CI failures is related to dummy data: once you have updated the dataset version, the dummy_data ZIP file should be moved from \"dummy/0.0.0/dummy_data.zip\" to \"dummy/1.0.0/dummy_data.zip\".\r\n\r\nOh, thanks, I missed that one\r\n\r\n\r\n> Other CI failure is related to missing languages in our resources file. This has been addressed in this PR:\r\n> \r\n> * #4371\r\n> \r\n> You should merge master branch into your feature branch to incorporate that fix.\r\n\r\nYeah, I saw this :) I already have the merge, thanks. I'm talking about the longer-term picture: every time another language code comes up (e.g. da-bornholm or es-VE), the json will need updating, because the current approach is non-exhaustive manual whitelisting instead of relying on the established bcp standard."
] | 2022-05-17T13:52:59 | 2022-06-08T19:20:11 | 2022-06-08T19:11:21 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4362",
"html_url": "https://github.com/huggingface/datasets/pull/4362",
"diff_url": "https://github.com/huggingface/datasets/pull/4362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4362.patch",
"merged_at": "2022-06-08T19:11:20"
} | Checksum update to `udhr` for issue #4361 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4362/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4361/comments | https://api.github.com/repos/huggingface/datasets/issues/4361/events | https://github.com/huggingface/datasets/issues/4361 | 1,238,671,931 | I_kwDODunzps5J1KI7 | 4,361 | `udhr` doesn't load, dataset checksum mismatch | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-17T13:47:09 | 2022-06-08T19:11:21 | 2022-06-08T19:11:21 | CONTRIBUTOR | null | null | null | ## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2273633,
"checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2107471,
"checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5"
}
}
```
size + checksum regenerated from current source files:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json
(hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s]
Dataset Infos file saved at dataset_infos.json
Test successful.
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2389690,
"checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2215441,
"checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe"
}
}
(hfdev) leon@blade:~/datasets/datasets/udhr$
```
--- is unicode.org a sustainable hosting solution for this dataset?
## Steps to reproduce the bug
```python
from datasets import load_dataset
udhr = load_dataset("udhr")
```
## Expected results
That a Dataset object containing the UDHR data will be returned.
## Actual results
```
>>> d = load_dataset('udhr')
Using custom data configuration default
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare
self._download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare
verify_checksums(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip']
>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7
- Platform: Linux Ubuntu 20.04
- Python version: 3.9.12
- PyArrow version: 8.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4361/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4360/comments | https://api.github.com/repos/huggingface/datasets/issues/4360/events | https://github.com/huggingface/datasets/pull/4360 | 1,237,239,096 | PR_kwDODunzps434izs | 4,360 | Fix example in opus_ubuntu, Add license info | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T14:22:28 | 2022-06-01T13:06:07 | 2022-06-01T12:57:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4360",
"html_url": "https://github.com/huggingface/datasets/pull/4360",
"diff_url": "https://github.com/huggingface/datasets/pull/4360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4360.patch",
"merged_at": "2022-06-01T12:57:09"
} | This PR
* fixes a typo in the example for the `opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4360/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4359/comments | https://api.github.com/repos/huggingface/datasets/issues/4359/events | https://github.com/huggingface/datasets/pull/4359 | 1,237,149,578 | PR_kwDODunzps434Pb6 | 4,359 | Fix Version equality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T13:19:26 | 2022-05-24T16:25:37 | 2022-05-24T16:17:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4359",
"html_url": "https://github.com/huggingface/datasets/pull/4359",
"diff_url": "https://github.com/huggingface/datasets/pull/4359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4359.patch",
"merged_at": "2022-05-24T16:17:14"
} | I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (False, False)
In [4]: Version("1.0.0") != 5, Version("1.0.0") != None
Out[4]: (True, True)
```
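One way to get this behavior - a simplified sketch for illustration, not necessarily the exact change in this PR - is to treat operands that can't be interpreted as versions as unequal instead of raising:

```python
class Version:
    """Simplified stand-in for the `Version` class, for illustration only."""

    def __init__(self, version_str: str):
        self.version_str = version_str
        self.major, self.minor, self.patch = (int(x) for x in version_str.split("."))

    def _as_tuple(self):
        return (self.major, self.minor, self.patch)

    def __eq__(self, other):
        try:
            other = other if isinstance(other, Version) else Version(other)
        except (TypeError, ValueError, AttributeError):
            return False  # not version-like: unequal rather than an exception
        return self._as_tuple() == other._as_tuple()
```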
Note I found this issue when `doc-builder` tried to compare:
```python
if param.default != inspect._empty
```
where `param.default` is an instance of `Version`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4359/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4358/comments | https://api.github.com/repos/huggingface/datasets/issues/4358/events | https://github.com/huggingface/datasets/issues/4358 | 1,237,147,692 | I_kwDODunzps5JvWAs | 4,358 | Missing dataset tags and sections in some dataset cards | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?",
"Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object used to validate the tags."
] | 2022-05-16T13:18:16 | 2022-05-30T15:36:52 | null | NONE | null | null | null | Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: Expected some content in section `Citation Information` but it is empty.
- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags
- **CoNLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'
- **Hate_speech18**: Expected some content in section `Data Instances` but it is empty; expected some content in section `Data Splits` but it is empty
- **Jigsaw_toxicity_pred**: Expected some content in section `Citation Information` but it is empty.
- **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.
- **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.
- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sms_spam**: `Data Instances` and `Data Splits` are empty.
- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4358/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4357/comments | https://api.github.com/repos/huggingface/datasets/issues/4357/events | https://github.com/huggingface/datasets/pull/4357 | 1,237,037,069 | PR_kwDODunzps4333b9 | 4,357 | Fix warning in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T11:50:17 | 2022-05-16T15:18:49 | 2022-05-16T15:10:41 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4357",
"html_url": "https://github.com/huggingface/datasets/pull/4357",
"diff_url": "https://github.com/huggingface/datasets/pull/4357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4357.patch",
"merged_at": "2022-05-16T15:10:41"
} | Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4357/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4356/comments | https://api.github.com/repos/huggingface/datasets/issues/4356/events | https://github.com/huggingface/datasets/pull/4356 | 1,236,846,308 | PR_kwDODunzps433OsB | 4,356 | Fix dataset builder default version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211"
] | 2022-05-16T09:05:10 | 2022-05-30T13:56:58 | 2022-05-30T13:47:54 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4356",
"html_url": "https://github.com/huggingface/datasets/pull/4356",
"diff_url": "https://github.com/huggingface/datasets/pull/4356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4356.patch",
"merged_at": "2022-05-30T13:47:54"
} | Currently, when using a custom config (a subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead:
```python
ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')
```
with version "0.0.0" instead of "2.0.0".
See as a counter-example, when the config is present in `BUILDER_CONFIGS`:
```python
ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')
```
with correct version "2.0.0", as set in the custom config class.
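For reference, the custom config class declares that default roughly like this (a simplified sketch of the `wikipedia` script's config, not the exact code):
```python
import datasets

class WikipediaConfig(datasets.BuilderConfig):
    def __init__(self, language=None, date=None, version=datasets.Version("2.0.0"), **kwargs):
        # the custom config carries its own default version ("2.0.0")
        super().__init__(name=f"{date}.{language}", version=version, **kwargs)
        self.date = date
        self.language = language
```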
The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overrides the default version set in the custom config class.
This PR:
- Removes the default VERSION at `DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version).
- Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4356/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4355/comments | https://api.github.com/repos/huggingface/datasets/issues/4355/events | https://github.com/huggingface/datasets/pull/4355 | 1,236,797,490 | PR_kwDODunzps433EgP | 4,355 | Fix warning in upload_file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T08:21:31 | 2022-05-16T11:28:02 | 2022-05-16T11:19:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"merged_at": "2022-05-16T11:19:57"
} | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4355/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4354/comments | https://api.github.com/repos/huggingface/datasets/issues/4354/events | https://github.com/huggingface/datasets/issues/4354 | 1,236,404,383 | I_kwDODunzps5Jsgif | 4,354 | Problems with WMT dataset | {
"login": "eldarkurtic",
"id": 8884008,
"node_id": "MDQ6VXNlcjg4ODQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldarkurtic",
"html_url": "https://github.com/eldarkurtic",
"followers_url": "https://api.github.com/users/eldarkurtic/followers",
"following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}",
"gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions",
"organizations_url": "https://api.github.com/users/eldarkurtic/orgs",
"repos_url": "https://api.github.com/users/eldarkurtic/repos",
"events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldarkurtic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\n* `fr-en` (French - English)\r\n* `ru-en` (Russian - English)\r\n\r\nAnd the current implementation always uses all the subsets available for a language, so to define custom subsets, you'll have to clone the repo from the Hub and replace the line https://huggingface.co./datasets/wmt15/blob/main/wmt_utils.py#L688 with:\r\n`for split, ss_names in (self._subsets if self.config.subsets is None else self.config.subsets).items()`\r\n\r\nThen, you can load the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/local/wmt15_folder\", \"<one of 5 available configs>\", subsets=...)",
"@mariosasko thanks a lot for the suggested fix! ",
"Hi @mariosasko \r\n\r\nAre the docs updated? If not, I would like to get on it. I am new around here, would we helpful, if you can guide.\r\n\r\nThanks",
"Hi @khushmeeet! The docs haven't been updated, so feel free to work on this issue. This is a tricky issue, so I'll give the steps you can follow to fix this:\r\n\r\nFirst, this code:\r\nhttps://github.com/huggingface/datasets/blob/7cff5b9726a223509dbd6224de3f5f452c8d924f/src/datasets/load.py#L113-L118\r\n\r\nneeds to be replaced with (makes the dataset builder search more robust and allows us to remove the ABC stuff from `wmt_utils.py`):\r\n```python\r\n for name, obj in module.__dict__.items():\r\n if inspect.isclass(obj) and issubclass(obj, main_cls_type):\r\n if inspect.isabstract(obj):\r\n continue\r\n module_main_cls = obj\r\n obj_module = inspect.getmodule(obj)\r\n if obj_module is not None and module == obj_module:\r\n break\r\n```\r\n\r\nThen, all the `wmt_utils.py` scripts need to be updated as follows (these are the diffs with the requiered changes):\r\n````diff\r\n import os\r\n import re\r\n import xml.etree.cElementTree as ElementTree\r\n-from abc import ABC, abstractmethod\r\n\r\n import datasets\r\n````\r\n\r\n````diff\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n _DESCRIPTION = \"\"\"\\\r\n-Translate dataset based on the data from statmt.org.\r\n+Translation dataset based on the data from statmt.org.\r\n\r\n-Versions exists for the different years using a combination of multiple data\r\n-sources. The base `wmt_translate` allows you to create your own config to choose\r\n-your own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\r\n+Versions exist for different years using a combination of data\r\n+sources. The base `wmt` allows you to create a custom dataset by choosing\r\n+your own data/language pair. This can be done as follows:\r\n\r\n ```\r\n-config = datasets.wmt.WmtConfig(\r\n- version=\"0.0.1\",\r\n+from datasets import inspect_dataset, load_dataset_builder\r\n+\r\n+inspect_dataset(\"<insert the dataset name\", \"path/to/scripts\")\r\n+builder = load_dataset_builder(\r\n+ \"path/to/scripts/wmt_utils.py\",\r\n language_pair=(\"fr\", \"de\"),\r\n subsets={\r\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\r\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\r\n },\r\n )\r\n-builder = datasets.builder(\"wmt_translate\", config=config)\r\n-```\r\n\r\n+# Standard version\r\n+builder.download_and_prepare()\r\n+ds = builder.as_dataset()\r\n+\r\n+# Streamable version\r\n+ds = builder.as_streaming_dataset()\r\n+```\r\n \"\"\"\r\n````\r\n\r\n````diff\r\n+class Wmt(datasets.GeneratorBasedBuilder):\r\n \"\"\"WMT translation dataset.\"\"\"\r\n+\r\n+ BUILDER_CONFIG_CLASS = WmtConfig\r\n\r\n def __init__(self, *args, **kwargs):\r\n- if type(self) == Wmt and \"config\" not in kwargs: # pylint: disable=unidiomatic-typecheck\r\n- raise ValueError(\r\n- \"The raw `wmt_translate` can only be instantiated with the config \"\r\n- \"kwargs. 
You may want to use one of the `wmtYY_translate` \"\r\n- \"implementation instead to get the WMT dataset for a specific year.\"\r\n- )\r\n super(Wmt, self).__init__(*args, **kwargs)\r\n\r\n @property\r\n- @abstractmethod\r\n def _subsets(self):\r\n \"\"\"Subsets that make up each split of the dataset.\"\"\"\r\n````\r\n```diff\r\n \"\"\"Subsets that make up each split of the dataset for the language pair.\"\"\"\r\n source, target = self.config.language_pair\r\n filtered_subsets = {}\r\n- for split, ss_names in self._subsets.items():\r\n+ subsets = self._subsets if self.config.subsets is None else self.config.subsets\r\n+ for split, ss_names in subsets.items():\r\n filtered_subsets[split] = []\r\n for ss_name in ss_names:\r\n dataset = DATASET_MAP[ss_name]\r\n```\r\n\r\n`wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t` have this script, so all of them need to be updated. Also, the dataset summaries from the READMEs of these datasets need to be updated to match the new `_DESCRIPTION` string. And that's it! Let me know if you need additional help.",
"Hi @mariosasko ,\r\n\r\nI have made the changes as suggested by you and have opened a PR #4537.\r\n\r\nThanks",
"Resolved via #4554 "
] | 2022-05-15T20:58:26 | 2022-07-11T14:54:02 | 2022-07-11T14:54:01 | NONE | null | null | null | ## Describe the bug
I am trying to load the WMT15 dataset and define which data sources to use for the train/validation/test splits, but unfortunately it seems that the usage described in the official documentation at [https://huggingface.co./datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co./datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore.
## Steps to reproduce the bug
```python
>>> import datasets
>>> a = datasets.translate.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'translate'
>>> a = datasets.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'wmt'
```
## Expected results
To load WMT15 with given data-sources.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4354/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4353/comments | https://api.github.com/repos/huggingface/datasets/issues/4353/events | https://github.com/huggingface/datasets/pull/4353 | 1,236,092,176 | PR_kwDODunzps43016x | 4,353 | Don't strip proceeding hyphen | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-14T18:25:29 | 2022-05-16T18:51:38 | 2022-05-16T13:52:11 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"merged_at": "2022-05-16T13:52:10"
} | Closes #4320. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4353/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4352/comments | https://api.github.com/repos/huggingface/datasets/issues/4352/events | https://github.com/huggingface/datasets/issues/4352 | 1,236,086,170 | I_kwDODunzps5JrS2a | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message"
] | 2022-05-14T17:55:15 | 2022-05-16T15:09:17 | null | NONE | null | null | null | ## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop over columns that I figured out what was going on.
It seems like `.map()` could check, for at least one instance from the dataset, that the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature request, but it feels more like a bug to me.
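A rough sketch of the kind of fail-fast check I have in mind (a hypothetical helper, not the actual `datasets` internals):
```python
from datasets import Features

def check_first_example(example: dict, features: Features):
    """Try to encode one mapped example with the user-provided features
    and surface a readable error instead of failing later at Arrow write time."""
    try:
        features.encode_example(example)
    except Exception as err:
        raise TypeError(
            f"The mapped output does not match the provided `features`: {err}"
        ) from err
```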
## Steps to reproduce the bug
I don't have explicit code to reproduce the bug, but I'll show an example.
Code prior to the fix:
```python
def preprocess_data(examples):
# returns an encoded data dict with keys that match the features, but the types do not match
...
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['audit_type'].unique().tolist()
features = Features({
        'image': Array3D(dtype="uint8", shape=(3, 224, 224)),
        'input_ids': Sequence(feature=Value(dtype='int64')),
        'attention_mask': Sequence(Value(dtype='int64')),
        'token_type_ids': Sequence(Value(dtype='int64')),
        'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)
```
The Features set that fixed it:
```python
features = Features({
'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))),
'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),
'attention_mask': Sequence(Sequence(Value(dtype='int64'))),
'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),
'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
```
The difference between my original code (which was based on the documentation) and the working code is the addition of `Sequence(...)` to 4 of the 5 features, since I am working with paginated data and the doc examples are not.
## Expected results
Dataset.map() validates the data types for each Feature on the first iteration and errors out with a clear exception if they do not match.
## Actual results
Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages don't make this obvious
Example errors:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
datasets version: 2.1.0
Platform: macOS-12.2.1-arm64-arm-64bit
Python version: 3.9.12
PyArrow version: 6.0.1
Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4352/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4351/comments | https://api.github.com/repos/huggingface/datasets/issues/4351/events | https://github.com/huggingface/datasets/issues/4351 | 1,235,950,209 | I_kwDODunzps5JqxqB | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/datasets/issues/4196)."
] | 2022-05-14T11:30:42 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset can take a really long time. For instance, I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took about 35 minutes (and that's with a fiber optic connection). The only output during that process was a progress bar for flattening indices and then ~35 minutes of complete silence.
**Describe the solution you'd like**
I want to be able to enable a progress bar when calling `.save_to_disk(...)` and `.load_from_disk(...)`. It would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm.
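To make this concrete, the granularity I have in mind is roughly what `fsspec`'s callback mechanism can already surface; a rough sketch (assumes `s3fs` is installed, `dataset` is the already-built `Dataset`, and `s3://my-bucket/...` is a placeholder):
```python
import fsspec
from fsspec.callbacks import TqdmCallback

dataset.save_to_disk("tmp/wmt17_en_ru")  # save locally first

fs = fsspec.filesystem("s3")
fs.put(
    "tmp/wmt17_en_ru",
    "s3://my-bucket/wmt17_en_ru",
    recursive=True,
    callback=TqdmCallback(),  # byte-level progress reported by fsspec
)
```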
**Describe alternatives you've considered**
- Save the dataset to a temporary folder on disk and then upload it using a custom wrapper over botocore that supports a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4351/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4350/comments | https://api.github.com/repos/huggingface/datasets/issues/4350/events | https://github.com/huggingface/datasets/pull/4350 | 1,235,505,104 | PR_kwDODunzps43zKIV | 4,350 | Add a new metric: CTC_Consistency | {
"login": "YEdenZ",
"id": 92551194,
"node_id": "U_kgDOBYQ4Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/92551194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YEdenZ",
"html_url": "https://github.com/YEdenZ",
"followers_url": "https://api.github.com/users/YEdenZ/followers",
"following_url": "https://api.github.com/users/YEdenZ/following{/other_user}",
"gists_url": "https://api.github.com/users/YEdenZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YEdenZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YEdenZ/subscriptions",
"organizations_url": "https://api.github.com/users/YEdenZ/orgs",
"repos_url": "https://api.github.com/users/YEdenZ/repos",
"events_url": "https://api.github.com/users/YEdenZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/YEdenZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you."
] | 2022-05-13T17:31:19 | 2022-05-19T10:23:04 | 2022-05-19T10:23:03 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4350",
"html_url": "https://github.com/huggingface/datasets/pull/4350",
"diff_url": "https://github.com/huggingface/datasets/pull/4350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4350.patch",
"merged_at": null
} | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run in the tests? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4350/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4349/comments | https://api.github.com/repos/huggingface/datasets/issues/4349/events | https://github.com/huggingface/datasets/issues/4349 | 1,235,474,765 | I_kwDODunzps5Jo9lN | 4,349 | Dataset.map()'s fails at any value of parameter writer_batch_size | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```",
"Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352",
"Did you close it because you found that it was due to the incorrect Feature types ?",
"Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue",
"I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?",
"The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk"
] | 2022-05-13T16:55:12 | 2022-06-02T12:51:11 | 2022-05-14T15:08:08 | NONE | null | null | null | ## Describe the bug
If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co./docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor): the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model, which allows you to use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug.
I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.
The code I am using is provided below.
## Steps to reproduce the bug
I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents.
```python
def get_encoded_data(data):
dataset = Dataset.from_pandas(data)
unique_labels = data['label'].unique()
features = Features({
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'token_type_ids': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)
encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)
encoded_dataset.set_format(type="torch")
return encoded_dataset
```
```python
PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False)
def preprocess_data(examples):
directory = os.path.join(FILES_PATH, examples['file_location'])
images_dir = os.path.join(directory, PDF_IMAGE_DIR)
textract_response_path = os.path.join(directory, 'textract.json')
doc_meta_path = os.path.join(directory, 'doc_meta.json')
textract_document = get_textract_document(textract_response_path, doc_meta_path)
images, words, bboxes = get_doc_training_data(images_dir, textract_document)
encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True)
# https://github.com/NielsRogge/Transformers-Tutorials/issues/36
encoded_inputs["image"] = np.array(encoded_inputs["image"])
encoded_inputs["label"] = examples['label_id']
return encoded_inputs
```
## Expected results
My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.
## Actual results
If writer_batch_size is set to a value less than the number of rows, I get either:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
or simply
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
If it is greater than the number of rows, I get the `zsh: killed` error above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4349/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4348/comments | https://api.github.com/repos/huggingface/datasets/issues/4348/events | https://github.com/huggingface/datasets/issues/4348 | 1,235,432,976 | I_kwDODunzps5JozYQ | 4,348 | `inspect` functions can't fetch dataset script from the Hub | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?",
"Good catch ! Yea I think it's fine :)"
] | 2022-05-13T16:08:26 | 2022-06-09T10:26:06 | 2022-06-09T10:26:06 | MEMBER | null | null | null | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4348/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4347/comments | https://api.github.com/repos/huggingface/datasets/issues/4347/events | https://github.com/huggingface/datasets/pull/4347 | 1,235,318,064 | PR_kwDODunzps43yihq | 4,347 | Support remote cache_dir | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.",
"<s>`xjoin` returns windows paths (not posix) on windows, since it just extends`os.path.join` </s>\r\n\r\nActually you are right.\r\n\r\nhttps://github.com/huggingface/datasets/blob/08ec04ccb59630a3029b2ecd8a14d327bddd0c4a/src/datasets/utils/streaming_download_manager.py#L104-L105\r\n\r\nThough this is not an issue because posix paths (as returned by Path().as_posix()) work on windows. That's why we can replace `os.path.join` with `xjoin` in streaming mode. They look like `c:/Program Files/` or something (can't confirm right now, I don't have a windows with me)",
"Until now, we have always replaced \"/\" in paths with `os.path.join` (`os.sep`,...) in order to support Windows paths (that contain r\"\\\\\").\r\n\r\nNow, you suggest ignoring this and work with POSIX strings (with \"/\").\r\n\r\nAs an example, when passing `cache_dir=r\"C:\\Users\\Username\\.mycache\"`:\r\n- Until now, it results in `self._cache_downloaded_dir = r\"C:\\Users\\Username\\.mycache\\downloads\"`\r\n- If we use `xjoin`, it will give `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`\r\n\r\nYou say this is OK and we don't care if we work with POSIX strings on Windows machines.\r\n\r\nI'm incorporating your suggested changes then...",
"Also note that using `xjoin`, if we pass `cache_dir=\"C:\\\\Users\\\\Username\\\\.mycache\"`, we get:\r\n- `self._cache_dir_root = \"C:\\\\Users\\\\Username\\\\.mycache\"`\r\n- `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`",
"It looks like it broke the CI on windows :/ maybe this was not a good idea, sorry"
] | 2022-05-13T14:26:35 | 2022-05-25T16:35:23 | 2022-05-25T16:27:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4347",
"html_url": "https://github.com/huggingface/datasets/pull/4347",
"diff_url": "https://github.com/huggingface/datasets/pull/4347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4347.patch",
"merged_at": "2022-05-25T16:27:03"
} | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
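For example, a call along these lines should now work end to end (a sketch of the intended usage; the bucket is a placeholder and s3 credentials/`s3fs` are assumed to be configured):
```python
from datasets import load_dataset

ds = load_dataset(
    "wikipedia",
    language="ca",
    date="20220501",
    beam_runner="DirectRunner",
    cache_dir="s3://my-bucket/datasets-cache",  # remote cache_dir (placeholder bucket)
)
```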
This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4347/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4346/comments | https://api.github.com/repos/huggingface/datasets/issues/4346/events | https://github.com/huggingface/datasets/issues/4346 | 1,235,067,062 | I_kwDODunzps5JnaC2 | 4,346 | GH Action to build documentation never ends | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-13T10:44:44 | 2022-05-13T11:22:00 | 2022-05-13T11:22:00 | MEMBER | null | null | null | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally forced the cancellation of the workflow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4346/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4345/comments | https://api.github.com/repos/huggingface/datasets/issues/4345/events | https://github.com/huggingface/datasets/pull/4345 | 1,235,062,787 | PR_kwDODunzps43xrky | 4,345 | Fix never ending GH Action to build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-13T10:40:10 | 2022-05-13T11:29:43 | 2022-05-13T11:22:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"merged_at": "2022-05-13T11:22:00"
} | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4345/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | {
"login": "felixdivo",
"id": 4403130,
"node_id": "MDQ6VXNlcjQ0MDMxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4403130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixdivo",
"html_url": "https://github.com/felixdivo",
"followers_url": "https://api.github.com/users/felixdivo/followers",
"following_url": "https://api.github.com/users/felixdivo/following{/other_user}",
"gists_url": "https://api.github.com/users/felixdivo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixdivo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixdivo/subscriptions",
"organizations_url": "https://api.github.com/users/felixdivo/orgs",
"repos_url": "https://api.github.com/users/felixdivo/repos",
"events_url": "https://api.github.com/users/felixdivo/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixdivo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-13T08:06:00 | 2022-05-25T09:23:43 | 2022-05-24T15:35:21 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"merged_at": "2022-05-24T15:35:21"
} | I think due to #1626, the docstring contained this error ever since `seed` was added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4343/comments | https://api.github.com/repos/huggingface/datasets/issues/4343/events | https://github.com/huggingface/datasets/issues/4343 | 1,234,864,168 | I_kwDODunzps5Jmogo | 4,343 | Metrics documentation is not accessible in the datasets doc UI | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | closed | false | null | [] | null | [
"Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor "
] | 2022-05-13T07:46:30 | 2022-06-03T08:50:25 | 2022-06-03T08:50:25 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Searching for a metric name like "seqeval" yields no results on https://huggingface.co./docs/datasets/master/en/index . One needs to look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what a metric expects as input: for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function docstring but not in the `README.md`, so one needs to look into the code to understand what the metric expects.
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4343/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4342/comments | https://api.github.com/repos/huggingface/datasets/issues/4342/events | https://github.com/huggingface/datasets/pull/4342 | 1,234,743,765 | PR_kwDODunzps43woHm | 4,342 | Fix failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-13T05:03:38 | 2022-05-13T05:47:42 | 2022-05-13T05:47:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4342",
"html_url": "https://github.com/huggingface/datasets/pull/4342",
"diff_url": "https://github.com/huggingface/datasets/pull/4342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4342.patch",
"merged_at": "2022-05-13T05:47:41"
} | This PR adds `sacremoses` as an explicit test dependency (required by the sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
Fix #4341. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4342/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-05-13T04:55:17 | 2022-05-13T05:47:41 | 2022-05-13T05:47:41 | MEMBER | null | null | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for the sari and wiki_split metrics:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4340/comments | https://api.github.com/repos/huggingface/datasets/issues/4340/events | https://github.com/huggingface/datasets/pull/4340 | 1,234,671,025 | PR_kwDODunzps43wY1U | 4,340 | Fix irc_disentangle dataset script | {
"login": "i-am-pad",
"id": 32005017,
"node_id": "MDQ6VXNlcjMyMDA1MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/32005017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i-am-pad",
"html_url": "https://github.com/i-am-pad",
"followers_url": "https://api.github.com/users/i-am-pad/followers",
"following_url": "https://api.github.com/users/i-am-pad/following{/other_user}",
"gists_url": "https://api.github.com/users/i-am-pad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i-am-pad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i-am-pad/subscriptions",
"organizations_url": "https://api.github.com/users/i-am-pad/orgs",
"repos_url": "https://api.github.com/users/i-am-pad/repos",
"events_url": "https://api.github.com/users/i-am-pad/events{/privacy}",
"received_events_url": "https://api.github.com/users/i-am-pad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR"
] | 2022-05-13T02:37:57 | 2022-05-24T15:37:30 | 2022-05-24T15:37:29 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"merged_at": null
} | Updated the extracted dataset repo's latest commit hash (included in the tarball's name) and updated the related data_infos.json. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4340/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4339/comments | https://api.github.com/repos/huggingface/datasets/issues/4339/events | https://github.com/huggingface/datasets/pull/4339 | 1,234,496,289 | PR_kwDODunzps43v0WT | 4,339 | Dataset loader for the MSLR2022 shared task | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines within the rows of a file. I'm happy to make a PR to change how this handling works, or make the change within this PR. \r\n\r\nWe should figure out:\r\n1. Does this dummy data need to be generated more than once? (It looks like no)\r\n2. Should this be fixed generally? (needs a HF person to weigh in here)\r\n3. What is the right way for such a fix to exist permanently here; the [Contributing document](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) doesn't provide guidance on any tests. Writing a test is several times more effort than fixing the underlying issue. (again needs a HF person)",
"Would someone from HF mind taking a look at this PR? (@lhoestq)",
"Hi ! Sorry for the delay in responding :)\r\n\r\nI don't think there's a big need to fix this in the general case for now, feel free to just generate the dummy data for this specific dataset :)\r\n\r\nThe `datasets-cli dummy_data datasets/mslr2022` command should tell you what dummy files to generate. In each dummy file you just need to include enough data to generate 4 or 5 examples",
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome! Generated the dummy data and the tests now pass. @jayded thanks for your help! If you and @lucylw are happy with this I think it's ready to be merged. @lhoestq this is ready for another look :)",
"Hi @lhoestq, is there anything blocking this from being merged that I can address?",
"Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n\r\nI think this dataset can be under the AllenAI page here: https://huggingface.co./allenai What do you think ?\r\nFeel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n\r\nOnce the dataset is under the AllenAI org, we can close this PR\r\n",
"> Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n> \r\n> I think this dataset can be under the AllenAI page here: https://huggingface.co./allenai What do you think ? Feel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n> \r\n> Once the dataset is under the AllenAI org, we can close this PR\r\n\r\nSweet! It is uploaded here: https://huggingface.co./datasets/allenai/mslr2022",
"Nice ! Thanks :)\r\n\r\nI think we can close this PR then.\r\n\r\nI noticed that the dataset preview is not available on this dataset, this is because we require datasets to work in streaming mode to show a preview. However TAR archives don't work well in streaming mode (you can't know in advance what files are inside a TAR archive without reading it completely). This can be fixed by using a ZIP archive instead.\r\n\r\nLet me know if you have questions or if I can help."
] | 2022-05-12T21:23:41 | 2022-07-18T17:19:27 | 2022-07-18T16:58:34 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4339",
"html_url": "https://github.com/huggingface/datasets/pull/4339",
"diff_url": "https://github.com/huggingface/datasets/pull/4339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4339.patch",
"merged_at": null
} | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
```
Usage looks like:
```python
>>> ms2 = load_dataset("mslr2022", "ms2", split="validation")
>>> ms2[0].keys()
dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])
>>> ms2[0]["target"]
'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'
```
I have tested this works with the following command:
```bash
datasets-cli test datasets/mslr2022 --save_infos --all_configs
```
However, I am having a little trouble generating the dummy data
```bash
datasets-cli dummy_data datasets/mslr2022 --auto_generate
```
errors out with the following stack trace:
```
Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data.
Traceback (most recent call last):
File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module>
load_entry_point('datasets', 'console_scripts', 'datasets-cli')()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run
keep_uncompressed=self._keep_uncompressed,
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples
reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
return _read(filepath_or_buffer, kwds)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
return parser.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
index, columns, col_dict = self._engine.read(nrows)
File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read
chunks = self._reader.read_low_memory(nrows)
File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2
```
I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:
```
The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).
It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`.
```
Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4339/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4338/comments | https://api.github.com/repos/huggingface/datasets/issues/4338/events | https://github.com/huggingface/datasets/pull/4338 | 1,234,478,851 | PR_kwDODunzps43vwsm | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T21:02:08 | 2022-05-16T15:51:02 | 2022-05-16T15:42:59 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4338",
"html_url": "https://github.com/huggingface/datasets/pull/4338",
"diff_url": "https://github.com/huggingface/datasets/pull/4338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4338.patch",
"merged_at": "2022-05-16T15:42:59"
} | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4338/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4337/comments | https://api.github.com/repos/huggingface/datasets/issues/4337/events | https://github.com/huggingface/datasets/pull/4337 | 1,234,470,083 | PR_kwDODunzps43vuzF | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T20:52:02 | 2022-05-16T16:26:19 | 2022-05-16T16:18:30 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"merged_at": "2022-05-16T16:18:30"
} | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4337/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4336/comments | https://api.github.com/repos/huggingface/datasets/issues/4336/events | https://github.com/huggingface/datasets/pull/4336 | 1,234,446,174 | PR_kwDODunzps43vpqG | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n",
"The CI errors about missing content in the dataset cards can be ignored in this PR btw",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4336). All of your documentation changes will be reflected on that endpoint."
] | 2022-05-12T20:24:45 | 2022-05-16T16:25:00 | 2022-05-16T16:24:59 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4336",
"html_url": "https://github.com/huggingface/datasets/pull/4336",
"diff_url": "https://github.com/huggingface/datasets/pull/4336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4336.patch",
"merged_at": "2022-05-16T16:24:59"
} | Adding evaluation metadata for :
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4336/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4335/comments | https://api.github.com/repos/huggingface/datasets/issues/4335/events | https://github.com/huggingface/datasets/pull/4335 | 1,234,157,123 | PR_kwDODunzps43usJP | 4,335 | Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets' :['unknown'] are not registered tags\r\n- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty",
"And yes we can ignore all the CI errors related to missing content in the dataset cards, these issues can be fixed in other PRs",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T15:28:16 | 2022-05-16T16:31:10 | 2022-05-16T16:23:09 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4335",
"html_url": "https://github.com/huggingface/datasets/pull/4335",
"diff_url": "https://github.com/huggingface/datasets/pull/4335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4335.patch",
"merged_at": "2022-05-16T16:23:08"
} | Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4335/timeline | null | null | true |