Datasets:
html_url (string, 48-51 chars) | title (string, 5-155 chars) | comments (string, 63-15.7k chars) | body (string, 0-17.7k chars) | comment_length (int64, 16-949) | text (string, 164-23.7k chars) |
---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/1167 | ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders | We're working on adding on-the-fly transforms in datasets.
Currently, the only on-the-fly processing available is through `set_format`, which converts the data into numpy/torch/tf tensors or pandas objects.
For example
```python
dataset.set_format("torch")
```
applies `torch.Tensor` to the dataset entries on-the-fly.
We plan to extend this to user-defined formatting transforms.
For example
```python
dataset.set_format(transform=tokenize)
```
What do you think? | Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co./blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.
I guess the solution would entail wrapping a dataset in a PyTorch dataset.
As a concrete example from the [docs](https://huggingface.co./transformers/custom_datasets.html)
```python
import torch
class SquadDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
# instead of doing this beforehand, I'd like to do tokenization on the fly
self.encodings = encodings
def __getitem__(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def __len__(self):
return len(self.encodings.input_ids)
train_dataset = SquadDataset(train_encodings)
```
How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?
----
Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant
```python
import torch
from torch.utils.data import Dataset, DataLoader, BatchSampler, SequentialSampler
from transformers import BertTokenizerFast

class CustomPytorchDataset(Dataset):
def __init__(self):
self.dataset = some_hf_dataset(...)
self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
def __getitem__(self, batch_idx):
instance = self.dataset[text_col][batch_idx]
tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
return tokenized_text
def __len__(self):
return len(self.dataset)
@staticmethod
def collate_fn(batch):
# batch is a list, however it will always contain 1 item because we should not use the
# batch_size argument as batch_size is controlled by the sampler
return {k: torch.tensor(v) for k, v in batch[0].items()}
torch_ds = CustomPytorchDataset()
# NOTE: batch_sampler returns list of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)
# NOTE: no `batch_size`, as it is now controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
``` | 63 | ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co./blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.
I guess the solution would entail wrapping a dataset in a PyTorch dataset.
As a concrete example from the [docs](https://huggingface.co./transformers/custom_datasets.html)
```python
import torch
class SquadDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
# instead of doing this beforehand, I'd like to do tokenization on the fly
self.encodings = encodings
def __getitem__(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def __len__(self):
return len(self.encodings.input_ids)
train_dataset = SquadDataset(train_encodings)
```
How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?
----
Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant
```python
import torch
from torch.utils.data import Dataset, DataLoader, BatchSampler, SequentialSampler
from transformers import BertTokenizerFast

class CustomPytorchDataset(Dataset):
def __init__(self):
self.dataset = some_hf_dataset(...)
self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
def __getitem__(self, batch_idx):
instance = self.dataset[text_col][batch_idx]
tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
return tokenized_text
def __len__(self):
return len(self.dataset)
@staticmethod
def collate_fn(batch):
# batch is a list, however it will always contain 1 item because we should not use the
# batch_size argument as batch_size is controlled by the sampler
return {k: torch.tensor(v) for k, v in batch[0].items()}
torch_ds = CustomPytorchDataset()
# NOTE: batch_sampler returns list of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)
# NOTE: no `batch_size`, as it is now controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```
We're working on adding on-the-fly transforms in datasets.
Currently, the only on-the-fly processing available is through `set_format`, which converts the data into numpy/torch/tf tensors or pandas objects.
For example
```python
dataset.set_format("torch")
```
applies `torch.Tensor` to the dataset entries on-the-fly.
We plan to extend this to user-defined formatting transforms.
For example
```python
dataset.set_format(transform=tokenize)
```
What do you think? |
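For context, a minimal sketch of what such a user-defined formatting transform could look like once the proposed `transform=` argument is available (the checkpoint and column names here are assumptions for illustration, not part of the original discussion):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint

def tokenize(batch):
    # `batch` is a dict mapping column names to the values of the requested rows
    return tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt")

# dataset.set_format(transform=tokenize)  # proposed API from the comment above
# dataset[0] would then be tokenized lazily, at access time
```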
https://github.com/huggingface/datasets/issues/1110 | Using a feature named "_type" fails with certain operations | Thanks for reporting !
Indeed, `_type` is a keyword the library uses to encode/decode features to a Python dictionary that we can save/load as JSON.
We can probably change `_type` to something that is less likely to collide with user feature names.
In this case we would want something backward compatible though.
Feel free to try a fix and open a PR, and to ping me if I can help :) | A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
Dataset(ds._data)
```
Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column.
Not sure if you wish to support this specific column name, but if you do, I would be happy to try a fix and provide a PR. I already had a look into it, and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it has reached the end of the nested feature object from a serialized dict.
Best wishes and keep up the awesome work! | 74 | Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
Dataset(ds._data)
```
Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column.
Not sure if you wish to support this specific column name, but if you do, I would be happy to try a fix and provide a PR. I already had a look into it, and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it has reached the end of the nested feature object from a serialized dict.
Best wishes and keep up the awesome work!
Thanks for reporting !
Indeed, `_type` is a keyword the library uses to encode/decode features to a Python dictionary that we can save/load as JSON.
We can probably change `_type` to something that is less likely to collide with user feature names.
In this case we would want something backward compatible though.
Feel free to try a fix and open a PR, and to ping me if I can help :) |
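As a stopgap until the name collision is addressed, a minimal workaround sketch is to rename the column before such operations (`rename_column` is existing `datasets` API; the replacement name is arbitrary):
```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"_type": ["whatever"]})
# rename the colliding column before operations that re-serialize the features
ds = ds.rename_column("_type", "es_type")
concatenate_datasets([ds])  # no longer trips over the reserved `_type` key
```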
https://github.com/huggingface/datasets/issues/1103 | Add support to download kaggle datasets | Hey, I think this is a great idea. Any plan to integrate loading of private Kaggle datasets into `datasets`? | We can use API key | 17 | Add support to download kaggle datasets
We can use API key
Hey, I think this is a great idea. Any plan to integrate loading of private Kaggle datasets into `datasets`? |
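For context, a minimal sketch of the manual route discussed in this thread, driven by an API key (the environment-variable names are the ones documented by the Kaggle API; the dataset slug and file layout are placeholders):
```python
import os
from datasets import load_dataset

# authenticate the Kaggle CLI via environment variables (or ~/.kaggle/kaggle.json)
os.environ["KAGGLE_USERNAME"] = "your_username"  # placeholder
os.environ["KAGGLE_KEY"] = "your_api_key"        # placeholder

# download (works for private datasets you have access to), then load the local files
os.system("kaggle datasets download owner/dataset-slug -p data/ --unzip")
ds = load_dataset("csv", data_files="data/*.csv")  # assumes the dataset ships CSV files
```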
https://github.com/huggingface/datasets/issues/1103 | Add support to download kaggle datasets | The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple:
```python
!kaggle datasets download -p path
ds = load_dataset(path)
```
Native support would make our download logic even more complex, and I don't think this is a good idea considering this particular feature is not requested often.
PS: Kaggle should integrate their API with `fsspec` to allow us to use a common interface if they are interested in tighter integrations | We can use API key | 77 | Add support to download kaggle datasets
We can use API key
The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple:
```python
!kaggle datasets download -p path
ds = load_dataset(path)
```
Native support would make our download logic even more complex, and I don't think this is a good idea considering this particular feature is not requested often.
PS: Kaggle should integrate their API with `fsspec` to allow us to use a common interface if they are interested in tighter integrations |
https://github.com/huggingface/datasets/issues/1064 | Not support links with 302 redirect | > Hi !
> This kind of link is now supported by the library since #1316
I updated the links in the TLC datasets to be the GitHub links in this pull request:
https://github.com/huggingface/datasets/pull/1737
Everything works now. Thank you. | I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
It might be because it is not a direct link (it returns a 302 and redirects to AWS, which returns a 403 for HEAD requests).
```
import requests as r  # `r` here is the `requests` library

r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True)
# <Response [403]>
``` | 37 | Not support links with 302 redirect
I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
It might be because it is not a direct link (it returns a 302 and redirects to AWS, which returns a 403 for HEAD requests).
```
import requests as r  # `r` here is the `requests` library

r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True)
# <Response [403]>
```
> Hi !
> This kind of link is now supported by the library since #1316
I updated the links in the TLC datasets to be the GitHub links in this pull request:
https://github.com/huggingface/datasets/pull/1737
Everything works now. Thank you. |
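For reference, a quick way to sanity-check such links is to follow the redirect with a streamed GET rather than a HEAD request (a minimal sketch using `requests`):
```python
import requests

url = "https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz"
# the redirect target rejects HEAD (403), but a GET resolves fine
with requests.get(url, allow_redirects=True, stream=True) as resp:
    print(resp.status_code, resp.url)  # 200 and the final, redirected URL
```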
https://github.com/huggingface/datasets/issues/1046 | Dataset.map() turns tensors into lists? | A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if this is a bug. | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```python
import datasets
import torch
from datasets import load_dataset
print("version datasets", datasets.__version__)
dataset = load_dataset("snli", split='train[0:50]')
def tokenizer_fn(example):
# actually uses a tokenizer which does something like:
return {'input_ids': torch.tensor([[0, 1, 2]])}
print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
{'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
{'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
| 32 | Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```python
import datasets
import torch
from datasets import load_dataset
print("version datasets", datasets.__version__)
dataset = load_dataset("snli", split='train[0:50]')
def tokenizer_fn(example):
# actually uses a tokenizer which does something like:
return {'input_ids': torch.tensor([[0, 1, 2]])}
print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
{'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
{'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if this is a bug. |
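To make the suggested pattern concrete, a minimal sketch (same dataset and columns as the snippet above; the toy ids stand in for a real tokenizer's output):
```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train[0:50]")

def tokenizer_fn(example):
    # return plain Python lists; Arrow stores them, and formatting happens at access time
    return {"input_ids": [0, 1, 2]}

dataset_tok = dataset.map(tokenizer_fn, remove_columns=["label", "premise", "hypothesis"])
dataset_tok.set_format(type="torch", columns=["input_ids"])
print(type(dataset_tok[0]["input_ids"]))  # <class 'torch.Tensor'>
```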
https://github.com/huggingface/datasets/issues/1046 | Dataset.map() turns tensors into lists? | This is expected behavior: you should set the format to `"torch"`, as you mentioned, to get PyTorch tensors back.
By default, `datasets` returns pure Python objects. | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```python
import datasets
import torch
from datasets import load_dataset
print("version datasets", datasets.__version__)
dataset = load_dataset("snli", split='train[0:50]')
def tokenizer_fn(example):
# actually uses a tokenizer which does something like:
return {'input_ids': torch.tensor([[0, 1, 2]])}
print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
{'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
{'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
| 26 | Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```python
import datasets
import torch
from datasets import load_dataset
print("version datasets", datasets.__version__)
dataset = load_dataset("snli", split='train[0:50]')
def tokenizer_fn(example):
# actually uses a tokenizer which does something like:
return {'input_ids': torch.tensor([[0, 1, 2]])}
print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
{'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
{'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
This is expected behavior: you should set the format to `"torch"`, as you mentioned, to get PyTorch tensors back.
By default, `datasets` returns pure Python objects. |
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | This library uses Apache Arrow under the hood to store datasets on disk.
The advantage of Apache Arrow is that it allows the dataset to be memory-mapped. This makes it possible to load datasets bigger than memory with almost no RAM usage. It also offers excellent I/O speed.
For example, when you access one element or one batch
```python
from datasets import load_dataset
squad = load_dataset("squad", split="train")
first_element = squad[0]
one_batch = squad[:8]
```
then only this element/batch is loaded in memory, while the rest of the dataset is memory mapped. | Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks | 90 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks
This library uses Apache Arrow under the hood to store datasets on disk.
The advantage of Apache Arrow is that it allows the dataset to be memory-mapped. This makes it possible to load datasets bigger than memory with almost no RAM usage. It also offers excellent I/O speed.
For example, when you access one element or one batch
```python
from datasets import load_dataset
squad = load_dataset("squad", split="train")
first_element = squad[0]
one_batch = squad[:8]
```
then only this element/batch is loaded in memory, while the rest of the dataset is memory mapped. |
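To illustrate the memory-mapping claim, a small sketch that watches the process's resident memory while loading (assumes the optional `psutil` package is installed):
```python
import psutil
from datasets import load_dataset

def rss_mb():
    # resident set size of the current process, in megabytes
    return psutil.Process().memory_info().rss / 1024 ** 2

before = rss_mb()
squad = load_dataset("squad", split="train")  # Arrow file on disk, memory-mapped
one_batch = squad[:8]                         # only this slice is materialized in RAM
print(f"RSS before: {before:.0f} MB, after: {rss_mb():.0f} MB")
```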
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | How can we change how much data is loaded into memory with Arrow? I think I am having a performance issue with it. When Arrow loads the data from disk, does it do so with multiple processes? Training is almost twice as slow with Arrow as in memory.
EDIT:
My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. | Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks | 68 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks
How can we change how much data is loaded into memory with Arrow? I think I am having a performance issue with it. When Arrow loads the data from disk, does it do so with multiple processes? Training is almost twice as slow with Arrow as in memory.
EDIT:
My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. |
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | > How can we change how much data is loaded into memory with Arrow? I think I am having a performance issue with it. When Arrow loads the data from disk, does it do so with multiple processes? Training is almost twice as slow with Arrow as in memory.
Loading Arrow data from disk is done with memory-mapping. This makes it possible to load huge datasets without filling your RAM.
Memory mapping is almost instantaneous and is done within one process.
The speed of querying examples from the dataset is then I/O-bound, depending on your disk. If it's an SSD, fetching examples from the dataset will be very fast.
But since the I/O speed of an SSD is lower than that of RAM, it is expected to be slower to fetch data from disk than from memory.
Still, if you load the dataset in different processes it can be faster, but there will still be the I/O bottleneck of the disk.
> EDIT:
> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.
Ok let me know if that helps !
| Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks | 192 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is brought into memory only when necessary? Thanks
> How can we change how much data is loaded into memory with Arrow? I think I am having a performance issue with it. When Arrow loads the data from disk, does it do so with multiple processes? Training is almost twice as slow with Arrow as in memory.
Loading Arrow data from disk is done with memory-mapping. This makes it possible to load huge datasets without filling your RAM.
Memory mapping is almost instantaneous and is done within one process.
The speed of querying examples from the dataset is then I/O-bound, depending on your disk. If it's an SSD, fetching examples from the dataset will be very fast.
But since the I/O speed of an SSD is lower than that of RAM, it is expected to be slower to fetch data from disk than from memory.
Still, if you load the dataset in different processes it can be faster, but there will still be the I/O bottleneck of the disk.
> EDIT:
> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.
Ok let me know if that helps !
|
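For reference, a minimal sketch of the fix mentioned in the edit above (`dataloader_num_workers` is an existing `TrainingArguments` field; the other values are placeholders):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",              # placeholder
    per_device_train_batch_size=8,
    dataloader_num_workers=4,      # several workers fetch memory-mapped examples in parallel
)
```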
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Looks like the Google Drive download failed.
I'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.
We should consider finding a better host than Google Drive for this dataset, IMO.
Related: #873, #864 |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 40 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
Looks like the Google Drive download failed.
I'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.
We should consider finding a better host than Google Drive for this dataset, IMO.
Related: #873, #864 |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | It is working now, thank you.
Should I leave this issue open to address the Quota-exceeded error? |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 17 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
It is working now, thank you.
Should I leave this issue open to address the Quota-exceeded error? |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | I've looked into it and couldn't find a solution. This looks like a Google Drive limitation.
Please try to use other hosts when possible. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 24 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
I've looked into it and couldn't find a solution. This looks like a Google Drive limitation.
Please try to use other hosts when possible. |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 31 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS. |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | It's possible to host the data on our side, but we should ask the authors. TFDS has the same issue and doesn't have a solution either, AFAIK.
Otherwise you can use the Google Drive link, but it's not that convenient because of this quota issue. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 45 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
It's possible to host the data on our side, but we should ask the authors. TFDS has the same issue and doesn't have a solution either, AFAIK.
Otherwise you can use the Google Drive link, but it's not that convenient because of this quota issue. |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 17 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome. |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Not as long as the data is stored on Google Drive, unfortunately.
Maybe we can ask if there's a mirror?
Hi @JafferWilson, is there a download link to get CNN/DailyMail from a host other than Google Drive?
To give you some context, this library provides tools to download and process datasets. For CNN/DailyMail the data are downloaded from the link you provide on your GitHub repository. Unfortunately, because of Google Drive quotas, many users are not able to load this dataset. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 84 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
Not as long as the data is stored on Google Drive, unfortunately.
Maybe we can ask if there's a mirror?
Hi @JafferWilson, is there a download link to get CNN/DailyMail from a host other than Google Drive?
To give you some context, this library provides tools to download and process datasets. For CNN/DailyMail the data are downloaded from the link you provide on your GitHub repository. Unfortunately, because of Google Drive quotas, many users are not able to load this dataset. |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Thanks for the link @mrazizi !
Apparently the original authors don't host the dataset themselves ("for legal reasons", source [here](https://github.com/abisee/cnn-dailymail/issues/9)). |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' | 20 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-9-cd4bf8bea840> in <module>()
22
23
---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')
25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')
26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')
5 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
Thanks for the link @mrazizi !
Apparently the original authors don't host the dataset themselves ("for legal reasons", source [here](https://github.com/abisee/cnn-dailymail/issues/9)). |
https://github.com/huggingface/datasets/issues/993 | Problem downloading amazon_reviews_multi | Hi @hfawaz! This is working fine for me. Is it a repeated occurrence? Have you tried with the latest version? | Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
I used the following code to load the dataset:
`load_dataset(dataset_name, "all_languages", cache_dir=".data")`
I am using version 1.1.3 of `datasets`.
Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json` | 21 | Problem downloading amazon_reviews_multi
Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
I used the following code to load the dataset:
`load_dataset(dataset_name, "all_languages", cache_dir=".data")`
I am using version 1.1.3 of `datasets`.
Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
Hi @hfawaz! This is working fine for me. Is it a repeated occurrence? Have you tried with the latest version? |
https://github.com/huggingface/datasets/issues/988 | making sure datasets are not loaded in memory and distributed training of them | My implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316
My implementation of the dataloader for this case: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496 | Hi
I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and is no faster than a single TPU core. 1) How can I make sure data is not loaded in memory? 2) In the case of distributed training with iterative datasets, what measures need to be taken? Is it all just a matter of sharding the data? I was wondering if it would be possible to discuss distributed training with iterative datasets using the datasets library with someone. Thanks | 16 | making sure datasets are not loaded in memory and distributed training of them
Hi
I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and is no faster than a single TPU core. 1) How can I make sure data is not loaded in memory? 2) In the case of distributed training with iterative datasets, what measures need to be taken? Is it all just a matter of sharding the data? I was wondering if it would be possible to discuss distributed training with iterative datasets using the datasets library with someone. Thanks
My implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316
My implementation of the dataloader for this case: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496 |
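For reference, a minimal sketch of per-process sharding with the library itself (`Dataset.shard` is existing API; the rank and replica count are placeholders that would come from the distributed setup):
```python
from datasets import load_dataset

num_replicas = 8  # e.g. number of TPU cores (placeholder)
rank = 0          # index of the current process (placeholder)

dataset = load_dataset("squad", split="train")  # memory-mapped, not loaded into RAM
shard = dataset.shard(num_shards=num_replicas, index=rank, contiguous=True)
# each process then builds its dataloader over its own shard only
```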
https://github.com/huggingface/datasets/issues/988 | making sure datasets are not loaded in memory and distributed training of them | Hi! You can use the `assert not bool(dataset.cache_files)` assertion to ensure your data is in memory. And I suggest using `accelerate` for distributed training. | Hi
I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and is no faster than a single TPU core. 1) How can I make sure data is not loaded in memory? 2) In the case of distributed training with iterative datasets, what measures need to be taken? Is it all just a matter of sharding the data? I was wondering if it would be possible to discuss distributed training with iterative datasets using the datasets library with someone. Thanks | 24 | making sure datasets are not loaded in memory and distributed training of them
Hi
I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and is no faster than a single TPU core. 1) How can I make sure data is not loaded in memory? 2) In the case of distributed training with iterative datasets, what measures need to be taken? Is it all just a matter of sharding the data? I was wondering if it would be possible to discuss distributed training with iterative datasets using the datasets library with someone. Thanks
Hi! You can use the `assert not bool(dataset.cache_files)` assertion to ensure your data is in memory. And I suggest using `accelerate` for distributed training. |
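As a minimal sketch of the `accelerate` suggestion (the toy model and data stand in for the real setup; `Accelerator.prepare` handles device placement and per-process sharding of the dataloader):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# toy model and data standing in for the real multi-task setup
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dl = DataLoader(TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,))), batch_size=8)

model, optimizer, dl = accelerator.prepare(model, optimizer, dl)

for x, y in dl:
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```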
https://github.com/huggingface/datasets/issues/961 | sample multiple datasets | here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195
I need to train my model distributedly with this dataloader, "MultiTasksataloader"; currently this does not work in a distributed fashion.
To save on memory I tried to use iterative datasets. Could you have a look at this dataloader and tell me if this is indeed the case? I am not sure how to make the datasets iterative so that they are not loaded in memory; I then remove the sampler for the dataloader and shard the data per core. Could you tell me how I should implement this case with the datasets library, and how do you find my implementation in terms of correctness? Thanks
| Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. | 109 | sample multiple datasets
Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195
I need to train my model in a distributed way with this dataloader, "MultiTasksataloader", but currently this does not work in a distributed fashion,
To save on memory I tried to use iterative datasets; could you have a look at this dataloader and tell me if this is indeed the case? I am not sure how to make the datasets iterative so that they are not loaded in memory; then I remove the sampler for the dataloader and shard the data per core. Could you please tell me how I should implement this case in the datasets library, and how do you find my implementation in terms of correctness? thanks
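This is not the dataloader from the gist linked above, but a minimal self-contained sketch of one way to get the behaviour asked for in this thread (each batch comes from a single dataset, and the dataset is picked per step with 2:1 weights); the toy tensors, task names and weights are all made up for illustration.
```python
import random
import torch
from torch.utils.data import DataLoader, TensorDataset

# Two toy task datasets standing in for the real ones.
ds1 = TensorDataset(torch.arange(100, dtype=torch.float32).unsqueeze(1))
ds2 = TensorDataset(torch.arange(100, 150, dtype=torch.float32).unsqueeze(1))

# Each batch is drawn from a single task; tasks are picked with 2:1 weights.
loaders = {"task1": DataLoader(ds1, batch_size=8, shuffle=True),
           "task2": DataLoader(ds2, batch_size=8, shuffle=True)}
weights = {"task1": 2.0, "task2": 1.0}

def multi_task_batches(num_batches):
    iterators = {name: iter(dl) for name, dl in loaders.items()}
    names, probs = zip(*weights.items())
    for _ in range(num_batches):
        name = random.choices(names, weights=probs, k=1)[0]
        try:
            batch = next(iterators[name])
        except StopIteration:  # restart an exhausted task
            iterators[name] = iter(loaders[name])
            batch = next(iterators[name])
        yield name, batch

for task, batch in multi_task_batches(5):
    print(task, batch[0].shape)
```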
|
https://github.com/huggingface/datasets/issues/961 | sample multiple datasets | Thanks @rabeehk for sharing.
The sampler basically returns a list of integers to sample from each task's dataset. I was wondering how to use it with two `torch.Dataset` of different tasks. Also, do I need to shard across processes while creating an Iterable Dataset?
| Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. | 44 | sample multiple datasets
Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
Thanks @rabeehk for sharing.
The sampler basically returns a list of integers to sample from each task's dataset. I was wondering how to use it with two `torch.Dataset` of different tasks. Also, do I need to shard across processes while creating an Iterable Dataset?
|
https://github.com/huggingface/datasets/issues/961 | sample multiple datasets | We now have `interleave_datasets` in the API that allows you to cycle/sample with probabilities (with various stopping strategies) through a list of datasets. However, more specific behavior should be implemented manually. | Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. | 31 | sample multiple datasets
Hi
I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it
sub-questions:
- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?
- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
We now have `interleave_datasets` in the API that allows you to cycle/sample with probabilities (with various stopping strategies) through a list of datasets. However, more specific behavior should be implemented manually. |
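For reference, a small sketch of the `interleave_datasets` approach mentioned in the last comment; the two in-memory toy datasets and the 2/3–1/3 probabilities are only an example.
```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"text": [f"d1-{i}" for i in range(6)]})
d2 = Dataset.from_dict({"text": [f"d2-{i}" for i in range(6)]})

# Draw from d1 twice as often as from d2; stop when the first dataset runs out.
mixed = interleave_datasets(
    [d1, d2],
    probabilities=[2 / 3, 1 / 3],
    seed=42,
    stopping_strategy="first_exhausted",
)
print(mixed["text"])
```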
https://github.com/huggingface/datasets/issues/937 | Local machine/cluster Beam Datasets example/tutorial | I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.
From my experience the DirectRunner is fine though, even if it's clearly not memory efficient.
It would be awesome though to make it work locally on a SparkRunner !
Did you manage to make your processing work ? | Hi,
I'm wondering if https://huggingface.co./docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version of the example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output.
Thanks!
Shang | 62 | Local machine/cluster Beam Datasets example/tutorial
Hi,
I'm wondering if https://huggingface.co./docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow version of the example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output.
Thanks!
Shang
I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.
From my experience the DirectRunner is fine though, even if it's clearly not memory efficient.
It would be awesome though to make it work locally on a SparkRunner !
Did you manage to make your processing work ? |
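As a hedged pointer for running a Beam-based dataset without GCP/Dataflow (the `wikipedia` config here is just an example of such a dataset): `load_dataset` accepts a `beam_runner` argument, and the DirectRunner works locally even though it is not memory-efficient.
```python
from datasets import load_dataset

# Run the Apache Beam preprocessing locally with the DirectRunner
# (works without any GCP/Dataflow setup, but is slow and memory-hungry).
wiki = load_dataset("wikipedia", "20200501.en", beam_runner="DirectRunner")
```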
https://github.com/huggingface/datasets/issues/919 | wrong length with datasets | Also, I cannot first convert it to torch format, since the huggingface seq2seq_trainer code processes the datasets afterwards in the data collator function to optimize it for TPUs. | Hi
I have an MRPC dataset which I convert to seq2seq format; it is then in this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler,
collate_fn=self.data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
)
```
now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks
| 26 | wrong length with datasets
Hi
I have an MRPC dataset which I convert to seq2seq format; it is then in this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler,
collate_fn=self.data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
)
```
now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks
Also, I cannot first convert it to torch format, since the huggingface seq2seq_trainer code processes the datasets afterwards in the data collator function to optimize it for TPUs.
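For reference, a small self-contained sanity check of what the dataloader length should look like over a `datasets.Dataset` (toy data, identity collate function); a length of 1 may simply indicate that the sampler/batch settings put all rows into a single batch.
```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"src_texts": [f"src {i}" for i in range(10)],
                        "tgt_texts": [f"tgt {i}" for i in range(10)]})

# With batch_size=1 and the default sequential sampler, len(dataloader)
# should equal len(ds) == 10.
dl = DataLoader(ds, batch_size=1, collate_fn=lambda batch: batch)
print(len(ds), len(dl))  # 10 10
```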
https://github.com/huggingface/datasets/issues/915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | This is an interesting idea !
Do you have ideas about how to approach the decoding and the normalization ? | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to avoid missing potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
If you have interest in this, I'd love to help :). | 20 | Shall we change the hashing to encoding to reduce potential replicated cache files?
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to avoid missing potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
If you have interest in this, I'd love to help :).
This is an interesting idea !
Do you have ideas about how to approach the decoding and the normalization ? |
https://github.com/huggingface/datasets/issues/915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | @lhoestq
I think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can
- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.
- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.
If we find one, we can adjust the list in `self._fingerprint` to it.
As for the transformation reordering rules, we can just start with some manual rules, like two sorts on the same column should merge into one, and filter and select can change orders.
And for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.
Because we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provide a `Sequential` api and let the user input a list of transformations, so that the user would not use the intermediate datasets. This would look like tf.data.Dataset. | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to avoid missing potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
If you have interest in this, I'd love to help :). | 191 | Shall we change the hashing to encoding to reduce potential replicated cache files?
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to avoid missing potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes.
If you have interest in this, I'd love to help :).
@lhoestq
I think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can
- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.
- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.
If we find one, we can adjust the list in `self._fingerprint` to it.
As for the transformation reordering rules, we can just start with some manual rules, like two sorts on the same column should merge into one, and filter and select can change orders.
And for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.
Because we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provide a `Sequential` api and let the user input a list of transformations, so that the user would not use the intermediate datasets. This would look like tf.data.Dataset.
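Purely as an illustration of the idea discussed in this thread (this is not the library's actual fingerprinting; the toy transformation chain, the single normalization rule and the JSON-over-base64 encoding are all assumptions made for the sketch):
```python
import base64
import json

# Toy transformation chain; each step is (operation, arguments).
chain = [("sort", {"column": "id"}), ("sort", {"column": "id"}), ("select", {"n": 100})]

def normalize(chain):
    """Apply a manual rule: two identical consecutive sorts collapse into one."""
    out = []
    for step in chain:
        if out and step[0] == "sort" and out[-1] == step:
            continue
        out.append(step)
    return out

def encode_chain(chain):
    # Reversible fingerprint: it can be decoded later and compared after normalization.
    return base64.urlsafe_b64encode(json.dumps(normalize(chain)).encode()).decode()

def decode_chain(fingerprint):
    return [tuple(step) for step in json.loads(base64.urlsafe_b64decode(fingerprint))]

fp = encode_chain(chain)
print(fp)
print(decode_chain(fp))
```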
https://github.com/huggingface/datasets/issues/897 | Dataset viewer issues | Thanks for reporting !
cc @srush for the empty feature list issue and the encoding issue
cc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ? | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co./nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are coded correctly.
| 38 | Dataset viewer issues
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co./nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are coded correctly.
Thanks for reporting !
cc @srush for the empty feature list issue and the encoding issue
cc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ? |
https://github.com/huggingface/datasets/issues/897 | Dataset viewer issues | Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co./nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are coded correctly.
| 36 | Dataset viewer issues
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co./nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are coded correctly.
Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time |
https://github.com/huggingface/datasets/issues/888 | Nested lists are zipped unexpectedly | Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.
See the [documentation](https://huggingface.co./docs/datasets/features.html?highlight=features) for more details | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
{"bottom": 2}
]
}]
}
```
I then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different than the thing I inputted.
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
``` | 27 | Nested lists are zipped unexpectedly
I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
{"bottom": 2}
]
}]
}
```
I then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different than the thing I inputted.
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
```
Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.
See the [documentation](https://huggingface.co./docs/datasets/features.html?highlight=features) for more details |
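A small self-contained sketch of the behaviour described in the answer (toy data only): a `Sequence` of a dict is stored as a dict of lists, so a nested example like the one above comes back "zipped".
```python
from datasets import Dataset, Features, Sequence, Value

features = Features({"top": Sequence({"middle": Sequence({"bottom": Value("int32")})})})
ds = Dataset.from_dict(
    {"top": [[{"middle": [{"bottom": 1}, {"bottom": 2}]}]]},
    features=features,
)
# Sequence-of-dict is stored as dict-of-lists, so the row comes back "zipped":
print(ds[0])  # {'top': {'middle': [{'bottom': [1, 2]}]}}
```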
https://github.com/huggingface/datasets/issues/888 | Nested lists are zipped unexpectedly | Thanks.
This is a bit (very) confusing, but I guess if it's intended, I'll just work with it as if it's how my data was originally structured :)
| I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
{"bottom": 2}
]
}]
}
```
I then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different than the thing I inputted.
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
``` | 28 | Nested lists are zipped unexpectedly
I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
{"bottom": 2}
]
}]
}
```
I then load my dataset:
```python
train = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different than the thing I inputted.
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
```
Thanks.
This is a bit (very) confusing, but I guess if it's intended, I'll just work with it as if it's how my data was originally structured :)
|
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.
With the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype="float32")` for example), since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.
For now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.
What do you think ? | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 85 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.
With the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype="float32")` for example), since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.
For now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.
What do you think ? |
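As a hedged sketch of the nested `Sequence` workaround suggested above for a dynamic first dimension (only the feature definition is shown; the rest of the dataset script is omitted):
```python
import datasets

# A (num_frames, 137, 2) float array with a dynamic first dimension, written as
# nested Sequences instead of a fixed-shape ArrayXD feature.
features = datasets.Features(
    {
        "pose": datasets.Sequence(                  # dynamic number of frames
            datasets.Sequence(                      # 137 keypoints per frame
                datasets.Sequence(datasets.Value("float32"), length=2),
                length=137,
            )
        )
    }
)
print(features)
```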
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | > Yes right now ArrayXD can only be used as a column feature type, not a subtype.
Meaning it can't be nested under `Sequence`?
If so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested. | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 45 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
> Yes right now ArrayXD can only be used as a column feature type, not a subtype.
Meaning it can't be nested under `Sequence`?
If so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested. |
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Yea unfortunately..
That's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.
We already have an ExtensionArray that allows us to use them as column types but not for subtypes.
Maybe we can extend it, I haven't experimented with that yet | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 48 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yea unfortunately..
That's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.
We already have an ExtensionArray that allows us to use them as column types but not for subtypes.
Maybe we can extend it, I haven't experimented with that yet |
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Cool
So please consider this issue as a feature request for:
```
Array3D(shape=(None, 137, 2), dtype="float32")
```
it's a way to represent videos, poses, and other cool sequences | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 28 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Cool
So please consider this issue as a feature request for:
```
Array3D(shape=(None, 137, 2), dtype="float32")
```
it's a way to represent videos, poses, and other cool sequences
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | @lhoestq well, so sequence of sequences doesn't work either...
```
pyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```
| I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 23 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq well, so sequence of sequences doesn't work either...
```
pyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```
|
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Working with Arrow can be quite fun sometimes.
You can fix this issue by trying to reduce the writer batch size (the same trick as the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).
Let me know if it works.
I haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week. | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 67 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Working with Arrow can be quite fun sometimes.
You can fix this issue by trying to reduce the writer batch size (the same trick as the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).
Let me know if it works.
I haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week. |
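For reference, a hedged sketch of the "reduce the writer batch size" trick mentioned above, applied to a `GeneratorBasedBuilder`; the class name and the value 16 are made up, and `_split_generators`/`_generate_examples` are omitted.
```python
import datasets

class PoseDataset(datasets.GeneratorBasedBuilder):
    # Write examples to the Arrow file in small batches so that only a few large
    # examples are kept in RAM at a time while the dataset is being generated.
    DEFAULT_WRITER_BATCH_SIZE = 16

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"pose": datasets.Array2D(shape=(137, 2), dtype="float32")}
            )
        )
```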
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)
Loading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough
| I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 37 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)
Loading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough
|
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported. | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 42 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported. |
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Hi @lhoestq,
Any chance you have some updates on supporting `ArrayXD` as a subtype or on support for dynamically sized arrays?
e.g.:
`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))`
`Array3D(shape=(None, 137, 2), dtype="float32")` | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 29 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi @lhoestq,
Any chance you have some updates on supporting `ArrayXD` as a subtype or on support for dynamically sized arrays?
e.g.:
`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))`
`Array3D(shape=(None, 137, 2), dtype="float32")` |
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Hi ! We haven't worked on this lately and it's not in our very short-term roadmap since it requires a bit of work to make it work with arrow. Though this will definitely be added at one point. | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 38 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi ! We haven't worked on this lately and it's not in our very short-term roadmap since it requires a bit of work to make it work with arrow. Though this will definitely be added at one point.
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | @lhoestq, thanks for the update.
I actually tried to modify some piece of code to make it work. Can you please tell me if I am missing anything here?
I think that for the vast majority of cases it's enough to make the first dimension of the array dynamic, i.e. `shape=(None, 100, 100)`. For that, it's enough to modify the class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output a list of arrays of different sizes instead of a list of arrays of the same size (current version)
Below are my modifications of this class.
```
class ArrayExtensionArray(pa.ExtensionArray):
def __array__(self):
zero_copy_only = _is_zero_copy_only(self.storage.type)
return self.to_numpy(zero_copy_only=zero_copy_only)
def __getitem__(self, i):
return self.storage[i]
def to_numpy(self, zero_copy_only=True):
storage: pa.ListArray = self.storage
size = 1
for i in range(self.type.ndims):
size *= self.type.shape[i]
storage = storage.flatten()
numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)
numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)
return numpy_arr
def to_list_of_numpy(self, zero_copy_only=True):
storage: pa.ListArray = self.storage
shape = self.type.shape
arrays = []
for dim in range(1, self.type.ndims):
assert shape[dim] is not None, f"Support only dynamic size on first dimension. Got: {shape}"
first_dim_offsets = np.array([off.as_py() for off in storage.offsets])
for i in range(len(storage)):
storage_el = storage[i:i+1]
first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]
# flatten storage
for dim in range(self.type.ndims):
storage_el = storage_el.flatten()
numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)
arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))
return arrays
def to_pylist(self):
zero_copy_only = _is_zero_copy_only(self.storage.type)
if self.type.shape[0] is None:
return self.to_list_of_numpy(zero_copy_only=zero_copy_only)
else:
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
```
I ran a few tests and it works as expected. Let me know what you think. | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 224 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq, thanks for the update.
I actually tried to modify some piece of code to make it work. Can you please tell me if I am missing anything here?
I think that for the vast majority of cases it's enough to make the first dimension of the array dynamic, i.e. `shape=(None, 100, 100)`. For that, it's enough to modify the class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output a list of arrays of different sizes instead of a list of arrays of the same size (current version)
Below are my modifications of this class.
```
class ArrayExtensionArray(pa.ExtensionArray):
def __array__(self):
zero_copy_only = _is_zero_copy_only(self.storage.type)
return self.to_numpy(zero_copy_only=zero_copy_only)
def __getitem__(self, i):
return self.storage[i]
def to_numpy(self, zero_copy_only=True):
storage: pa.ListArray = self.storage
size = 1
for i in range(self.type.ndims):
size *= self.type.shape[i]
storage = storage.flatten()
numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)
numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)
return numpy_arr
def to_list_of_numpy(self, zero_copy_only=True):
storage: pa.ListArray = self.storage
shape = self.type.shape
arrays = []
for dim in range(1, self.type.ndims):
assert shape[dim] is not None, f"Support only dynamic size on first dimension. Got: {shape}"
first_dim_offsets = np.array([off.as_py() for off in storage.offsets])
for i in range(len(storage)):
storage_el = storage[i:i+1]
first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]
# flatten storage
for dim in range(self.type.ndims):
storage_el = storage_el.flatten()
numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)
arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))
return arrays
def to_pylist(self):
zero_copy_only = _is_zero_copy_only(self.storage.type)
if self.type.shape[0] is None:
return self.to_list_of_numpy(zero_copy_only=zero_copy_only)
else:
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
```
I ran a few tests and it works as expected. Let me know what you think.
https://github.com/huggingface/datasets/issues/887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | Thanks for diving into this !
Indeed, focusing on making the first dimensions dynamic makes total sense (and users could still re-order their dimensions to match this constraint).
Your code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.
Feel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.
In particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):
```python
from datasets import Dataset, Features, Array3D
# this works
matrix = [[1, 0], [0, 1]]
features = Features({"a": Array3D(dtype="int32", shape=(1, 2, 2))})
d = Dataset.from_dict({"a": [[matrix], [matrix]]})
print(d.to_pandas())
# this should work as well
matrix = [[1, 0], [0, 1]]
features = Features({"a": Array3D(dtype="int32", shape=(None, 2, 2))})
d = Dataset.from_dict({"a": [[matrix], [matrix] * 2]})
print(d.to_pandas())
```
I'll be happy to help you on this :) | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | 164 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and their types
features=datasets.Features(
{
"pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
}
),
homepage=_HOMEPAGE,
citation=_CITATION,
)
def _generate_examples(self):
""" Yields examples. """
yield 1, {
"pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
}
```
But this doesn't work -
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Thanks for diving into this !
Indeed, focusing on making the first dimensions dynamic makes total sense (and users could still re-order their dimensions to match this constraint).
Your code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.
Feel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.
In particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):
```python
from datasets import Dataset, Features, Array3D
# this works
matrix = [[1, 0], [0, 1]]
features = Features({"a": Array3D(dtype="int32", shape=(1, 2, 2))})
d = Dataset.from_dict({"a": [[matrix], [matrix]]})
print(d.to_pandas())
# this should work as well
matrix = [[1, 0], [0, 1]]
features = Features({"a": Array3D(dtype="int32", shape=(None, 2, 2))})
d = Dataset.from_dict({"a": [[matrix], [matrix] * 2]})
print(d.to_pandas())
```
I'll be happy to help you on this :) |
https://github.com/huggingface/datasets/issues/883 | Downloading/caching only a part of a datasets' dataset. | I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.
This makes the task impossible with limited memory resources. | Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | 40 | Downloading/caching only a part of a datasets' dataset.
Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir
I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.
This makes the task impossible with limited memory resources. |
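As a side note (not from the thread): the `split` argument already lets you request just the validation set, and more recent releases add a `streaming` mode that avoids materialising the whole dataset on disk, although a script-based dataset may still download its full source files when not streaming. A minimal sketch:
```python
# Minimal sketch: request only the validation split.
# Note: for script-based datasets the full source files may still be downloaded first.
from datasets import load_dataset

nq_dev = load_dataset("natural_questions", split="validation")

# In newer releases, streaming (where supported) avoids downloading and caching everything:
nq_dev_stream = load_dataset("natural_questions", split="validation", streaming=True)
print(next(iter(nq_dev_stream)).keys())
```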
https://github.com/huggingface/datasets/issues/880 | Add SQA | I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co./docs/datasets/share_dataset.html).
| 16 | Add SQA
## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co./docs/datasets/share_dataset.html).
I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq |
https://github.com/huggingface/datasets/issues/880 | Add SQA | @thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:
```python
import pandas as pd
import ast
data = pd.read_csv("/content/sqa_data/random-split-1-dev.tsv", sep='\t')
def _parse_answer_coordinates(answer_coordinate_str):
"""Parses the answer_coordinates of a question.
Args:
answer_coordinate_str: A string representation of a Python list of tuple
strings.
For example: "['(1, 4)','(1, 3)', ...]"
"""
try:
answer_coordinates = []
# make a list of strings
coords = ast.literal_eval(answer_coordinate_str)
# parse each string as a tuple
for row_index, column_index in sorted(
ast.literal_eval(coord) for coord in coords):
answer_coordinates.append((row_index, column_index))
except SyntaxError:
raise ValueError('Unable to evaluate %s' % answer_coordinate_str)
return answer_coordinates
def _parse_answer_text(answer_text):
"""Populates the answer_texts field of `answer` by parsing `answer_text`.
Args:
answer_text: A string representation of a Python list of strings.
For example: "[u'test', u'hello', ...]"
"""
try:
answer = []
for value in ast.literal_eval(answer_text):
answer.append(value)
except SyntaxError:
raise ValueError('Unable to evaluate %s' % answer_text)
return answer
data['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))
data['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))
```
Here I'm using Pandas to read in one of the TSV files (the dev set).
| ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co./docs/datasets/share_dataset.html).
| 185 | Add SQA
## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/
- **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253
- **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).
Note 1: this dataset actually consists of 2 types of files:
1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)
2) a folder of csv files, which contain the actual tabular data
Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.
Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.
Instructions to add a new dataset can be found [here](https://huggingface.co./docs/datasets/share_dataset.html).
@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:
```python
import pandas as pd
import ast
data = pd.read_csv("/content/sqa_data/random-split-1-dev.tsv", sep='\t')
def _parse_answer_coordinates(answer_coordinate_str):
"""Parses the answer_coordinates of a question.
Args:
answer_coordinate_str: A string representation of a Python list of tuple
strings.
For example: "['(1, 4)','(1, 3)', ...]"
"""
try:
answer_coordinates = []
# make a list of strings
coords = ast.literal_eval(answer_coordinate_str)
# parse each string as a tuple
for row_index, column_index in sorted(
ast.literal_eval(coord) for coord in coords):
answer_coordinates.append((row_index, column_index))
except SyntaxError:
raise ValueError('Unable to evaluate %s' % answer_coordinate_str)
return answer_coordinates
def _parse_answer_text(answer_text):
"""Populates the answer_texts field of `answer` by parsing `answer_text`.
Args:
answer_text: A string representation of a Python list of strings.
For example: "[u'test', u'hello', ...]"
"""
try:
answer = []
for value in ast.literal_eval(answer_text):
answer.append(value)
except SyntaxError:
raise ValueError('Unable to evaluate %s' % answer_text)
return answer
data['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))
data['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))
```
Here I'm using Pandas to read in one of the TSV files (the dev set).
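As a follow-up (not from the thread), the parsed DataFrame can then be wrapped in a `Dataset` directly; a minimal sketch, assuming `data` is the DataFrame produced above:
```python
# Minimal sketch: wrap the parsed DataFrame in a datasets.Dataset.
# Assumes `data` is the DataFrame from the snippet above.
from datasets import Dataset

# Store the coordinates as plain lists so that Arrow infers a simple list type.
data["answer_coordinates"] = data["answer_coordinates"].apply(
    lambda coords: [list(coord) for coord in coords]
)
sqa_dev = Dataset.from_pandas(data)
print(sqa_dev)
```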
|
https://github.com/huggingface/datasets/issues/879 | boolq does not load | Hi ! It runs on my side without issues. I tried
```python
from datasets import load_dataset
load_dataset("boolq")
```
What version of datasets and tensorflow are you running?
Also if you manage to get a minimal reproducible script (on google colab for example) that would be useful. | Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
| 47 | boolq does not load
Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
Hi ! It runs on my side without issues. I tried
```python
from datasets import load_dataset
load_dataset("boolq")
```
What version of datasets and tensorflow are you running?
Also if you manage to get a minimal reproducible script (on google colab for example) that would be useful. |
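For reference, a trivial sketch (not from the thread) of how to answer the version question asked above:
```python
# Print the installed versions of the two libraries mentioned above.
import datasets
import tensorflow as tf

print("datasets:", datasets.__version__)
print("tensorflow:", tf.__version__)
```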
https://github.com/huggingface/datasets/issues/879 | boolq does not load | hey
I do the exact same commands. For me it fails; I guess it might be an issue with
caching maybe?
thanks
best
rabeeh
On Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <[email protected]>
wrote:
> Hi ! It runs on my side without issues. I tried
>
> from datasets import load_datasetload_dataset("boolq")
>
> What version of datasets and tensorflow are your runnning ?
> Also if you manage to get a minimal reproducible script (on google colab
> for example) that would be useful.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>
> .
>
| Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
| 117 | boolq does not load
Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
hey
I do the exact same commands. For me it fails; I guess it might be an issue with
caching maybe?
thanks
best
rabeeh
On Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <[email protected]>
wrote:
> Hi ! It runs on my side without issues. I tried
>
> from datasets import load_datasetload_dataset("boolq")
>
> What version of datasets and tensorflow are your runnning ?
> Also if you manage to get a minimal reproducible script (on google colab
> for example) that would be useful.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>
> .
>
|
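If a stale cache is the suspicion raised above, forcing a fresh download is one way to rule it out; a minimal sketch (not from the thread; recent releases accept the string form of `download_mode`):
```python
# Minimal sketch: force a fresh download of boolq to rule out a corrupted cache.
# Older releases use download_mode=GenerateMode.FORCE_REDOWNLOAD instead of the string.
from datasets import load_dataset

ds = load_dataset("boolq", download_mode="force_redownload")
print(ds)
```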
https://github.com/huggingface/datasets/issues/879 | boolq does not load | Could you check if it works on the master branch ?
You can use `load_dataset("boolq", script_version="master")` to do so.
We did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881 | Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
| 43 | boolq does not load
Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
Could you check if it works on the master branch ?
You can use `load_dataset("boolq", script_version="master")` to do so.
We did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881 |
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | > neat feature
I didn't get these clearly, can you please elaborate on how to work on these? | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 18 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
> neat feature
I didn't get these clearly, can you please elaborate on how to work on these?
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no? | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 18 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no? |
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | Thanks thomwolf and julien-c
I'm still confused about what you guys said,
I have solved the problem as follows:
1. Read the csv file using pandas from S3
2. Convert it to a dictionary with column names as keys and column data as lists
3. Convert it to a Dataset using
`from datasets import Dataset`
`train_dataset = Dataset.from_dict(train_dict)` | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 55 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
Thanks thomwolf and julien-c
I'm still confused about what you guys said,
I have solved the problem as follows:
1. Read the csv file using pandas from S3
2. Convert it to a dictionary with column names as keys and column data as lists
3. Convert it to a Dataset using
`from datasets import Dataset`
`train_dataset = Dataset.from_dict(train_dict)` |
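A minimal sketch of the three-step workaround described above (not from the thread; it assumes `s3fs` is installed so that pandas can read the `s3://` path directly, and it reuses the placeholder paths from the issue):
```python
# Minimal sketch of the workaround: pandas reads from S3, then Dataset.from_dict.
# Requires s3fs to be installed and AWS credentials to be configured.
import pandas as pd
from datasets import Dataset

train_df = pd.read_csv("s3://xxxxxxxxxx/xxxxxxxxxx/train.csv")
train_dict = train_df.to_dict(orient="list")  # {column name: list of values}
train_dataset = Dataset.from_dict(train_dict)
print(train_dataset)
```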
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | We were brainstorming around your use-case.
Let's keep the issue open for now, I think this is an interesting question to think about. | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 23 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
We were brainstorming around your use-case.
Let's keep the issue open for now, I think this is an interesting question to think about. |
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | > We were brainstorming around your use-case.
>
> Let's keep the issue open for now, I think this is an interesting question to think about.
Sure thomwolf, Thanks for your concern | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 32 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
> We were brainstorming around your use-case.
>
> Let's keep the issue open for now, I think this is an interesting question to think about.
Sure thomwolf, Thanks for your concern |
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | I agree it would be cool to have that feature. Also that's good to know that pandas supports this.
For the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 45 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
I agree it would be cool to have that feature. Also that's good to know that pandas supports this.
For the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files |
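A minimal sketch of the "download the files locally first, then load them" suggestion above (not from the thread; it assumes `s3fs` is installed and AWS credentials are configured, and it reuses the placeholder paths from the issue):
```python
# Minimal sketch: copy the files from S3 to local disk, then load them as CSV.
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem()
for split in ("train", "validation", "test"):
    fs.get(f"s3://xxxxxxxxxx/xxxxxxxxxx/{split}.csv", f"{split}.csv")

datasets_dict = load_dataset(
    "csv",
    data_files={"train": "train.csv", "validation": "validation.csv", "test": "test.csv"},
)
print(datasets_dict)
```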
https://github.com/huggingface/datasets/issues/878 | Loading Data From S3 Path in Sagemaker | Any updates on this issue?
I face a similar issue. I have many parquet files in S3 and I would like to train on them.
To be honest I even face issues with only getting the last layer embedding out of them. | In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load | 42 | Loading Data From S3 Path in Sagemaker
In Sagemaker Im tring to load the data set from S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)`
I getting an error of
`algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv`
But when im trying with pandas , it is able to load from S3
Does the datasets library support S3 path to load
Any updates on this issue?
I face a similar issue. I have many parquet files in S3 and I would like to train on them.
To be honest I even face issues with only getting the last layer embedding out of them. |
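For the Parquet files mentioned above, a similar workaround applies; a minimal sketch (not from the thread; it assumes `s3fs` and `pyarrow` are installed, and the file path is a placeholder):
```python
# Minimal sketch: read one Parquet file from S3 with pandas, then wrap it in a Dataset.
import pandas as pd
from datasets import Dataset

df = pd.read_parquet("s3://xxxxxxxxxx/xxxxxxxxxx/part-00000.parquet")  # placeholder path
ds = Dataset.from_pandas(df)
print(ds)
```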
# Dataset Card for Dataset Name
## Dataset Summary in English
This customized dataset is made of a corpus of common GitHub issues, typically used for tracking bugs or feature requests within a repository. This self-constructed corpus can serve multiple purposes, such as analyzing the time taken to resolve open issues or pull requests, training a classifier to tag issues based on their descriptions (e.g., "bug," "enhancement," "question"), or developing a semantic search engine for finding relevant issues based on user queries.
## Résumé du jeu de données en français
Ce jeu de données personnalisé est constitué d'un corpus de problèmes couramment rencontrés sur GitHub, généralement utilisés pour le suivi des bugs ou des fonctionnalités au sein des repositories. Ce corpus auto construit peut servir à de multiples fins, telles que l'analyse du temps nécessaire pour résoudre les problèmes ouverts ou les demandes d'extraction, l'entraînement d'un classificateur pour étiqueter les problèmes sur la base de leurs descriptions (par exemple, "bug", "amélioration", "question"), ou le développement d'un moteur de recherche sémantique pour trouver des problèmes pertinents sur la base des requêtes de l'utilisateur.
## Languages
English
## Dataset Structure
### Data Splits
Train
## Personal and Sensitive Information
Not applicable.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
Possible. Comments within the dataset were not monitored and are uncensored.
## Licensing Information
Apache 2.0
## Citation Information