The how-to guides offer a more comprehensive overview of all the tools 🤗 Datasets offers and how to use them. This will help you tackle messier real-world datasets where you may need to manipulate the dataset structure or content to get it ready for training.
The guides assume you are familiar and comfortable with the 🤗 Datasets basics. We recommend newer users check out our [tutorials](tutorial) first.
<Tip>
Interested in learning more? Take a look at [Chapter 5](https://huggingface.co./course/chapter5/1?fw=pt) of the Hugging Face course!
</Tip>
The guides are organized into six sections:
- <span class="underline decoration-sky-400 decoration-2 font-semibold">General usage</span>: Functions for general dataset loading and processing. The functions shown in this section are applicable across all dataset modalities.
- <span class="underline decoration-pink-400 decoration-2 font-semibold">Audio</span>: How to load, process, and share audio datasets.
- <span class="underline decoration-yellow-400 decoration-2 font-semibold">Vision</span>: How to load, process, and share image and video datasets.
- <span class="underline decoration-green-400 decoration-2 font-semibold">Text</span>: How to load, process, and share text datasets.
- <span class="underline decoration-orange-400 decoration-2 font-semibold">Tabular</span>: How to load, process, and share tabular datasets.
- <span class="underline decoration-indigo-400 decoration-2 font-semibold">Dataset repository</span>: How to share and upload a dataset to the <a href="https://huggingface.co./datasets">Hub</a>.
If you have any questions about 🤗 Datasets, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/datasets/10).
Welcome to the 🤗 Datasets tutorials! These beginner-friendly tutorials will guide you through the fundamentals of working with 🤗 Datasets. You'll load and prepare a dataset for training with your machine learning framework of choice. Along the way, you'll learn how to load different dataset configurations and splits, interact with and see what's inside your dataset, preprocess, and share a dataset to the [Hub](https://huggingface.co./datasets).
The tutorials assume some basic knowledge of Python and a machine learning framework like PyTorch or TensorFlow. If you're already familiar with these, feel free to check out the [quickstart](./quickstart) to see what you can do with 🤗 Datasets.
<Tip>
The tutorials only cover the basic skills you need to use 🤗 Datasets. There are many other useful functionalities and applications that aren't discussed here. If you're interested in learning more, take a look at [Chapter 5](https://huggingface.co./course/chapter5/1?fw=pt) of the Hugging Face course.
</Tip>
If you have any questions about 🤗 Datasets, feel free to join and ask the community on our [forum](https://discuss.huggingface.co/c/datasets/10).
Let's get started! 🏁
Before you start, you'll need to set up your environment and install the appropriate packages. 🤗 Datasets is tested on **Python 3.7+**.
<Tip>
If you want to use 🤗 Datasets with TensorFlow or PyTorch, you'll need to install them separately. Refer to the [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2-packages-are-available) or the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) for the specific install command for your framework.
</Tip>
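If you're not sure whether a framework is already available in your environment, a quick check like the sketch below can help (it only uses the Python standard library):
```python
import importlib.util

# Check whether PyTorch and/or TensorFlow can be imported in this environment.
for framework in ("torch", "tensorflow"):
    status = "installed" if importlib.util.find_spec(framework) else "not installed"
    print(f"{framework}: {status}")
```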
You should install 🤗 Datasets in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep things tidy and avoid dependency conflicts.
1. Create and navigate to your project directory:
```bash
mkdir ~/my-project
cd ~/my-project
```
2. Start a virtual environment inside your directory:
```bash
python -m venv .env
```
3. Activate and deactivate the virtual environment with the following commands:
```bash
# Activate the virtual environment
source .env/bin/activate
# Deactivate the virtual environment
deactivate
```
Once you've created your virtual environment, you can install 🤗 Datasets in it.
The most straightforward way to install 🤗 Datasets is with pip:
```bash
pip install datasets
```
Run the following command to check if 🤗 Datasets has been properly installed:
```bash
python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
```
This command downloads version 1 of the [Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/), loads the training split, and prints the first training example. You should see:
```python
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame'}
```
To work with audio datasets, you need to install the [`Audio`] feature as an extra dependency:
```bash
pip install datasets[audio]
```
<Tip warning={true}>
To decode mp3 files, you need to have at least version 1.1.0 of the `libsndfile` system library. Usually, it's bundled with the Python [`soundfile`](https://github.com/bastibe/python-soundfile) package, which is installed as an extra audio dependency for 🤗 Datasets.
For Linux, the required version of `libsndfile` is bundled with `soundfile` starting from version 0.12.0. You can run the following command to determine which version of `libsndfile` is being used by `soundfile`:
```bash
python -c "import soundfile; print(soundfile.__libsndfile_version__)"
```
</Tip>
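Once the extra is installed, loading and resampling audio looks roughly like the sketch below; the dataset name, configuration, and sampling rate are only illustrative:
```python
from datasets import Audio, load_dataset

# Load an example audio dataset and resample it on the fly via the Audio feature.
dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

sample = dataset[0]["audio"]  # decoded dict with "array", "path", and "sampling_rate"
print(sample["sampling_rate"], sample["array"].shape)
```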
To work with image datasets, you need to install the [`Image`] feature as an extra dependency:
```bash
pip install datasets[vision]
```
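With the extra installed, image columns decode to PIL objects when you index into the dataset. Here is a minimal sketch; the dataset name is only an example:
```python
from datasets import load_dataset

# Load an example image dataset; the "image" column decodes to a PIL.Image.
dataset = load_dataset("beans", split="train")
image = dataset[0]["image"]
print(image.size, image.mode)
```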
Building 🤗 Datasets from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:
```bash
git clone https://github.com/huggingface/datasets.git
cd datasets
pip install -e .
```
Again, you can check if 🤗 Datasets was properly installed with the following command:
```bash
python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
```
🤗 Datasets can also be installed with conda, a package management system:
```bash
conda install -c huggingface -c conda-forge datasets
```
[Arrow](https://arrow.apache.org/) enables large amounts of data to be processed and moved quickly. It is a specific data format that stores data in a columnar memory layout. This provides several significant advantages:
* Arrow's standard format allows [zero-copy reads](https://en.wikipedia.org/wiki/Zero-copy) which removes virtually all serialization overhead.
* Arrow is language-agnostic so it supports different programming languages.
* Arrow is column-oriented so it is faster at querying and processing slices or columns of data.
* Arrow allows for copy-free hand-offs to standard machine learning tools such as NumPy, Pandas, PyTorch, and TensorFlow (see the sketch after this list).
* Arrow supports many, possibly nested, column types.
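As a rough illustration of these hand-offs, here is a sketch; the dataset name is only an example, and the exact copy behavior depends on the column types:
```python
from datasets import load_dataset

dataset = load_dataset("squad", split="train")

# Export the underlying Arrow table to a pandas DataFrame.
df = dataset.to_pandas()
print(df.head())

# Ask for NumPy-formatted outputs when indexing; the data stays Arrow-backed.
numpy_dataset = dataset.with_format("numpy")
print(numpy_dataset[0]["question"])
```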
🤗 Datasets uses Arrow for its local caching system. It allows datasets to be backed by an on-disk cache, which is memory-mapped for fast lookup.
This architecture allows for large datasets to be used on machines with relatively small device memory.
For example, loading the full English Wikipedia dataset only takes a few MB of RAM:
```python
>>> import os; import psutil; import timeit
>>> from datasets import load_dataset
# Process.memory_info is expressed in bytes, so convert to megabytes
>>> mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
>>> wiki = load_dataset("wikipedia", "20220301.en", split="train")
>>> mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
>>> print(f"RAM memory used: {(mem_after - mem_before)} MB")
RAM memory used: 50 MB
```
This is possible because the Arrow data is actually memory-mapped from disk, and not loaded in memory.
Memory-mapping allows access to data on disk, and leverages virtual memory capabilities for fast lookups.
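Continuing the example above, you can list the on-disk Arrow files backing the dataset; the exact paths depend on your cache directory:
```python
# List the Arrow cache files that back the memory-mapped dataset.
print(wiki.cache_files)  # e.g. [{'filename': '.../wikipedia-train.arrow'}]
```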
Iterating over a memory-mapped dataset using Arrow is fast. Iterating over Wikipedia on a laptop gives you speeds of several Gbit/s:
```python
>>> s = """batch_size = 1000
... for batch in wiki.iter(batch_size):
... ...
... """
>>> elapsed_time = timeit.timeit(stmt=s, number=1, globals=globals())
>>> print(f"Time to iterate over the {wiki.dataset_size >> 30} GB dataset: {elapsed_time:.1f} sec, "
... f"ie. {float(wiki.dataset_size >> 27)/elapsed_time:.1f} Gb/s")
Time to iterate over the 18 GB dataset: 31.8 sec, ie. 4.8 Gb/s
```