Datasets documentation

Installation

Before you start, you’ll need to set up your environment and install the appropriate packages. 🤗 Datasets is tested on Python 3.7+.

If you want to use 🤗 Datasets with TensorFlow or PyTorch, you’ll need to install them separately. Refer to the TensorFlow installation page or the PyTorch installation page for the specific install command for your framework.
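
For example, once PyTorch is installed, a dataset can return its columns as torch tensors via with_format. This is a minimal sketch with a toy in-memory dataset, assuming PyTorch is available in your environment:

from datasets import Dataset

# Toy in-memory dataset; with_format("torch") returns numeric columns as torch tensors
ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})
ds = ds.with_format("torch")
print(ds[0]["x"])  # tensor([1., 2.])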

Virtual environment

You should install 🤗 Datasets in a virtual environment to keep things tidy and avoid dependency conflicts.

  1. Create and navigate to your project directory:

    mkdir ~/my-project
    cd ~/my-project
  2. Start a virtual environment inside your directory:

    python -m venv .env
  3. Activate and deactivate the virtual environment with the following commands:

    # Activate the virtual environment
    source .env/bin/activate
    
    # Deactivate the virtual environment
    deactivate

Once you’ve created your virtual environment, you can install 🤗 Datasets in it.

pip

The most straightforward way to install 🤗 Datasets is with pip:

pip install datasets

Run the following command to check if 🤗 Datasets has been properly installed:

python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"

This command downloads version 1 of the Stanford Question Answering Dataset (SQuAD), loads the training split, and prints the first training example. You should see:

{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame'}

Audio

To work with audio datasets, you need to install the Audio feature as an extra dependency:

pip install datasets[audio]

To decode mp3 files, you need to have at least version 1.1.0 of the libsndfile system library. Usually, it’s bundled with the Python soundfile package, which is installed as an extra audio dependency for 🤗 Datasets. On Linux, the required version of libsndfile is bundled with soundfile starting from soundfile version 0.12.0. Run the following command to determine which version of libsndfile soundfile is using:

python -c "import soundfile; print(soundfile.__libsndfile_version__)"

Vision

To work with image datasets, you need to install the Image feature as an extra dependency:

pip install datasets[vision]
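
Once the vision extra is installed, you can attach the Image feature to a dataset column. A minimal sketch, where sample.png is a hypothetical local file used only for illustration:

from datasets import Dataset, Image

# "sample.png" is a hypothetical path used only for illustration
ds = Dataset.from_dict({"image": ["sample.png"]}).cast_column("image", Image())
print(ds.features)  # {'image': Image(...)}
# Accessing ds[0]["image"] would decode the file into a PIL.Image.Image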

source

Building 🤗 Datasets from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:

git clone https://github.com/huggingface/datasets.git
cd datasets
pip install -e .
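
Because -e installs the package in editable mode, Python imports 🤗 Datasets directly from your local clone. A quick sanity check is to print the installed version, which should show a development version string (the exact value varies):

python -c "import datasets; print(datasets.__version__)"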

Again, you can check if 🤗 Datasets was properly installed with the following command:

python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"

conda

🤗 Datasets can also be installed with conda, a package management system:

conda install -c huggingface -c conda-forge datasets