Dataset viewer preview (4.93k rows):

| image | label (class label, 90 classes) |
|---|---|
| (image, width 145 px) | 0: antelope |
| (image, width 145 px) | 1: badger |
| (image, width 145 px) | 2: bat |
| … | … |
# Dataset Card for animal-wildlife
This dataset is a port of the "Animal Image Dataset" that you can find on Kaggle. The dataset contains 60 pictures for each of 90 types of animals, in various image sizes. Compared to the original dataset, I created train/test split partitions (80%/20%) to make it compatible with Hugging Face `datasets`.
**Note.** At the time of writing, the Croissant ML metadata lists the original license of the data as `sc:CreativeWork`. If you believe this dataset violates any license, please open an issue in the Discussions tab so I can take action as soon as possible.
## How to use this data
```python
from datasets import load_dataset

# for exploration
ds = load_dataset("lucabaggi/animal-wildlife", split="train")

# for training
ds = load_dataset("lucabaggi/animal-wildlife")
```
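The `label` column stores integer class ids rather than strings; `datasets` exposes the real id-to-name mapping via `ds.features["label"].int2str`. A minimal pure-Python sketch of what that mapping does, using a hypothetical subset of the 90 class names:

```python
# Hypothetical subset of the 90 class names, ordered by label id
# (the real list is ds.features["label"].names).
class_names = ["antelope", "badger", "bat"]

def int2str(label_id: int) -> str:
    """Map an integer label id back to its class name."""
    return class_names[label_id]

print(int2str(1))  # badger
```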
## How the data was generated
You can find the source code for the extraction pipeline here. Note: it was partly generated with Claude 3 and Codestral 😎😅 Please feel free to open an issue in the Discussions section if you wish to improve the code.
```console
$ uv run --python=3.11 -- python -m extract --help
usage: extract.py [-h] [--destination-dir DESTINATION_DIR] [--split-ratio SPLIT_RATIO] [--random-seed RANDOM_SEED] [--remove-zip] zip_file

Reorganize dataset.

positional arguments:
  zip_file              Path to the zip file.

options:
  -h, --help            show this help message and exit
  --destination-dir DESTINATION_DIR
                        Path to the destination directory.
  --split-ratio SPLIT_RATIO
                        Ratio of data to be used for training.
  --random-seed RANDOM_SEED
                        Random seed for reproducibility.
  --remove-zip          Whether to remove the source zip archive file after extraction.
```
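The actual pipeline lives in the linked repository; as a rough sketch (pure Python, with a hypothetical helper name), the `--split-ratio` and `--random-seed` options plausibly boil down to a seeded shuffle-and-cut over each class's files:

```python
import random

def split_files(files, split_ratio=0.8, random_seed=42):
    """Shuffle one class's files and cut them into train/test portions
    (mimics the --split-ratio and --random-seed options)."""
    rng = random.Random(random_seed)
    files = sorted(files)  # deterministic starting order
    rng.shuffle(files)
    cut = int(len(files) * split_ratio)
    return files[:cut], files[cut:]

# 60 images per class -> 48 train / 12 test at the default 80% ratio
train, test = split_files([f"antelope_{i:02d}.jpg" for i in range(60)])
print(len(train), len(test))  # 48 12
```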
Example usage:

- Download the data from Kaggle. You can use the Kaggle Python SDK, but that might require an API key if you use it locally.
- Invoke the script:

  ```shell
  uv run --python=3.11 -- python -m extract -- archive.zip
  ```

  This will extract the contents of the zip archive into a `data` directory, splitting the train and test datasets in an 80%/20% ratio.
- Upload to the hub:

  ```python
  from datasets import load_dataset

  ds = load_dataset("imagefolder", data_dir="data")
  ds.push_to_hub("lucabaggi/animal-wildlife")
  ```
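For the `imagefolder` builder to infer splits and labels, the `data` directory is assumed to follow the standard split/class layout, roughly (file names hypothetical):

```
data/
├── train/
│   ├── antelope/
│   │   ├── 001.jpg
│   │   └── ...
│   ├── badger/
│   │   └── ...
│   └── ...
└── test/
    ├── antelope/
    │   └── ...
    └── ...
```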