---
dataset_info:
  features:
    - name: approver_id
      dtype: float64
    - name: bit_flags
      dtype: int64
    - name: created_at
      dtype: string
    - name: down_score
      dtype: int64
    - name: fav_count
      dtype: int64
    - name: file_ext
      dtype: string
    - name: file_size
      dtype: int64
    - name: file_url
      dtype: string
    - name: has_active_children
      dtype: bool
    - name: has_children
      dtype: bool
    - name: has_large
      dtype: bool
    - name: has_visible_children
      dtype: bool
    - name: id
      dtype: int64
    - name: image_height
      dtype: int64
    - name: image_width
      dtype: int64
    - name: is_banned
      dtype: bool
    - name: is_deleted
      dtype: bool
    - name: is_flagged
      dtype: bool
    - name: is_pending
      dtype: bool
    - name: large_file_url
      dtype: string
    - name: last_comment_bumped_at
      dtype: string
    - name: last_commented_at
      dtype: string
    - name: last_noted_at
      dtype: string
    - name: md5
      dtype: string
    - name: media_asset_created_at
      dtype: string
    - name: media_asset_duration
      dtype: float64
    - name: media_asset_file_ext
      dtype: string
    - name: media_asset_file_key
      dtype: string
    - name: media_asset_file_size
      dtype: int64
    - name: media_asset_id
      dtype: int64
    - name: media_asset_image_height
      dtype: int64
    - name: media_asset_image_width
      dtype: int64
    - name: media_asset_is_public
      dtype: bool
    - name: media_asset_md5
      dtype: string
    - name: media_asset_pixel_hash
      dtype: string
    - name: media_asset_status
      dtype: string
    - name: media_asset_updated_at
      dtype: string
    - name: media_asset_variants
      dtype: string
    - name: parent_id
      dtype: float64
    - name: pixiv_id
      dtype: float64
    - name: preview_file_url
      dtype: string
    - name: rating
      dtype: string
    - name: score
      dtype: int64
    - name: source
      dtype: string
    - name: tag_count
      dtype: int64
    - name: tag_count_artist
      dtype: int64
    - name: tag_count_character
      dtype: int64
    - name: tag_count_copyright
      dtype: int64
    - name: tag_count_general
      dtype: int64
    - name: tag_count_meta
      dtype: int64
    - name: tag_string
      dtype: string
    - name: tag_string_artist
      dtype: string
    - name: tag_string_character
      dtype: string
    - name: tag_string_copyright
      dtype: string
    - name: tag_string_general
      dtype: string
    - name: tag_string_meta
      dtype: string
    - name: up_score
      dtype: int64
    - name: updated_at
      dtype: string
    - name: uploader_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 20051410186
      num_examples: 8616173
  download_size: 7310216883
  dataset_size: 20051410186
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - text-to-image
  - image-classification
language:
  - en
  - ja
pretty_name: Danbooru 2025 Metadata
size_categories:
  - 1M<n<10M
---

# Dataset Card for Danbooru 2025 Metadata

This dataset repo provides comprehensive, up-to-date metadata for the Danbooru imageboard. All metadata was freshly scraped starting on January 2, 2025, which yields more extensive tag annotations for older posts, fewer errors, and fewer unlabelled AI-generated images in the data.

## Dataset Details

### What is this?

A refreshed, Parquet-formatted metadata dump of Danbooru, current as of January 2, 2025.

### Why this over other Danbooru scrapes?

- **Fresh metadata:** Coverage includes post IDs from 1 through ~8.6M, with the newest vocabulary and tag annotations.
- **Maximized tag count:** Many historical tag renames and additions are accurately reflected, reducing duplicates in downstream tasks.
- **Reduced noise:** Fewer untagged or mislabeled AI images compared to older scrapes.

### Tag Comparisons

[TODO: Contrast the tag counts, deleted entries, etc. with other Danbooru metadata scrapes.]

- **Shared by:** trojblue
- **Language(s) (NLP):** English, Japanese
- **License:** MIT

## Uses

The dataset can be loaded or filtered with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

danbooru_dataset = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_dataset.to_pandas()
```

This dataset can be used to:

- Retrieve the full Danbooru image set via the metadata's URLs
- Train or fine-tune an image tagger
- Compare against previous metadata versions to track changes, tag evolution, and historical trends
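For instance, one might filter the metadata to high-scoring, non-deleted posts before downloading. The sketch below uses a toy DataFrame mimicking the schema above; the score threshold of 50 is an arbitrary illustration, not a recommendation:

```python
import pandas as pd

# Toy rows mimicking the dataset schema (values are illustrative only)
df = pd.DataFrame({
    "id": [1, 2, 3],
    "score": [50, 2, 120],
    "is_deleted": [False, True, False],
    "file_url": [
        "https://cdn.donmai.us/original/aa/bb/aabb.jpg",
        "https://cdn.donmai.us/original/cc/dd/ccdd.jpg",
        "https://cdn.donmai.us/original/ee/ff/eeff.png",
    ],
})

# Keep non-deleted posts above a chosen score threshold, then collect URLs
urls = df.loc[(df["score"] >= 50) & (~df["is_deleted"]), "file_url"].tolist()
print(urls)
```

The same boolean-mask pattern works on the full 8.6M-row frame once loaded.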

## Dataset Structure

Below is a partial overview of the DataFrame columns, derived directly from the Danbooru JSONs:

```python
import unibox as ub

ub.peeks(df)
```

```text
(8616173, 59)
Index(['approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
       'file_ext', 'file_size', 'file_url', 'has_active_children',
       'has_children', 'has_large', 'has_visible_children', 'id',
       'image_height', 'image_width', 'is_banned', 'is_deleted', 'is_flagged',
       'is_pending', 'large_file_url', 'last_comment_bumped_at',
       'last_commented_at', 'last_noted_at', 'md5', 'media_asset_created_at',
       'media_asset_duration', 'media_asset_file_ext', 'media_asset_file_key',
       'media_asset_file_size', 'media_asset_id', 'media_asset_image_height',
       'media_asset_image_width', 'media_asset_is_public', 'media_asset_md5',
       'media_asset_pixel_hash', 'media_asset_status',
       'media_asset_updated_at', 'media_asset_variants', 'parent_id',
       'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
       'tag_count', 'tag_count_artist', 'tag_count_character',
       'tag_count_copyright', 'tag_count_general', 'tag_count_meta',
       'tag_string', 'tag_string_artist', 'tag_string_character',
       'tag_string_copyright', 'tag_string_general', 'tag_string_meta',
       'up_score', 'updated_at', 'uploader_id'],
      dtype='object')
```

A truncated preview of the first rows:

```text
  approver_id bit_flags created_at                     down_score fav_count file_ext file_size file_url                                          ... tag_string_artist tag_string_character                    up_score updated_at                     uploader_id
0 NaN         0         2015-08-07T23:23:45.072-04:00  0          66        jpg      4134797   https://cdn.donmai.us/original/a1/b3/a1b3d0fa9... ... kyouka.           marie_(splatoon)                    15       2024-06-25T15:32:44.291-04:00  420773
1 NaN         0         2008-03-05T01:52:28.194-05:00  0          7         jpg      380323    https://cdn.donmai.us/original/d6/10/d6107a13b... ... shimeko           hatsune_miku                        4        2018-01-23T00:32:10.080-05:00  1309
2 85307.0     0         2015-08-07T23:26:12.355-04:00  0          10        jpg      208409    https://cdn.donmai.us/original/a1/2c/a12ce629f... ... yuuryuu_nagare    jon_(pixiv_fantasia_iii)            3        2022-05-25T02:26:06.588-04:00  95963
```
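As the preview shows, the `tag_string*` columns hold space-separated tags. Splitting them into lists is a one-liner with pandas; the frame below is a hypothetical two-row sample, not real data:

```python
import pandas as pd

# Minimal frame mimicking the tag_string column (values are illustrative)
df = pd.DataFrame({"tag_string": ["1girl solo smile", "2girls outdoors"]})

# str.split() with no argument splits on runs of whitespace
df["tags"] = df["tag_string"].str.split()
print(df["tags"].tolist())
```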

## Dataset Creation

We scraped all post IDs on Danbooru from 1 up to the latest. Some restricted tags (e.g. loli) are hidden by the site and require a Gold account to access, so they are not present.
For a more complete (but older) metadata reference, you may wish to combine this with Danbooru2021 or similar previous scrapes.
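Such a combination could be sketched as a concat that prefers this scrape's rows and backfills only IDs missing here. The frames and schemas below are toy stand-ins (the real Danbooru2021 layout may differ); only the `id` join key is taken from this card:

```python
import pandas as pd

# Illustrative stand-ins for the two metadata frames
new_meta = pd.DataFrame({"id": [1, 2, 3], "tag_string": ["a b", "c", "d e f"]})
old_meta = pd.DataFrame({"id": [3, 4], "tag_string": ["d", "g h"]})

# Prefer fresh rows; fall back to the older scrape for ids absent here
combined = pd.concat([new_meta, old_meta[~old_meta["id"].isin(new_meta["id"])]])
combined = combined.sort_values("id").reset_index(drop=True)
print(combined["id"].tolist())
```

Note that id 3 keeps the fresher `tag_string` ("d e f") because the older row is dropped before concatenation.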

The scraping process used a pool of roughly 400 IPs over about six hours; completing the scrape in such a short window keeps tag definitions consistent across the whole dump. Below is a simplified example of the process used to convert the metadata into Parquet:

```python
import pandas as pd
from pandarallel import pandarallel

# Initialize pandarallel
pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    """Flattens a nested dictionary; list values are joined into one string."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    """Parses and flattens a Danbooru post JSON into a pandas Series."""
    return pd.Series(flatten_dict(json_content))

def dicts_to_dataframe_parallel(dicts):
    """Converts a list of post dicts to a flattened DataFrame using pandarallel."""
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
```
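As a cross-check, pandas' built-in `json_normalize` performs the same nested-dict flattening with the same `_` separator (one difference: it leaves list fields as lists instead of joining them into a string). The record below is a toy example, not a real post:

```python
import pandas as pd

# Toy nested record shaped like a Danbooru post (values are made up)
record = {
    "id": 123,
    "media_asset": {"id": 456, "file_ext": "jpg"},
}

# Flatten with "_" so nested keys become media_asset_id, media_asset_file_ext
flat = pd.json_normalize(record, sep="_")
print(flat.columns.tolist())
```

This explains column names like `media_asset_image_height` in the schema: they are the nested `media_asset` object's fields, flattened with an underscore separator.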

## Recommendations

Users should be aware of potential biases and limitations, including the presence of adult content under some ratings and tags. Additional filtering or mitigation may be needed depending on the downstream use.
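If adult content is a concern, the `rating` column can be used to filter. Danbooru ratings are single letters (`g` = general, `s` = sensitive, `q` = questionable, `e` = explicit), so a conservative filter might keep only `g`. The frame below is an illustrative sample:

```python
import pandas as pd

# Toy frame with one post per rating level (illustrative ids)
df = pd.DataFrame({"id": [1, 2, 3, 4],
                   "rating": ["g", "s", "q", "e"]})

# Keep only general-rated posts
safe = df[df["rating"] == "g"]
print(safe["id"].tolist())
```

Use `df["rating"].isin(["g", "s"])` for a less restrictive cut.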