---
dataset_info:
features:
- name: approver_id
dtype: float64
- name: bit_flags
dtype: int64
- name: created_at
dtype: string
- name: down_score
dtype: int64
- name: fav_count
dtype: int64
- name: file_ext
dtype: string
- name: file_size
dtype: int64
- name: file_url
dtype: string
- name: has_active_children
dtype: bool
- name: has_children
dtype: bool
- name: has_large
dtype: bool
- name: has_visible_children
dtype: bool
- name: id
dtype: int64
- name: image_height
dtype: int64
- name: image_width
dtype: int64
- name: is_banned
dtype: bool
- name: is_deleted
dtype: bool
- name: is_flagged
dtype: bool
- name: is_pending
dtype: bool
- name: large_file_url
dtype: string
- name: last_comment_bumped_at
dtype: string
- name: last_commented_at
dtype: string
- name: last_noted_at
dtype: string
- name: md5
dtype: string
- name: media_asset_created_at
dtype: string
- name: media_asset_duration
dtype: float64
- name: media_asset_file_ext
dtype: string
- name: media_asset_file_key
dtype: string
- name: media_asset_file_size
dtype: int64
- name: media_asset_id
dtype: int64
- name: media_asset_image_height
dtype: int64
- name: media_asset_image_width
dtype: int64
- name: media_asset_is_public
dtype: bool
- name: media_asset_md5
dtype: string
- name: media_asset_pixel_hash
dtype: string
- name: media_asset_status
dtype: string
- name: media_asset_updated_at
dtype: string
- name: media_asset_variants
dtype: string
- name: parent_id
dtype: float64
- name: pixiv_id
dtype: float64
- name: preview_file_url
dtype: string
- name: rating
dtype: string
- name: score
dtype: int64
- name: source
dtype: string
- name: tag_count
dtype: int64
- name: tag_count_artist
dtype: int64
- name: tag_count_character
dtype: int64
- name: tag_count_copyright
dtype: int64
- name: tag_count_general
dtype: int64
- name: tag_count_meta
dtype: int64
- name: tag_string
dtype: string
- name: tag_string_artist
dtype: string
- name: tag_string_character
dtype: string
- name: tag_string_copyright
dtype: string
- name: tag_string_general
dtype: string
- name: tag_string_meta
dtype: string
- name: up_score
dtype: int64
- name: updated_at
dtype: string
- name: uploader_id
dtype: int64
splits:
- name: train
num_bytes: 20051410186
num_examples: 8616173
download_size: 7310216883
dataset_size: 20051410186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
task_categories:
- text-to-image
- image-classification
language:
- en
- ja
pretty_name: Danbooru 2025 Metadata
size_categories:
- 1M<n<10M
---
# Dataset Card for Danbooru 2025 Metadata
This dataset repo provides comprehensive, up-to-date metadata for the Danbooru image board. All metadata was freshly scraped starting on **January 2, 2025**, which yields more extensive tag annotations for older posts, fewer errors, and fewer unlabelled AI-generated images in the data.
## Dataset Details
**What is this?**
A refreshed, Parquet-formatted metadata dump of Danbooru, current as of January 2, 2025.
**Why this over other Danbooru scrapes?**
- **Fresh Metadata:** Coverage includes post IDs from 1 through ~8.6M, annotated with the current tag vocabulary.
- **Maximized Tag Count:** Many historical tag renames and additions are accurately reflected, reducing duplicate or stale tags in downstream tasks.
- **Reduced Noise:** Fewer untagged or mislabeled AI images compared to older scrapes.
**Tag Comparisons**
[TODO: Contrast the tag counts, deleted entries, etc. with other Danbooru metadata scrapes.]
- **Shared by:** [trojblue](https://huggingface.co./trojblue)
- **Language(s) (NLP):** English, Japanese
- **License:** MIT
## Uses
The dataset can be loaded or filtered with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
danbooru_dataset = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_dataset.to_pandas()
```
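Note that converting all ~8.6M rows to pandas requires several gigabytes of memory. If that is a concern, a minimal sketch using the library's streaming mode avoids materializing the full table:

```python
from itertools import islice

from datasets import load_dataset

# Stream records instead of downloading and materializing the full table
streamed = load_dataset("trojblue/danbooru2025-metadata", split="train", streaming=True)
for row in islice(streamed, 3):
    print(row["id"], row["rating"], row["fav_count"])
```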
This dataset can be used to:
- Retrieve the full Danbooru image set via the metadata’s URLs (see the filtering sketch after this list)
- Train or fine-tune an image tagger
- Compare against previous metadata versions to track changes, tag evolution, and historical trends
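As a filtering example, here is a minimal sketch that pulls original-image URLs for a single tag; the tag choice is arbitrary, and `file_url` is assumed to be missing for restricted or removed posts:

```python
# Select posts tagged `hatsune_miku` and collect their original-file URLs;
# posts without an accessible file (an assumption) are skipped via notna()
mask = df["tag_string"].str.contains(r"\bhatsune_miku\b", na=False)
urls = df.loc[mask & df["file_url"].notna(), "file_url"].tolist()
print(len(urls), urls[:3])
```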
## Dataset Structure
Below is a partial overview of the DataFrame columns, derived directly from the Danbooru JSONs:
```python
import unibox as ub
ub.peeks(df)
```
```
(8616173, 59)
Index(['approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
'file_ext', 'file_size', 'file_url', 'has_active_children',
'has_children', 'has_large', 'has_visible_children', 'id',
'image_height', 'image_width', 'is_banned', 'is_deleted', 'is_flagged',
'is_pending', 'large_file_url', 'last_comment_bumped_at',
'last_commented_at', 'last_noted_at', 'md5', 'media_asset_created_at',
'media_asset_duration', 'media_asset_file_ext', 'media_asset_file_key',
'media_asset_file_size', 'media_asset_id', 'media_asset_image_height',
'media_asset_image_width', 'media_asset_is_public', 'media_asset_md5',
'media_asset_pixel_hash', 'media_asset_status',
'media_asset_updated_at', 'media_asset_variants', 'parent_id',
'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
'tag_count', 'tag_count_artist', 'tag_count_character',
'tag_count_copyright', 'tag_count_general', 'tag_count_meta',
'tag_string', 'tag_string_artist', 'tag_string_character',
'tag_string_copyright', 'tag_string_general', 'tag_string_meta',
'up_score', 'updated_at', 'uploader_id'],
dtype='object')
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>approver_id</th>
<th>bit_flags</th>
<th>created_at</th>
<th>down_score</th>
<th>fav_count</th>
<th>file_ext</th>
<th>file_size</th>
<th>file_url</th>
<th>has_active_children</th>
<th>has_children</th>
<th>...</th>
<th>tag_count_meta</th>
<th>tag_string</th>
<th>tag_string_artist</th>
<th>tag_string_character</th>
<th>tag_string_copyright</th>
<th>tag_string_general</th>
<th>tag_string_meta</th>
<th>up_score</th>
<th>updated_at</th>
<th>uploader_id</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>NaN</td>
<td>0</td>
<td>2015-08-07T23:23:45.072-04:00</td>
<td>0</td>
<td>66</td>
<td>jpg</td>
<td>4134797</td>
<td>https://cdn.donmai.us/original/a1/b3/a1b3d0fa9...</td>
<td>False</td>
<td>False</td>
<td>...</td>
<td>3</td>
<td>1girl absurdres ass bangle bikini black_bikini...</td>
<td>kyouka.</td>
<td>marie_(splatoon)</td>
<td>splatoon_(series) splatoon_1</td>
<td>1girl ass bangle bikini black_bikini blush bra...</td>
<td>absurdres commentary_request highres</td>
<td>15</td>
<td>2024-06-25T15:32:44.291-04:00</td>
<td>420773</td>
</tr>
<tr>
<th>1</th>
<td>NaN</td>
<td>0</td>
<td>2008-03-05T01:52:28.194-05:00</td>
<td>0</td>
<td>7</td>
<td>jpg</td>
<td>380323</td>
<td>https://cdn.donmai.us/original/d6/10/d6107a13b...</td>
<td>False</td>
<td>False</td>
<td>...</td>
<td>2</td>
<td>1girl aqua_hair bad_id bad_pixiv_id guitar hat...</td>
<td>shimeko</td>
<td>hatsune_miku</td>
<td>vocaloid</td>
<td>1girl aqua_hair guitar instrument long_hair so...</td>
<td>bad_id bad_pixiv_id</td>
<td>4</td>
<td>2018-01-23T00:32:10.080-05:00</td>
<td>1309</td>
</tr>
<tr>
<th>2</th>
<td>85307.0</td>
<td>0</td>
<td>2015-08-07T23:26:12.355-04:00</td>
<td>0</td>
<td>10</td>
<td>jpg</td>
<td>208409</td>
<td>https://cdn.donmai.us/original/a1/2c/a12ce629f...</td>
<td>False</td>
<td>False</td>
<td>...</td>
<td>1</td>
<td>1boy 1girl blush boots carrying closed_eyes co...</td>
<td>yuuryuu_nagare</td>
<td>jon_(pixiv_fantasia_iii) race_(pixiv_fantasia)</td>
<td>pixiv_fantasia pixiv_fantasia_3</td>
<td>1boy 1girl blush boots carrying closed_eyes da...</td>
<td>commentary_request</td>
<td>3</td>
<td>2022-05-25T02:26:06.588-04:00</td>
<td>95963</td>
</tr>
</tbody>
</table>
</div>
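The tag columns are flat, space-delimited strings. A small sketch of splitting them into per-post tag lists, e.g. for tagger training:

```python
# Split the space-delimited general tags into Python lists
general_tags = df["tag_string_general"].str.split()
print(general_tags.iloc[0][:5])
```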
## Dataset Creation
We scraped all post IDs on Danbooru from 1 up to the latest. Some restricted tags (e.g. `loli`) are hidden by the site and require a Gold account to access, so posts carrying them are not present.
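For reference, individual post metadata is served by Danbooru's public JSON API; a minimal single-post fetch looks like the sketch below (the actual scrape parallelized requests of this kind across all IDs):

```python
import requests

# Fetch one post's metadata from the public API; the production scrape
# iterated over ~8.6M IDs with many workers and IPs
resp = requests.get("https://danbooru.donmai.us/posts/1.json", timeout=30)
resp.raise_for_status()
post = resp.json()
print(post["id"], post["rating"], post["tag_string"][:80])
```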
For a more complete (but older) metadata reference, you may wish to combine this with Danbooru2021 or similar previous scrapes.
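A hedged sketch of such a merge, joining on post `id`; the older DataFrame and its file path here are hypothetical:

```python
import pandas as pd

# `danbooru2021_metadata.parquet` is a hypothetical local export of an older scrape
df_old = pd.read_parquet("danbooru2021_metadata.parquet")
combined = df.merge(
    df_old[["id", "tag_string"]],
    on="id",
    how="left",
    suffixes=("", "_2021"),
)
```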
The scraping process used a pool of roughly 400 IPs over six hours; completing the dump in a short window keeps tag definitions consistent across posts. Below is a simplified example of the process used to convert the scraped JSON metadata into Parquet:
```python
import pandas as pd
from pandarallel import pandarallel

# Initialize pandarallel
pandarallel.initialize(nb_workers=4, progress_bar=True)


def flatten_dict(d, parent_key='', sep='_'):
    """
    Flattens a nested dictionary.
    """
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)


def extract_all_illust_info(json_content):
    """
    Parses and flattens Danbooru JSON into a pandas Series.
    """
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)


def dicts_to_dataframe_parallel(dicts):
    """
    Converts a list of dicts to a flattened DataFrame using pandarallel.
    """
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
```
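A hypothetical usage sketch, assuming the scraped posts are stored one JSON object per line; both filenames are illustrative:

```python
import json

# Read scraped posts (one JSON object per line) and flatten them
with open("danbooru_posts.jsonl", "r", encoding="utf-8") as f:
    posts = [json.loads(line) for line in f]

flattened = dicts_to_dataframe_parallel(posts)
flattened.to_parquet("train-00000.parquet", index=False)
```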
### Recommendations
Users should be aware of potential biases and limitations, including the presence of adult content under some ratings and tags; the `rating` column and the tag columns can be used to filter posts for downstream use.
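A minimal filtering sketch, assuming Danbooru's rating codes of `g` (general), `s` (sensitive), `q` (questionable), and `e` (explicit):

```python
# Keep only general-rated posts
sfw = df[df["rating"] == "g"]
print(f"{len(sfw)} of {len(df)} posts are rated g")
```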