Update README.md

README.md (CHANGED)

## Dataset Structure

The tar files are generated by splitting the Sakugabooru post IDs into buckets of **1,000** consecutive IDs (the post ID divided by 1,000, rounded down):

- For example, a post ID of **42** goes to `0000.tar`, and a post ID of **5306** goes to `0005.tar`; a minimal sketch of this mapping follows the list.
- Grouping 1,000 posts per file keeps each tar around 1GB in size, which comes in handy when processing data in large batches.
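
A minimal sketch of the ID-to-shard mapping (the `shard_name` helper is purely illustrative and not part of the dataset tooling):

```python
def shard_name(post_id: int) -> str:
    """Return the tar shard a Sakugabooru post ID falls into (illustrative only)."""
    bucket = post_id // 1000          # integer division into buckets of 1,000 IDs
    return f"{bucket:04d}.tar"        # zero-padded shard name, e.g. 0005.tar


print(shard_name(42))    # -> 0000.tar
print(shard_name(5306))  # -> 0005.tar
```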

The dataset itself follows the [WebDataset](https://github.com/webdataset/webdataset) conventions, with each tar holding around 1GB of media files along with their metadata:

```bash
./media/0.tar
# ./train/0.tar/{sakuga_5306.webm}
# ./train/0.tar/{sakuga_5306.json}
```
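
Because the shards follow the WebDataset conventions, they can be streamed with the `webdataset` library; the sketch below is a rough example, where the shard path and the `json`/`webm` sample keys are assumptions based on the layout above:

```python
import webdataset as wds

# Stream one shard; WebDataset groups files by stem and keys them by extension.
dataset = wds.WebDataset("0.tar")

for sample in dataset:
    clip_bytes = sample.get("webm")   # raw bytes of the .webm clip (assumed key)
    meta_bytes = sample.get("json")   # raw bytes of the metadata .json (assumed key)
    print(sample["__key__"], len(clip_bytes or b""), len(meta_bytes or b""))
    break
```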

The index file `./sakugabooru-index.json` is generated with the `widsindex` command (see the WebDataset [FAQ](https://github.com/webdataset/webdataset/blob/main/FAQ.md)):

```bash
widsindex *.tar > sakugabooru-index.json
```

## Dataset Creation

### Source Data

The dataset is sourced directly from Sakugabooru by querying post IDs from 0 up to the latest:

```bash
https://www.sakugabooru.com/post/show/{id}
```
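
A rough sketch of enumerating that ID range (the upper bound below is only illustrative, borrowed from the post-ID count quoted in the Limitations section):

```python
LATEST_POST_ID = 273_264  # illustrative upper bound; a real scraper would track the newest post

def post_urls(latest_id: int = LATEST_POST_ID):
    """Yield every post page URL from ID 0 up to the latest ID."""
    for post_id in range(latest_id + 1):
        yield f"https://www.sakugabooru.com/post/show/{post_id}"
```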

#### Data Collection and Processing

Since Sakugabooru doesn't offer danbooru-style direct JSON endpoints, metadata is collected and formatted by visiting each post page and extracting the relevant parts of the HTML, roughly as sketched below.
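
A minimal sketch of that approach using `requests` and BeautifulSoup; the CSS selector and the fields extracted here are illustrative assumptions, not the actual scraper code:

```python
import requests
from bs4 import BeautifulSoup

POST_URL = "https://www.sakugabooru.com/post/show/{id}"

def fetch_post_metadata(post_id: int) -> dict:
    """Fetch a post page and pull out a few fields (illustrative only)."""
    response = requests.get(POST_URL.format(id=post_id), timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # The selector below is a guess at the tag sidebar markup and may not
    # match the live page; the real scraper is linked right after this block.
    tags = [li.get_text(strip=True) for li in soup.select("ul#tag-sidebar li")]
    return {
        "post_id": post_id,
        "post_url": POST_URL.format(id=post_id),
        "tags": tags,
    }
```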

For future reference, the scraper code has also been released here: [arot-devs/sakuga-scraper: SakugaBooru scraper](https://github.com/arot-devs/sakuga-scraper/tree/main)

Each post's metadata JSON looks similar to:

```json
{
  "post_id": 100112,
  "post_url": "https://www.sakugabooru.com/post/show/100112",
  "image_url": null,
  "tags": {},
  "id": "100112",
  "posted": "Mon Sep 30 00:41:05 2019",
  "timestamp": "2019-09-30T00:41:05",
  "width": 808,
  "height": 808,
  "pixels": 652864,
  "rating": "Safe",
  "score": "0",
  "favorited_by": [],
  "favorite_count": 0,
  "status_notice": [
    "This post was deleted.\n\n Reason: https://www.sakugabooru.com/user/show/9611. MD5: c97dbe1d0d467af199402f0ca7b8bb02"
  ],
  "status_notice_parsed": {}
}
```

No additional processing is applied to the data beyond collecting this metadata from each page.

## Bias, Risks, and Limitations

Due to DMCA concerns, duplicates, and other factors, only around 60% (163,918 out of 273,264) of all post IDs have media attached, and the collection may be biased toward older Japanese animation.

### Recommendations

The Sakugabooru team maintains an excellent blog for animation-related news and information:

- [Sakuga Blog – The Art of Japanese Animation](https://blog.sakugabooru.com/)

You can also support the Sakugabooru team by becoming a Patreon member:

- [Sakugabooru | Patreon](https://www.patreon.com/Sakugabooru)