Datasets:
Tasks:
Token Classification
Languages:
English
Size:
10K<n<100K
Tags:
Not-For-All-Audiences
License:
normalize_tags: add config file, update readme.
Browse files
- README.md +94 -8
- e6db/utils/__init__.py +51 -41
- normalize.toml +92 -0
- normalize_tags.py +319 -175
- query_tags.py +19 -2
README.md
CHANGED
@@ -12,28 +12,114 @@ tags:
# E6DB

This dataset is compiled from the e621 database. It currently provides
utilities and indexes for normalizing tags, and embeddings for finding similar
tags.

## Installation

Please clone with `git` after having installed
[`git-lfs`](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage).
Do not download the GitHub zip, it doesn't contain the data files.
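For example (`<repository url>` is a placeholder for this repo's clone URL):

```
git lfs install
git clone <repository url>
```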
## Utilities

### `query_tags.py`

A small command-line utility that finds related tags and generates 2D plots
illustrating tag relationships through local PCA projection. It utilizes
collaborative filtering embeddings, [computed with alternating least
squares](./notebooks/AltLstSq.ipynb).

When only tags are provided as arguments, it displays the top-k most similar
tags (where k is set with `-k`). By using `-o plot.png` or `-o -`, it saves or
displays a 2D plot showing the local projection of the query and related tags.

Tag categories are represented with the e621 color scheme. Results can be
filtered based on one or more categories by passing the `-c` flag once or
multiple times. The `-f` flag sets a post count threshold.

Filtering occurs in the following sequence (an example invocation follows the
list):

- Tags used fewer than twice are excluded from the dataset; tags with a post
  count lower than the `-f` threshold are also discarded.
- If a category filter is specified, only matching tags are retained.
- For each query tag, the `-k` most similar neighboring tags are selected.
- The per-query neighbors are printed, and if no plot is being generated,
  filtering halts at this point.
- Similarity scores are aggregated across queries, and the `-n` tags closest to
  all queries are chosen for the PCA.
- Only the highest `-N` scoring tags are displayed in the plot.
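For instance, to print the 10 most similar species tags for one query and save
a plot (the query tag here is purely illustrative):

```
python query_tags.py snow_leopard -k 10 -c species -o plot.png
```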
### `normalize_tags.py`

Tag Normalizer is a powerful command-line tool designed to clean, standardize,
and normalize e621 tags in text files. By applying a set of customizable rules,
Tag Normalizer helps maintain consistency and improve the quality of your tag
data.

#### Usage

```
python normalize_tags.py <path to input dataset> <path to output normalized tags> -s unknown
```

This finds all `*.txt` and `*.cap*` files in `<path to input dataset>` and
writes the normalized files to `<path to output normalized tags>` while
reproducing the folder hierarchy. Additionally, `-s unknown` writes the top 100
unrecognized tags to the standard output. `-s meta` will show the top 100 meta
tags, while `-k 50` alone will show the top 50 for all categories.

You can specify the same folder for input and output, and use `-f` to skip
confirmation.
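For example, to normalize a dataset in place without a confirmation prompt and
print the 50 most common tags across all categories (the `dataset` path is a
placeholder):

```
python normalize_tags.py dataset dataset -f -k 50
```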
#### Configuration

Tag Normalizer uses a TOML configuration file to customize its behavior. By
default, it looks for `normalize.toml` in the following locations:

1. The path specified by the `-c` or `--config` option
2. The output directory
3. The input directory
4. The current directory (most likely the example one)

Here's a brief overview of the main configuration options:

- `blacklist`: List of tags to remove
- `blacklist_regexp`: Regular expressions for blacklisting tags
- `keep_underscores`: List of tags where underscores should be preserved
- `blacklist_categories`: Categories of tags to remove entirely
- `remove_parens_suffix_for_categories`: Categories where parenthetical suffixes
  should be removed
- `aliases`: Define tag aliases
- `aliases_overrides`: Define aliases that can override existing tag meanings
- `renames`: Specify tags to be renamed in the output
- `use_underscores`: Whether to use underscores or spaces in output tags
- `keep_implied`: Whether to keep or remove implied tags; may also be a list of
  tags to keep even when implied by another one
- `on_alias_conflict`: How to handle conflicts when creating aliases
- `artist_by_prefix`: Whether to add a `by_` prefix to artist tags
- `blacklist_implied`: Whether to also blacklist tags implied by blacklisted
  tags

For a detailed explanation of each option, refer to the comments in the
`normalize.toml` file. It contains the default values with some opinionated
changes.

#### Command-line Options

- `-c`, `--config`: Specify a custom configuration file (default: looks for
  `normalize.toml` in the output dir, input dir, or current dir)
- `-v`, `--verbose`: Enable verbose logging
- `-f`, `--force`: Don't ask for confirmation when overwriting input files
- `-b`, `--print-blacklist`: Print the effective list of blacklisted tags
- `-k`, `--print-topk [N]`: Print the N most common tags (default: 100 if no
  value is provided)
- `-s`, `--stats-categories <cat>`: Restrict tag count printing to specific
  categories (or `unknown` for non-e621 tags)
- `-j`, `--print-implied-topk [N]`: Print the N most common implied tags
  (default: 100 if no value is provided)

### `e6db.utils`

This [module](./e6db/utils/__init__.py) contains utilities for loading the
provided data and using it to normalize tag sets.
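A minimal sketch of the encode/decode round trip, assuming the repository's
`data/` directory and the `TagSetNormalizer` API shown in the diff below (the
example tags are illustrative):

```python
from pathlib import Path
from e6db.utils import TagSetNormalizer

# Load tags, aliases and implications from the repository's data/ directory.
tsn = TagSetNormalizer(Path("data"))

# encode() maps known tag strings to integer ids, leaves unknown strings
# as-is, and strips implied tags by default.
ids, implied = tsn.encode(["tiger", "some unknown tag"])

# decode() maps ids back to (possibly renamed) tag strings.
print(tsn.decode(ids))
```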
e6db/utils/__init__.py
CHANGED
@@ -4,8 +4,12 @@ import gzip
import json
import warnings
import math
import logging
from typing import Callable, Iterable


logger = logging.getLogger(__name__)

tag_categories = [
    "general",
    "artist",

@@ -62,6 +66,7 @@ def load_tags(data_dir):
        tag2idx = json.load(fp)
    with gzip.open(data_dir / "tags_categories.bin.gz", "rb") as fp:
        tag_categories = fp.read()
    logger.info(f"Loaded {len(idx2tag)} tags, {len(tag2idx)} tag2id mappings")
    return tag2idx, idx2tag, tag_categories

@@ -81,6 +86,9 @@ def load_implications(data_dir):
    implications = {int(k): v for k, v in implications.items()}
    with gzip.open(data_dir / "implications_rej.json.gz", "rb") as fp:
        implications_rej = json.load(fp)
    logger.info(
        f"Loaded {len(implications)} implications + {len(implications_rej)} implications from tags without id"
    )
    return implications, implications_rej

@@ -100,7 +108,8 @@ def tag_freq_to_rank(freq: int) -> float:
    )


InMapFun = Callable[[str, int | None], list[str]]
OutMapFun = Callable[[str], list[str]]


class TagNormalizer:

@@ -173,28 +182,37 @@ class TagNormalizer:
            if on_conflict == "raise":
                raise ValueError(msg)
            elif on_conflict == "warn":
                logger.warning(msg)
            elif on_conflict == "overwrite_rarest" and to_tid > conflict:
                continue
            elif on_conflict != "overwrite":
                continue
            tag2idx[tag] = to_tid

    def remove_input_mappings(self, tags: str | Iterable[str]):
        """Remove tag strings from the mapping"""
        if isinstance(tags, str):
            tags = (tags,)
        for tag in tags:
            if tag in self.tag2idx:
                del self.tag2idx[tag]
            else:
                logger.warning(f"tag {tag!r} is not a valid tag")

    def rename_output(self, orig: int | str, dest: str):
        """Change the tag string associated with an id. Used by `decode`."""
        if not isinstance(orig, int):
            orig = self.tag2idx[orig]
        self.idx2tag[orig] = dest

    def map_inputs(self, mapfun: InMapFun, on_conflict="raise") -> "TagNormalizer":
        res = type(self)(({}, self.idx2tag, self.tag_categories))
        for tag, tid in self.tag2idx.items():
            res.add_input_mappings(mapfun(tag, tid), tid, on_conflict=on_conflict)
        return res

    def map_outputs(self, mapfun: OutMapFun) -> "TagNormalizer":
        idx2tag = [mapfun(t, i) for i, t in enumerate(self.idx2tag)]
        return type(self)((self.tag2idx, idx2tag, self.tag_categories))

    def get(self, key: int | str, default=None):

@@ -218,9 +236,9 @@ class TagSetNormalizer:
            data = path_or_data
        self.tag_normalizer, self.implications, self.implications_rej = data

    def map_inputs(self, mapfun: InMapFun, on_conflict="raise") -> "TagSetNormalizer":
        tag_normalizer = self.tag_normalizer.map_inputs(mapfun, on_conflict=on_conflict)

        implications_rej: dict[str, list[str]] = {}
        for tag_string, implied_ids in self.implications_rej.items():
            for new_tag_string in mapfun(tag_string, None):

@@ -235,36 +253,22 @@ class TagSetNormalizer:
                    continue
                implications_rej[new_tag_string] = implied_ids

        res = type(self)((tag_normalizer, self.implications, implications_rej))
        return res

    def map_outputs(self, mapfun: OutMapFun) -> "TagSetNormalizer":
        tag_normalizer = self.tag_normalizer.map_outputs(mapfun)
        return type(self)((tag_normalizer, self.implications, self.implications_rej))

    def get_implied(self, tag: int | str) -> list[int]:
        if isinstance(tag, int):
            return self.implications.get(tag, ())
        else:
            return self.implications_rej.get(tag, ())

    def encode(
        self, tags: Iterable[str], keep_implied: bool | set[int] = False
    ) -> tuple[list[int | str], set[int]]:
        """
        Encode a list of strings as numerical ids and strip implied tags.

@@ -277,17 +281,23 @@ class TagSetNormalizer:
        """
        implied = set()
        res = []
        encode = self.tag_normalizer.tag2idx.get
        get_implied = self.implications.get
        get_implied_rej = self.implications_rej.get
        for tag in tags:
            tag = encode(tag, tag)
            implied.update(
                get_implied(tag, ())
                if isinstance(tag, int)
                else get_implied_rej(tag, ())
            )
            res.append(tag)
        if not keep_implied:
            res = [t for t in res if t not in implied]
        elif isinstance(keep_implied, set):
            res = [t for t in res if t not in implied or t in keep_implied]
        return res, implied

    def decode(self, tags: Iterable[int | str]) -> list[str]:
        idx2tag = self.tag_normalizer.idx2tag
        return [idx2tag[t] if isinstance(t, int) else t for t in tags]
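The `map_inputs`/`map_outputs` pair above returns new normalizers rather than
mutating in place. A hedged sketch of chaining them (the lambdas are
illustrative, not from the repository):

```python
from pathlib import Path
from e6db.utils import TagSetNormalizer

tsn = TagSetNormalizer(Path("data"))

# Register an extra lowercase alias for every input tag string;
# "silent" skips conflicting aliases instead of raising.
tsn = tsn.map_inputs(lambda tag, tid: [tag, tag.lower()], on_conflict="silent")

# Rewrite every output tag string, here swapping underscores for spaces.
tsn = tsn.map_outputs(lambda tag, tid: tag.replace("_", " "))
```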
normalize.toml
ADDED
@@ -0,0 +1,92 @@
# Tag Normalization Configuration

# Blacklist: A list of tags to be removed during normalization
# These tags will be completely excluded from the output
# Default: ["invalid tag"]
blacklist = [
    "invalid tag",
    "by conditional dnp",
    "hi res",
    "absurd res",
    "superabsurd res",
    "4k",
    "uncensored",
    "ambiguous gender",
    "translation edit",
    "story in description",
    "non- balls",
    "non- nipples",
    "non- breasts",
    "feet out of frame",
    "funny_post_number",
    "tagme",
    "edit_request",
]

# Blacklist Regular Expressions: Tags matching these regexes will be removed
# These are full-match regexes, so they must match the entire tag
blacklist_regexp = [
    "(\\d+s?|\\d+:\\d+)",  # Numbers, years and aspect ratios
    ".*?_at_source",
]

# Keep Underscores: List of tags where underscores should be preserved
# By default, underscores are replaced with spaces unless specified here
keep_underscores = ["rating_explicit", "rating_questionable", "rating_safe"]

# Blacklist Categories: Entire categories of tags to be removed
# Common categories include "artist", "character", "copyright", "general", "meta", "species", "pool"
blacklist_categories = ["pool"]

# Remove Parentheses Suffix: Categories where parenthetical suffixes should be removed
# E.g., "character_(series)" becomes just "character" if it does not conflict with
# an existing tag/alias (for on_alias_conflict="ignore")
remove_parens_suffix_for_categories = [
    "artist",
    "character",
    "copyright",
    "lore",
    "species",
]

# Use Underscores: Determines whether to use underscores or spaces in output tags
# Default: false (use spaces)
use_underscores = false

# Keep Implied: Whether to keep implied tags or remove them. Can also be a list of tags
# Default: false (remove implied tags)
keep_implied = false

# On Alias Conflict: How to handle conflicts when creating aliases
# Options: "silent", "overwrite", "overwrite_rarest", "warn", "raise"
# Default: "ignore", meaning do not modify the alias
on_alias_conflict = "ignore"

# Artist By Prefix: Whether to add "by_" prefix to artist tags
# Default: true
artist_by_prefix = true

# Blacklist Implied: Whether to also blacklist tags implied by blacklisted tags
# Default: true
blacklist_implied = true

# Aliases: Define tag aliases (alternative names for the same tag)
# The key is the alias, and the value is the target tag
[aliases]
explicit = "rating_explicit"
score_explicit = "rating_explicit"
score_safe = "rating_safe"
score_questionable = "rating_questionable"

# Aliases Overrides: Similar to aliases, but can override existing tag meanings
# Use this carefully as it can change the semantics of existing tags
[aliases_overrides]
safe = "rating_safe"
questionable = "rating_questionable"

# Renames: Specify tags to be renamed in the output
# This also causes them to be recognized as aliases in the input (for idempotency)
# The key is the original tag name, and the value is the new name
[renames]
domestic_cat = "cat"
domestic_dog = "dog"
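For reference, `normalize_tags.py` (shown next) reads this file with `tomllib`,
falling back to `tomli`/`toml` on older Pythons. A minimal sketch:

```python
import tomllib  # stdlib since Python 3.11

with open("normalize.toml", "rb") as f:
    config = tomllib.load(f)

print(config["on_alias_conflict"])  # -> ignore
```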
normalize_tags.py
CHANGED
@@ -3,152 +3,206 @@
import argparse
import logging
import re
import time
from collections import Counter
from itertools import chain
from pathlib import Path

try:
    from tqdm import tqdm
except ImportError:
    tqdm = lambda x: x

try:
    import tomllib
except ImportError:
    try:
        import tomli as tomllib
    except ImportError:
        import toml as tomllib

from e6db.utils import TagSetNormalizer, tag_categories, tag_category2id

DATA_DIR = Path(__file__).resolve().parent / "data"
RE_PARENS_SUFFIX = re.compile(r"_\([^)]+\)$")


def make_tagset_normalizer(config: dict) -> TagSetNormalizer:
    """
    Create a TagSetNormalizer for encoding/decoding tags to and from integers.
    Configures it based on the provided config.
    """
    # This loads all the aliases and implications
    tagset_normalizer = TagSetNormalizer(DATA_DIR)

    tagid2cat = tagset_normalizer.tag_normalizer.tag_categories
    cat_artist = tag_category2id["artist"]
    cat2suffix = {
        tag_category2id["character"]: "_(character)",
        tag_category2id["lore"]: "_(lore)",
        tag_category2id["species"]: "_(species)",
        tag_category2id["copyright"]: "_(copyright)",
    }

    # Create additional aliases for tags using simple rules
    def input_map(tag, tid):
        yield tag

        # Make an alias without parentheses, it might conflict but we'll handle
        # it depending on `on_alias_conflict` config value.
        without_suffix = RE_PARENS_SUFFIX.sub("", tag)
        had_suffix = tag != without_suffix
        if had_suffix:
            yield without_suffix

        # Add an alias with the suffix (special case for artist)
        cat = tagid2cat[tid] if tid is not None else -1
        if cat == cat_artist:
            artist = without_suffix.removeprefix("by_")
            if artist != without_suffix:
                yield artist
                if not had_suffix:
                    yield f"{artist}_(artist)"
            else:
                yield f"by_{artist}"
                if not had_suffix:
                    yield f"by_{artist}_(artist)"
        elif not had_suffix:
            suffix = cat2suffix.get(cat)
            if suffix is not None:
                yield f"{without_suffix}{suffix}"

        # Recognize tags where ':' were replaced by a space (aspect ratio)
        if ":" in tag:
            yield tag.replace(":", " ")

    on_alias_conflict = config.get("on_alias_conflict", None)
    tagset_normalizer = tagset_normalizer.map_inputs(
        input_map,
        # on_conflict choices: "silent", "overwrite", "overwrite_rarest",
        # "warn", "raise", use "warn" to debug conflicts.
        on_conflict=on_alias_conflict or "ignore",
    )
    tag_normalizer = tagset_normalizer.tag_normalizer
    tag2id = tag_normalizer.tag2idx

    # Apply custom input mappings
    for antecedent, consequent in config.get("aliases", {}).items():
        antecedent = antecedent.replace(" ", "_")
        consequent = consequent.replace(" ", "_")
        tag_normalizer.add_input_mappings(
            antecedent, consequent, on_conflict=on_alias_conflict or "warn"
        )
    for antecedent, consequent in config.get("aliases_overrides", {}).items():
        antecedent = antecedent.replace(" ", "_")
        consequent = consequent.replace(" ", "_")
        tag_normalizer.add_input_mappings(
            antecedent, consequent, on_conflict="overwrite"
        )

    # Apply custom output renames as opposite aliases to ensure
    # idempotence:
    output_renames = {
        old.replace(" ", "_"): new.replace(" ", "_")
        for old, new in config.get("renames", {}).items()
    }
    for old, new in output_renames.items():
        tag_normalizer.add_input_mappings(new, old)

    # Remove specified aliases
    for tag in config.get("remove_aliases", []):
        tag = tag.replace(" ", "_")
        tag_normalizer.remove_input_mappings(tag)

    # Apply rule based output renames
    remove_suffix_for_cats = config.get(
        "remove_parens_suffix_for_categories",
        ["artist", "character", "copyright", "lore", "species"],
    )
    remove_suffix_for_cats = {tag_category2id[c] for c in remove_suffix_for_cats}
    artist_by_prefix = config.get("artist_by_prefix", True)

    def map_output(tag, tid):
        cat = tagid2cat[tid] if tid is not None else -1
        if cat in remove_suffix_for_cats:
            without_suffix = RE_PARENS_SUFFIX.sub("", tag)
            if tag != without_suffix and tag2id.get(without_suffix) == tid:
                tag = without_suffix
        if cat == cat_artist and artist_by_prefix and not tag.startswith("by_"):
            tag_wby = f"by_{tag}"
            if tag2id.get(tag_wby) == tid:
                tag = tag_wby
        return tag

    tagset_normalizer = tagset_normalizer.map_outputs(map_output)
    tag2id = tagset_normalizer.tag_normalizer.tag2idx

    # Apply custom output renames
    for old, new in output_renames.items():
        if tag2id[old] == tag2id[new]:
            tag_normalizer.rename_output(old, new)
        else:
            logging.warning(
                f"Cannot rename {old} -> {new}: old tag id={tag2id[old]} vs. new tag id={tag2id[new]})"
            )

    return tagset_normalizer


def make_blacklist(
    tagset_normalizer: TagSetNormalizer,
    config: dict,
    print_blacklist=False,
):
    if print_blacklist:
        print("\n🚫 Blacklisted tags:")

    all_tags = tagset_normalizer.tag_normalizer.idx2tag
    encode = tagset_normalizer.tag_normalizer.encode
    decode = tagset_normalizer.tag_normalizer.decode
    get_implied = tagset_normalizer.get_implied

    blacklist = set()
    for tag in config.get("blacklist", ["invalid tag"]):
        tag = tag.replace(" ", "_")
        encoded_tag = encode(tag, tag)
        blacklist.add(encoded_tag)
        if print_blacklist:
            decoded_tag = decode(encoded_tag)
            if tag != decoded_tag:
                print(f"  {tag} -> {decoded_tag}")
            else:
                print(f"  {tag}")

    for regexp in config.get("blacklist_regexp", []):
        regexp = regexp.replace(" ", "_")
        cregexp = re.compile(regexp)
        for tid, tag in enumerate(all_tags):
            if cregexp.fullmatch(tag):
                blacklist.add(tid)
                if print_blacklist:
                    print(f'  {tag} (r"{regexp}")')

    implied = set()
    if config.get("blacklist_implied", True):
        for tag in blacklist:
            tag_implied = get_implied(tag)
            implied.update(tag_implied)
            if print_blacklist:
                for implied_tag in tag_implied:
                    print(f"  {decode(implied_tag)} (implied by {decode(tag)})")
    blacklist |= implied

    tagid2cat = tagset_normalizer.tag_normalizer.tag_categories
    blacklist_categories = {
        tag_category2id[c] for c in config.get("blacklist_categories", ["pool"])
    }
    if blacklist_categories:
        for tid, cat in enumerate(tagid2cat):
            if cat in blacklist_categories:
                blacklist.add(tid)
                if print_blacklist:
                    print(f"  {all_tags[tid]} (cat:{tag_categories[cat]})")

    return blacklist

@@ -158,8 +212,11 @@ RE_SEP = re.compile(r"[,\n]")  # Split on commas and newlines

def load_caption(fp: Path):
    """
    Load caption from file and split out caption sentences.

    Captions are formatted like this: tag1, tag2, sentence caption1., sentence
    caption2. Optional sentence captions ending with "." are split out so that
    they are left untouched.
    """
    tags, captions = [], []
    with open(fp, "rt") as fd:

@@ -178,14 +235,26 @@ def process_directory(
    dataset_root: Path,
    output_dir: Path,
    tagset_normalizer: TagSetNormalizer,
    config: dict,
    blacklist: set = set(),
):
    use_underscores = config.get("use_underscores", False)
    keep_underscores = set(config.get("keep_underscores", ()))
    keep_implied = config.get("keep_implied", False)
    if isinstance(keep_implied, list):
        encode = tagset_normalizer.tag_normalizer.encode
        keep_implied = {encode(t, t) for t in keep_implied}

    # Running stats
    counter = Counter()
    implied_counter = Counter()
    processed_files = 0
    skipped_files = 0
    blacklist_instances = 0
    implied_instances = 0

    files = [*dataset_root.glob("**/*.txt"), *dataset_root.glob("**/*.cap*")]
    for file in tqdm(files):
        if "sample-prompts" in file.name:
            skipped_files += 1
            continue

@@ -193,8 +262,20 @@ def process_directory(
        orig_tags = tags

        # Convert tags to ids, separate implied tags
        tags = [
            t.lower().replace(" ", "_").replace(r"\(", "(").replace(r"\)", ")")
            for t in tags
        ]
        original_len = len(tags)

        # Encode to integer ids and strip implied tags
        tags, implied = tagset_normalizer.encode(tags, keep_implied=keep_implied)
        implication_filtered_len = len(tags)
        implied_instances += original_len - implication_filtered_len

        # Remove blacklisted tags
        tags = [t for t in tags if t not in blacklist]
        blacklist_instances += implication_filtered_len - len(tags)

        # Count tags
        counter.update(tags)

@@ -202,6 +283,10 @@ def process_directory(
        # Convert back to strings
        tags = tagset_normalizer.decode(tags)
        if not use_underscores:
            tags = [
                t.replace("_", " ") if t not in keep_underscores else t for t in tags
            ]
        if tags == orig_tags:
            skipped_files += 1
            continue

@@ -214,10 +299,24 @@ def process_directory(
            fd.write(result)
        processed_files += 1

    return dict(
        counter=counter,
        implied_counter=implied_counter,
        processed_files=processed_files,
        skipped_files=skipped_files,
        blacklist_instances=blacklist_instances,
        implied_instances=implied_instances,
    )


def print_topk(
    counter: Counter,
    tagset_normalizer: TagSetNormalizer,
    config: dict,
    n=10,
    categories=None,
    implied=False,
):
    if implied:
        implied = "implied "
    else:

@@ -228,6 +327,9 @@ def print_topk(counter, tagset_normalizer, n=10, categories=None, implied=False):
    else:
        print(f"\nTop {n} most common {implied}tags:")

    use_underscores = config.get("use_underscores", True)
    keep_underscores = config.get("keep_underscores", set())

    filtered_counter = counter
    if categories:
        filtered_counter = Counter()

@@ -245,15 +347,13 @@ def print_topk(counter, tagset_normalizer, n=10, categories=None, implied=False):
        if isinstance(tag, int):
            tag_string = tagset_normalizer.tag_normalizer.decode(tag)
            cat = tag_categories[tagset_normalizer.tag_normalizer.tag_categories[tag]]
            source = f"e621:{cat}"
        else:
            tag_string = tag
            source = "unknown"
        if not use_underscores and tag_string not in keep_underscores:
            tag_string = tag_string.replace("_", " ")
        print(f"  {tag_string:<30} count={count:<7} ({source})")


def setup_logger(verbose):

@@ -262,6 +362,20 @@ def setup_logger(verbose):
    return logging.getLogger(__name__)


def ask_for_confirmation(prompt, default=False):
    if default:
        prompt = f"{prompt} (Y/n): "
    else:
        prompt = f"{prompt} (y/N): "

    response = input(prompt).strip().lower()

    if response not in ("y", "n"):
        return default

    return response == "y"


def main():
    parser = argparse.ArgumentParser(
        description="🏷️ Tag Normalizer - Clean and normalize your tags with ease!"
    )

@@ -273,31 +387,23 @@ def main():
        "output_dir", type=Path, help="Output directory for normalized tag files"
    )
    parser.add_argument(
        "-c",
        "--config",
        type=Path,
        help="Toml configuration file, defaults to output_dir/normalize.toml, input_dir/normalize.toml or ./normalize.toml",
        default=None,
    )
    parser.add_argument(
        "-v", "--verbose", action="store_true", help="Enable verbose logging"
    )
    parser.add_argument(
        "-f",
        "--force",
        action="store_true",
        help="Don't ask for confirmation for clobbering input files",
    )
    parser.add_argument(
        "-b",
        "--print-blacklist",
        action="store_true",
        help="Print the effective list of blacklisted tags",
    )

@@ -318,70 +424,108 @@ def main():
        help="Print the N most common implied tags (default: 100 if flag is used without a value)",
    )
    parser.add_argument(
        "-s",
        "--stats-categories",
        action="append",
        choices=list(tag_category2id.keys()) + ["unknown"],
        help="Restrict tag count printing to specific categories or 'unknown'",
    )
    args = parser.parse_args()
    logger = setup_logger(args.verbose)

    # Validate input/output directories
    input_dir = args.input_dir.resolve()
    output_dir = args.output_dir.resolve()
    if not input_dir.is_dir():
        logger.error(f"Input directory does not exist: {input_dir}")
        exit(1)
    try:
        output_dir.mkdir(parents=True, exist_ok=True)
    except OSError as e:
        logger.error(f"Could not create output directory {output_dir}: {e}")
        exit(1)
    logger.info("🚀 Starting Tag Normalizer")
    logger.info(f"Input directory: {input_dir}")
    logger.info(f"Output directory: {output_dir}")

    if input_dir == output_dir and not args.force:
        if not ask_for_confirmation(
            "Input and output directories are the same. This will clobber the input directory. Are you sure you want to continue?",
            default=False,
        ):
            exit(0)

    # Load config file
    for config_path in [
        args.config,
        output_dir / "normalize.toml",
        input_dir / "normalize.toml",
        Path(".") / "normalize.toml",
    ]:
        if config_path is None:
            continue
        if config_path.exists():
            config_path = config_path.resolve()
            break
    else:
        logger.error(f"Could not find a config file in {input_dir}, {output_dir} or ./")
        exit(1)
    logger.info(f"🔧 Using config file: {config_path}")
    with open(config_path, "rb") as f:
        config = tomllib.load(f)

    logger.info("🔧 Initializing tag normalizer...")
    start_time = time.time()
    tagset_normalizer = make_tagset_normalizer(config)
    logger.info(f"✅ Data loaded in {time.time() - start_time:.2f} seconds")

    logger.info("🚫 Creating blacklist...")
    blacklist = make_blacklist(
        tagset_normalizer,
        config,
        print_blacklist=args.print_blacklist,
    )
    logger.info(f"Blacklist size: {len(blacklist)} tags")

    logger.info("🔍 Processing files...")
    start_time = time.time()
    stats = process_directory(
        input_dir,
        output_dir,
        tagset_normalizer,
        config,
        blacklist=blacklist,
    )

    logger.info(
        f"✅ Processing complete! Time taken: {time.time() - start_time:.2f} seconds"
    )
    logger.info(f"Files modified: {stats['processed_files']}")
    logger.info(f"Files skipped (no changes): {stats['skipped_files']}")
    counter = stats["counter"]
    logger.info(f"Unique tags: {len(counter)}")
    logger.info(f"Tag occurrences: {sum(counter.values())}")
    unknown_counter = [count for t, count in counter.items() if not isinstance(t, int)]
    logger.info(f"Unknown tags: {len(unknown_counter)}")
    logger.info(f"Unknown tags occurrences: {sum(unknown_counter)}")
    logger.info(f"Removed by blacklist: {stats['blacklist_instances']}")
    logger.info(f"Removed by implication: {stats['implied_instances']}")
    if args.print_topk or args.stats_categories:
        if not args.print_topk:
            args.print_topk = 100
        print_topk(
            counter,
            tagset_normalizer,
            config,
            n=args.print_topk,
            categories=args.stats_categories,
        )
    if args.print_implied_topk:
        print_topk(
            stats["implied_counter"],
            tagset_normalizer,
            config,
            n=args.print_implied_topk,
            implied=True,
        )
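As a quick illustration of the parenthetical-suffix rule used by `input_map`
and `map_output` above (the tag is made up):

```python
import re

RE_PARENS_SUFFIX = re.compile(r"_\([^)]+\)$")

# Strips a single trailing "_(...)" qualifier; this is what generates the
# suffix-free aliases and output names.
print(RE_PARENS_SUFFIX.sub("", "ring_(character)"))  # -> ring
print(RE_PARENS_SUFFIX.sub("", "ring"))              # -> ring (unchanged)
```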
query_tags.py
CHANGED
@@ -61,9 +61,26 @@ def dothething(args):
    # Deduplicate, global top-k
    neigh_idxs = np.unique(neigh_idxs)
    scores = scores[neigh_idxs, :].mean(axis=1)
    rej = None
    if len(neigh_idxs) > global_topk:
        partition = np.argpartition(-scores, global_topk)
        rej = neigh_idxs[partition[global_topk:]]
        rej_scores = scores[partition[global_topk:]]
        scores = scores[partition[:global_topk]]
        neigh_idxs = neigh_idxs[partition[:global_topk]]

    tag_list = " ".join(
        f"{idx2tag[i]} ({format_tagfreq(tag_rank_to_freq(i))})"
        for s, i in zip(scores, neigh_idxs)
    )
    print("accepted:", tag_list)
    if rej is not None:
        tag_list = " ".join(
            f"{idx2tag[i]} ({format_tagfreq(tag_rank_to_freq(i))}, {s})"
            for s, i in zip(rej_scores, rej)
        )
        print("rejected:", tag_list)

    idxs = np.concatenate([sel_idxs, neigh_idxs])
    query_slice = slice(None, len(sel_idxs))