---
license: cc-by-sa-4.0
task_categories:
- token-classification
language:
- en
size_categories:
- 10K<n<100K
---

# e6db

### Tag Normalizer

Usage:

```
python -m e6db.normalize <input_dir> <output_dir> -s unknown
```

Finds all `*.txt` and `*.cap*` files in `<input_dir>` and writes the normalized files to `<output_dir>`, reproducing the folder hierarchy. Additionally, `-s unknown` writes the top 100 unrecognized tags to standard output, `-s meta` shows the top 100 meta tags, and `-k 50` alone shows the top 50 for all categories. You can specify the same folder for input and output, and use `-f` to skip the confirmation prompt.

#### Configuration

Tag Normalizer uses a TOML configuration file to customize its behavior. By default, it looks for `normalize.toml` in the following locations:

1. The path specified by the `-c` or `--config` option
2. The output directory
3. The input directory
4. The current directory (most likely the example one)

Here's a brief overview of the main configuration options:

- `blacklist`: List of tags to remove
- `blacklist_regexp`: Regular expressions for blacklisting tags
- `keep_underscores`: List of tags where underscores should be preserved
- `blacklist_categories`: Categories of tags to remove entirely
- `remove_parens_suffix_for_categories`: Categories where parenthetical suffixes should be removed
- `aliases`: Define tag aliases
- `aliases_overrides`: Define aliases that can override existing tag meanings
- `renames`: Specify tags to be renamed in the output
- `use_underscores`: Whether to use underscores or spaces in output tags
- `keep_implied`: Whether to keep or remove implied tags; may also be a list of tags to keep even when implied by another one
- `on_alias_conflict`: How to handle conflicts when creating aliases
- `artist_by_prefix`: Whether to add a "by\_" prefix to artist tags
- `blacklist_implied`: Whether to also blacklist tags implied by blacklisted tags

For a detailed explanation of each option, refer to the comments in the `normalize.toml` file. It contains the default values with some opinionated changes.

#### Command-line Options

- `-c`, `--config`: Specify a custom configuration file (default: looks for `normalize.toml` in output dir, input dir, or current dir)
- `-v`, `--verbose`: Enable verbose logging
- `-f`, `--force`: Don't ask for confirmation when overwriting input files
- `-b`, `--print-blacklist`: Print the effective list of blacklisted tags
- `-k`, `--print-topk [N]`: Print the N most common tags (default: 100 if no value provided)
- `-s`, `--stats-categories <categories>`: Restrict tag count printing to specific categories (or `unknown` for non-e621 tags)
- `-j`, `--print-implied-topk [N]`: Print the N most common implied tags (default: 100 if no value provided)

### `e6db.utils`

This [module](./e6db/utils/__init__.py) contains utilities for loading the provided data and using it to normalize tag sets:

- `load_tags` and `load_implications` load the tag indexes and implications,
- `TagNormalizer` adapts tag and alias spellings for working with various datasets. It can normalize spellings by converting tag strings to numerical ids and back,
- `TagSetNormalizer` uses the above class along with tag implications to normalize tag sets and strip implied tags.

See [this example notebook](./notebooks/Normalize%20tags%20T2I%20dataset.ipynb) that cleans T2I datasets in the sd-script format.

Additionally, the `e6db.utils.numpy` and `e6db.utils.torch` modules provide functions to construct post-tag interaction matrices in sparse matrix format. For this you'll need to generate the `posts.parquet` file from the CSVs.
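As a rough sketch of how these pieces fit together (the call signatures and method names below are assumptions made for illustration; refer to the module and the example notebook for the actual API):

```python
from e6db.utils import TagSetNormalizer, load_implications, load_tags

data_dir = "data"  # folder holding the gzipped indexes from this dataset

# Assumed signatures, for illustration only: check e6db/utils/__init__.py.
tag2idx, idx2tag, tag_categories = load_tags(data_dir)
implications = load_implications(tag2idx)

# Hypothetical round trip: spellings -> numerical ids -> canonical spellings.
tsn = TagSetNormalizer(data_dir)
ids, unknown = tsn.encode(["solo", "canine"])
print(tsn.decode(ids), unknown)
```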
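For the interaction matrices, a minimal SciPy equivalent of what those modules build might look like the following; the `posts.parquet` column names (`id`, `tags`) are assumptions about the schema produced by `e6db.importdb`:

```python
import numpy as np
import pandas as pd
import scipy.sparse as sp

# Assumed schema: one row per post, "tags" holding a list of tag ids.
posts = pd.read_parquet("data/posts.parquet", columns=["id", "tags"])

rows, cols = [], []
for row, tag_ids in enumerate(posts["tags"]):
    rows.extend([row] * len(tag_ids))
    cols.extend(tag_ids)

# Binary post-tag interaction matrix: entry (p, t) is 1 when post p has tag t.
interactions = sp.csr_matrix(
    (np.ones(len(cols), dtype=np.float32), (rows, cols)),
    shape=(len(posts), max(cols) + 1),
)
```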
### `importdb`

Reads the CSVs from the [e621 db export](https://e621.net/db_export/).

`python -m e6db.importdb ./data` reads the tags, aliases, implications and posts CSV files. The following operations are performed:

- Assigns numerical ids to tags used at least twice, based on their rank,
- Computes the transitive closure of implications,
- For each post, splits the tags into direct and implied tags,
- Writes parquet files for tags and posts (~500 MB) and converts the tag indexes to the simple formats described in the next section.

The CSV files must be decompressed beforehand.

## Dataset content

This dataset currently focuses on tags alone, using simple file formats that are easily parsed without additional dependencies (see the sketch at the end of this section):

- `tag2idx.json.gz`: a dictionary mapping tag strings and aliases to a numerical id (the tag rank),
- `tags.txt.gz`: the list of tags sorted by rank; it can be indexed by the ids given by `tag2idx.json.gz`,
- `tags_categories.bin.gz`: a raw array of bytes representing tag categories, in the same order as `tags.txt.gz`,
- `implications.json.gz`: maps tag ids to implied tag ids (including transitive implications),
- `implications_rej.json.gz`: maps tag strings to a list of implied numerical ids. Keys in `implications_rej` are tags that have very little usage (fewer than 2 posts) and therefore have no numerical id associated with them.
- `implicit_tag_factors.safetensors`: tag embeddings computed by [alternating least squares](./notebooks/AltLstSq.ipynb).

No post data is currently included, since this wouldn't add any useful information compared to what's inside the CSVs. If you want the post parquet files with normalized tags, you can download the CSVs and run the [`e6db.importdb`](#importdb) script yourself. I plan to compile more post data in the future, such as aesthetic predictions, adjusted favcounts, etc. Utilities will then be added to assist with selecting a subset of posts for specific ML tasks.
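As mentioned above, the indexes can be read with nothing but the standard library. A minimal sketch (the lookup at the end assumes the tag `canine` is present in the index):

```python
import gzip
import json

with gzip.open("tag2idx.json.gz", "rt", encoding="utf-8") as f:
    tag2idx = json.load(f)  # tag or alias string -> numerical id (rank)

with gzip.open("tags.txt.gz", "rt", encoding="utf-8") as f:
    idx2tag = f.read().splitlines()  # tags sorted by rank

with gzip.open("tags_categories.bin.gz", "rb") as f:
    categories = f.read()  # one category byte per tag, same order as idx2tag

with gzip.open("implications.json.gz", "rt", encoding="utf-8") as f:
    implications = json.load(f)  # tag id -> implied tag ids (transitive)

tid = tag2idx["canine"]  # assumed to exist in the index
print(idx2tag[tid], categories[tid], implications.get(str(tid), []))
```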