abhishek-kumar committed on
Commit
a04f9f9
•
1 Parent(s): d815313

Add github files

README.md CHANGED
@@ -1,12 +1,73 @@
- ---
- title: CLIP Based NSFW Detector
- emoji: 📊
- colorFrom: gray
- colorTo: indigo
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # CLIP-based-NSFW-Detector
+
+ This two-class NSFW detector is a lightweight AutoKeras model that takes CLIP ViT-L/14 embeddings as input.
+ It estimates a value between 0 and 1 (1 = NSFW) and works well with embeddings computed from images.
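+
+ As a quick orientation, here is a minimal sketch of producing a compatible embedding with OpenAI's `clip` package and scoring it. The image path is a placeholder, `load_safety_model` is the helper defined further down, and L2-normalization is an assumption matching the clip-retrieval pipeline used for the training data:
+
+ ```python
+ import clip
+ import numpy as np
+ import torch
+ from PIL import Image
+
+ # CLIP ViT-L/14 produces 768-dimensional image embeddings.
+ model, preprocess = clip.load("ViT-L/14")
+
+ image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder path
+ with torch.no_grad():
+     emb = model.encode_image(image).cpu().numpy().astype("float32")
+
+ # L2-normalize the embedding (assumed; clip-retrieval normalizes by default).
+ emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
+
+ safety_model = load_safety_model("ViT-L/14")  # defined below
+ print(safety_model.predict(emb, batch_size=1))  # close to 1 means NSFW
+ ```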
+
+ Demo Colab:
+ https://colab.research.google.com/drive/19Acr4grlk5oQws7BHTqNIK-80XGw2u8Z?usp=sharing
+
+ The training CLIP ViT-L/14 embeddings can be downloaded here:
+ https://drive.google.com/file/d/1yenil0R4GqmTOFQ_GVw__x61ofZ-OBcS/view?usp=sharing (not fully manually annotated, so they cannot be used as a test set)
+
+ The (manually annotated) test set is available at https://github.com/LAION-AI/CLIP-based-NSFW-Detector/blob/main/nsfw_testset.zip
+
+ For inference on the LAION-5B embeddings, see https://github.com/rom1504/embedding-reader/blob/main/examples/inference_example.py
+
+ Example usage of the model:
+
+ ```python
+ import os
+ import zipfile
+ from functools import lru_cache
+ from urllib.request import urlretrieve
+
+ import autokeras as ak
+ import numpy as np
+ from tensorflow.keras.models import load_model
+
+
+ @lru_cache(maxsize=None)
+ def load_safety_model(clip_model):
+     """Load the safety model, downloading and unpacking it on first use."""
+     # get_cache_folder returns the local cache directory for the given model.
+     cache_folder = get_cache_folder(clip_model)
+
+     if clip_model == "ViT-L/14":
+         model_dir = cache_folder + "/clip_autokeras_binary_nsfw"
+         dim = 768
+     elif clip_model == "ViT-B/32":
+         model_dir = cache_folder + "/clip_autokeras_nsfw_b32"
+         dim = 512
+     else:
+         raise ValueError("Unknown clip model")
+
+     if not os.path.exists(model_dir):
+         os.makedirs(cache_folder, exist_ok=True)
+
+         path_to_zip_file = cache_folder + "/clip_autokeras_binary_nsfw.zip"
+         if clip_model == "ViT-L/14":
+             url_model = "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_binary_nsfw.zip"
+         elif clip_model == "ViT-B/32":
+             url_model = "https://raw.githubusercontent.com/LAION-AI/CLIP-based-NSFW-Detector/main/clip_autokeras_nsfw_b32.zip"
+         else:
+             raise ValueError(f"Unknown model {clip_model}")
+         urlretrieve(url_model, path_to_zip_file)
+
+         with zipfile.ZipFile(path_to_zip_file, "r") as zip_ref:
+             zip_ref.extractall(cache_folder)
+
+     loaded_model = load_model(model_dir, custom_objects=ak.CUSTOM_OBJECTS)
+     # Warm up the model with a dummy batch so later calls are fast.
+     loaded_model.predict(np.random.rand(10**3, dim).astype("float32"), batch_size=10**3)
+
+     return loaded_model
+
+
+ safety_model = load_safety_model("ViT-L/14")
+ # embeddings: float32 numpy array of CLIP image embeddings, shape (n, dim)
+ nsfw_values = safety_model.predict(embeddings, batch_size=embeddings.shape[0])
+ ```
+
+ This code and model are released under the MIT license:
+
+ Copyright 2022, Christoph Schuhmann
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
clip_autokeras_binary_nsfw.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:905520bb0336accc73ee282c605d926aef90cf14248889d44b9cee50eef71f8a
+ size 1094585
clip_autokeras_nsfw_b32.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78715ab9d64485167e8c9ade3b8e6761a17a16c24d114bd47431ac1dab46462b
+ size 231831
docs/train_h14_nsfw.md ADDED
@@ -0,0 +1,176 @@
+ This is a basic guide that outlines what is necessary to train an NSFW detector
+ on top of a CLIP model.
+
+ Note that this is a general guide and is meant to be used as a guideline.
+
+ ## Dataset Prep
+ You will need to obtain thousands of NSFW and SFW images to train a good model.
+
+ Once the images are gathered, you can place them in a directory and leverage
+ [clip-retrieval](https://github.com/rom1504/clip-retrieval#clip-inference)
+ to easily create a numpy array of embeddings. (For more details, read its docs; a command sketch follows below.)
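+
+ For orientation, the clip-retrieval invocation looks roughly like the following; the folder names are placeholders, and you should check the clip-retrieval docs for the exact flags in your version:
+
+ ```bash
+ # Writes img_emb_*.npy shards of CLIP embeddings for the images in each folder.
+ clip-retrieval inference --input_dataset nsfw_images --output_folder nsfw_embeddings
+ clip-retrieval inference --input_dataset sfw_images --output_folder sfw_embeddings
+ ```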
+
+ Finally, you will need to create target values, which can be done easily in the
+ Python interpreter.
+
+ ### Target Values & Dataset Combination
+
+ ```python
+ import numpy as np
+
+ # Load the positive (NSFW) and negative (SFW) embedding arrays.
+ pos_x = np.load("path/to_positive/samples.npy")
+ neg_x = np.load("path/to_negative/samples.npy")
+
+ num_pos = pos_x.shape[0]
+ num_neg = neg_x.shape[0]
+
+ # Create target values: 1 for NSFW, 0 for SFW.
+ pos_y = np.ones((num_pos, 1))
+ neg_y = np.zeros((num_neg, 1))
+
+ # Combine the x samples.
+ # NOTE: we will rely on torch dataloader shuffling to break the ordering here.
+ x = np.vstack((pos_x, neg_x))
+ y = np.vstack((pos_y, neg_y))
+
+ # Save the dataset x & y.
+ np.save("train_x.npy", x)
+ np.save("train_y.npy", y)
+ ```
+
+ ## Model Training
+
+ Thankfully, it is possible to use a very simple MLP to train the NSFW
+ detector.
+
+ For the purposes of this guide, we will reference [this repo](https://github.com/christophschuhmann/improved-aesthetic-predictor)
+ and its model architecture.
+
+ > NOTE: It is also possible to use the training script provided in that repo
+ > as boilerplate code, provided you have `.npy` files for your dataset's x & y.
+
+ ### Model Architecture
+
+ Feel free to tweak the model architecture here, but the important thing to
+ remember is that your input dimension should match the dimension of your CLIP
+ embeddings, and your output dimension should be 1.
+
+ ```python
+ import torch.nn as nn
+
+ class H14_NSFW_Detector(nn.Module):
+     def __init__(self, input_size=1024):
+         super().__init__()
+         self.input_size = input_size
+         self.layers = nn.Sequential(
+             nn.Linear(self.input_size, 1024),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(1024, 2048),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(2048, 1024),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(1024, 256),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(256, 128),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(128, 16),
+             nn.Linear(16, 1)
+         )
+
+     def forward(self, x):
+         return self.layers(x)
+ ```
+
+ ### Training Snippets
+
+ Below is a list of snippets that should walk you through the general steps
+ necessary to train an MLP in PyTorch. However, it is completely fine to replace
+ the existing MLP in [this repo](https://github.com/christophschuhmann/improved-aesthetic-predictor)
+ with the model provided above and begin training.
+
+ For those of you who wish to build out custom code, the following snippets should
+ get the ball rolling...
+
+ #### Import the necessary libraries:
+
+ ```python
+ import numpy as np
+ import torch
+ import torch.nn as nn
+ from torch.optim import Adam
+ from torch.utils.data import TensorDataset, DataLoader
+ ```
+
+ #### Define the dataset and data loaders:
+
+ ```python
+ # Define the dataset (cast to float32 to match the model's weights)
+ x = torch.from_numpy(np.load("train_x.npy")).float()
+ y = torch.from_numpy(np.load("train_y.npy")).float()
+ train_dataset = TensorDataset(x, y)
+
+ # Define the data loader; shuffling breaks the pos/neg ordering from dataset prep
+ train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
+ ```
+
+ #### Initialize the model
+ ```python
+ model = H14_NSFW_Detector()
+ ```
+
+ #### Define the loss function
+ ```python
+ criterion = nn.MSELoss()
+ ```
+
+ #### Define the optimizer
+ ```python
+ optimizer = Adam(model.parameters())
+ ```
+
+ #### Define the training loop
+ ```python
+ # Define the number of epochs
+ num_epochs = 10
+
+ # Training loop
+ for epoch in range(num_epochs):
+     for inputs, labels in train_loader:
+         # Clear gradients
+         optimizer.zero_grad()
+
+         # Forward pass
+         outputs = model(inputs)
+
+         # Compute the loss
+         loss = criterion(outputs, labels)
+
+         # Backward pass and optimization
+         loss.backward()
+         optimizer.step()
+ ```
+
+ #### Define an evaluation loop
+ ```python
+ # Evaluation loop
+ # val_loader: a DataLoader over a held-out validation set, built like train_loader
+ model.eval()
+ with torch.no_grad():
+     for inputs, labels in val_loader:
+         # Forward pass
+         outputs = model(inputs)
+
+         # Compute the loss
+         loss = criterion(outputs, labels)
+ ```
+
+ Note that this is a basic guide; you may need to add additional functionality
+ such as model saving and loading, early stopping, etc. (a saving/loading sketch follows below).
+ You may want to adjust the learning rate,
+ batch size, and number of epochs as well.
+
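+ As one example of the saving and loading mentioned above, a minimal sketch; the filename mirrors the `h14_nsfw.pth` checkpoint shipped in this repo, though how that checkpoint was actually produced is not documented here:
+
+ ```python
+ # Save the trained weights.
+ torch.save(model.state_dict(), "h14_nsfw.pth")
+
+ # Later: reload for inference.
+ model = H14_NSFW_Detector()
+ model.load_state_dict(torch.load("h14_nsfw.pth", map_location="cpu"))
+ model.eval()
+ ```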
h14_nsfw.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed87167db4dd6676f921d37f3f1b26a1f32275724cb5e7925017277f435ca78e
+ size 22182219
h14_nsfw_model.py ADDED
@@ -0,0 +1,28 @@
+ import torch.nn as nn
+
+ class H14_NSFW_Detector(nn.Module):
+     def __init__(self, input_size=1024):
+         super().__init__()
+         self.input_size = input_size
+         self.layers = nn.Sequential(
+             nn.Linear(self.input_size, 1024),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(1024, 2048),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(2048, 1024),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(1024, 256),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(256, 128),
+             nn.ReLU(),
+             nn.Dropout(0.2),
+             nn.Linear(128, 16),
+             nn.Linear(16, 1)
+         )
+
+     def forward(self, x):
+         return self.layers(x)
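+
+
+ if __name__ == "__main__":
+     # Example usage sketch: assumes the h14_nsfw.pth checkpoint distributed
+     # alongside this file is a state_dict for H14_NSFW_Detector, fed with
+     # 1024-d CLIP ViT-H/14 image embeddings.
+     import numpy as np
+     import torch
+
+     model = H14_NSFW_Detector()
+     model.load_state_dict(torch.load("h14_nsfw.pth", map_location="cpu"))
+     model.eval()
+
+     embeddings = np.random.rand(4, 1024).astype("float32")  # placeholder batch
+     with torch.no_grad():
+         scores = model(torch.from_numpy(embeddings))
+     print(scores.squeeze(1))  # higher means more likely NSFW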
license.md ADDED
@@ -0,0 +1,9 @@
+ This code and model are released under the MIT license:
+
+ Copyright 2022, Christoph Schuhmann
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
nsfw-clip.py ADDED
@@ -0,0 +1,75 @@
+ import numpy as np
+ import autokeras as ak
+ from sklearn.utils import shuffle
+
+ # Load CLIP image embeddings produced by clip-retrieval for each category.
+ neutral = np.load("./neutral/img_emb/img_emb_0.npy")
+ print(neutral.shape)
+
+ porn = np.load("./porn/img_emb/img_emb_0.npy")
+ print(porn.shape)
+
+ drawings = np.load("./drawings/img_emb/img_emb_0.npy")
+ print(drawings.shape)
+
+ hentai = np.load("./hentai/img_emb/img_emb_0.npy")
+ print(hentai.shape)
+
+ sexy = np.load("./sexy/img_emb/img_emb_0.npy")
+ print(sexy.shape)
+
+ # Stack the NSFW categories first, then the SFW ones, so the label
+ # assignment below lines up with the row order.
+ x_t = np.concatenate((porn, sexy), axis=0)
+ x_t = np.concatenate((x_t, hentai), axis=0)
+ nsfw_t_len = x_t.shape[0]
+ print(nsfw_t_len)
+ x_t = np.concatenate((x_t, neutral), axis=0)
+ x_t = np.concatenate((x_t, drawings), axis=0)
+ y_t = np.zeros(x_t.shape[0], dtype=np.uint8)
+ sfw_t_len = x_t.shape[0] - nsfw_t_len
+ print(sfw_t_len)
+
+ # Label the NSFW rows 1, leave the SFW rows 0, then shuffle.
+ y_t[:nsfw_t_len] = 1
+ x_train, y_train = shuffle(x_t, y_t)
+
+ print(y_t)
+ print(y_train)
+
+ x_train = x_train.astype(float)
+ y_train = y_train.astype(int)
+
+ # Let AutoKeras search over candidate models (up to 5 trials).
+ clf = ak.StructuredDataClassifier(overwrite=True, max_trials=5)
+
+ # Feed the structured data classifier with training data.
+ clf.fit(x_train, y_train, epochs=10, validation_split=0.1)
+
+ model = clf.export_model()
+ model.summary()
+
+ model.save("clip_autokeras_nsfw")
nsfw_testset.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c70cf196d2653ea5512e78fb8773102c9886ea8b698905310fcb1ccba56fe67f
+ size 4576563
safety_settings.yml ADDED
@@ -0,0 +1,28 @@
+ nsfw:
+   concepts:
+     sexual: 0.2
+     nude: 0.20
+     sex: 0.206
+     18+: 0.21
+     naked: 0.195
+     nsfw: 0.2
+     porn: 0.2
+     dick: 0.19
+     vagina: 0.19
+     naked child: 0.22
+     explicit content: 0.19
+     uncensored: 0.2
+     fuck: 0.2
+     nipples: 0.2
+     visible nipples: 0.21
+     naked breasts: 0.214
+     areola: 0.2
+   action: obfuscate
+ special:
+   concepts:
+     little girl: 0.21
+     little boy: 0.21
+     young child: 0.21
+     young girl: 0.22
+     toddler: 0.21
+     baby: 0.22
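
A hedged sketch of how a config like this could be consumed: embed each concept with CLIP's text encoder, compare cosine similarity against the per-concept threshold, and trigger the configured action on a match. The helper below is illustrative, not an API from this repo:

```python
import clip
import numpy as np
import torch
import yaml

with open("safety_settings.yml") as f:
    settings = yaml.safe_load(f)

concepts = list(settings["nsfw"]["concepts"])
thresholds = np.array([settings["nsfw"]["concepts"][c] for c in concepts])

model, _ = clip.load("ViT-L/14")
with torch.no_grad():
    text_emb = model.encode_text(clip.tokenize(concepts)).cpu().numpy()
text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

def flag_image(image_emb):
    """Return True if any concept similarity exceeds its threshold.

    image_emb: L2-normalized CLIP image embedding, shape (dim,).
    """
    sims = image_emb @ text_emb.T  # cosine similarity per concept
    return bool((sims > thresholds).any())
```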
violence_detection_vit_b_32.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76909cca3e77abbbb21313895756773a9c6fb26c9cc743f3ef0625f025741982
+ size 4224
violence_detection_vit_l_14.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc08f81ef50b1ef71452ddc87285fe6fbbc01b6fe8f8b8c24482c386ef2ab7e1
+ size 6272