---
pipeline_tag: image-classification
library_name: transformers
---

# **Guard-Against-Unsafe-Content-Siglip2**

**Guard-Against-Unsafe-Content-Siglip2** is a vision-language encoder model fine-tuned from google/siglip2-base-patch16-224 for single-label image classification. It is designed to detect NSFW content, including vulgarity and nudity, using the SiglipForImageClassification architecture.

The model categorizes images into two classes:
- **Class 0:** "Unsafe Content" – indicating that the image contains vulgarity, nudity, or explicit content.
- **Class 1:** "Safe Content" – indicating that the image is appropriate and does not contain explicit material.
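
Below is a minimal inference sketch using the Transformers library. The repo id and image path are illustrative placeholders, and the label mapping assumes the class order listed above.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
model_name = "Guard-Against-Unsafe-Content-Siglip2"
processor = AutoImageProcessor.from_pretrained(model_name)
model = SiglipForImageClassification.from_pretrained(model_name)

# Load and preprocess an input image (the path is illustrative).
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Forward pass without gradients, then convert logits to probabilities.
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()

# Class mapping taken from the list above.
labels = {0: "Unsafe Content", 1: "Safe Content"}
for idx, p in enumerate(probs.tolist()):
    print(f"{labels[idx]}: {p:.4f}")
```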