tobi1modna
committed on
Update README.md
README.md
---
library_name: transformers
license: cc-by-nc-4.0
tags:
- clip
- safeclip
- vision-and-language
- text-to-image
- image-to-text
- generation
- retrieval
- safety
- nsfw
---

# Model Card: Safe-CLIP

Safe-CLIP, introduced in the paper [**Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models**](https://arxiv.org/abs/2311.16254), is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications.

Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association of linguistic and visual concepts with NSFW content, ensuring **safer outputs** in text-to-image and image-to-text retrieval and generation tasks.

## NSFW Definition

In our work, taking inspiration from this [paper](https://arxiv.org/abs/2211.05105), we define NSFW as a finite and fixed set of concepts that are considered inappropriate, offensive, or harmful to individuals. These concepts are divided into twenty categories: _hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, and cruelty_.

## Model Details

Safe-CLIP is a fine-tuned version of the [CLIP](https://huggingface.co/docs/transformers/en/model_doc/clip) model. The model is fine-tuned on the ViSU (Visual Safe and Unsafe) Dataset, introduced in the same [paper](https://arxiv.org/abs/2311.16254).

ViSU contains quadruplets of elements: safe and NSFW sentence pairs, along with corresponding safe and NSFW images. You can find the <u>text portion</u> of the ViSU Dataset publicly released on the HuggingFace [ViSU-Text](https://huggingface.co/datasets/aimagelab/ViSU-Text) page. We decided not to release the vision portion of the dataset due to the presence of extremely inappropriate images. These images have the potential to cause harm and distress to individuals. Consequently, releasing this part of the dataset would be irresponsible and contrary to the principles of ensuring the safe and ethical use of AI technology. The final model redirects inappropriate content to safe regions of the embedding space while preserving the integrity of safe embeddings.
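
If you want to browse the released text portion, it can be loaded with the 🤗 `datasets` library. The following is a minimal sketch; the split and column names of ViSU-Text are assumptions and should be checked on the dataset page:

```python
>>> from datasets import load_dataset

>>> # load the publicly released text portion of ViSU (split and column names may differ)
>>> visu_text = load_dataset("aimagelab/ViSU-Text", split="train")
>>> print(visu_text[0])
```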

**Variations** Safe-CLIP comes in four versions to improve compatibility across some of the most popular vision-and-language models employed for I2T and T2I generation tasks. More details are reported in the table below.

|                          | StableDiffusion compatibility | LLaVA compatibility |
|--------------------------|:-----------------------------:|:-------------------:|
| safe-CLIP ViT-L-14       | 1.4                           | ?                   |
| safe-CLIP ViT-L-14-336px | -                             | 1.5, 1.6            |
| safe-CLIP ViT-H-14       | -                             | -                   |
| safe-CLIP SD 2.0         | 2.0                           | -                   |

**Model Release Date** 9 July 2024.

For more information about the model, training details, dataset, and evaluation, please refer to the [paper](https://arxiv.org/abs/2311.16254).
You can also find the downstream-task example code in the repository of the paper [here](https://github.com/aimagelab/safe-clip).
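
As a quick illustration of how a variant from the table above can feed an image-to-text (LLaVA-style) pipeline, the safe vision tower can also be loaded on its own. This is a minimal sketch, not code from the paper's repository; it assumes the checkpoint can be loaded through transformers' `CLIPVisionModelWithProjection` class, and the image URL is only an example:

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

>>> # load only the (safe) vision tower from the full Safe-CLIP checkpoint
>>> vision_model = CLIPVisionModelWithProjection.from_pretrained("aimagelab/safeclip_vit-l_14")
>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

>>> # projected image embedding that an I2T model could consume
>>> image_embeds = vision_model(pixel_values=pixel_values).image_embeds
```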

## Applications

Safe-CLIP can be employed in various applications where safety and appropriateness are critical, including cross-modal retrieval, text-to-image generation, and image-to-text generation. It works seamlessly with pre-trained generative models, providing safer alternatives without compromising the quality of the semantic content.

#### Use with Transformers

See the snippet below for usage with Transformers:

```python
>>> from transformers import CLIPModel

>>> model_id = "aimagelab/safeclip_vit-l_14"

>>> model = CLIPModel.from_pretrained(model_id)
```

## Downstream Use

#### Safe Text-to-Image Generation

```python
>>> from diffusers import StableDiffusionPipeline
>>> from transformers import CLIPTextModel
>>> from torch import Generator

>>> # set device to GPU
>>> device = 'cuda'

>>> # set generator with seed for reproducibility
>>> generator = Generator(device=device)
>>> generator.manual_seed(42)

>>> clip_backbone = "openai/clip-vit-large-patch14"
>>> sd_model_id = "CompVis/stable-diffusion-v1-4"

>>> # load the Safe-CLIP text encoder
>>> safeclip_text_model = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-l_14")

>>> # load the StableDiffusion 1.4 pipeline
>>> safe_pipeline = StableDiffusionPipeline.from_pretrained(sd_model_id, safety_checker=None)

>>> # swap the text_encoder of StableDiffusion for the Safe-CLIP text encoder to make generation safer
>>> safe_pipeline.text_encoder = safeclip_text_model
>>> safe_pipeline = safe_pipeline.to(device)

>>> # Disclaimer! The purpose of this snippet is to demonstrate the generation of a safe image with the Safe-CLIP model.
>>> # The prompt used here is inappropriate and is shown only for demonstration purposes (the resulting image is safe).
>>> prompt = "A young woman being raped on the beach from behind"
>>> safe_image = safe_pipeline(prompt=prompt, generator=generator).images[0]
>>> safe_image.save("safe_image.png")
```

#### Zero-shot classification example

```python
>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model_id = "aimagelab/safeclip_vit-l_14"

>>> model = CLIPModel.from_pretrained(model_id)
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
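
The same checkpoint can also be used for safe cross-modal retrieval by embedding images and texts separately and ranking them by cosine similarity. The following is a minimal sketch rather than the retrieval code from the paper's repository; the candidate captions are invented for illustration:

```python
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model = CLIPModel.from_pretrained("aimagelab/safeclip_vit-l_14")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> captions = ["two cats sleeping on a couch", "a plate of food", "a city skyline at night"]

>>> inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
...     text_emb = model.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])

>>> # normalize the embeddings and rank the captions by cosine similarity to the image
>>> image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
>>> text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
>>> scores = (image_emb @ text_emb.T).squeeze(0)
>>> print(captions[scores.argmax().item()])
```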

## Citation

Please cite with the following BibTeX:
```bibtex
@article{poppi2024removing,
  title={{Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models}},
  author={Poppi, Samuele and Poppi, Tobia and Cocchi, Federico and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  journal={arXiv preprint arXiv:2311.16254},
  year={2024}
}
```