diffusers-benchmarking-bot
committed on
Upload folder using huggingface_hub

- main/README.md +5 -5
- main/lpw_stable_diffusion_xl.py +5 -2
- main/pipeline_demofusion_sdxl.py +5 -2
- main/pipeline_sdxl_style_aligned.py +5 -2
- main/pipeline_stable_diffusion_xl_controlnet_adapter.py +5 -2
- main/pipeline_stable_diffusion_xl_controlnet_adapter_inpaint.py +5 -2
- main/pipeline_stable_diffusion_xl_differential_img2img.py +5 -2
- main/pipeline_stable_diffusion_xl_ipex.py +5 -2
main/README.md
CHANGED
@@ -33,12 +33,12 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
 | Bit Diffusion | Diffusion on discrete data | [Bit Diffusion](#bit-diffusion) | - | [Stuti R.](https://github.com/kingstut) |
 | K-Diffusion Stable Diffusion | Run Stable Diffusion with any of [K-Diffusion's samplers](https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py) | [Stable Diffusion with K Diffusion](#stable-diffusion-with-k-diffusion) | - | [Patrick von Platen](https://github.com/patrickvonplaten/) |
 | Checkpoint Merger Pipeline | Diffusion Pipeline that enables merging of saved model checkpoints | [Checkpoint Merger Pipeline](#checkpoint-merger-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
-| Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) |
+| Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Diffusion and compare their results together | [Stable Diffusion Comparison](#stable-diffusion-comparisons) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_comparison.ipynb) | [Suvaditya Mukherjee](https://github.com/suvadityamuk) |
 | MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
-| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) |
+| Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_unclip.ipynb) | [Ray Wang](https://wrong.wang) |
-| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) |
+| UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/unclip_text_interpolation.ipynb) | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
 | UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
-| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) |
+| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ddim_noise_comparative_analysis.ipynb) | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
 | CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
 | TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
 | EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/edict_image_pipeline.ipynb) | [Joqsan Azocar](https://github.com/Joqsan) |
@@ -50,7 +50,7 @@ Please also check out our [Community Scripts](https://github.com/huggingface/dif
 | IADB Pipeline | Implementation of [Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model](https://arxiv.org/abs/2305.03486) | [IADB Pipeline](#iadb-pipeline) | - | [Thomas Chambon](https://github.com/tchambon)
 | Zero1to3 Pipeline | Implementation of [Zero-1-to-3: Zero-shot One Image to 3D Object](https://arxiv.org/abs/2303.11328) | [Zero1to3 Pipeline](#zero1to3-pipeline) | - | [Xin Kong](https://github.com/kxhit) |
 | Stable Diffusion XL Long Weighted Prompt Pipeline | A pipeline support unlimited length of prompt and negative prompt, use A1111 style of prompt weighting | [Stable Diffusion XL Long Weighted Prompt Pipeline](#stable-diffusion-xl-long-weighted-prompt-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LsqilswLR40XLLcp6XFOl5nKb_wOe26W?usp=sharing) | [Andrew Zhu](https://xhinker.medium.com/) |
-| FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) |
+| FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback from liked and disliked images | [Stable Diffusion Fabric Pipeline](#stable-diffusion-fabric-pipeline) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/stable_diffusion_fabric.ipynb) | [Shauray Singh](https://shauray8.github.io/about_shauray/) |
 | sketch inpaint - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
 | sketch inpaint xl - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion XL Pipeline](#stable-diffusion-xl-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
 | prompt-to-prompt | change parts of a prompt and retain image structure (see [paper page](https://prompt-to-prompt.github.io/)) | [Prompt2Prompt Pipeline](#prompt2prompt-pipeline) | - | [Umer H. Adil](https://twitter.com/UmerHAdil) |
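As a usage note (not part of this diff): every pipeline listed in the table above, including the files touched by this commit, is loaded by passing its file name under `main/` as `custom_pipeline`. A minimal sketch, assuming an SDXL base checkpoint (the model ID below is only an example):

```python
# Minimal sketch of loading a community pipeline from this folder.
# The base checkpoint is an example; any SDXL checkpoint should work.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="lpw_stable_diffusion_xl",  # file name under main/, no .py
    torch_dtype=torch.float16,
)
image = pipe(prompt="a photo of an astronaut riding a horse").images[0]
```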
main/lpw_stable_diffusion_xl.py
CHANGED
@@ -827,7 +827,9 @@ class SDXLLongPromptWeightingPipeline(
                 )
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 prompt_embeds = prompt_embeds.hidden_states[-2]
 
                 prompt_embeds_list.append(prompt_embeds)
@@ -879,7 +881,8 @@ class SDXLLongPromptWeightingPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
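Some context on the guard added above (the same change recurs in every file below). When a `transformers` CLIP text model runs with `output_hidden_states=True`, indexing its output at `[0]` yields the 3-D `last_hidden_state` for a plain `CLIPTextModel` but the 2-D projected `text_embeds` for a `CLIPTextModelWithProjection`, so `prompt_embeds[0]` is only a pooled embedding in the latter case. The `ndim == 2` check makes that explicit, and the `is None` check avoids clobbering a pooled embedding that was already captured. A standalone sketch of the logic (the helper name is mine, not from the commit):

```python
import torch

def maybe_capture_pooled(pooled, encoder_output):
    # Adopt encoder_output[0] as the pooled embedding only if none was
    # captured yet and the tensor is 2-D, i.e. shaped (batch, dim).
    first = encoder_output[0]
    if pooled is None and first.ndim == 2:
        return first
    return pooled

# 3-D first element (a last_hidden_state): not treated as pooled.
assert maybe_capture_pooled(None, (torch.zeros(1, 77, 768),)) is None
# 2-D first element (projected text_embeds): captured as pooled.
assert maybe_capture_pooled(None, (torch.zeros(1, 1280),)).shape == (1, 1280)
```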
main/pipeline_demofusion_sdxl.py
CHANGED
@@ -290,7 +290,9 @@ class DemoFusionSDXLPipeline(
                 )
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 prompt_embeds = prompt_embeds.hidden_states[-2]
 
                 prompt_embeds_list.append(prompt_embeds)
@@ -342,7 +344,8 @@ class DemoFusionSDXLPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
main/pipeline_sdxl_style_aligned.py
CHANGED
@@ -628,7 +628,9 @@ class StyleAlignedSDXLPipeline(
                 prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 if clip_skip is None:
                     prompt_embeds = prompt_embeds.hidden_states[-2]
                 else:
@@ -688,7 +690,8 @@ class StyleAlignedSDXLPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
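This file and the remaining ones place the guard just before the `clip_skip` branch. The `else:` body falls outside the hunk; as an assumption about that elided line, diffusers' SDXL pipelines conventionally step further back through the hidden states, along these lines:

```python
# Assumed convention for the elided clip_skip branch (not shown in the hunk):
# the default takes the penultimate hidden state; clip_skip walks further back,
# with "+ 2" covering the final layer plus the default one-layer skip.
def select_hidden_state(hidden_states, clip_skip=None):
    if clip_skip is None:
        return hidden_states[-2]
    return hidden_states[-(clip_skip + 2)]
```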
main/pipeline_stable_diffusion_xl_controlnet_adapter.py
CHANGED
@@ -359,7 +359,9 @@ class StableDiffusionXLControlNetAdapterPipeline(
                 prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 if clip_skip is None:
                     prompt_embeds = prompt_embeds.hidden_states[-2]
                 else:
@@ -419,7 +421,8 @@ class StableDiffusionXLControlNetAdapterPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
main/pipeline_stable_diffusion_xl_controlnet_adapter_inpaint.py
CHANGED
@@ -507,7 +507,9 @@ class StableDiffusionXLControlNetAdapterInpaintPipeline(
                 prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 if clip_skip is None:
                     prompt_embeds = prompt_embeds.hidden_states[-2]
                 else:
@@ -567,7 +569,8 @@ class StableDiffusionXLControlNetAdapterInpaintPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
main/pipeline_stable_diffusion_xl_differential_img2img.py
CHANGED
@@ -394,7 +394,9 @@ class StableDiffusionXLDifferentialImg2ImgPipeline(
                 prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 if clip_skip is None:
                     prompt_embeds = prompt_embeds.hidden_states[-2]
                 else:
@@ -454,7 +456,8 @@ class StableDiffusionXLDifferentialImg2ImgPipeline(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)
main/pipeline_stable_diffusion_xl_ipex.py
CHANGED
@@ -390,7 +390,9 @@ class StableDiffusionXLPipelineIpex(
                 prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
 
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                pooled_prompt_embeds = prompt_embeds[0]
+                if pooled_prompt_embeds is None and prompt_embeds[0].ndim == 2:
+                    pooled_prompt_embeds = prompt_embeds[0]
+
                 if clip_skip is None:
                     prompt_embeds = prompt_embeds.hidden_states[-2]
                 else:
@@ -450,7 +452,8 @@ class StableDiffusionXLPipelineIpex(
                     output_hidden_states=True,
                 )
                 # We are only ALWAYS interested in the pooled output of the final text encoder
-                negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+                if negative_pooled_prompt_embeds is None and negative_prompt_embeds[0].ndim == 2:
+                    negative_pooled_prompt_embeds = negative_prompt_embeds[0]
                 negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
 
                 negative_prompt_embeds_list.append(negative_prompt_embeds)