
ImportError: cannot import name 'LuminaText2ImgPipeline' from 'diffusers'

#1
by Nicola007 - opened

When I follow the guide in the model card to try Alpha-VLLM/Lumina-Next-SFT-diffusers, an error occurred:
from diffusers import LuminaText2ImgPipeline

ImportError: cannot import name 'LuminaText2ImgPipeline' from 'diffusers'

However, my diffusers version is:
diffusers 0.29.2
Even after installing diffusers-0.30.0.dev0, I still cannot import 'LuminaText2ImgPipeline' from 'diffusers'.
Has LuminaText2ImgPipeline been added to diffusers yet?
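
A quick way to confirm whether an installed diffusers build includes the class (a minimal check, not from the original post; the hasattr test is only illustrative):

import diffusers

print(diffusers.__version__)
# False on stable 0.29.2, which predates the Lumina integration
print(hasattr(diffusers, "LuminaText2ImgPipeline"))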

Alpha-VLLM org

We are integrating Lumina into diffusers now, which will be available very soon!

Thank you very much!
When I use this repo, I encounter the following error:
https://github.com/PommesPeter/diffusers/tree/lumina

ValueError: Cannot load <class 'diffusers.models.transformers.lumina_nextdit2d.LuminaNextDiT2DModel'> from ./pretrained_models/models--Alpha-VLLM--Lumina-Next-SFT-diffusers/transformer because the following keys are missing:
layers.17.feed_forward.linear_2.weight, layers.10.feed_forward.linear_1.weight, layers.20.feed_forward.linear_1.weight, layers.20.feed_forward.linear_3.weight, layers.3.feed_forward.linear_1.weight, layers.21.feed_forward.linear_3.weight, layers.12.feed_forward.linear_1.weight, layers.3.feed_forward.linear_3.weight, layers.11.feed_forward.linear_3.weight, layers.8.feed_forward.linear_1.weight, layers.13.feed_forward.linear_2.weight, layers.4.feed_forward.linear_2.weight, layers.22.feed_forward.linear_2.weight, layers.10.feed_forward.linear_3.weight, layers.4.feed_forward.linear_3.weight, layers.1.feed_forward.linear_2.weight, layers.6.feed_forward.linear_2.weight, layers.14.feed_forward.linear_1.weight, layers.3.feed_forward.linear_2.weight, layers.2.feed_forward.linear_2.weight, layers.1.feed_forward.linear_1.weight, layers.10.feed_forward.linear_2.weight, layers.23.feed_forward.linear_3.weight, layers.15.feed_forward.linear_3.weight, layers.19.feed_forward.linear_2.weight, layers.15.feed_forward.linear_1.weight, layers.8.feed_forward.linear_3.weight, layers.7.feed_forward.linear_1.weight, layers.6.feed_forward.linear_1.weight, layers.16.feed_forward.linear_1.weight, layers.16.feed_forward.linear_2.weight, layers.5.feed_forward.linear_3.weight, layers.13.feed_forward.linear_1.weight, patch_embedder.proj.bias, layers.5.feed_forward.linear_1.weight, layers.9.feed_forward.linear_1.weight, layers.2.feed_forward.linear_1.weight, layers.20.feed_forward.linear_2.weight, layers.0.feed_forward.linear_3.weight, layers.9.feed_forward.linear_3.weight, layers.7.feed_forward.linear_3.weight, layers.12.feed_forward.linear_2.weight, patch_embedder.proj.weight, layers.14.feed_forward.linear_2.weight, layers.4.feed_forward.linear_1.weight, layers.0.feed_forward.linear_1.weight, layers.23.feed_forward.linear_1.weight, layers.19.feed_forward.linear_3.weight, layers.5.feed_forward.linear_2.weight, layers.17.feed_forward.linear_3.weight, layers.6.feed_forward.linear_3.weight, layers.13.feed_forward.linear_3.weight, layers.7.feed_forward.linear_2.weight, layers.0.feed_forward.linear_2.weight, layers.16.feed_forward.linear_3.weight, layers.11.feed_forward.linear_1.weight, layers.14.feed_forward.linear_3.weight, layers.18.feed_forward.linear_3.weight, layers.12.feed_forward.linear_3.weight, layers.21.feed_forward.linear_2.weight, layers.17.feed_forward.linear_1.weight, layers.23.feed_forward.linear_2.weight, layers.22.feed_forward.linear_1.weight, layers.11.feed_forward.linear_2.weight, layers.15.feed_forward.linear_2.weight, layers.19.feed_forward.linear_1.weight, layers.8.feed_forward.linear_2.weight, layers.18.feed_forward.linear_1.weight, layers.2.feed_forward.linear_3.weight, layers.21.feed_forward.linear_1.weight, layers.22.feed_forward.linear_3.weight, layers.18.feed_forward.linear_2.weight, layers.9.feed_forward.linear_2.weight, layers.1.feed_forward.linear_3.weight.
Please make sure to pass low_cpu_mem_usage=False and device_map=None if you want to randomly initialize those weights or else make sure your checkpoint file is correct.
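
For anyone hitting the same message: it usually means the local checkpoint files are incomplete or from an older revision. A minimal sketch of forcing a fresh download of the weights (assuming the Hub repo itself is intact; this snippet is not from the original thread):

from diffusers import LuminaText2ImgPipeline
import torch

# force_download=True ignores a possibly stale local cache and re-fetches the files
pipeline = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers",
    torch_dtype=torch.bfloat16,
    force_download=True,
)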

Alpha-VLLM org

Did you pull the latest code and checkpoints?

I pulled the latest code (https://github.com/PommesPeter/diffusers/tree/lumina), downloaded the latest pretrained model, and built and installed the wheel:
python setup.py bdist_wheel
pip install dist/diffusers-0.30.0.dev0-py3-none-any.whl
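
As a sanity check that the freshly built wheel is the one actually imported (not part of the original post):

import diffusers

print(diffusers.__version__)  # should report 0.30.0.dev0 from the locally built wheel
print(diffusers.__file__)     # should point into the site-packages copy installed from dist/
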
Then I ran the following demo:

from diffusers import LuminaText2ImgPipeline
import torch

pipeline = LuminaText2ImgPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).to("cuda")

image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
                        "Background shows an industrial revolution cityscape with smoky skies and tall, metal structures").images[0]

and encountered the following error:

Traceback (most recent call last):
    image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/lumina/pipeline_lumina.py", line 842, in __call__
    noise_pred = self.transformer(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/transformers/lumina_nextdit2d.py", line 450, in forward
    adaln_input=adaln_input,
NameError: name 'adaln_input' is not defined
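
One way to check whether the lumina_nextdit2d.py being executed actually comes from the lumina branch rather than a stale install (a debugging sketch, not from the original thread):

import inspect
from diffusers.models.transformers import lumina_nextdit2d

# Prints the path of the module that is actually imported; if it points at an
# older copy, the NameError can come from stale code rather than the branch itself.
print(inspect.getsourcefile(lumina_nextdit2d))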

Using the latest code and pretrained model, I got the error shown in the attached screenshot (image.png).

Would you please take a look?

Alpha-VLLM org

Hi @Nicola007 ,

We have now merged our latest code into diffusers. Could you update your code and model and give it one more try?

I installed the newest diffusers with

pip install git+https://github.com/huggingface/diffusers.git

and generated a pretty nice picture with the following demo:

from diffusers import LuminaText2ImgPipeline
import torch

pipeline = LuminaText2ImgPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).to("cuda")

image = pipeline(prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
                        "Background shows an industrial revolution cityscape with smoky skies and tall, metal structures").images[0]
image.save('demo_young_woman.png')
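
For anyone reusing this snippet, a few optional knobs (a sketch with illustrative values, not settings from the original post):

import torch
from diffusers import LuminaText2ImgPipeline

pipeline = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()  # trades speed for lower VRAM use

# A fixed seed makes the output reproducible.
generator = torch.Generator("cuda").manual_seed(0)
image = pipeline(
    prompt="Upper body of a young woman in a Victorian-era outfit with brass goggles and leather straps. "
           "Background shows an industrial revolution cityscape with smoky skies and tall, metal structures",
    num_inference_steps=30,
    guidance_scale=4.0,
    generator=generator,
).images[0]
image.save("demo_young_woman_seeded.png")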

many thanks for your replies!

[generated image: demo_young_woman.png]

PommesPeter changed discussion status to closed
