RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)
Something went wrong when I tried to upscale the image; the full console log is below.
Config:
Error message:
```
2023-11-07 10:23:55,024 - ControlNet - INFO - Loading model: control_v11p_sd15_inpaint [ebff9138]
2023-11-07 10:23:57,736 - ControlNet - INFO - Loaded state_dict from [E:\AIProject\sd-webui-aki-v4.4\models\ControlNet\control_v11p_sd15_inpaint.pth]
2023-11-07 10:23:57,736 - ControlNet - INFO - controlnet_default_config
2023-11-07 10:24:00,940 - ControlNet - INFO - ControlNet model control_v11p_sd15_inpaint [ebff9138] loaded.
2023-11-07 10:24:01,104 - ControlNet - INFO - using inpaint as input
2023-11-07 10:24:01,118 - ControlNet - INFO - Loading preprocessor: inpaint_only
2023-11-07 10:24:01,118 - ControlNet - INFO - preprocessor resolution = 512
2023-11-07 10:24:01,192 - ControlNet - INFO - ControlNet Hooked - Time = 6.18020224571228
2023-11-07 10:24:01,359 - ControlNet - INFO - ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 96, 64]).
*** Error completing request
*** Arguments: ('task(7t38e1k7917dhs7)', 'clownfish,coral reef,bubble,kelp,', '3d,realistic,badhandv4:1.4,EasyNegative,ng_deepnegative_v1_75t,bad anatomy,futa,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,monochrome,grayscale,(pointed chin),skin spots,acnes,skin blemishes(fat:1.2),facing away,looking away,', [], 20, 'Euler a', 1, 1, 7, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002217C044D30>, 0, False, '', 0.8, 3483093119, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000002217C39C730>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002217C39C910>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002217C17C700>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002217C17E920>, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False) {}
Traceback (most recent call last):
File "E:\AIProject\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "E:\AIProject\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 732, in process_images
res = process_images_inner(p)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 867, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 451, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 1140, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 235, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 235, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_models_xl.py", line 37, in apply_model
return self.model(x, t, cond)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
return self.diffusion_model(
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 853, in forward_webui
raise e
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 850, in forward_webui
return forward(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 591, in forward
control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
return self.control_model(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 314, in forward
h = module(h, emb, context)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
x = layer(x, context)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
x = block(x, context=context[i])
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
return checkpoint(
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
return func(*inputs)
File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 478, in _forward
self.attn2(
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_optimizations.py", line 486, in xformers_attention_forward
k_in = self.to_k(context_k)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\AIProject\sd-webui-aki-v4.4\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
return originals.Linear_forward(self, input)
File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 768x320)
Hint: the Python runtime raised an exception. Please check the troubleshooting page.
```
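
For context, the failing call is `F.linear` inside the ControlNet's cross-attention `to_k` projection. Below is a minimal sketch of my own (not from the log) that reproduces the exact message, assuming the 768x320 weight belongs to an SD1.5 ControlNet and the 2048-wide context comes from an SDXL text encoder:

```
# Illustrative sketch only: a Linear layer that expects a 768-dim context,
# fed a 2048-dim SDXL-style embedding, raises the same RuntimeError.
import torch
import torch.nn as nn

to_k = nn.Linear(768, 320, bias=False)   # like the ControlNet's attn2.to_k: 768 -> 320
sd15_context = torch.randn(77, 768)      # 77 tokens x 768 dims (SD1.5 text-encoder width)
sdxl_context = torch.randn(77, 2048)     # 77 tokens x 2048 dims (SDXL text-encoder width)

print(to_k(sd15_context).shape)          # torch.Size([77, 320]) -- compatible
to_k(sdxl_context)                       # RuntimeError: mat1 and mat2 shapes cannot be
                                         # multiplied (77x2048 and 768x320)
```

If that is what is happening here, the likely culprit is pairing control_v11p_sd15_inpaint (an SD1.5 ControlNet) with an SDXL checkpoint; the trace does pass through modules/sd_models_xl.py.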
A similar error (the same 768x320 weight, fed a batched 154x2048 context) also occurs in ComfyUI:

```
Error occurred when executing KSampler:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
File "/workspace/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/workspace/ComfyUI/nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/workspace/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
raise e
File "/workspace/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
File "/workspace/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 126, in animatediff_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "/workspace/ComfyUI/comfy/sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/workspace/ComfyUI/comfy/samplers.py", line 709, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/workspace/ComfyUI/comfy/samplers.py", line 615, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/workspace/ComfyUI/comfy/samplers.py", line 554, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/workspace/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/workspace/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/samplers.py", line 275, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/samplers.py", line 265, in forward
return self.apply_model(*args, **kwargs)
File "/workspace/ComfyUI/comfy/samplers.py", line 262, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/workspace/ComfyUI/comfy/samplers.py", line 250, in sampling_function
cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "/workspace/ComfyUI/comfy/samplers.py", line 205, in calc_cond_uncond_batch
c['control'] = control.get_control(input_x, timestep, c, len(cond_or_uncond))
File "/workspace/ComfyUI/comfy/controlnet.py", line 166, in get_control
control = self.control_model(x=x_noisy.to(self.control_model.dtype), hint=self.cond_hint, timesteps=timestep.float(), context=context.to(self.control_model.dtype), y=y)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/cldm/cldm.py", line 304, in forward
h = module(h, emb, context)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 43, in forward
x = layer(x, context, transformer_options)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/ldm/modules/attention.py", line 560, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/ldm/modules/attention.py", line 390, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/workspace/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 123, in checkpoint
return func(*inputs)
File "/workspace/ComfyUI/comfy/ldm/modules/attention.py", line 492, in _forward
n = self.attn2(n, context=context_attn2, value=value_attn2)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/ComfyUI/comfy/ldm/modules/attention.py", line 358, in forward
k = self.to_k(context)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
```
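
The ComfyUI trace ends in the same `F.linear` call, with cond and uncond batched together (154 = 2 x 77 tokens) against the same 768x320 weight. As a sanity check, one could inspect the ControlNet weights directly; the path below is taken from the WebUI log above and the key name follows the usual ControlNet layout, so treat this as a sketch rather than a verified recipe:

```
# Sketch: confirm which context width the ControlNet's cross-attention expects.
import torch

sd = torch.load(
    r"E:\AIProject\sd-webui-aki-v4.4\models\ControlNet\control_v11p_sd15_inpaint.pth",
    map_location="cpu",
)
sd = sd.get("state_dict", sd)            # unwrap if the checkpoint nests its weights
key = next(k for k in sd if k.endswith("attn2.to_k.weight"))
print(key, tuple(sd[key].shape))         # (320, 768) would mean the ControlNet expects a
                                         # 768-dim context, not the 2048-dim one in the error
```

A 768-wide `to_k` here, against the 2048-wide context reported in both errors, would explain both tracebacks.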