
[Bug]: Refiner, Sizes of tensors must match except in dimension 0. Expected size 1280 but got size 768 for tensor number 1 in the list. #12400

Closed
zz2222222222222 opened this issue Aug 8, 2023 · 4 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@zz2222222222222

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

tensor = torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]])
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 1280 but got size 768 for tensor number 1 in the list.
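For context, a minimal standalone sketch of the failure (not webui code; the 77×1280 and 4×768 shapes are only illustrative of SDXL's 1280-wide OpenCLIP-G token embeddings versus a 768-wide SD 1.x textual-inversion embedding):

```python
import torch

# Illustration only (assumed shapes): torch.cat along dim 0 requires every
# other dimension to agree, so splicing a 768-wide embedding into a
# 1280-wide token-embedding tensor raises the same RuntimeError.
sdxl_tokens = torch.zeros(77, 1280)  # hypothetical SDXL token embeddings
sd15_embed = torch.zeros(4, 768)     # hypothetical SD 1.x textual-inversion vectors

try:
    torch.cat([sdxl_tokens[:1], sd15_embed, sdxl_tokens[5:]])
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 0 ...
```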

Steps to reproduce the problem

1. Select an SDXL model as the main checkpoint.
2. Select an SD 1.x model as the Refiner.
3. Set the Refiner "Switch at" value to 0.5.
4. Click Generate in txt2img.
5. The first generation completes normally.
6. Click Generate in txt2img again; the error occurs.

What should have happened?

Generation should complete without this error.

As a workaround I ignore the embedding when its shape doesn't match the encoder's; with that change it works, but I can't be sure the result is correct:
```python
class EmbeddingsWithFixes(torch.nn.Module):
    def __init__(self, wrapped, embeddings, textual_inversion_key='clip_l'):
        super().__init__()
        self.wrapped = wrapped
        self.embeddings = embeddings
        self.textual_inversion_key = textual_inversion_key

    def forward(self, input_ids):
        batch_fixes = self.embeddings.fixes
        self.embeddings.fixes = None

        inputs_embeds = self.wrapped(input_ids)

        if batch_fixes is None or len(batch_fixes) == 0 or max([len(x) for x in batch_fixes]) == 0:
            return inputs_embeds

        vecs = []
        for fixes, tensor in zip(batch_fixes, inputs_embeds):
            for offset, embedding in fixes:
                vec = embedding.vec[self.textual_inversion_key] if isinstance(embedding.vec, dict) else embedding.vec
                emb = devices.cond_cast_unet(vec)
                emb_len = min(tensor.shape[0] - offset - 1, emb.shape[0])

                # added: skip embeddings whose width doesn't match this text encoder
                if emb.shape[1:] != tensor.shape[1:]:
                    continue

                tensor = torch.cat([tensor[0:offset + 1], emb, tensor[offset + 1 + emb_len:]])

            vecs.append(tensor)

        return torch.stack(vecs)
```
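A quick standalone check of the proposed guard (illustrative only; `splice` is a hypothetical helper mirroring the splice logic above, not a webui function):

```python
import torch

# Standalone check of the proposed guard: a 768-wide embedding is ignored
# by a 1280-wide encoder, while a 1280-wide embedding is spliced in as before.
def splice(tensor, offset, emb):
    emb_len = min(tensor.shape[0] - offset - 1, emb.shape[0])
    if emb.shape[1:] != tensor.shape[1:]:   # the proposed guard
        return tensor
    return torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]])

tokens = torch.zeros(77, 1280)
print(splice(tokens, 0, torch.ones(4, 768)).shape)    # torch.Size([77, 1280]), embedding skipped
print(splice(tokens, 0, torch.ones(4, 1280)).shape)   # torch.Size([77, 1280]), embedding applied
```

With the guard in place, a mismatched textual-inversion embedding is silently ignored for that encoder instead of raising, which is consistent with generation succeeding but the embedding possibly having no effect there.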

Version or Commit where the problem happens

dev

What Python version are you running on ?

None

What platforms do you use to access the UI ?

Linux

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

default

List of extensions

none

Console logs

Traceback (most recent call last):
      File ".../stable-diffusion-webui/modules/call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File ".../stable-diffusion-webui/modules/call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File ".../stable-diffusion-webui/modules/txt2img.py", line 63, in txt2img
        processed = processing.process_images(p)
      File ".../stable-diffusion-webui/modules/processing.py", line 746, in process_images
        res = process_images_inner(p)
      File ".../stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File ".../stable-diffusion-webui/modules/processing.py", line 858, in process_images_inner
        p.setup_conds()
      File ".../stable-diffusion-webui/modules/processing.py", line 1302, in setup_conds
        super().setup_conds()
      File ".../stable-diffusion-webui/modules/processing.py", line 373, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File ".../stable-diffusion-webui/modules/processing.py", line 362, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File ".../stable-diffusion-webui/modules/prompt_parser.py", line 168, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File ".../stable-diffusion-webui/modules/sd_models_xl.py", line 31, in get_learned_conditioning
        c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
      File ".../stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File ".../stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 141, in forward
        emb_out = embedder(batch[embedder.input_key])
      File ".../stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File ".../stable-diffusion-webui/modules/sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File ".../stable-diffusion-webui/modules/sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File ".../stable-diffusion-webui/modules/sd_hijack_open_clip.py", line 57, in encode_with_transformers
        d = self.wrapped.encode_with_transformer(tokens)
      File ".../stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 467, in encode_with_transformer
        x = self.model.token_embedding(text)  # [batch_size, n_ctx, d_model]
      File ".../stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File ".../stable-diffusion-webui/modules/sd_hijack.py", line 330, in forward
        tensor = torch.cat([tensor[0:offset + 1], emb, tensor[offset + 1 + emb_len:]])
    RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 1280 but got size 768 for tensor number 1 in the list.

Additional information

No response

@zz2222222222222 added the bug-report (Report of a bug, yet to be confirmed) label on Aug 8, 2023
@dhwz
Contributor

dhwz commented Aug 8, 2023

Is this a bug report about unmerged changes? If so, please don't open an issue; report it on the PR instead.

@zz2222222222222
Author

> Is this a bug report about unmerged changes? If so, please don't open an issue; report it on the PR instead.

Sorry, I'm a new user and don't know how to do that yet; I'm still learning the workflow.

@dhwz
Contributor

dhwz commented Aug 8, 2023

You already commented there; is #12377 the PR you're testing?
Please close this issue.

@zz2222222222222
Author

ok
