SD-XL 0.9 LoRAs from CivitAI cannot be loaded #4279

Closed

patrickvonplaten opened this issue Jul 26, 2023 · 3 comments

Labels: bug (Something isn't working), stale (Issues that haven't received updates)

Comments

patrickvonplaten (Contributor) commented Jul 26, 2023

Describe the bug

Many SDXL 0.9 LoRAs from CivitAI don't seem to load. Can we investigate what is going on here? It is important to support SDXL LoRAs out of the box for the 1.0 release.

Try the following links:
https://civitai.com/models/110400
https://civitai.com/models/112904?modelVersionId=121933
https://civitai.com/models/113124?modelVersionId=122189
https://civitai.com/models/115888?modelVersionId=122189
https://civitai.com/models/106864?modelVersionId=123399

Reproduction

Download any of the LoRAs above and try loading it with:

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.load_lora_weights("<locally-downloaded-lora>")  # path to the downloaded LoRA file

The loading breaks with the traceback below.
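Before the traceback, a quick way to see why these checkpoints trip the loader is to list their keys directly (a minimal sketch, assuming the download is a .safetensors file; the filename is a placeholder):

from safetensors import safe_open

# Placeholder path; substitute the LoRA file downloaded from CivitAI.
with safe_open("downloaded_lora.safetensors", framework="pt") as f:
    for key in sorted(f.keys())[:10]:
        print(key)

The keys do not follow the unet./text_encoder. naming that diffusers expects, which is presumably what the key parsing in load_lora_weights trips over.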

Logs

File ~/python_bin/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py:857, in StableDiffusionXLPipeline.load_lora_weights(self, pretrained_model_name_or_path_or_dict, **kwargs)
    855 def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
    856     state_dict, network_alpha = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
--> 857     self.load_lora_into_unet(state_dict, network_alpha=network_alpha, unet=self.unet)
    859     text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
    860     if len(text_encoder_state_dict) > 0:

File ~/python_bin/diffusers/loaders.py:1055, in LoraLoaderMixin.load_lora_into_unet(cls, state_dict, network_alpha, unet)
   1051     logger.info(f"Loading {cls.unet_name}.")
   1052     unet_lora_state_dict = {
   1053         k.replace(f"{cls.unet_name}.", ""): v for k, v in state_dict.items() if k in unet_keys
   1054     }
-> 1055     unet.load_attn_procs(unet_lora_state_dict, network_alpha=network_alpha)
   1057 # Otherwise, we're dealing with the old format. This means the `state_dict` should only
   1058 # contain the module names of the `unet` as its keys WITHOUT any prefix.
   1059 elif not all(
   1060     key.startswith(cls.unet_name) or key.startswith(cls.text_encoder_name) for key in state_dict.keys()
   1061 ):

File ~/python_bin/diffusers/loaders.py:349, in UNet2DConditionLoadersMixin.load_attn_procs(self, pretrained_model_name_or_path_or_dict, **kwargs)
    347 attn_processor = self
    348 for sub_key in key.split("."):
--> 349     attn_processor = getattr(attn_processor, sub_key)
    351 # Process non-attention layers, which don't have to_{k,v,q,out_proj}_lora layers
    352 # or add_{k,v,q,out_proj}_proj_lora layers.
    353 if "lora.down.weight" in value_dict:

File ~/python_bin/diffusers/models/modeling_utils.py:186, in ModelMixin.__getattr__(self, name)
    183     return self._internal_dict[name]
    185 # call PyTorch's https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
--> 186 return super().__getattr__(name)

File ~/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:1614, in Module.__getattr__(self, name)
   1612     if name in modules:
   1613         return modules[name]
-> 1614 raise AttributeError("'{}' object has no attribute '{}'".format(
   1615     type(self).__name__, name))

AttributeError: 'UNet2DConditionModel' object has no attribute 'input'
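
For context on the AttributeError itself (a sketch of the symptom, not a confirmed root cause): the getattr walk in load_attn_procs resolves each dot-separated key segment as an attribute on the UNet, and diffusers' UNet2DConditionModel exposes its blocks as down_blocks/mid_block/up_blocks, so a key whose first segment is "input" has no matching submodule:

# Continuing from the reproduction above.
print(hasattr(pipe.unet, "down_blocks"))  # True: diffusers naming
print(hasattr(pipe.unet, "input"))        # False: the failing lookup in the traceback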

System Info

  • diffusers version: 0.19.0.dev0 (main)
  • Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
  • Python version: 3.10.6
  • PyTorch version (GPU?): 2.0.1+cu118 (True)
  • Huggingface_hub version: 0.15.1
  • Transformers version: 4.32.0.dev0
  • Accelerate version: 0.21.0
  • xFormers version: 0.0.20
  • Using GPU in script?: NO

Who can help?

@sayakpaul

Also @vladmandic for 👀

patrickvonplaten added the bug label on Jul 26, 2023
sayakpaul (Member) commented:

@patrickvonplaten there is another set of modules that doesn't directly fall under what we currently have in terms of LoRA support: #4133 (comment). Specifically, support for LoRAs added to input convs, skip connections, etc.; see the sketch below. Stating this more as an FYI.
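
For reference, the underlying math is the same regardless of which module a LoRA targets. A minimal sketch of merging a factor pair into an arbitrary weight, assuming kohya-style lora_down/lora_up factors, a scalar alpha, and (for convs) a 1x1 up kernel; the function name is illustrative, not diffusers API:

import torch

def merge_lora_delta(weight, down, up, alpha):
    # Standard LoRA update: delta_W = (alpha / rank) * up @ down
    rank = down.shape[0]
    if weight.ndim == 4:
        # Conv weight: flatten (in_ch, kh, kw) so the factors multiply as matrices.
        delta = (up.flatten(1) @ down.flatten(1)).reshape(weight.shape)
    else:
        # Linear weight.
        delta = up @ down
    return weight + (alpha / rank) * delta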

sayakpaul (Member) commented:

Being worked on in #4287.

github-actions bot commented:

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot added the stale label on Aug 25, 2023
github-actions bot closed this as completed on Sep 2, 2023