
pipe.unet.load_attn_procs not working in `diffusers` version 0.16.0 #3221


Closed

sing817 opened this issue Apr 25, 2023 · 6 comments
Labels
bug Something isn't working

Comments

@sing817

sing817 commented Apr 25, 2023

Describe the bug

I use unet.load_attn_procs, but it stopped working after I updated the diffusers package.

What is load_lora_weights?
What is the difference between the new and old LoRA formats?
How can I tell whether a checkpoint uses the new or old format?
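One way to tell which format a checkpoint is in is to inspect its state-dict keys (e.g. after a torch.load). A minimal sketch, where the key patterns are illustrative assumptions drawn from the error in this issue rather than the exact diffusers naming:

```python
# Heuristic check of which LoRA layout a checkpoint uses, based on its
# state-dict keys. The key patterns below are illustrative assumptions,
# not the exact naming diffusers produces.

def looks_like_attn_procs_format(state_dict):
    """Attn-processor-style checkpoints key tensors by processor path,
    e.g. '...transformer_blocks.0.attn1.processor.to_q_lora.down.weight'."""
    return any(".processor" in key for key in state_dict)

# Hypothetical key names for illustration only:
attn_procs_style = {
    "down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor.to_q_lora.down.weight": None
}
other_style = {
    "unet.down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.lora_A.weight": None
}

print(looks_like_attn_procs_format(attn_procs_style))  # True
print(looks_like_attn_procs_format(other_style))       # False
```

A mismatch between the layout a checkpoint uses and the loader you call is one plausible way to end up with the KeyError reported below.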

Reproduction

When

pipe.load_lora_weights("./testlora/pytorch_lora_weights.bin")

or

pipe.unet.load_attn_procs("./testlora/pytorch_lora_weights.bin")

get

    KeyError: 'down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor'

When

pipe.load_attn_procs("./testlora/pytorch_lora_weights.bin")

it runs, but it does not actually load any LoRA weights.
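The KeyError suggests the checkpoint's keys do not match the attention-processor names built from the UNet. A minimal sketch of that kind of lookup (the names here are illustrative, not the actual diffusers internals):

```python
# Minimal sketch of the dictionary lookup behind the KeyError above.
# The loader builds a mapping of the model's attention processors keyed
# by module path, then indexes it with prefixes taken from the checkpoint
# keys; the names below are illustrative, not real diffusers internals.

attn_processors = {
    "mid_block.attentions.0.transformer_blocks.0.attn1.processor": object(),
}

# A checkpoint key whose prefix is absent from the model's processor map:
checkpoint_key = "down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor"

try:
    attn_processors[checkpoint_key]
except KeyError as exc:
    print(f"KeyError: {exc}")
```

If the prefix is missing from the map, indexing raises exactly the kind of KeyError shown in this report.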

Logs

No response

System Info

  • diffusers version: 0.16.0.dev0
  • Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.27
  • Python version: 3.10.11
  • PyTorch version (GPU?): 2.0.0+cu117 (True)
  • Huggingface_hub version: 0.13.4
  • Transformers version: 4.28.1
  • Accelerate version: 0.18.0
  • xFormers version: 0.0.18
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:
@sing817 sing817 added the bug Something isn't working label Apr 25, 2023
@sing817 sing817 changed the title load_lora_weights not working? load_lora_weights not working in `diffusers` version 0.16.0 Apr 25, 2023
@sing817 sing817 changed the title load_lora_weights not working in `diffusers` version 0.16.0 pipe.unet.load_attn_procs not working in `diffusers` version 0.16.0 Apr 25, 2023
@sing817 sing817 closed this as completed Apr 25, 2023
@sing817 sing817 reopened this Apr 28, 2023
@patrickvonplaten
Contributor

cc @sayakpaul

@sayakpaul
Member

Could you install diffusers from source (pip install git+https://github.com/huggingface/diffusers) and retry? #3180 should have fixed this.

@webliupeng

Could you install diffusers from source (pip install git+https://github.com/huggingface/diffusers) and retry? #3180 should have fixed this.

I had the same error. I ran this with the latest version of diffusers.

@sayakpaul
Member

Could you post a reproducible code snippet? And also which Diffusers version are you using?

@webliupeng

webliupeng commented Apr 30, 2023

Could you post a reproducible code snippet? And also which Diffusers version are you using?

@sayakpaul could you take a look at issue #3284? I made some changes to the code mentioned in #3284, and now the LoRA training runs through all of its steps (the progress display shows 500/500). However, I still get the same error at the end:

KeyError: 'down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor'

@sayakpaul
Member

We need to be able to reproduce this issue.

I did the following tests on my end. First, I installed diffusers from source: pip install -U git+https://github.com/huggingface/diffusers.

Then I ran inference with three checkpoints:

I used the following code to test inference WITHOUT any failures:

from huggingface_hub.repocard import RepoCard
from diffusers import StableDiffusionPipeline
import torch 

# any of the above checkpoints should work here
lora_model_id = "sayakpaul/dreambooth-text-encoder-test" 
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)
image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0]

Am I missing anything?

I have also created a PR to clear up some ambiguity around the loading methods for LoRA: #3299
