Inferencing with Lora #3998


Closed
binnn6 opened this issue Jul 8, 2023 · 5 comments


binnn6 commented Jul 8, 2023

Currently, LoRA is only supported for the attention layers of the UNet2DConditionModel.

Does the above statement apply only to the training process, or does it also mean that only the UNet is supported during inference?

I found that, using the same LoRA model, the image quality generated by diffusers with load_lora_weights is much worse than that of the Stable Diffusion webui. Is this because LoRA in diffusers doesn't affect the text encoder?
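One way to check whether a given LoRA file contains text-encoder weights at all is to look at its key prefixes: A1111-style LoRA checkpoints prefix UNet weights with `lora_unet_` and text-encoder weights with `lora_te_`. A minimal sketch (the keys below are synthetic examples of that naming convention; with a real file you would get the keys from `safetensors.torch.load_file`):

```python
from collections import Counter

def count_lora_targets(state_dict_keys):
    """Group A1111-style LoRA keys by the module family they target."""
    counts = Counter()
    for key in state_dict_keys:
        if key.startswith("lora_unet_"):
            counts["unet"] += 1
        elif key.startswith("lora_te_"):
            counts["text_encoder"] += 1
        else:
            counts["other"] += 1
    return dict(counts)

# Synthetic keys illustrating the naming convention
keys = [
    "lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight",
    "lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight",
    "lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight",
]
print(count_lora_targets(keys))  # {'unet': 2, 'text_encoder': 1}
```

If the file reports `text_encoder` keys but the generated images still differ from the webui output, the text-encoder part of the LoRA may not be getting applied.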


binnn6 commented Jul 8, 2023

SD webUI

SD webui parameters:

1girl <lora:shuimobysimV3:1>
Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 1432216924, Size: 512x512, Model hash: e4a30e4607, Model: majicmixRealistic_v6, Lora hashes: "shuimobysimV3: 17cd20c7b6ea", Version: v1.3.2

[attached image: 00018-1432216924]

diffusers

version: 0.18.0.dev0

import torch
from torch import autocast
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

# Load the webui checkpoint directly from the .safetensors file
pipeline = StableDiffusionPipeline.from_single_file(
    "/data/stable-diffusion-webui/models/Stable-diffusion/majicmixRealistic_v6.safetensors"
).to("cuda")

# Match the webui sampler ("Euler a") and disable the safety checker
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
pipeline.safety_checker = None

# Load the A1111-style LoRA weights
pipeline.load_lora_weights(
    "/data/stable-diffusion-webui/models/Lora",
    weight_name="moxinv1.khzz.safetensors",
)

prompt = "1girl"
generator = torch.Generator(device="cuda").manual_seed(1432216924)

with autocast("cuda"):
    images = pipeline(
        prompt,
        width=512,
        height=512,
        num_images_per_prompt=1,
        generator=generator,
        num_inference_steps=30,
        cross_attention_kwargs={"scale": 1},  # LoRA scale, analogous to <lora:...:1>
        guidance_scale=7,
    ).images

images[0]

[attached image: before-face-store-03]

@binnn6 binnn6 changed the title inferencing with Lora Inferencing with Lora Jul 10, 2023
@krNeko9t

It seems that loading A1111-style LoRA weights currently has a bug: #3725


binnn6 commented Jul 10, 2023

Thanks. The patch from #3756 works well for me. Much appreciated!

Currently, LoRA is only supported for the attention layers of the UNet2DConditionModel.

So this statement only applies to the training process, is that right?

@binnn6 binnn6 closed this as completed Jul 10, 2023
@binnn6 binnn6 reopened this Jul 10, 2023
@krNeko9t

Thanks. The patch from #3756 works well for me. Much appreciated!

Currently, LoRA is only supported for the attention layers of the UNet2DConditionModel.

So this statement only applies to the training process, is that right?

I guess the emphasis is on "attention layers": LoRA has no influence on the other network modules.
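Concretely, a LoRA adapter replaces each targeted projection weight W with W + scale · (up @ down), where up and down are low-rank factors; modules without an injected adapter are left untouched. A minimal sketch of the arithmetic (dimensions are illustrative, not taken from any real checkpoint):

```python
import torch

torch.manual_seed(0)

d, rank, scale = 8, 4, 1.0

# Frozen base projection weight (e.g. an attention to_q matrix)
W = torch.randn(d, d)

# Low-rank LoRA factors: delta_W = up @ down has rank <= `rank`
down = torch.randn(rank, d)  # "lora_down" (A)
up = torch.zeros(d, rank)    # "lora_up" (B), zero-initialized so the delta starts at 0

x = torch.randn(1, d)

# Base forward pass vs. LoRA-adjusted forward pass
base_out = x @ W.T
lora_out = x @ W.T + scale * (x @ down.T @ up.T)

# With up == 0 the adapter is a no-op; training makes up nonzero
print(torch.allclose(base_out, lora_out))  # True
```

The `scale` factor here is the same knob exposed as `cross_attention_kwargs={"scale": ...}` in the diffusers pipeline call, analogous to the `:1` in the webui's `<lora:...:1>` syntax.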

@sayakpaul
Member

This is a known issue. Here's a detailed explanation: #3725 (comment).

I am closing this issue in favor of centralizing the efforts in this thread: #3725. I hope that's fine.


3 participants