Inferencing with Lora #3998
SD webui parameters:
diffusers version: 0.18.0.dev0

import torch
from torch import autocast
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline
from transformers import CLIPImageProcessor, CLIPProcessor, CLIPTextModel, CLIPTokenizer
from compel import Compel
from diffusers.utils import get_class_from_dynamic_module
# Load the A1111 single-file checkpoint
pipeline = StableDiffusionPipeline.from_single_file(
    "/data/stable-diffusion-webui/models/Stable-diffusion/majicmixRealistic_v6.safetensors"
).to("cuda")
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
pipeline.safety_checker = None

# Load the A1111-style LoRA weights on top of the pipeline
pipeline.load_lora_weights("/data/stable-diffusion-webui/models/Lora", weight_name="moxinv1.khzz.safetensors")
pipeline.to("cuda")
prompt0 = "1girl"
generator = torch.Generator(device="cuda").manual_seed(1432216924)
with autocast("cuda"):
    images = pipeline(
        prompt0,
        width=512,
        height=512,
        num_images_per_prompt=1,
        generator=generator,
        num_inference_steps=30,
        cross_attention_kwargs={"scale": 1},
        guidance_scale=7,
    ).images
images[0]
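To isolate how much of the difference comes from the LoRA itself, the LoRA strength can be swept at inference time through cross_attention_kwargs. Here is a minimal sketch that reuses the pipeline, prompt0, and seed from the snippet above; the output filenames are only illustrative:

for scale in (0.0, 0.5, 1.0):
    # scale=0.0 effectively disables the LoRA contribution in the UNet attention layers
    generator = torch.Generator(device="cuda").manual_seed(1432216924)
    image = pipeline(
        prompt0,
        width=512,
        height=512,
        generator=generator,
        num_inference_steps=30,
        cross_attention_kwargs={"scale": scale},
        guidance_scale=7,
    ).images[0]
    image.save(f"lora_scale_{scale}.png")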
It seems that loading A1111-style LoRA weights currently has a bug. #3725
Thanks. The patch from #3756 works well for me. Appreciated!
So this statement only applies to the training process, is that right?
I guess the emphasis is on "attention layers"? That would mean the LoRA has no influence on other network modules.
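One way to see that concretely on a loaded pipeline (a minimal sketch, assuming a diffusers build around 0.18 where load_lora_weights swaps LoRA attention processors into the UNet) is to count the processor classes:

from collections import Counter

# Tally the attention processor classes on the UNet; LoRA-enabled layers show up
# as LoRA* processor variants, everything else keeps the default processor.
processor_types = Counter(type(p).__name__ for p in pipeline.unet.attn_processors.values())
print(processor_types)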
This is a known issue. Here's a detailed explanation: #3725 (comment). I am closing this issue in favor of centralizing the efforts in that thread: #3725. I hope that's fine.
Does the above statement only apply to the training process, or does it also imply that only the UNet is supported during the inference process?
I found that, using the same LoRA model, the image quality generated by diffusers with load_lora_weights is much worse than what the Stable Diffusion web UI produces. Is it because LoRA in diffusers doesn't affect the text encoder?
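One way to narrow that down (a small sketch, assuming the kohya/A1111 naming convention in which text-encoder tensors are prefixed with lora_te_ and UNet tensors with lora_unet_) is to inspect the raw LoRA file and check whether it contains text-encoder weights at all:

from safetensors.torch import load_file

# Inspect the LoRA checkpoint used above; if it ships lora_te_* tensors but they
# are not applied at load time, that alone could explain a quality gap vs. the web UI.
state_dict = load_file("/data/stable-diffusion-webui/models/Lora/moxinv1.khzz.safetensors")
te_keys = [k for k in state_dict if k.startswith("lora_te_")]
unet_keys = [k for k in state_dict if k.startswith("lora_unet_")]
print(f"text-encoder tensors: {len(te_keys)}, unet tensors: {len(unet_keys)}")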