Loading .safetensors Lora #3064
Hey @adhikjoshi, thanks for the issue! We should indeed try to also support A1111-style loading of LoRA tensors soon. cc @sayakpaul here
Kohya-ss/sd-scripts has a nice mechanism for it. It broke with 0.15, but you can for sure load A1111 LoRA tensors with the function below on 0.14.0; as of 0.15 it fails.
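For context (the actual function was not captured above), the core of what such a loader does can be sketched independently of diffusers: a LoRA file stores, per layer, a low-rank pair (down, up) plus an alpha, and "loading" means adding up @ down, scaled by alpha / rank, onto the base weight. A minimal NumPy sketch with toy shapes and values, not the real sd-scripts API:

```python
import numpy as np

# Toy base weight and a rank-4 LoRA pair for it (hypothetical sizes).
rank, d_out, d_in = 4, 8, 8
base = np.ones((d_out, d_in), dtype=np.float32)
down = np.full((rank, d_in), 0.1, dtype=np.float32)   # ".lora_down.weight"
up = np.full((d_out, rank), 0.2, dtype=np.float32)    # ".lora_up.weight"
alpha = 4.0                                           # stored ".alpha" value

# Merge: W' = W + (alpha / rank) * up @ down
merged = base + (alpha / rank) * (up @ down)
print(merged[0, 0])  # approximately 1.08
```

The real sd-scripts code does the same arithmetic per attention layer after mapping kohya key names onto the model's modules.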
Can someone provide a LoRA file in the A1111 format? Providing as many relevant details associated with the file as possible would be great too.
I have downloaded an offset-noise trained LoRA and uploaded its .safetensors file to Hugging Face.
@sayakpaul here you go. This LoRA was trained using kohya-ss's scripts and works fine in A1111. I can load it on diffusers 0.14.0 with the snippet above using the lora.py from sd-scripts. Thanks, Alejandro.
Cc: @patrickvonplaten ^
* Fix regression introduced in huggingface#2448
* Style.
Here is a function I made from convert_lora_safetensor_to_diffusers.py to load a LoRA at inference time.
You can use it like:
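The snippet itself was not captured above, so here is a hedged sketch of the same idea: walk a kohya/A1111-style state dict (keys ending in .lora_down.weight / .lora_up.weight, with optional .alpha entries) and fold each pair into the matching base weight. The key names follow the kohya convention; the dict-of-arrays setup is a toy stand-in for real model parameters, and real code must additionally map kohya key stems onto diffusers module names:

```python
import numpy as np

def merge_lora(base_weights, lora_sd, multiplier=1.0):
    """Fold kohya-style LoRA pairs into a dict of base weight matrices."""
    merged = {k: v.copy() for k, v in base_weights.items()}
    for key in lora_sd:
        if not key.endswith(".lora_down.weight"):
            continue
        stem = key[: -len(".lora_down.weight")]
        down = lora_sd[key]
        up = lora_sd[stem + ".lora_up.weight"]
        # .alpha is optional; defaulting to the rank gives a scale of 1.0.
        alpha = float(lora_sd.get(stem + ".alpha", down.shape[0]))
        scale = multiplier * alpha / down.shape[0]
        merged[stem] = merged[stem] + scale * (up @ down)
    return merged

# Toy example with one "layer" (hypothetical key names).
base = {"lora_unet_mid_block_attn1_to_q": np.zeros((4, 4), dtype=np.float32)}
lora = {
    "lora_unet_mid_block_attn1_to_q.lora_down.weight": np.ones((2, 4), dtype=np.float32),
    "lora_unet_mid_block_attn1_to_q.lora_up.weight": np.ones((4, 2), dtype=np.float32),
    "lora_unet_mid_block_attn1_to_q.alpha": 1.0,
}
out = merge_lora(base, lora, multiplier=0.5)
```

With rank 2 and alpha 1.0, the effective scale here is 0.5 * 1.0 / 2 = 0.25 per layer.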
Thanks! Do you have the checkpoints with which we could test this?
This uploaded safetensors LoRA and others work well.
Thanks @adhikjoshi! Getting a lot further with your function, but the output is not matching what I would expect. As a first guess, I would think this is the alpha handling, as that is hard-coded to 0.75 while the LoRAs I'm using have .alpha keys in them.
I updated the function from @adhikjoshi to use the .alpha elements and also added a multiplier that can be used to weight the LoRA overall. Tested this on 4 random LoRAs I downloaded from CivitAI, and it matches the output from Automatic1111.
Example usage:
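The key change over the earlier version is that the per-layer scale is read from the stored .alpha instead of being hard-coded to 0.75, with a global multiplier applied on top. The effective per-layer scale is multiplier * alpha / rank; a quick check of that arithmetic with hypothetical values:

```python
def lora_scale(multiplier, alpha, rank):
    """Effective LoRA scale: global multiplier times stored alpha over rank."""
    return multiplier * alpha / rank

# kohya commonly stores alpha below the rank, so the effective
# scale ends up smaller than the multiplier itself.
print(lora_scale(1.0, 32.0, 64))   # 0.5
print(lora_scale(0.8, 16.0, 16))   # 0.8
```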
I tested on several custom-created LoRAs; it works great! Excellent work, @pdoane, thanks for sharing. Quick question for the group: is there a way to quickly unload a LoRA weight from a loaded pipeline? I want to keep the pipeline in memory and simply assign/remove LoRA embeddings on the fly after each inference. Any pointers are appreciated. Thanks again! Alejandro
There are two options I can think of:
I'll try approach #2
Ugly, but it worked for me. Tested by making 600 inferences, switching between 12 LoRA safetensors 50 times, on diffusers 0.15.1:
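One way to "unload" a merged LoRA without rebuilding the pipeline is to snapshot the affected weights before merging and restore them on demand. A toy sketch of that bookkeeping, with plain arrays standing in for module parameters and hypothetical names throughout:

```python
import numpy as np

class LoraSwapper:
    """Snapshot base weights, merge a LoRA delta, restore on demand."""

    def __init__(self, weights):
        self.weights = weights    # name -> array, mutated in place
        self.backup = {}

    def apply(self, deltas, multiplier=1.0):
        for name, delta in deltas.items():
            if name not in self.backup:          # snapshot only once
                self.backup[name] = self.weights[name].copy()
            self.weights[name] += multiplier * delta

    def restore(self):
        for name, orig in self.backup.items():
            self.weights[name] = orig.copy()
        self.backup.clear()

weights = {"to_q": np.zeros((2, 2), dtype=np.float32)}
swapper = LoraSwapper(weights)
swapper.apply({"to_q": np.ones((2, 2), dtype=np.float32)}, multiplier=0.7)
swapper.restore()  # weights["to_q"] is back to zeros
```

The memory cost is one extra copy of each touched weight, which is usually far cheaper than reloading the whole pipeline between LoRAs.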
@pdoane thanks so much for your inputs and investigations! Do you mind sharing the …
@sayakpaul - followed up in e-mail.
Thanks. However, I think having an end-to-end open example here would help the community a great deal in understanding the nuances of the interoperability better.
@pdoane, come to think of it, would you be interested in improving our LoRA functionality to operate with the A1111 format as well? @patrickvonplaten recently incorporated similar support for our textual inversion scripts: https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion
My assumption is this is just the first step toward getting something more official; I would be glad to help! I have some API questions about it:
In terms of format details:
@alejobrainz how do you use your code for it to work with a prompt in the same way as A1111? I put this as the prompt:
but it ignored the LoRA instructions completely.
For prompt weighting you can use compel. It's great and easy to use. Just be sure to check out the syntax at https://github.com/damian0815/compel/blob/main/Reference.md
Also, be mindful that the LoRA is merged by the script; you only need the keyword your LoRA uses within the prompt.
…0653978 Signed-off-by: Sinri Edogawa <[email protected]>
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I think we can close this one, no @sayakpaul?
Yeah, I think so. We have recently introduced support for loading A1111-formatted LoRAs (thanks to @takuma104). We will continue to iterate on top of it.
How can I update just the multiplier in …
How would you like to adjust the multiplier?
The same value is used on different layers, right? Say that I have initialized it with 1.0, and then I just want to update it. Is that possible to achieve?
I think you would need to fetch the multiplier from the corresponding state dict for that. Since the multiplier only concerns the weight-modification part, I don't think reinitialization would be required here.
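Concretely, because the merge is linear in the multiplier, going from an old multiplier m1 to a new m2 only requires adding (m2 - m1) times the same per-layer delta; no reinitialization of the model is needed. A numeric check of that identity with toy values:

```python
import numpy as np

base = np.full((3, 3), 2.0)
delta = np.full((3, 3), 0.5)   # stands in for (alpha / rank) * up @ down

m1, m2 = 1.0, 0.3
merged_m1 = base + m1 * delta
updated = merged_m1 + (m2 - m1) * delta   # in-place multiplier change
fresh = base + m2 * delta                  # re-merge from scratch
print(np.allclose(updated, fresh))  # True
```

This does require keeping the deltas (or the original weights) around, which is the bookkeeping the comment above refers to.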
Hello, I'm using v0.18 of Diffusers, and I managed to load a LoRA .safetensors file from CivitAI. However, I don't notice any changes even when I run lora:test:1.
With SDXL, old LoRAs won't work. Any workarounds?
@sonaterai I think there are some extra keys and corresponding weight parameters for which we don't have support yet. See #3087; it also links to a couple of other similar threads.
It's not supposed to work, as the corresponding UNet is different, as far as I understand.
If I train a LoRA based on SDXL myself, is there any way to load the new LoRA?
You should be able to. If not, please open a new thread with a reproducible set of instructions.
In my case, on diffusers 0.14.0, it goes wrong. Can you help me? Thanks.
Please use the load_lora_weights() method.
However, I want to load the LoRA for my UNet and CLIP, not for the pipeline (because I reused TensorRT's code).
I tried load_lora_weights(), but the inference results are confusing. Does load_lora_weights() load the text encoder params in the LoRA?
Yes, both CLIP and UNet.
I am having a similar issue. When I run pipe.load_lora_weights("lora_model_id"), where lora_model_id is the path to my model, I simply get this error: KeyError: 'lora.down.weight'. Does anyone know how to fix this?
Can you open a new issue with a reproducible code snippet?
Describe the bug
I have downloaded a LoRA from CivitAI which is in .safetensors format.
When I load it using the code below,
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.unet.load_attn_procs("lora.safetensors")
It throws the error: KeyError: 'to_k_lora.down.weight'
File "/workspace/server/tasks.py", line 346, in txt2img
self.pipe.unet.load_attn_procs(embd, use_safetensors=True)
File "/opt/conda/envs/ldm/lib/python3.8/site-packages/diffusers/loaders.py", line 224, in load_attn_procs
rank = value_dict["to_k_lora.down.weight"].shape[0]
KeyError: 'to_k_lora.down.weight'
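The KeyError arises from a format mismatch: load_attn_procs expects diffusers-style attention-processor keys (e.g. ...to_k_lora.down.weight), while CivitAI files typically use the kohya convention (lora_unet_....lora_down.weight). A rough heuristic for telling the two apart before deciding how to load, offered as a sketch rather than an official API:

```python
def detect_lora_format(keys):
    """Rough heuristic: classify a LoRA state dict by its key naming."""
    if any(".to_k_lora.down.weight" in k or ".processor." in k for k in keys):
        return "diffusers"
    if any(k.startswith(("lora_unet_", "lora_te_")) for k in keys):
        return "kohya"
    return "unknown"

print(detect_lora_format(["lora_unet_mid_block.lora_down.weight"]))   # kohya
print(detect_lora_format(["mid_block.attn1.to_k_lora.down.weight"]))  # diffusers
```

A kohya-format file needs a conversion/merge step (like the functions shared earlier in this thread) rather than load_attn_procs.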
Reproduction
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.unet.load_attn_procs("lora.safetensors")
Logs
No response
System Info
Diffusers Version: 0.15.0.dev0