Support for adding multiple LoRA layers to Diffusers #2613
Agree, we should try to allow loading multiple LoRA layers - will put this on my TODO list.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Thank you for your great work! Expecting the support for adding multiple LoRAs to diffusers.
cc @sayakpaul here as well
What is the status of this issue now? Can we add multiple LoRAs for inference?
Nope. Not yet. We're concretizing the interoperability with Kohya-style LoRA checkpoints along with support for SDXL too.
Thank you for your quick reply.
Also, just a short note that we'll also need a way to adjust the scale (e.g. …).
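For context, the strength of a single loaded LoRA can already be adjusted at inference time through `cross_attention_kwargs`. A minimal sketch, assuming an SD 1.5 base model; the LoRA repo id below is only a placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: scaling a single loaded LoRA at inference time.
# "user/some-lora" is a placeholder repo id, not a real checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/some-lora")

# The LoRA contribution is multiplied by the `scale` entry here;
# 0.0 disables the LoRA, 1.0 applies it at full strength.
image = pipe(
    "a photo of a corgi wearing a wizard hat",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```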
Hi - I would like to express my gratitude for all the recent advancements with LoRAs (and SDXL). Also to note that the ability to have multiple LoRAs is fundamental to my work; without it, I cannot effectively use the diffusers library, which would otherwise be ideal.

I also wanted to chime in with some things to think about when adding support for multiple LoRAs. Speaking of interoperability with Kohya-style LoRAs, the Kohya author created a really great extension for sd-webui called "Additional Networks": https://github.com/kohya-ss/sd-webui-additional-networks

I do hope that this can be prioritized soon, and that the Kohya-style masks/additional-networks could make their way here as well in some fashion.
Hello @patrickvonplaten @sayakpaul, what is the latest status of supporting multiple LoRAs, each with its own scale? We would like to use multiple LoRAs with the LoRA loading pipeline and set a scale parameter for each individual LoRA, just like Automatic1111 or ComfyUI do. When can we expect it to be released? We really need it ASAP. Thank you so much.
It's definitely on the roadmap. But please bear in mind that even though our releases are often shaped by what the community is up to, we tend to prioritize generally useful things without breaking consistent library design. This can often mean that our releases might not align well with the specific deadlines of other organizations. I am also cc'ing @apolinario here since he tried out our latest `fuse_lora` support.
Thanks, will check out `fuse_lora`. It supports multiple LoRA fusion, right?
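For reference, a rough sketch of the fuse-then-load-again workaround with `fuse_lora`; the repo ids are placeholders and this is an illustration, not a confirmed recipe for combining arbitrary LoRAs:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the first LoRA and fold it into the base weights at a chosen strength.
pipe.load_lora_weights("user/lora-one")
pipe.fuse_lora(lora_scale=0.7)

# Load a second LoRA on top of the already-fused weights and fuse it as well.
pipe.load_lora_weights("user/lora-two")
pipe.fuse_lora(lora_scale=0.5)

image = pipe("an astronaut riding a horse, detailed illustration").images[0]
```

The trade-off raised in the following comments is that fusing bakes the LoRAs into the base weights, so individual scales cannot be changed afterwards without unfusing and reloading.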
I would argue that this should be at the top of the list. This is far more important than adding new SD1.5 derivatives like Kandinsky or DeepFloyd IF.
This approach of fusing seems very wrong to me. I would recommend using this instead: https://gist.github.com/takuma104/e38d683d72b1e448b8d9b3835f7cfa44

This is far more elegant, easy to use, and flexible.
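As a generic illustration of the idea behind such monkey-patching approaches (this is not the gist's actual API), a wrapper can keep each LoRA's matrices separate and sum their scaled deltas at forward time, so every scale stays adjustable after loading:

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """Wrap a Linear layer and add several independently scaled LoRA deltas."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        self.loras = nn.ModuleDict()  # adapter name -> ModuleList([down, up])
        self.scales = {}              # adapter name -> float

    def add_lora(self, name, down_weight, up_weight, scale=1.0):
        # down_weight: (rank, in_features), up_weight: (out_features, rank)
        down = nn.Linear(down_weight.shape[1], down_weight.shape[0], bias=False)
        up = nn.Linear(up_weight.shape[1], up_weight.shape[0], bias=False)
        down.weight.data.copy_(down_weight)
        up.weight.data.copy_(up_weight)
        self.loras[name] = nn.ModuleList([down, up])
        self.scales[name] = scale

    def set_scale(self, name, scale):
        # Scales stay adjustable at any time because nothing is fused.
        self.scales[name] = scale

    def forward(self, x):
        out = self.base(x)
        for name, (down, up) in self.loras.items():
            out = out + self.scales[name] * up(down(x))
        return out
```

Because each LoRA is kept separate instead of being merged into the base weight, any adapter can later be reweighted or removed, which is the flexibility being argued for here.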
@alexblattner could you elaborate on how it's wrong?
@apolinario, I'm curious about your usage of `fuse_lora`.
Yes, we need this.
I am going to comment on this as it is something that I am looking to use in a tool I am building on top of diffusers. I have been using A1111 for some time to great success with multiple LoRAs, and I would love to be able to do the same here. I am curious, though: I have been looking at the code, and lines 383 to 463 in loaders.py would seem to me to be able to load multiple LoRAs from a state_dict - unless I am misunderstanding the state_dict format that I was reading.

Also - not sure how realistic this is, but it would prove useful to me in my tool - would it be possible to have a 2D array govern the scales and have them interpolated over the image area? So an input scale of 1 means the LoRA scale is 1 over the whole space, but a scale of [[0.5, 1]] would cause the weight to be 0.5 at the left and 1 at the right, and [[0.5], [1]] the same from top to bottom, extending to [[s_00, s_01, ..., s_0n], ..., [s_m0, s_m1, ..., s_mn]] for m <= output height and n <= output width.

Not sure if this is feasible or even understandable, but I would like to have some fine control over how heavily a certain LoRA is scaled in a certain region of the output. I am planning an artist's tool, and the input functionality can create manually/procedurally painted float gradients, which I would like to let artists leverage for fine art direction. If impossible, no biggie - I will be happy with just being able to use multiple LoRAs.
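Diffusers does not support spatially varying LoRA scales out of the box. Purely as a sketch of the interpolation part of this idea, a coarse scale grid can be resized to a feature-map resolution with bilinear interpolation; the hook that would actually multiply a LoRA delta by this mask is left out:

```python
import torch
import torch.nn.functional as F

def expand_scale_grid(grid, height, width):
    """Bilinearly interpolate a coarse 2D scale grid to a feature-map size.

    `grid` is e.g. [[0.5, 1.0]] for a left-to-right ramp, or [[0.5], [1.0]]
    for a top-to-bottom ramp. Returns a (height, width) tensor of per-pixel
    LoRA scales.
    """
    g = torch.as_tensor(grid, dtype=torch.float32)[None, None]  # (1, 1, m, n)
    return F.interpolate(g, size=(height, width), mode="bilinear", align_corners=True)[0, 0]

# Example: 0.5 on the left edge smoothly rising to 1.0 on the right edge.
mask = expand_scale_grid([[0.5, 1.0]], height=64, width=64)
```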
@williamberman Because you can't modify each LoRA individually afterwards, which limits the options unnecessarily.
I am reading the LoRA loading code that was added in the past weeks. @sayakpaul @patrickvonplaten @isidentical, the code you added is amazing; tremendous work has been done, especially the code that converts Kohya to diffusers and enables SDXL LoRA loading. Pure amazing. Thank you!
@xhinker So can we load multiple LoRAs now? Thanks.
Yes.
Thanks @xhinker, could you please provide a pointer to a multiple-LoRA loading example? Is it `pipe.load_lora_weights("lora1")`, `pipe.load_lora_weights("lora2")`? Thanks.
@linnanwang Please take a look at @sayakpaul's comments above.
Great, I saw there is a readme file that explains how it works: https://github.com/huggingface/diffusers/blob/0513a8cfd8c6a626da0e0bdafc469305a57a723e/docs/source/en/training/lora.md#low-rank-adaptation-of-large-language-models-lora. Thanks all for your help.
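In more recent diffusers versions with the PEFT backend, multiple LoRAs can be loaded as named adapters and weighted individually. A sketch with placeholder repo ids, assuming a version that supports `adapter_name` and `set_adapters`:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# "user/lora-one" and "user/lora-two" are placeholder repo ids.
pipe.load_lora_weights("user/lora-one", adapter_name="style")
pipe.load_lora_weights("user/lora-two", adapter_name="character")

# Activate both adapters with individual strengths.
pipe.set_adapters(["style", "character"], adapter_weights=[0.8, 0.6])

image = pipe("portrait of a knight, intricate details").images[0]
```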
Glad you found that useful. Soon, it will be much more improved with our upcoming integration. @patrickvonplaten, I am closing this issue as #2613 is actually supported now.
@sayakpaul Thanks for the quick answer. One last question: will `unload_lora_weights` unload all the loaded LoRAs (>2)? Thanks.
Are there any good examples of loading multiple LoRAs implemented? @xhinker @linnanwang
I tried this:

```python
import copy
from peft import PeftModel

# base_unet = copy.deepcopy(pipe.unet)
base_unet = pipe.unet

model = PeftModel.from_pretrained(base_unet, lora1_repo, use_safetensors=True, adapter_name=lora1_filename)
model.load_adapter(lora2_repo, use_safetensors=True, adapter_name=lora2_filename)
model.load_adapter(lora3_repo, use_safetensors=True, adapter_name=lora3_filename)

new_adapter_name = f"{lora1_filename}-{lora2_filename}-{lora3_filename}"

# https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraModel.add_weighted_adapter
model.add_weighted_adapter(
    adapters=[lora1_filename, lora2_filename, lora3_filename],
    weights=[0.5, 0.5, 0.5],  # one weight per adapter
    combination_type="cat",
    adapter_name=new_adapter_name,
)
model.set_adapters(new_adapter_name)

pipe.unet = model
```

This works fine until I try to save my pipeline:

```python
pipe.save_pretrained(
    "/content/newmodel",
    variant="fp16",
)
```

Then I get an error.
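One possible workaround, sketched under the assumption that the failure comes from `save_pretrained` encountering the `PeftModel` wrapper instead of a plain UNet: merge the combined adapter into the base weights with PEFT's `merge_and_unload()` and hand the unwrapped UNet back to the pipeline before saving.

```python
# Sketch of a possible fix (assumes the save fails because pipe.unet is a
# PeftModel wrapper): merge the active, combined adapter into the base UNet
# weights, then save a pipeline made only of standard diffusers modules.
pipe.unet = model.merge_and_unload()
pipe.save_pretrained("/content/newmodel", variant="fp16")
```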
Given the popularity of LoRA training for Stable Diffusion, it would be great if the support for adding LoRAs to Diffusers could be expanded to support adding multiple LoRAs. This would be an expansion of this feature - #2129 (comment).