LoRA for clip and vae? #2964
Working on it: #2918
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Text encoder LoRA layers:

```python
from diffusers.utils import TEXT_ENCODER_TARGET_MODULES

text_lora_attn_procs = {}
# ... (populate text_lora_attn_procs from the text encoder modules that
#      match TEXT_ENCODER_TARGET_MODULES; elided in the original comment) ...
text_encoder_lora_layers = AttnProcsLayers(text_lora_attn_procs)
```

This piece of code only identifies the attention and MLP layers of the text encoder and creates LoRA layers for them, but it never inserts them into the text encoder's forward pass. As a result, they have no effect on the output.
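For illustration, here is a minimal sketch of what actually wiring LoRA into the forward pass could look like: each target `nn.Linear` is replaced in place by a wrapper that adds a trainable low-rank path. The `LoRALinear` class and `inject_lora` helper are hypothetical, not diffusers API, and the CLIP-style submodule names (`q_proj`, `k_proj`, `v_proj`) are assumptions that depend on the model.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base  # pretrained layer, assumed frozen by the caller
        self.scale = scale
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)  # zero init: no change until trained

    def forward(self, x):
        # Base output plus the trainable low-rank correction.
        return self.base(x) + self.scale * self.up(self.down(x))

def inject_lora(model: nn.Module, target_substrings=("q_proj", "k_proj", "v_proj")):
    """Replace matching nn.Linear children in place so the LoRA path
    runs on every subsequent forward call of the surrounding model."""
    for module in model.modules():
        for child_name, child in list(module.named_children()):
            if isinstance(child, nn.Linear) and any(s in child_name for s in target_substrings):
                setattr(module, child_name, LoRALinear(child))
```

Because the wrapper is attached via `setattr`, its parameters show up in `model.parameters()` and are trained like any other submodule, which is exactly what the quoted snippet above is missing.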
Is there any way to apply LoRA to the VAE decoder? I implemented it, but either the parameters are not updated or the output stays the same.
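Both symptoms have common causes, sketched below using the hypothetical `LoRALinear` / `inject_lora` helpers from the previous sketch. The target submodule names (`to_q`, `to_k`, `to_v`) are assumptions and vary across diffusers versions; the pipeline checkpoint is only an example.

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.requires_grad_(False)  # freeze pretrained weights BEFORE injecting LoRA
inject_lora(vae.decoder, target_substrings=("to_q", "to_k", "to_v"))

# Pitfall 1: if the LoRA modules are never attached to the module tree,
# the optimizer never sees them and nothing updates.
lora_params = [p for n, p in vae.decoder.named_parameters()
               if ".down.weight" in n or ".up.weight" in n]
assert lora_params, "no LoRA parameters found in vae.decoder"
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

# Pitfall 2: with up.weight zero-initialized, the decoder output is
# identical to the original until training moves it, which can look
# like "the output is the same".
```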
Do you have any updates? I am working on applying LoRA to the VAE too.
LoRA currently only adds parameters to the attention layers of the UNet. In my experiments, the performance is not good. I hope we can support LoRA layers for both the UNet and CLIP. Best wishes.
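For context, this is roughly how the official training example attached LoRA to the UNet's attention at the time. A sketch adapted from the diffusers `train_text_to_image_lora.py` example (circa v0.15); exact imports and class names vary by version, and the checkpoint name is only an example:

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)  # only the LoRA processors below are trained

lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # attn1 is self-attention (no text conditioning); attn2 is cross-attention.
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_attn_procs[name] = LoRAAttnProcessor(
        hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
    )

unet.set_attn_processor(lora_attn_procs)  # LoRA now runs inside every attention call
```

Note that this only touches the UNet's attention processors, which is exactly the limitation raised in this issue: neither the text encoder (CLIP) nor the VAE gets any LoRA parameters from this path.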