LoRA for clip and vae? #2964


Closed
Zyriix opened this issue Apr 4, 2023 · 5 comments
Labels
stale Issues that haven't received updates

Comments

@Zyriix

Zyriix commented Apr 4, 2023

LoRA currently only adds parameters to the attention layers in the UNet. In my experiments, the performance is not good. I hope we can support LoRA layers for both the UNet and the CLIP text encoder. Best wishes.

@patrickvonplaten
Contributor

Working on it: #2918

@github-actions
Contributor

github-actions bot commented May 4, 2023

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label May 4, 2023
@chenxinhua

> Working on it: #2918

Text encoder LoRA layers:

```python
# Imports assume diffusers ~0.15; module paths may differ in other versions.
from diffusers.loaders import AttnProcsLayers
from diffusers.models.attention_processor import LoRAAttnProcessor
from diffusers.utils import TEXT_ENCODER_TARGET_MODULES

# text_encoder is the CLIP text encoder, loaded elsewhere.
text_lora_attn_procs = {}
for name, module in text_encoder.named_modules():
    if any(x in name for x in TEXT_ENCODER_TARGET_MODULES):
        text_lora_attn_procs[name] = LoRAAttnProcessor(
            hidden_size=module.out_features, cross_attention_dim=None
        )

text_encoder_lora_layers = AttnProcsLayers(text_lora_attn_procs)
```

This piece of code simply identifies the attention and MLP layers of the text encoder and creates LoRA layers for them, but it never inserts them into the text encoder's forward pass. Therefore, they have no effect.
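For the LoRA layers to actually take effect, they have to be wired into the forward pass of the modules they target. A minimal sketch in plain PyTorch (not the diffusers API; `LoRALinear` and `inject_lora` are hypothetical names used here for illustration) of what that wiring looks like:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap an existing nn.Linear so the LoRA delta joins the forward pass."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the original weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)  # zero-init: output unchanged at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))


def inject_lora(model: nn.Module, target_substrings, rank: int = 4):
    """Replace matching nn.Linear submodules in place so LoRA is trained."""
    # Collect targets first: mutating the module tree while iterating
    # named_modules() would also visit the freshly inserted wrappers.
    targets = [
        (name, module)
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear)
        and any(t in name for t in target_substrings)
    ]
    for name, module in targets:
        parent = model.get_submodule(name.rsplit(".", 1)[0]) if "." in name else model
        setattr(parent, name.rsplit(".", 1)[-1], LoRALinear(module, rank=rank))
```

The LoRA parameters (`down`/`up` weights) must then also be passed to the optimizer; building the layers without either the replacement step or the optimizer step is exactly why the snippet above trains nothing.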

@hyoseok1223

Is there any way to apply LoRA to the VAE decoder? I implemented it, but the parameters are not updated, or the output stays the same.

@sgw-ite

sgw-ite commented Apr 29, 2024

> Any way to apply lora to vae decoder? I implent it but parameter is not updated or output is same

Do you have any updates? I'm working on applying LoRA to the VAE too.
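The same idea extends to the VAE decoder, whose blocks are mostly convolutions. A hedged sketch in plain PyTorch (`LoRAConv2d` is a hypothetical name, not a diffusers class; groups and dilation are assumed to be the defaults) of a low-rank adapter for an existing `nn.Conv2d`:

```python
import torch
import torch.nn as nn


class LoRAConv2d(nn.Module):
    """Low-rank adapter for an existing nn.Conv2d.

    The down-projection reuses the base kernel size, stride, and padding so the
    spatial shape matches; the up-projection is a 1x1 conv back to out_channels.
    """

    def __init__(self, base: nn.Conv2d, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the original conv
        self.down = nn.Conv2d(
            base.in_channels, rank,
            kernel_size=base.kernel_size, stride=base.stride,
            padding=base.padding, bias=False,
        )
        self.up = nn.Conv2d(rank, base.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)  # zero-init: no change until trained
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))
```

Two failure modes reproduce the symptom described above: if the wrapped module is never swapped into the decoder, the output stays the same; if the adapter's `down`/`up` parameters never reach the optimizer, they are never updated.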


5 participants