Having a "scale" parameter for text encoder lora layers #3480
cc @sayakpaul
This question has come up a few times before. I am wondering what would be the best option to support this currently. Here are some thoughts:

```python
...
text_encoder_scale = kwargs.pop("text_encoder_scale", 1.0)
...

def new_forward(x):
    return old_forward(x) + lora_layer(x, scale=text_encoder_scale)
```

^ should be fine since …

WDYT @patrickvonplaten?
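For concreteness, here is a minimal, self-contained sketch of that kwargs-based idea (the `LoRALinearLayer` below is a toy stand-in, and `patch_linear_with_lora` is a hypothetical helper, not the actual diffusers code): the scale is captured in the closure at patch time.

```python
import torch.nn as nn


class LoRALinearLayer(nn.Module):
    """Toy stand-in for a diffusers-style LoRA linear layer."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)
        self.up = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # LoRA starts as a no-op

    def forward(self, x, scale: float = 1.0):
        return scale * self.up(self.down(x))


def patch_linear_with_lora(module, lora_layer, text_encoder_scale=1.0):
    """Monkey-patch `module.forward`; the scale is fixed at patch time
    because it is captured in the closure."""
    old_forward = module.forward

    def new_forward(x):
        return old_forward(x) + lora_layer(x, scale=text_encoder_scale)

    module.forward = new_forward


# Usage: halve the LoRA contribution of one linear layer.
layer = nn.Linear(8, 8)
patch_linear_with_lora(layer, LoRALinearLayer(8, 8), text_encoder_scale=0.5)
```

The limitation of this variant is that the scale is baked in when the weights are loaded, so changing it requires re-patching.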
Hmm yeah, good question! I wonder whether we could do something ugly here. In `diffusers/src/diffusers/loaders.py` (line 949 at d4197bf),

```python
def new_forward(x):
    return old_forward(x) + lora_layer(x)

# Monkey-patch.
module.forward = new_forward
```

could be changed to

```python
def new_forward(x):
    return old_forward(x) + self.scale * lora_layer(x)

# Monkey-patch.
module.forward = new_forward
```

and then we add a property (`diffusers/src/diffusers/loaders.py`, line 745 at d4197bf):

```python
@property
def scale(self):
    return self._scale
```

and a scale setter function that we need to call from the …
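A sketch of what that property/setter variant could look like, with a hypothetical `LoraLoaderSketch` class standing in for the real `LoraLoaderMixin` (the `set_lora_scale` name is made up for illustration): because `new_forward` reads `self.scale` at call time, the scale can be changed after loading.

```python
import torch.nn as nn


class LoraLoaderSketch:
    """Hypothetical stand-in for diffusers' LoraLoaderMixin."""

    _lora_scale = 1.0

    @property
    def scale(self) -> float:
        return self._lora_scale

    def set_lora_scale(self, scale: float) -> None:
        # Hypothetical setter the pipeline's __call__ could invoke
        # (e.g. from `cross_attention_kwargs`) before the text encoder runs.
        self._lora_scale = scale

    def _patch_module(self, module: nn.Module, lora_layer: nn.Module) -> None:
        old_forward = module.forward

        def new_forward(x):
            # `self.scale` is looked up on every call, so later calls to
            # `set_lora_scale` take effect without re-patching.
            return old_forward(x) + self.scale * lora_layer(x)

        # Monkey-patch.
        module.forward = new_forward
```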
Where does …
Yes.
@patrickvonplaten sorry, this is still a bit unclear to me.
To enable this, we still need to expose an argument corresponding to the text encoder LoRA scale, no?
@sayakpaul would this help: #3626 (comment)
Thanks @patrickvonplaten! I left some comments there.
Closing with #3626 |
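For reference, assuming the merged behavior routes the same `scale` from `cross_attention_kwargs` to the text encoder LoRA layers as discussed above, the resolved API can be exercised roughly like this (the LoRA path is a placeholder; check the merged PR for exact semantics):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder path

# One knob: with the text encoder support discussed in this thread,
# this scale is meant to affect the text encoder LoRA layers as well
# as the UNet's.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```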
**Is your feature request related to a problem? Please describe.**
No, not related to a problem.

**Describe the solution you'd like**
A parameter similar to the `scale` of the UNet LoRA layers, but for the text encoder LoRA layers.

**Additional context**
This is an inference-time parameter that determines how strongly the text encoder LoRA layers influence generation.
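For context, this is a sketch of the pre-existing UNet-side mechanism the request refers to (`load_attn_procs` loads LoRA weights into the UNet only; the checkpoint path is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/unet_lora")  # placeholder path, UNet only

# `scale` here only modulates the UNet LoRA layers; this issue asks for
# an equivalent knob for text encoder LoRA layers.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
```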