[docs] Adds a doc on LoRA support for diffusers #2086
Conversation
The documentation is not available anymore as the PR was closed or merged.
import torch
from diffusers import StableDiffusionPipeline

model_path = "sayakpaul/sd-model-finetuned-lora-t4"
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
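A possible continuation of the snippet above, sketched here only for context (assuming `model_path` holds LoRA weights produced by the diffusers text-to-image LoRA training script, in which case `load_attn_procs` is the loading hook):

# Sketch only: attach the trained LoRA layers to the UNet's attention blocks,
# then move the pipeline to GPU for inference.
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")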
(nit) maybe we can show how to retrieve the `base_model` from the model card by loading the YAML metadata via `huggingface_hub`
from huggingface_hub.repocard import RepoCard
card = RepoCard.load("sayakpaul/sd-model-finetuned-lora-t4")
card.data.to_dict()["base_model"]
# 'CompVis/stable-diffusion-v1-4'
I guess we would want to show it as a separate code snippet in the doc?
Nice! Maybe include it as a tip below the current snippet?
For me it's fine in the same code snippet
See if the current changes make sense.
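For reference, a rough sketch of what the combined snippet could look like; the variable names `lora_model_id` and `base_model_id` are only illustrative here, and the exact wording is whatever ended up in lora.mdx:

import torch
from huggingface_hub.repocard import RepoCard
from diffusers import StableDiffusionPipeline

lora_model_id = "sayakpaul/sd-model-finetuned-lora-t4"
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]  # 'CompVis/stable-diffusion-v1-4'

pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe.unet.load_attn_procs(lora_model_id)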
Awesome, thanks!
docs/source/en/training/lora.mdx (outdated)
- Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
- LoRA attention layers allow control over the extent to which the model is adapted toward new training images via a `scale` parameter.
nit: so far we've only mentioned "update matrices", but not how they work or whether they contain attention layers. Maybe we should very briefly introduce the concept? Something simple like "LoRA matrices are added to the model attention layers and they control ..." could work.
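To make the reviewer's point concrete, here is a minimal toy sketch of what the update matrices do (toy dimensions and plain tensor math, not the actual diffusers implementation):

import torch

# A frozen pretrained weight of an attention projection, e.g. 768x768.
W = torch.randn(768, 768)

# LoRA learns a low-rank update B (768 x r) @ A (r x 768), with r << 768 (here r = 4).
r = 4
A = torch.randn(r, 768)
B = torch.zeros(768, r)  # B starts at zero so training begins from the original behavior

# At inference the adapted weight is W + scale * (B @ A); `scale` controls how strongly
# the fine-tuned behavior is applied (0.0 = original model, 1.0 = full LoRA update).
scale = 1.0
W_adapted = W + scale * (B @ A)

# The update has far fewer trainable parameters than W itself:
print(W.numel(), A.numel() + B.numel())  # 589824 vs 6144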
See if the current edits make sense.
prompt = "A pokemon with green eyes and red legs."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")
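One possible follow-up, heavily hedged: if the installed diffusers version forwards `cross_attention_kwargs` to the LoRA attention processors, the `scale` parameter mentioned earlier can be tuned at inference time, reusing `pipe` and `prompt` from the snippet above:

# Sketch only: blend the LoRA weights with the base model at inference.
image = pipe(
    prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.5},  # 0.0 = base model only, 1.0 = full LoRA
).images[0]
image.save("pokemon_half_lora.png")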
Just wondering, maybe display the image here? We never do it in the docs; what's your opinion on starting to do it to make things more visual?
Diffusion for computer vision is definitely about visuals. I like the idea and I think we should definitely add it :)
Added an image.
Thanks a lot for working on this. Looks great!
Co-authored-by: Pedro Cuenca <[email protected]>
Awesome, thanks a lot!
Co-authored-by: Pedro Cuenca <[email protected]>
* add: a doc on LoRA support in diffusers.
* Apply suggestions from code review
  Co-authored-by: Pedro Cuenca <[email protected]>
* apply PR suggestions.
* Apply suggestions from code review
  Co-authored-by: Pedro Cuenca <[email protected]>
* remove visually incoherent elements.

Co-authored-by: Pedro Cuenca <[email protected]>
Closes #2028.