Fix missing text encoder attn modules #1964


Merged
merged 1 commit into kohya-ss:main on Mar 21, 2025

Conversation


@ghost commented Mar 1, 2025

Should fix #1952.
I added an alternative name for CLIPAttention.
I have no idea why this class name changed.
Now it should accept both names.
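
For context, a minimal sketch of the idea, assuming newer transformers builds report the attention module as CLIPSdpaAttention; the constant and class names below are my illustration and may not match the script exactly:

```python
# Hypothetical sketch of the fix: match text encoder modules by class name,
# accepting both the old and the new attention class names.
TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPSdpaAttention", "CLIPMLP"]

def find_text_encoder_targets(text_encoder):
    targets = []
    for name, module in text_encoder.named_modules():
        # Matching on __class__.__name__ picks the module up regardless of
        # which attention implementation transformers instantiated.
        if module.__class__.__name__ in TEXT_ENCODER_TARGET_REPLACE_MODULE:
            targets.append((name, module))
    return targets
```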

@ghost
Author

ghost commented Mar 1, 2025

It matches what I saw in an older version in the kohya-ss-gui docker image:

enable LoRA for text encoder: 264 modules

@ghost
Author

ghost commented Mar 1, 2025

UPD: made the same change in all 4 places.

@kohya-ss
Owner

Thank you for this! I have no idea why this happens even though the transformers versions are the same (4.44.0).
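
For anyone trying to pin down the discrepancy, here is a quick diagnostic sketch, assuming the standard CLIPTextModel API; the checkpoint id is only an example. It prints which attention class a given transformers install actually instantiates:

```python
from collections import Counter
from transformers import CLIPTextModel

# Example checkpoint; substitute the text encoder you are actually training.
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# Count module class names: the attention entry shows whether this install
# builds CLIPAttention, CLIPSdpaAttention, or some other variant.
counts = Counter(m.__class__.__name__ for m in text_encoder.modules())
print({name: n for name, n in counts.items() if "Attention" in name or "MLP" in name})
```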

@kohya-ss merged commit 367f348 into kohya-ss:main on Mar 21, 2025
1 check passed
@iqddd

iqddd commented Mar 23, 2025

LyCORIS (locon, preset full) still trains only 88 TE modules (SDXL). Is this the intended behavior?

[LyCORIS]-INFO: Using rank adaptation algo: locon
[LyCORIS]-INFO: Use Dropout value: 0.0
[LyCORIS]-INFO: Create LyCORIS Module
[LyCORIS]-INFO: Create LyCORIS Module
[LyCORIS]-INFO: create LyCORIS for Text Encoder: 88 modules.
[LyCORIS]-INFO: Create LyCORIS Module
[LyCORIS]-INFO: create LyCORIS for U-Net: 788 modules.
[LyCORIS]-INFO: module type table: {'LoConModule': 876}

Same for lokr.

Successfully merging this pull request may close these issues.

SDXL seems to not train self_attn layers in Text Encoders