Description
Describe the bug
Thank you for making LoRA hotswapping compatible with torch.compile. However, when I try to modify adapter weights via set_adapters after hotswapping a LoRA, the compiled model is no longer of type ModelMixin, which causes the error below. Could you please look into this issue? Thanks.
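For context, here is a minimal sketch (plain Python, no diffusers/torch imports; the class names only stand in for the real ones) of why an isinstance check against ModelMixin fails after compilation: torch.compile returns a wrapper module that holds the original model as `_orig_mod`, and the wrapper is not an instance of the original class.

```python
class ModelMixin:  # stand-in for diffusers.ModelMixin
    pass

class FluxTransformer2DModel(ModelMixin):  # stand-in for the real transformer
    pass

class OptimizedModule:  # stand-in for the wrapper torch.compile returns
    def __init__(self, mod):
        self._orig_mod = mod  # original module kept by reference

model = FluxTransformer2DModel()
compiled = OptimizedModule(model)  # what torch.compile(model) effectively returns

print(isinstance(model, ModelMixin))            # True
print(isinstance(compiled, ModelMixin))         # False: the wrapper hides the type
print(isinstance(compiled._orig_mod, ModelMixin))  # True: unwrapping recovers it
```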
Reproduction
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')

# before load_lora
pipe.enable_lora_hotswap(target_rank=256)
pipe.load_lora_weights("/lora_path", weight_name="pytorch_lora_weights_1.safetensors", adapter_name="lora1")
pipe.set_adapters(["lora1"], adapter_weights=[1.0])

# use torch.compile
pipe.transformer = torch.compile(pipe.transformer)

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-dev.png")

# hotswap the second LoRA, then try to change its weight -> raises ValueError
pipe.load_lora_weights("/lora_path", weight_name="pytorch_lora_weights_2.safetensors", adapter_name="lora1")
pipe.set_adapters(["lora1"], adapter_weights=[0.5])
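The wrapping behavior can be verified with plain torch: `torch.compile` on a module returns an `OptimizedModule` that is not an instance of the original class but keeps the original in `_orig_mod`. This suggests (as an untested assumption about diffusers internals, not a confirmed fix) that either calling set_adapters before compiling, or having set_adapters look through `_orig_mod`, would avoid the error.

```python
import torch

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model)  # wraps eagerly; no forward pass needed

# The wrapper is not an instance of the original class...
print(isinstance(compiled, torch.nn.Linear))  # False
# ...but the original module is still reachable via _orig_mod.
print(compiled._orig_mod is model)            # True
```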
Logs
Traceback (most recent call last):
File "/root/workspace/aigoodsfix/plugins/aigoodsfix/api.py", line 616, in <module>
Init("")
File "/root/workspace/aigoodsfix/plugins/aigoodsfix/api.py", line 136, in Init
pipe.set_adapters(["lora1"], adapter_weights=[0.5])
File "/root/workspace/diffusers/src/diffusers/loaders/lora_base.py", line 703, in set_adapters
raise ValueError(
ValueError: Adapter name(s) {'lora1'} not in the list of present adapters: set().
System Info
ᐅ diffusers-cli env
- 🤗 Diffusers version: 0.34.0.dev0
- Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.2
- Transformers version: 4.47.0
- Accelerate version: 1.2.0
- PEFT version: 0.15.2
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA L20, 49140 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?: