Describe the bug
This problem appeared after I enabled xFormers acceleration.
Reproduction
```python
def __init__(self):
    self.unet = pipeline.unet
    self.set_diffusers_xformers_flag(self.unet, True)

def set_diffusers_xformers_flag(self, model, valid):
    def fn_recursive_set_mem_eff(module: torch.nn.Module):
        if hasattr(module, "set_use_memory_efficient_attention_xformers"):
            module.set_use_memory_efficient_attention_xformers(valid)
            print("=" * 100)
            print(hasattr(module, "set_use_memory_efficient_attention_xformers"))
        # Recurse into all submodules
        for child in module.children():
            fn_recursive_set_mem_eff(child)

    fn_recursive_set_mem_eff(model)

def forward(self):
    self.attn1(
        mda_norm_hidden_states,
        encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
        attention_mask=attention_mask,
        **cross_attention_kwargs,
    )
```
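For context, the recursive flag-setting pattern above can be reproduced in isolation. The sketch below is a minimal, torch-free version: `Node` and `XFormersAware` are hypothetical stand-ins for `torch.nn.Module` subclasses (the real code walks `module.children()` on the UNet), so the structure of the recursion can be checked independently of diffusers.

```python
class Node:
    """Stand-in for torch.nn.Module: holds children, no xFormers hook."""
    def __init__(self, *children):
        self._children = list(children)

    def children(self):
        return iter(self._children)


class XFormersAware(Node):
    """Stand-in for a module that exposes the xFormers toggle."""
    def __init__(self, *children):
        super().__init__(*children)
        self.use_xformers = False

    def set_use_memory_efficient_attention_xformers(self, valid):
        self.use_xformers = valid


def set_xformers_flag(model, valid):
    # Same shape as fn_recursive_set_mem_eff in the reproduction:
    # toggle the flag where the hook exists, then recurse into children.
    def recurse(module):
        if hasattr(module, "set_use_memory_efficient_attention_xformers"):
            module.set_use_memory_efficient_attention_xformers(valid)
        for child in module.children():
            recurse(child)

    recurse(model)


leaf = XFormersAware()
root = Node(Node(leaf), XFormersAware())
set_xformers_flag(root, True)
print(leaf.use_xformers)  # True: the flag reaches nested modules
```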
Logs
PS: I am unable to share the rest of the code.
System Info
- 🤗 Diffusers version: 0.33.1
- Platform: Linux-5.4.241-1-tlinux4-0017.7-x86_64-with-glibc2.2.5
- Running on Google Colab?: No
- Python version: 3.8.12
- PyTorch version (GPU?): 2.4.0+cu118 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.30.2
- Transformers version: 4.46.3
- Accelerate version: 1.0.1
- PEFT version: 0.13.2
- Bitsandbytes version: 0.42.0
- Safetensors version: 0.4.3
- xFormers version: 0.0.27.post2+cu118
- Accelerator: NVIDIA H20, 97871 MiB
Who can help?
No response