### Describe the bug
The argument `upsample_size` is passed to each upsampling block here. However, it looks like `AttnUpBlock2D` does not accept this argument. My code to reproduce is below. Am I using this incorrectly, or should `AttnUpBlock2D` accept that argument (even if unused) to keep the signatures consistent across upsample blocks?
### Reproduction
```python
import torch
from diffusers import UNet2DConditionModel

model = UNet2DConditionModel(
    sample_size=256,        # the target image resolution
    in_channels=6,          # the number of input channels, 3 for RGB images
    out_channels=3,         # the number of output channels
    layers_per_block=2,     # how many ResNet layers to use per UNet block
    block_out_channels=(128, 128, 256, 256, 512, 512),  # output channels for each UNet block
    down_block_types=(
        "DownBlock2D",      # a regular ResNet downsampling block
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",        # a regular ResNet upsampling block
        "AttnUpBlock2D",    # a ResNet upsampling block with spatial self-attention
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
    cross_attention_dim=256,
)
outputs = model(torch.randn(8, 6, 256, 256), 1, encoder_hidden_states=torch.randn(8, 4, 256))
outputs[0].shape
```
### Logs
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-5-ea419884a028> in <module>
     26 )
     27
---> 28 outputs = model(torch.randn(8, 6, 256, 256), 1, encoder_hidden_states=torch.randn(8, 4, 256))
     29 outputs[0].shape

~/.pyenv/versions/notebook-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1499                 or _global_backward_pre_hooks or _global_backward_hooks
   1500                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501             return forward_call(*args, **kwargs)
   1502         # Do not call functions when jit is used
   1503         full_backward_hooks, non_full_backward_hooks = [], []

~/.pyenv/versions/notebook-3.8/lib/python3.8/site-packages/diffusers/models/unet_2d_condition.py in forward(self, sample, timestep, encoder_hidden_states, class_labels, timestep_cond, attention_mask, cross_attention_kwargs, down_block_additional_residuals, mid_block_additional_residual, return_dict)
    781                 )
    782             else:
--> 783                 sample = upsample_block(
    784                     hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples, upsample_size=upsample_size
    785                 )

~/.pyenv/versions/notebook-3.8/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1499                 or _global_backward_pre_hooks or _global_backward_hooks
   1500                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501             return forward_call(*args, **kwargs)
   1502         # Do not call functions when jit is used
   1503         full_backward_hooks, non_full_backward_hooks = [], []

TypeError: forward() got an unexpected keyword argument 'upsample_size'
```
### System Info
diffusers v0.16.0
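As a possible interim workaround until the block signatures are reconciled upstream, one could wrap a module so that unsupported keywords are filtered out before its `forward()` is called. This is an illustrative sketch using only the standard library's `inspect` module, not part of the diffusers API:

```python
import inspect

import torch
import torch.nn as nn


class KwargFilter(nn.Module):
    """Drops keyword arguments the wrapped module's forward() does not accept."""

    def __init__(self, inner):
        super().__init__()
        self.inner = inner
        # Record which parameter names the wrapped forward() actually declares.
        self._params = set(inspect.signature(inner.forward).parameters)

    def forward(self, *args, **kwargs):
        accepted = {k: v for k, v in kwargs.items() if k in self._params}
        return self.inner(*args, **accepted)
```

A wrapped block can then be called with the uniform keyword set (including `upsample_size`) without raising a `TypeError`, at the cost of silently discarding the unknown argument.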