fix simple attention processor encoder hidden states dimension ordering #3014
I accidentally flipped the sequence and hidden dimensions of the encoder hidden states in the text projection model for the unCLIP pipeline. This PR standardizes all attention processors to use `(batch, seq_len, hidden_dim)`. The `encoder_hidden_states.transpose(1, 2)` in the added-KV attention processor is extraneous and is removed. I ran a script against the hub and confirmed that the Karlo pipelines are the only pipelines that use the simple attention blocks, so this is a safe change to make. I separately ran the unCLIP integration tests and confirmed they pass 👍