
Conversation

@gante
Contributor

@gante gante commented Jan 31, 2025

What does this PR do?

WIP

[⚠️ pattern change] Skip the log about the eager attention fallback at compilation time. Instead, when compiling, silently fall back to eager attention. The log was preventing calls like `model.generate(model_inputs, cache_implementation="static", output_attentions=True, return_dict_in_generate=True)` on most models (see the sketch below)

This fixes a bunch of failing tests and unlocks #33212
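To make the behavior change concrete, here is a minimal, hedged sketch of the pattern being discussed (not the actual `transformers` source; the function names are illustrative, and it assumes a recent PyTorch where `torch.compiler.is_compiling()` is available). SDPA cannot return attention weights, so with `output_attentions=True` the layer falls back to the eager implementation; the change proposed here is to skip the warning when the code is being traced by `torch.compile`, so the fallback happens silently during compilation.

```python
import torch


def eager_attention_forward(query, key, value, output_attentions=False):
    # Plain attention that materializes the weights so they can be returned.
    scale = query.shape[-1] ** -0.5
    attn_weights = torch.softmax((query @ key.transpose(-2, -1)) * scale, dim=-1)
    attn_output = attn_weights @ value
    return attn_output, attn_weights if output_attentions else None


def sdpa_attention_forward(query, key, value, output_attentions=False):
    if output_attentions:
        # Warn only outside of compilation; inside torch.compile, silently
        # fall back to the eager path instead of emitting the log.
        if not torch.compiler.is_compiling():
            print(
                "SDPA does not support `output_attentions=True`; "
                "falling back to the eager attention implementation."
            )
        return eager_attention_forward(query, key, value, output_attentions=True)
    attn_output = torch.nn.functional.scaled_dot_product_attention(query, key, value)
    return attn_output, None
```

With a check like this in place, the idea is that a compiled call such as `model.generate(model_inputs, cache_implementation="static", output_attentions=True, return_dict_in_generate=True)` no longer hits the logging call inside the compiled region and can fall back to eager attention without breaking.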


As a result, model components tend to be repeated across many files. There are as many attention layers defined
in `transformers` as there are models, and a significant number of those are identical to each other.
Contributor Author

@gante gante Jan 31, 2025


Sorry for the diff, my editor is configured to remove trailing whitespace when saving 😅

The actual changes are L238-288

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@gante
Contributor Author

gante commented Jan 31, 2025

Fixing the pattern and testing its effect is exposing new bugs 🤔

Opening separate PRs to avoid bloating this one.

@gante gante changed the title from "[generation] automatic compilation fixes: don't compile models wo support, don't log" to "[generation] automatic compilation fixes: don't log when output_attentions is True" on Jan 31, 2025
@gante
Contributor Author

gante commented Feb 3, 2025

Closing -- changing this pattern might not be the best idea. Updating tests instead.

@gante gante closed this Feb 3, 2025