Conversation

@FightingZhen
Contributor

@FightingZhen FightingZhen commented Apr 23, 2025

What does this PR do?

When using FlashAttention-2 on Ascend NPU, we found that CPU memory keeps increasing when calling npu_flash_attn_varlen_func or npu_flash_attn_func.

The root cause is that the attention mask generated by torch.ones() is initially allocated on the CPU, occupying CPU memory before being transferred to the NPU device. As npu_flash_attn_varlen_func or npu_flash_attn_func is called repeatedly, CPU memory consumption keeps accumulating, which is suboptimal. Below is one example:

attn_mask_npu = torch.triu(torch.ones([2048, 2048]), diagonal=1).bool().to(q.device)

Therefore, this PR solves the problem by creating the attention mask tensor with torch.ones() directly on the NPU device.
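
For illustration, a minimal sketch of the change described above (assuming q is a query tensor already residing on the NPU; variable names are illustrative):

import torch

# Before: torch.ones() allocates the mask in CPU memory on every call,
# and only then copies it to the NPU via .to(q.device).
attn_mask_npu = torch.triu(torch.ones([2048, 2048]), diagonal=1).bool().to(q.device)

# After: passing device= allocates the mask directly on the NPU,
# so no CPU-side buffer is created on each call.
attn_mask_npu = torch.triu(torch.ones([2048, 2048], device=q.device), diagonal=1).bool()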

Fixes # (issue)
Not related.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

@github-actions github-actions bot marked this pull request as draft April 23, 2025 09:47
@github-actions
Contributor

Hi 👋, thank you for opening this pull request! The pull request is converted to draft by default. The CI will be paused while the PR is in draft mode. When it is ready for review, please click the Ready for review button (at the bottom of the PR page). This will assign reviewers and trigger CI.

@FightingZhen FightingZhen marked this pull request as ready for review April 23, 2025 10:24
@github-actions github-actions bot requested review from MekkCyber and SunMarc April 23, 2025 10:24
@FightingZhen
Contributor Author

@MekkCyber @SunMarc please help review it, thanks :)

Contributor

@MekkCyber MekkCyber left a comment

Sounds good! Thanks for catching that 🤗

Member

@SunMarc SunMarc left a comment

SGTM!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@FightingZhen
Contributor Author

@MekkCyber @SunMarc I have rebased onto the main branch and repushed. I am not sure why the build_pr_documentation CI is pending, and the CI error seems unrelated to my PR. Could you please help me merge this PR? Thanks :)

@FightingZhen
Contributor Author

@MekkCyber @SunMarc All CI checks pass and this PR seems ready for merge :)

@MekkCyber MekkCyber merged commit 0327d0f into huggingface:main Apr 24, 2025
20 checks passed
@MekkCyber
Contributor

Merged 🎊 ! Thanks a lot

zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
@FightingZhen FightingZhen deleted the perf_optim_npu_fa branch August 14, 2025 01:52