Qualcomm AI Engine Direct - Optimize the performance for AR-N model #9079
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9079
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit c94c0bd with merge base acae017.
NEW FAILURE - The following job has failed.
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @cccclai,
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@@ -16,8 +17,9 @@ class RecomposeRmsNorm(ExportPass):
     Merge decomposed operators back to one super node.
     """

-    def __init__(self):
-        super().__init__()
+    def __init__(self, edge_program: torch.export.ExportedProgram):
We can follow #8505 to get rid of some of the recompose logic and reduce engineering effort there.
Thanks for the information. I will try it.
@@ -19,12 +19,12 @@
 def apply_rotary_emb_single(
     x: torch.Tensor, freqs_cos: torch.Tensor, freqs_sin: torch.Tensor
 ) -> torch.Tensor:
-    x_r, x_i = x[..., ::2], x[..., 1::2]

+    # Change to RoPE of huggingface version
Which one is the huggingface version and why is it better?
The Hugging Face implementation of RoPE processes the query and key as two contiguous halves instead of in an interleaved way.
The main difference is the stride in the StrideSlice op. For the interleaved way, the stride is two, which is not friendly for the HTP backend's memory handling.
Ref: huggingface/transformers#25199
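For context, here is a minimal PyTorch sketch of the two pairings being compared (function names and the freqs layout are illustrative assumptions, not code from this PR; freqs_cos/freqs_sin are assumed to cover head_dim // 2 lanes):

```python
import torch

def rope_interleaved(x, freqs_cos, freqs_sin):
    # Pairs even/odd lanes: x[..., ::2] and x[..., 1::2] are stride-2 slices,
    # which lower to stride-2 StrideSlice ops on the HTP backend.
    x_r, x_i = x[..., ::2], x[..., 1::2]
    out_r = x_r * freqs_cos - x_i * freqs_sin
    out_i = x_r * freqs_sin + x_i * freqs_cos
    # Re-interleave the rotated pairs back into the original lane order.
    return torch.stack([out_r, out_i], dim=-1).flatten(-2)

def rope_half_split(x, freqs_cos, freqs_sin):
    # Hugging Face style: pairs the two contiguous halves, so every slice is
    # stride-1 and friendlier to the HTP backend's memory handling.
    half = x.shape[-1] // 2
    x1, x2 = x[..., :half], x[..., half:]
    cos = torch.cat([freqs_cos, freqs_cos], dim=-1)
    sin = torch.cat([freqs_sin, freqs_sin], dim=-1)
    return x * cos + torch.cat([-x2, x1], dim=-1) * sin
```

The two variants pair different lanes, so for the same checkpoint they are not numerically interchangeable; that is consistent with the note below that switching to the Hugging Face RoPE slightly affects accuracy.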
Maybe add this as part of the code comments, just so others know the context.
The perf improvement looks awesome!!
There is still a lint error, can you fix it?
Force-pushed from b4d5a63 to fde0f80
Done. Thanks :)
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
This seems to need a rebase.
Summary:
- Fix the bug in the RMS norm builder
- Use the Hugging Face version of RoPE to improve performance, since the stride is 1 in the StrideSlice op
- Modify the axis order of the conv in qkv, feedforward, and output:
  - Original (AR: 128, CL: 2048): QNN_RmsNorm (1,1,128,2048) -> QNN_Reshape (1,128,2048,1) -> QNN_Transpose (1,128,1,2048) -> self.output -> QNN_Transpose (1,128,2048,1) -> QNN_Reshape (1,1,128,2048)
  - New: QNN_RmsNorm (1,1,128,2048) -> QNN_Reshape (1,128,1,2048) -> QNN_Transpose (1,1,128,2048) -> self.output -> QNN_Transpose (1,128,1,2048) -> QNN_Reshape (1,1,128,2048)
Force-pushed from fde0f80 to c5c149c
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Can you also share which part of the logic does the following optimization?
Is it the weight permutation or something else? It would also be good to have this as part of the code comments so the intention is easy to understand.
Got it. But this optimization is not very general. Based on our experiments, performance with the sequence length on the width dimension (1, 1, seq_len, CL) is better than with the sequence length on the height dimension (1, seq_len, 1, CL) for the input axis order of the conv op. Another reason is that this change keeps the structure close to the AI Hub version of Llama.
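As an illustration of the layout difference, a minimal PyTorch sketch (assumptions: the seq_len/CL values are just examples, and PyTorch Conv2d uses NCHW while the QNN shapes above are channel-last, so the permutes differ from the summary's exact op chain):

```python
import torch

seq_len, CL = 128, 2048
conv = torch.nn.Conv2d(CL, CL, kernel_size=1)  # a linear layer expressed as a 1x1 conv

hidden = torch.randn(1, 1, seq_len, CL)

# Original: sequence length on the height axis -> conv input (1, CL, seq_len, 1)
x_h = hidden.reshape(1, seq_len, CL, 1).permute(0, 2, 1, 3)
out_h = conv(x_h)

# New: sequence length on the width axis -> conv input (1, CL, 1, seq_len)
x_w = hidden.reshape(1, seq_len, 1, CL).permute(0, 3, 2, 1)
out_w = conv(x_w)

# The math is identical either way; only the memory layout seen by the
# backend changes, which is where the HTP speedup comes from.
assert torch.allclose(out_h.flatten(), out_w.flatten(), atol=1e-5)
```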
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary:
Test Result:
Note that using Hugging Face RoPE will slightly affect accuracy.