Use new API to register custom ExecuTorch kernels into ATen #2937
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2937. Note: links to docs will display an error until the docs builds have completed.
As of commit 70c9622 with merge base 6e43135: 1 new failure (one job failed).
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D55907749
Summary: Pull Request resolved: #2937. Retry of D55713944.

Use `WRAP_TO_ATEN` to register custom ExecuTorch kernels into PyTorch. This PR adds installation logic for `libcustom_ops_aot_lib.so` in `setup.py`, to make sure we can build `libcustom_ops_aot_lib.so`, install it to the correct location (`<site-packages>/executorch/examples/models/llama2/custom_ops/libcustom_ops_aot_lib.so`), and then load it with `torch.ops.load_library`.

Differential Revision: D55907749
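The load flow the summary describes — resolve the shared object under the installed package, then hand the path to `torch.ops.load_library` — can be sketched roughly as below. This is a minimal sketch, not code from the PR: the helper name `find_installed_lib` and its fallback behavior are assumptions, and the `torch` call is shown commented out so the snippet stands alone without a PyTorch install.

```python
import importlib.util
import os
from typing import Optional

# Relative path from the PR summary; the helper itself is hypothetical.
_LIB_REL_PATH = "examples/models/llama2/custom_ops/libcustom_ops_aot_lib.so"

def find_installed_lib(package: str, rel_path: str) -> Optional[str]:
    """Resolve a file installed inside a package's site-packages directory."""
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        return None  # package not installed, or not a package
    root = list(spec.submodule_search_locations)[0]
    candidate = os.path.join(root, rel_path)
    return candidate if os.path.exists(candidate) else None

# Usage (assumes executorch was installed via setup.py with the new logic):
# path = find_installed_lib("executorch", _LIB_REL_PATH)
# if path is not None:
#     import torch
#     torch.ops.load_library(path)  # registers the ATen-wrapped custom ops
```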
Reviewed By: lucylq
This pull request has been merged in c322685.
The structure of the edits to setup.py looks OK. My main worry is about the maintainability of the naming.
@@ -144,6 +144,8 @@ option(EXECUTORCH_BUILD_COREML "Build the Core ML backend" OFF)

 option(EXECUTORCH_BUILD_CUSTOM "Build the custom kernels" OFF)

+option(EXECUTORCH_BUILD_CUSTOM_OPS_AOT "Build the custom ops lib for AOT" OFF)
This name will be too vague over time: are we only going to have one library with custom ops? Or is this llama-specific? If it is just one global library, how do we decide which kinds of custom ops get to live in it?
Yeah, I agree. These custom ops were examples/models/llama-specific at the time they were added, though things have changed since.
@@ -144,6 +144,8 @@ option(EXECUTORCH_BUILD_COREML "Build the Core ML backend" OFF)

 option(EXECUTORCH_BUILD_CUSTOM "Build the custom kernels" OFF)

+option(EXECUTORCH_BUILD_CUSTOM_OPS_AOT "Build the custom ops lib for AOT" OFF)
This name is too similar to `EXECUTORCH_BUILD_CUSTOM`. If you ask a random person what the difference is between `EXECUTORCH_BUILD_CUSTOM` and `EXECUTORCH_BUILD_CUSTOM_OPS_AOT`, they probably wouldn't have a good answer.
Really, `EXECUTORCH_BUILD_CUSTOM` should probably have a more descriptive name. If that's too disruptive, please update the help strings for both options to make it clearer how they differ, and help users understand which one to enable and when.
if(EXECUTORCH_BUILD_CUSTOM)
  set(EXECUTORCH_BUILD_OPTIMIZED ON)
endif()

if(EXECUTORCH_BUILD_CPUINFO)
  # --- cpuinfo
  set(ORIGINAL_CMAKE_POSITION_INDEPENDENT_CODE_FLAG
Since this is an internal temp var, it should have a name like `_original_cmake_position_independent_code_flag`. Similar below.
@@ -25,7 +25,7 @@ if(NOT TORCH_ROOT)
   set(TORCH_ROOT ${EXECUTORCH_ROOT}/third-party/pytorch)
 endif()

-set(_common_compile_options -Wno-deprecated-declarations)
+set(_common_compile_options -Wno-deprecated-declarations -fPIC)
Please add a comment explaining why this sets `-fPIC` manually instead of using `CMAKE_POSITION_INDEPENDENT_CODE`.
I try to avoid using `CMAKE_POSITION_INDEPENDENT_CODE` as much as possible since it's a global variable. We have to use it for third-party libs because we can't change their CMakeLists.txt, but for this one we can tweak the compile options for this specific lib.
@@ -88,6 +88,11 @@ def pybindings(cls) -> bool:

     def xnnpack(cls) -> bool:
         return cls._is_env_enabled("EXECUTORCH_BUILD_XNNPACK", default=False)

+    @classmethod
+    @property
+    def llama_custom_ops(cls) -> bool:
Would be good to remove `llama_` here, since that name isn't part of the env vars.
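The env-flag pattern in the hunk above can be sketched as follows. This is a minimal sketch under stated assumptions, not the PR's code: the class name `BuildFlags`, the truthy-string parse (`1`/`ON`/`true`/`yes`), and the prefix-free method name are all illustrative, and a plain `@classmethod` is used instead of the PR's `@classmethod @property` stacking (which is deprecated since Python 3.11).

```python
import os

class BuildFlags:
    """Illustrative stand-in for the setup.py flag class (name assumed)."""

    @staticmethod
    def _is_env_enabled(name: str, default: bool = False) -> bool:
        # Missing var -> default; otherwise accept common truthy spellings.
        # (The real setup.py helper may parse values differently.)
        value = os.environ.get(name)
        if value is None:
            return default
        return value.strip().lower() in ("1", "on", "true", "yes")

    @classmethod
    def custom_ops(cls) -> bool:
        # Mirrors the PR's accessor, minus the llama_ prefix as suggested above.
        return cls._is_env_enabled("EXECUTORCH_BUILD_CUSTOM_OPS_AOT", default=False)
```

Keeping the env-var name as the single source of truth and naming the accessor after it avoids exactly the drift the review points out.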