Make sure size_test CI uses -Wall -Werror #8016

Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8016. Note: links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Continuing where I left off: #7940. Not changing CMakeLists.txt this time.
Force-pushed from 348754d to 89bd6ac
love it
I don't see size_test in the CI jobs?
Just rebased (which contains CI fixes). size_test passed for clang12, but not for gcc9.
runtime/kernel/operator_registry.cpp (Outdated)
@@ -135,7 +135,7 @@ void make_kernel_key_string(Span<const TensorMeta> key, char* buf) {
     // If no tensor is present in an op, kernel key does not apply
     return;
   }
-  strncpy(buf, "v1/", 3);
+  strcpy(buf, "v1/");
/pytorch/executorch/runtime/kernel/operator_registry.cpp:187:3: note: in expansion of macro 'ET_LOG_TENSOR_META'
187 | ET_LOG_TENSOR_META(meta_list);
| ^~~~~~~~~~~~~~~~~~
In file included from /usr/include/string.h:535,
from /usr/include/c++/9/cstring:42,
from /pytorch/executorch/../executorch/runtime/kernel/operator_registry.h:11,
from /pytorch/executorch/runtime/kernel/operator_registry.cpp:9:
In function 'char* strncpy(char*, const char*, size_t)',
inlined from 'void executorch::runtime::internal::make_kernel_key_string(executorch::runtime::Span<const executorch::runtime::TensorMeta>, char*)' at /pytorch/executorch/runtime/kernel/operator_registry.cpp:138:10:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:95:34: error: 'char* __builtin_strncpy(char*, const char*, long unsigned int)' output truncated before terminating nul copying 3 bytes from a string of the same length [-Werror=stringop-truncation]
the warning here was unhelpful IMO
Should this just be memcpy(buf, "v1/", 3);? Or are we relying on the NUL byte that strcpy adds?
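For reference, a minimal standalone sketch (not the ExecuTorch source; buffer size and names are made up) contrasting the three options discussed in this thread:

```cpp
// Standalone sketch of the strncpy / strcpy / memcpy trade-off discussed above.
#include <cstdio>
#include <cstring>

int main() {
  char buf[16];

  // Original code: strncpy(buf, "v1/", 3) copies exactly 3 bytes and drops the
  // terminating NUL, which GCC 9 rejects under -Wall -Werror with the
  // -Wstringop-truncation error quoted above.

  // What the PR switched to: strcpy copies the 3 characters plus the NUL.
  std::strcpy(buf, "v1/");

  // The alternative raised in this comment: memcpy copies exactly 3 bytes and,
  // like strncpy, leaves the string unterminated unless later code appends a
  // NUL (or more characters followed by one).
  std::memcpy(buf, "v1/", 3);
  buf[3] = '\0';  // manual termination, only needed for this illustration

  std::printf("%s\n", buf);  // prints "v1/"
  return 0;
}
```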
I completely rewrote this function in #8327, so watch out for merge conflicts (if mine lands first)
yeah, thanks for the heads up, will be careful during rebase
@mergennachin has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@@ -38,7 +38,8 @@ Tensor& full_out(
   ET_SWITCH_SCALAR_OBJ_TYPES(val_type, ctx, name, CTYPE_VAL, [&] {
     CTYPE_VAL val;
     utils::extract_scalar(fill_value, &val);
     ET_KERNEL_CHECK(
Had to adjust due to errors like this:
I don't think ET_KERNEL_CHECK will do the right thing in a lambda -- it returns from the lambda, not the operator
any suggestions on how to fix?
i think it's fine, since it will return void from the lambda but sets the context to failed, with error code InvalidArgument.
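For context, a minimal sketch of the concern above, using a hypothetical CHECK_OR_RETURN macro (not the real ET_KERNEL_CHECK definition) that records a failure and returns: inside a lambda, the return exits only the lambda, so the enclosing operator keeps running and has to consult the context itself.

```cpp
// Hypothetical macro standing in for ET_KERNEL_CHECK (this is NOT its real
// definition): on failure it records the error and returns from the enclosing
// function -- which, inside a lambda, is the lambda itself.
#include <iostream>

struct Context {
  bool failed = false;
};

#define CHECK_OR_RETURN(ctx, cond) \
  do {                             \
    if (!(cond)) {                 \
      (ctx).failed = true;         \
      return;                      \
    }                              \
  } while (0)

void full_like_op(Context& ctx, bool extract_ok) {
  // Mirrors the ET_SWITCH_SCALAR_OBJ_TYPES callback in the diff above.
  auto body = [&]() {
    CHECK_OR_RETURN(ctx, extract_ok);
    std::cout << "fill value extracted\n";
  };
  body();
  // Control reaches here even when the check failed inside the lambda; the
  // operator only stops early if it explicitly inspects the context.
  if (ctx.failed) {
    std::cout << "check failed inside lambda; operator body continued\n";
  }
}

int main() {
  Context ctx;
  full_like_op(ctx, /*extract_ok=*/false);
  return 0;
}
```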
test/build_size_test.sh (Outdated)
@@ -11,11 +11,14 @@ set -e
 # shellcheck source=/dev/null
 source "$(dirname "${BASH_SOURCE[0]}")/../.ci/scripts/utils.sh"

 # Remove -Wno-sign-compare (https://github.com/pytorch/executorch/issues/8149)
did you mean to put a TODO here?
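For reference, a standalone example (not from the ExecuTorch sources) of the signed/unsigned comparison that -Wsign-compare reports, which is the class of warning -Wno-sign-compare currently suppresses:

```cpp
// Standalone illustration of a -Wsign-compare warning: under -Wall the signed
// loop index is compared against an unsigned size().
#include <vector>

int count_below(const std::vector<int>& values, int limit) {
  int count = 0;
  for (int i = 0; i < values.size(); ++i) {  // warning: comparison of integer
                                             // expressions of different signedness
    if (values[i] < limit) {
      ++count;
    }
  }
  return count;
}

int main() {
  std::vector<int> values{1, 5, 3};
  return count_below(values, 4) == 2 ? 0 : 1;
}
```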
@mergennachin do you need anything else from me on this one?
i don't think we have come to a resolution on the boolean errors: i can specialize for the boolean cases and use &&=
&&= does not exist; you'd need &=. IMO such a change would be clear uglification of the code only to suppress a warning.
But it's a legitimate warning, no? Today you can pass in a random memory location and define it as a "boolean" tensor, which will cause undefined behavior unless you pass the UBSAN flag.
I don't think the code change you are considering would address the issue you just raised above.
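A small standalone illustration of the operator point raised above: C++ has no &&=, so accumulating a boolean result would use &= instead.

```cpp
// Illustrates the comment above: &&= is not a C++ operator, but &= works for
// accumulating bool results (each operand is 0 or 1, so bitwise AND matches
// logical AND here, minus short-circuiting).
#include <iostream>

int main() {
  bool all_ok = true;
  const bool checks[] = {true, false, true};
  for (bool c : checks) {
    // all_ok &&= c;  // does not compile: there is no &&= operator
    all_ok &= c;      // equivalent result for bool operands
  }
  std::cout << std::boolalpha << all_ok << '\n';  // prints false
  return 0;
}
```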
Force-pushed from 988ee15 to ae885e7
Okay, I will suppress that error for now and create a follow-up issue to track it.
Force-pushed from 8b9425b to 0c82c8b
@mergennachin has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
      b_shape_str.data());
      tensor_shape_to_c_string(a_size_span).data(),
      tensor_shape_to_c_string(b_size_span).data());
#endif
Thanks!
@mergennachin it seems suboptimal to enforce -Wall -Werror only in a CI job that runs only on trunk, as this will result in unnecessarily delayed detection of broken trunk. Could we perhaps have a pull-time CI job that builds with these flags?
e.g. #8314 promptly broke this on trunk because of -Werror
I am not changing the default compilation options for all of ExecuTorch; I'm only changing the compile options for the size_test and size_test_all_ops tests.
We don't run size_test on Windows, at least for now.
Test Plan:
- Run sh test/build_size_test.sh.
- Add this patch at the end of op_fill.cpp and make sure the compilation fails: https://gist.github.com/mergennachin/25b78c63069b9e9b2203e66f94b1a0be
- Make sure the compilation succeeds without the "-Wall -Werror" flags.
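For illustration only, a hypothetical snippet of the kind of code the test plan describes appending (the actual patch is in the gist linked above, not reproduced here): under -Wall -Werror it fails to compile, while without those flags it only warns.

```cpp
// Hypothetical example (not the contents of the linked gist) of code that the
// -Wall -Werror build should reject when appended to op_fill.cpp: the unused
// variable and unused static function below are only warnings normally, but
// -Werror promotes them to errors.
namespace {
void size_test_canary() {
  int unused_value = 42;  // -Wunused-variable under -Wall
  (void)0;
}
}  // namespace
```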