Include tensor shapes in get_broadcast_target_size error message #7944

Merged (12 commits, Feb 7, 2025)
20 changes: 16 additions & 4 deletions kernels/portable/cpu/util/broadcast_util.cpp
@@ -213,10 +213,22 @@ ET_NODISCARD Error get_broadcast_target_size(
Tensor::SizesType* out_sizes,
const size_t out_sizes_len,
size_t* out_dim) {
-  ET_CHECK_OR_RETURN_ERROR(
-      tensors_are_broadcastable_between(a_size, b_size),
-      InvalidArgument,
-      "Two input tensors should be broadcastable.\n");
+  if ET_UNLIKELY (!tensors_are_broadcastable_between(a_size, b_size)) {
+#ifdef ET_LOG_ENABLED
+    const auto a_shape_str = tensor_shape_to_c_string(
+        executorch::runtime::Span<const Tensor::SizesType>(
+            a_size.data(), a_size.size()));
+    const auto b_shape_str = tensor_shape_to_c_string(
+        executorch::runtime::Span<const Tensor::SizesType>(
+            b_size.data(), b_size.size()));

Contributor (on the tensor_shape_to_c_string calls): This new API is great, keeping the caller totally ignorant of how big a buffer to allocate.
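As a side note on why that matters, here is a standalone sketch of the design that comment praises: the formatter owns a fixed-size buffer and returns it by value, so callers never size, allocate, or free anything. The names, buffer size, and formatting here are invented for illustration; this is not ExecuTorch's actual tensor_shape_to_c_string.

    // Sketch only: return-by-value fixed-size buffer, so the caller is
    // "ignorant of how big a buffer to allocate".
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    std::array<char, 256> shape_to_c_string_sketch(const int32_t* sizes, size_t ndim) {
      std::array<char, 256> out{};  // zero-initialized, so always null-terminated
      size_t pos = 0;
      out[pos++] = '(';
      // Guard leaves ample room for any int32 plus ", ", so no truncation handling needed.
      for (size_t i = 0; i < ndim && pos + 16 < out.size(); ++i) {
        pos += std::snprintf(
            out.data() + pos,
            out.size() - pos,
            i + 1 < ndim ? "%d, " : "%d",
            static_cast<int>(sizes[i]));
      }
      out[pos] = ')';
      return out;
    }

    int main() {
      const int32_t sizes[] = {2, 1, 2};
      const auto s = shape_to_c_string_sketch(sizes, 3);  // buffer lives inside 's'
      std::printf("%s\n", s.data());                      // prints "(2, 1, 2)"
      return 0;
    }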
+#endif
+    ET_LOG(
+        Error,
+        "Two input tensors should be broadcastable but got shapes %s and %s.",
+        a_shape_str.data(),
+        b_shape_str.data());
Comment on lines +224 to +229

Contributor: I think it'd be less surprising for readers if the #ifdef/#endif completely covered the scopes of the locals that are defined in it, so they don't need to do the extra mental jump of "I guess ET_LOG evaluates to nothing when ET_LOG_ENABLED is false".

Suggested change:
-#endif
-    ET_LOG(
-        Error,
-        "Two input tensors should be broadcastable but got shapes %s and %s.",
-        a_shape_str.data(),
-        b_shape_str.data());
+    ET_LOG(
+        Error,
+        "Two input tensors should be broadcastable but got shapes %s and %s.",
+        a_shape_str.data(),
+        b_shape_str.data());
+#endif
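For readers unfamiliar with the "evaluates to nothing" point, a minimal sketch of the usual shape of such a macro follows; it is not ExecuTorch's actual definition, and et_log_impl is a hypothetical name.

    // Typical conditionally compiled logging macro (sketch only):
    #if ET_LOG_ENABLED
    #define ET_LOG(level, fmt, ...) et_log_impl(level, fmt, ##__VA_ARGS__)
    #else
    #define ET_LOG(level, fmt, ...) ((void)0)
    #endif
    // In the disabled case the entire argument list is discarded during
    // preprocessing, so the a_shape_str/b_shape_str identifiers never reach
    // the compiler and the original placement still builds with logging off.

The suggested change above simply keeps the locals and their only use inside the same #ifdef block, so no such reasoning is required of the reader.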

+    return executorch::runtime::Error::InvalidArgument;
+  }

auto a_dim = a_size.size();
auto b_dim = b_size.size();
9 changes: 9 additions & 0 deletions kernels/portable/cpu/util/test/broadcast_test.cpp
@@ -129,6 +129,15 @@ TEST(BroadcastUtilTest, GetBroadcastTargetSize) {
EXPECT_TRUE(
ArrayRef<Tensor::SizesType>(expected_output_size, expected_output_dim)
.equals(ArrayRef<Tensor::SizesType>({5, 2, 2})));

+  Tensor c = tf.zeros({4, 5});
+  err = get_broadcast_target_size(
+      a,
+      c,
+      expected_output_size,
+      torch::executor::kTensorDimensionLimit,
+      &expected_output_dim);
+  EXPECT_EQ(err, torch::executor::Error::InvalidArgument);
}

size_t linearize_indexes(size_t* indexes, size_t indexes_len, const Tensor& t) {