
Conversation

Contributor

@ple13 ple13 commented Dec 1, 2025

No description provided.

@wfa-reviewable

This change is Reviewable

Contributor Author

@ple13 ple13 left a comment


Reviewable status: 3 of 16 files reviewed, 12 unresolved discussions (waiting on @kungfucraig, @SanjayVas, and @tristanvuong2021)


src/main/proto/wfa/measurement/internal/reporting/postprocessing/report_summary_v2.proto line 70 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Better to omit the "weekly or total" since you don't have control over the enum in this file.

I've removed the comment.


src/main/proto/wfa/measurement/internal/reporting/v2/BUILD.bazel line 434 at r3 (raw file):

Previously, SanjayVas (Sanjay Vasandani) wrote…

Add this missing message type to MESSAGE_LIBS instead, which will result in all the appropriate targets being generated.

Done.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 19 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Looks unused

Thanks. I've removed it.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 89 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Any reason you're using a dict comprehension vs. a list comprehension?

Here I'm using a set comprehension for external_reporting_set_ids because there may be duplicates.
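
Roughly, it looks like the sketch below (the names are illustrative only, not the actual ones in post_process_report_result.py):

# A set comprehension deduplicates IDs that can appear in more than one
# metric calculation spec (illustrative names).
external_reporting_set_ids = {
    spec.external_reporting_set_id
    for spec in report.metric_calculation_specs
}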


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 116 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Shouldn't the failure log get stored in the results table?

I've added a TODO to write the output log.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 130 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Add a type annotation for the return value type.

Done.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 151 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Does this need to be handled before we release?

This is not handled by the internal API yet. I've updated the comment to make it clear when we can update the code to support pagination.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 157 at r3 (raw file):

Previously, SanjayVas (Sanjay Vasandani) wrote…

You should always use the least restrictive (most generic) type for parameters, and a sufficiently specific type for return values. This is true across languages.

e.g. in Java/Kotlin, you should take in Iterable or Collection over List when you can, but return List if that's what the function builds.

Thanks Sanjay.
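
For the record, a small sketch of that guidance in Python, using a hypothetical helper rather than code from this PR:

from typing import Iterable

# Accept the most generic type that suffices (Iterable), return the
# specific type the function actually builds (list).
def collect_unique_ids(ids: Iterable[str]) -> list[str]:
  return sorted(set(ids))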


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 187 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Why not process reporting_set.primitive instead of weighted_subset_unions?

If this is not a primitive external_reporting_set_id, the weighted_subset_union gives us all of the primitives (this works in our special case because all the measurements are unions). It removes the need for extra API calls to fetch the ReportingSet for each non-primitive external_reporting_set_id.
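
The idea, as a sketch only (the field names below are assumptions about the internal ReportingSet proto, not the exact schema):

def get_primitive_ids(reporting_set) -> set[str]:
  # Each weighted subset union lists the primitive reporting sets it is
  # built from, so collecting their IDs covers the composite case too.
  primitive_ids = set()
  for weighted_subset_union in reporting_set.weighted_subset_unions:
    for basis in weighted_subset_union.primitive_reporting_set_bases:
      primitive_ids.add(basis.external_reporting_set_id)
  return primitive_ids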


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 195 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

get_primitive_ids takes a single reporting set, but the map may have a list of reporting sets.

The reporting_set_map has a 1:1 mapping from external_reporting_set_id to its corresponding reporting set.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 286 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Seems like the empty list cases may actually be errors. Wdyt?

I agree that if there are no report summaries, something may have gone wrong. I've added a TODO to add this to the output log.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 361 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Can you extract the computation of metric results into a separate module and include tests for it separately?

It's important that the calculations are separated and tested apart from the processing.

Done.

Member

@SanjayVas SanjayVas left a comment


Reviewable status: 3 of 16 files reviewed, 13 unresolved discussions (waiting on @kungfucraig, @ple13, and @tristanvuong2021)


MODULE.bazel line 429 at r4 (raw file):

    version = BORINGSSL_VERSION,
)
single_version_override(

Add DO_NOT_SUBMIT comments to these to be clear that they are just for debugging. These overrides need to be removed before submitting/merging.

Member

@SanjayVas SanjayVas left a comment


Reviewable status: 3 of 16 files reviewed, 13 unresolved discussions (waiting on @kungfucraig, @ple13, and @tristanvuong2021)


MODULE.bazel line 429 at r4 (raw file):

Previously, SanjayVas (Sanjay Vasandani) wrote…

Add DO_NOT_SUBMIT comments to these to be clear that they are just for debugging. These overrides need to be removed before submitting/merging.

To be clear, if you can't get it working without these overrides then there's a problem. boringssl above is a special case due to how some older Bazel modules got defined.

Member

@kungfucraig kungfucraig left a comment


@kungfucraig reviewed 4 of 14 files at r1, 3 of 6 files at r3, 4 of 8 files at r4, 4 of 4 files at r5, all commit messages.
Reviewable status: all files reviewed, 6 unresolved discussions (waiting on @ple13, @SanjayVas, and @tristanvuong2021)


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 187 at r3 (raw file):

Previously, ple13 (Phi) wrote…

If this is not a primitive external_reporting_set_id, the weighted_subset_union gives us all of the primitives (this works in our special case because all the measurements are unions). It removes the need for extra API calls to fetch the ReportingSet for each non-primitive external_reporting_set_id.

Please leave this explanation in a comment.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 433 at r5 (raw file):

        basic_metric_set.impressions = impressions
        basic_metric_set.grps = impressions / population * 100
        if basic_metric_set.reach > 0:

This will throw TypeError if reach is None. So you'll want to write:

if basic_metric_set.reach is not None and basic_metric_set.reach > 0:


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 65 at r5 (raw file):

            )

    def test_compute_basic_metric_set_no_population_raise_error(self):

It's probably not worth testing every combination of values.

It seems like this should be adequate:

  1. The no population case.
  2. The population only case
  3. The impressions only case (that should be an error case)
  4. All inputs present
  5. Any other error cases.

src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 107 at r5 (raw file):

        self.assertEqual(basic_metric_set.percent_reach, 0.0)
        self.assertEqual(basic_metric_set.impressions, 200)
        self.assertEqual(basic_metric_set.grps, 200.0)

I'm surprised this works given the dependency on reach. Any idea why?

Contributor Author

@ple13 ple13 left a comment


Reviewable status: 12 of 16 files reviewed, 5 unresolved discussions (waiting on @kungfucraig, @SanjayVas, and @tristanvuong2021)


MODULE.bazel line 429 at r4 (raw file):

Previously, SanjayVas (Sanjay Vasandani) wrote…

To be clear, if you can't get it working without these overrides then there's a problem. boringssl above is a special case due to how some older Bazel modules got defined.

Done.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 187 at r3 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Please leave this explanation in a comment.

Done.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 433 at r5 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

This will throw TypeError if reach is None. So you'll want to write:

if basic_metric_set.reach is not None and basic_metric_set.reach > 0:

basic_metric_set.reach has a default value of 0. We could add a check that reach is not None; however, when average_frequency is not set, it takes the default value of 0.0 anyway.
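
In other words, a sketch of what working off the default would look like (the average_frequency assignment below is an assumption about what the guarded block computes):

# reach defaults to 0 when unset, so guarding on > 0 avoids a TypeError
# on None as well as a division by zero; average_frequency simply keeps
# its default of 0.0 otherwise.
if basic_metric_set.reach > 0:
  basic_metric_set.average_frequency = impressions / basic_metric_set.reach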


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 65 at r5 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

It's probably not worth testing every combination of values.

It seems like this should be adequate:

  1. The no population case.
  2. The population only case
  3. The impressions only case (that should be an error case)
  4. All inputs present
  5. Any other error cases.

When there are only impressions, we may not want to fail the computation, as there could be reports with just impressions. Similarly, for a reach-only report, the metric will show impressions = 0 as the default value when impressions are not requested in the report. I think we would need a way to tell whether a value in BasicMetricSet is unset. Please let me know what you think.
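
One option, if we ever need real presence information, would be proto3 explicit presence; a minimal sketch, assuming impressions were declared as an optional field (which is not the current schema):

# HasField only works for fields with explicit presence (e.g. proto3
# "optional") and distinguishes "not requested" from a genuine 0.
if basic_metric_set.HasField("impressions"):
  impressions = basic_metric_set.impressions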


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 107 at r5 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

I'm surprised this works given the dependency on reach. Any idea why?

When a field is not set, the default value is used. I think that's why we see many zeros here that don't make sense.

Member

@kungfucraig kungfucraig left a comment


Reviewable status: 12 of 16 files reviewed, 4 unresolved discussions (waiting on @ple13, @SanjayVas, and @tristanvuong2021)


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 433 at r5 (raw file):

Previously, ple13 (Phi) wrote…

basic_metric_set.reach has a default value of 0. We could add a check that reach is not None; however, when average_frequency is not set, it takes the default value of 0.0 anyway.

That's fine. We can just work off of the default.


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 65 at r5 (raw file):

Previously, ple13 (Phi) wrote…

When there are only impressions, we may not want to fail the computation, as there could be reports with just impressions. Similarly, for a reach-only report, the metric will show impressions = 0 as the default value when impressions are not requested in the report. I think we would need a way to tell whether a value in BasicMetricSet is unset. Please let me know what you think.

Yeah, you're right. Impressions only should just avoid those calculations that also require reach.


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 107 at r5 (raw file):

Previously, ple13 (Phi) wrote…

When a field is not set, the default value is used. I think that's why we see many zeros here that don't make sense.

Makes sense. I missed that the calculation function was working off of the metric set values, not the function inputs.

Member

@SanjayVas SanjayVas left a comment


@SanjayVas reviewed 1 of 6 files at r3, 3 of 8 files at r4, 1 of 3 files at r6, 3 of 3 files at r7, all commit messages.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @ple13 and @tristanvuong2021)


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 78 at r7 (raw file):

        cmms_measurement_consumer_id: str,
        external_report_result_id: int,
    ) -> Optional[AddProcessedResultValuesRequest]:

Document the non-error condition under which no message is returned.

In general, public methods should be documented.
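
For example, something along these lines (the function name, first parameter, and the None condition below are placeholders, not the actual behavior):

from typing import Optional

def build_add_processed_result_values_request(
    report_result,
    cmms_measurement_consumer_id: str,
    external_report_result_id: int,
) -> Optional["AddProcessedResultValuesRequest"]:
  """Builds the request that stores the processed result values.

  Returns:
    The populated request, or None when there is nothing to store for
    this report (document the exact non-error condition here).
  """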

Contributor Author

@ple13 ple13 left a comment


Reviewable status: 15 of 16 files reviewed, 3 unresolved discussions (waiting on @kungfucraig, @SanjayVas, and @tristanvuong2021)


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 433 at r5 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

That's fine. We can just work off of the default.

Done.


src/main/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result.py line 78 at r7 (raw file):

Previously, SanjayVas (Sanjay Vasandani) wrote…

Document the non-error condition under which no message is returned.

In general, public methods should be documented.

Done.


src/test/python/wfa/measurement/reporting/postprocessing/tools/post_process_report_result_test.py line 65 at r5 (raw file):

Previously, kungfucraig (Craig Wright) wrote…

Yeah, you're right. Impressions only should just avoid those calculations that also require reach.

Done.

Member

@kungfucraig kungfucraig left a comment


@kungfucraig reviewed 1 of 3 files at r6, 3 of 3 files at r7, 1 of 1 files at r8, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @tristanvuong2021)

@ple13 ple13 requested a review from stevenwarejones December 9, 2025 15:31
Collaborator

@stevenwarejones stevenwarejones left a comment


@stevenwarejones reviewed 6 of 14 files at r1, 2 of 6 files at r3, 3 of 8 files at r4, 1 of 3 files at r6, 3 of 3 files at r7, 1 of 1 files at r8, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @tristanvuong2021)

Member

@SanjayVas SanjayVas left a comment


@SanjayVas reviewed 1 of 1 files at r8, all commit messages.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @tristanvuong2021)

@ple13 ple13 merged commit 47abcc2 into main Dec 9, 2025
7 checks passed
@ple13 ple13 deleted the lephi-get-denoise-result-request-from-corrected-report branch December 9, 2025 19:22