Design an extensible mechanism to group metrics in the end-of-test summary #4617

@joanlopez

Description

Since #4089, the end-of-test summary groups the displayed metrics by "area" or "feature", like "http", "websockets" or "network", among others.

That's the desired design: it makes the summary easier for users to read, and lets them focus on the areas they really care about. But it has a couple of related aspects that should be improved.

The initial design basically has a statically defined (hardcoded) set of groups, plus some small logic to determine which metrics belong to each group (see this and this). All the metrics that aren't recognized as belonging to any of these groups are placed in a group named "custom", which holds all the remaining ones.
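To illustrate the hardcoded, prefix-based approach described above, here is a minimal sketch in Go. The function and prefix table are purely illustrative assumptions, not k6's actual identifiers or definitions:

```go
package main

import (
	"fmt"
	"strings"
)

// groupForMetric mimics a hardcoded, prefix-based grouping:
// any metric whose name doesn't match a known prefix falls
// back into the catch-all "custom" group.
func groupForMetric(name string) string {
	// Illustrative prefixes only; not k6's real definitions.
	prefixes := map[string]string{
		"http_":    "http",
		"ws_":      "websockets",
		"browser_": "browser",
		"grpc_":    "grpc",
	}
	for prefix, group := range prefixes {
		if strings.HasPrefix(name, prefix) {
			return group
		}
	}
	return "custom"
}

func main() {
	fmt.Println(groupForMetric("http_req_duration")) // http
	fmt.Println(groupForMetric("my_counter"))        // custom
	// A metric merely starting with "browser_" lands in the
	// "browser" group, whether or not it really belongs there.
	fmt.Println(groupForMetric("browser_foo")) // browser
}
```

This is exactly the shape that makes the drawbacks below possible: the mapping is closed, and matching is purely name-based.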

This design, despite being simple and likely sufficient for a first iteration, clearly has some negative aspects that we could improve. To mention some:

  • If we ever add a new metric that should belong to one of the existing groups, or to a new one, it's very easy to forget to adjust the existing definition, and thus introduce a small bug.
  • If we (or any extension developer) ever want to create a new group (because "custom" doesn't fit everything, and putting everything else there partly defeats the initial goal of making the summary more readable), or to add a new metric to one of the existing groups, there is no way to do so (unless it happens to satisfy the not-very-extensible, prefix-based design that some groups have).
  • Less likely, a new metric that happens to match one of the defined prefixes may fall into a group it shouldn't belong to. For instance, a metric starting with browser_ would be placed as a "browser metric", even when perhaps it shouldn't be.

Some ideas we have only briefly considered, some of which may be worth additional research:

  • Expose a Go/JS API to make these static configurations configurable.
  • Extend the existing metrics package API to let k6 users/developers configure these groups.
    • I guess this would be similar to the previous one, but more generic (not tied to the summary output), extending, as far as possible, the existing APIs for defining new metrics, either from Go code or from JS code (scripts).
  • Expose a way to configure this through the k6 configuration (e.g. based on metrics' prefixes).
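As a starting point for discussing the first two ideas, here is a rough sketch of what an explicit, registry-style Go API could look like. All names here (Registry, RegisterGroup, GroupFor) are hypothetical and do not exist in the k6 metrics package; the point is only the shape: groups are registered explicitly rather than inferred from prefixes, and unregistered metrics still fall back to "custom":

```go
package main

import "fmt"

// Registry is a hypothetical, extensible alternative to a hardcoded
// group set: k6 itself, and extensions, register metrics into named
// groups explicitly.
type Registry struct {
	groups map[string]string // metric name -> group name
}

func NewRegistry() *Registry {
	return &Registry{groups: make(map[string]string)}
}

// RegisterGroup assigns a set of metric names to a summary group.
// Calling it again with the same metric name reassigns that metric.
func (r *Registry) RegisterGroup(group string, metrics ...string) {
	for _, m := range metrics {
		r.groups[m] = group
	}
}

// GroupFor returns the group a metric belongs to, or "custom" if
// the metric was never registered anywhere.
func (r *Registry) GroupFor(metric string) string {
	if g, ok := r.groups[metric]; ok {
		return g
	}
	return "custom"
}

func main() {
	reg := NewRegistry()
	reg.RegisterGroup("http", "http_req_duration", "http_reqs")
	// An extension could declare its own group, instead of having
	// all of its metrics dumped into "custom":
	reg.RegisterGroup("mqtt", "mqtt_messages_sent")

	fmt.Println(reg.GroupFor("http_reqs"))          // http
	fmt.Println(reg.GroupFor("mqtt_messages_sent")) // mqtt
	fmt.Println(reg.GroupFor("my_counter"))         // custom
}
```

A configuration-based variant (the third idea) could populate the same registry from the k6 config instead of from code, so both mechanisms could share one lookup path.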

--

If you're interested, feel free to drop your idea, and/or share your thoughts on any of the aforementioned ideas 🙇🏻
