From a4a2b62cb0a85dcf4214c67818723967c2933a67 Mon Sep 17 00:00:00 2001 From: Joel Takvorian Date: Mon, 2 Jun 2025 10:59:42 +0200 Subject: [PATCH 1/2] Regenerate API references --- ...wcollector-flows-netobserv-io-v1beta2.adoc | 98 +++++-------------- ...lowmetric-flows-netobserv-io-v1alpha1.adoc | 9 +- docs/flows-format.adoc | 16 +-- hack/asciidoc-flows-gen.sh | 2 +- pkg/helper/cardinality/cardinality.json | 1 + 5 files changed, 41 insertions(+), 85 deletions(-) diff --git a/docs/flowcollector-flows-netobserv-io-v1beta2.adoc b/docs/flowcollector-flows-netobserv-io-v1beta2.adoc index 762d9fb54..a9350afe4 100644 --- a/docs/flowcollector-flows-netobserv-io-v1beta2.adoc +++ b/docs/flowcollector-flows-netobserv-io-v1beta2.adoc @@ -180,7 +180,8 @@ Type:: | `object` | `advanced` allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. You can also +override the default Linux capabilities from there. | `cacheActiveTimeout` | `string` @@ -221,10 +222,13 @@ IMPORTANT: This feature is available as a Technology Preview. + - `EbpfManager`: [Unsupported (*)]. Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. + -- `UDNMapping`: [Unsupported (*)]. Enable interfaces mapping to User Defined Networks (UDN). + +- `UDNMapping`: Enable interfaces mapping to User Defined Networks (UDN). + This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. -It requires using the OVN-Kubernetes network plugin with the Observability feature. +It requires using the OVN-Kubernetes network plugin with the Observability feature. + + +- `IPSec`, to track flows between nodes with IPsec encryption. + + | `flowFilter` | `object` @@ -256,7 +260,7 @@ Otherwise it is matched as a case-sensitive string. | `privileged` | `boolean` | Privileged mode for the eBPF Agent container. When ignored or set to `false`, the operator sets -granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container. +granular capabilities (BPF, PERFMON, NET_ADMIN) to the container. If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF is in use, then you can turn on this mode for more global privileges. Some agent features require the privileged mode, such as packet drops tracking (see `features`) and SR-IOV support. @@ -268,7 +272,7 @@ For more information, see https://kubernetes.io/docs/concepts/configuration/mana | `sampling` | `integer` -| Sampling rate of the flow reporter. 100 means one flow on 100 is sent. 0 or 1 means all flows are sampled. +| Sampling ratio of the eBPF probe. 100 means one packet on 100 is sent. 0 or 1 means all packets are sampled. |=== == .spec.agent.ebpf.advanced @@ -277,7 +281,8 @@ Description:: -- `advanced` allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, -such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. +such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. You can also +override the default Linux capabilities from there. 
-- Type:: @@ -290,6 +295,10 @@ Type:: |=== | Property | Type | Description +| `capOverride` +| `array (string)` +| Linux capabilities override, when not running as privileged. Default capabilities are BPF, PERFMON and NET_ADMIN. + | `env` | `object (string)` | `env` allows passing custom environment variables to underlying components. Useful for passing @@ -445,11 +454,10 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: | `rules` defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules. -[Unsupported (*)]. | `sampling` | `integer` -| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`. +| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`. | `sourcePorts` | `integer-or-string` @@ -471,7 +479,6 @@ Description:: `rules` defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules. -[Unsupported (*)]. -- Type:: @@ -552,7 +559,7 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: | `sampling` | `integer` -| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`. +| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`. | `sourcePorts` | `integer-or-string` @@ -2705,14 +2712,12 @@ such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. | `deduper` | `object` | `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage. -[Unsupported (*)]. | `filters` | `array` | `filters` lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. -[Unsupported (*)]. | `imagePullPolicy` | `string` @@ -2746,9 +2751,9 @@ This setting is ignored when Kafka is disabled. - `Flows` to export regular network flows. This is the default. + -- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. + +- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. + -- `EndedConversations` to generate only ended conversations events. + +- `EndedConversations` to generate only ended conversations events. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. + - `All` to generate both network flows and all conversations events. It is not recommended due to the impact on resources footprint. + @@ -2959,7 +2964,6 @@ Description:: + -- `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage. -[Unsupported (*)]. 
-- Type:: @@ -2974,7 +2978,7 @@ Type:: | `mode` | `string` -| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate same flows reported from different nodes. + +| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication, since the Agent cannot de-duplicate same flows reported from different nodes. + - Use `Drop` to drop every flow considered as duplicates, allowing saving more on resource usage but potentially losing some information such as the network interfaces used from peer, or network events. + @@ -2985,7 +2989,7 @@ Type:: | `sampling` | `integer` -| `sampling` is the sampling rate when deduper `mode` is `Sample`. +| `sampling` is the sampling ratio when deduper `mode` is `Sample`. For example, a value of `50` means that 1 flow in 50 is sampled. |=== == .spec.processor.filters @@ -2995,7 +2999,6 @@ Description:: `filters` lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. -[Unsupported (*)]. -- Type:: @@ -3021,64 +3024,17 @@ Type:: |=== | Property | Type | Description -| `allOf` -| `array` -| `filters` is a list of matches that must be all satisfied in order to remove a flow. - | `outputTarget` | `string` -| If specified, these filters only target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted. - -| `sampling` -| `integer` -| `sampling` is an optional sampling rate to apply to this filter. - -|=== -== .spec.processor.filters[].allOf -Description:: -+ --- -`filters` is a list of matches that must be all satisfied in order to remove a flow. --- - -Type:: - `array` - - +| If specified, these filters target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted. - -== .spec.processor.filters[].allOf[] -Description:: -+ --- -`FLPSingleFilter` defines the desired configuration for a single FLP-based filter. --- - -Type:: - `object` - -Required:: - - `field` - - `matchType` - - - -[cols="1,1,1",options="header"] -|=== -| Property | Type | Description - -| `field` +| `query` | `string` -| Name of the field to filter on. -Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc. +| A query that selects the network flows to keep. More information about this query language in https://github.com/netobserv/flowlogs-pipeline/blob/main/docs/filtering.md. -| `matchType` -| `string` -| Type of matching to apply. - -| `value` -| `string` -| Value to filter on. When `matchType` is `Equal` or `NotEqual`, you can use field injection with `$(SomeField)` to refer to any other field of the flow. +| `sampling` +| `integer` +| `sampling` is an optional sampling ratio to apply to this filter. For example, a value of `50` means that 1 matching flow in 50 is sampled. 
|=== == .spec.processor.kafkaConsumerAutoscaler diff --git a/docs/flowmetric-flows-netobserv-io-v1alpha1.adoc b/docs/flowmetric-flows-netobserv-io-v1alpha1.adoc index 88647448d..861a386f3 100644 --- a/docs/flowmetric-flows-netobserv-io-v1alpha1.adoc +++ b/docs/flowmetric-flows-netobserv-io-v1alpha1.adoc @@ -103,8 +103,7 @@ When set to `Egress`, it is equivalent to adding the regular expression filter o | `filters` | `array` -| `filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must -be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`. +| `filters` is a list of fields and values used to restrict which flows are taken into account. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html. | `flatten` @@ -131,9 +130,10 @@ Refer to the documentation for the list of available fields: https://docs.opensh | `type` | `string` -| Metric type: "Counter" or "Histogram". +| Metric type: "Counter", "Histogram" or "Gauge". Use "Counter" for any value that increases over time and on which you can compute a rate, such as Bytes or Packets. Use "Histogram" for any value that must be sampled independently, such as latencies. +Use "Gauge" for other values that don't necessitate accuracy over time (gauges are sampled only every N seconds when Prometheus fetches the metric). | `valueField` | `string` @@ -261,8 +261,7 @@ To learn more about `promQL`, refer to the Prometheus documentation: https://pro Description:: + -- -`filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must -be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`. +`filters` is a list of fields and values used to restrict which flows are taken into account. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html. 
-- diff --git a/docs/flows-format.adoc b/docs/flows-format.adoc index d87ab52ab..5435fbb2f 100644 --- a/docs/flows-format.adoc +++ b/docs/flows-format.adoc @@ -155,13 +155,6 @@ The "Cardinality" column gives information about the implied metric cardinality | no | fine | n/a -| `Duplicate` -| boolean -| Indicates if this flow was also captured from another interface on the same host -| n/a -| no -| fine -| n/a | `Flags` | string[] | List of TCP flags comprised in the flow, as per RFC-9293, with additional custom flags to represent the following per-packet combinations: + @@ -182,6 +175,13 @@ The "Cardinality" column gives information about the implied metric cardinality | yes | fine | host.direction +| `IPSecStatus` +| string +| Status of the IPSec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input) +| `ipsec_status` +| no +| fine +| n/a | `IcmpCode` | number | ICMP code @@ -242,7 +242,7 @@ The "Cardinality" column gives information about the implied metric cardinality | `Packets` | number | Number of packets -| `pkt_drop_cause` +| n/a | no | avoid | packets diff --git a/hack/asciidoc-flows-gen.sh b/hack/asciidoc-flows-gen.sh index 7019f4414..55e069c5a 100755 --- a/hack/asciidoc-flows-gen.sh +++ b/hack/asciidoc-flows-gen.sh @@ -48,7 +48,7 @@ for i in $(seq 0 $(( $nbfields-1 )) ); do fi cardWarn=$(printf "$cardinalityMap" | jq -r ".$name") if [[ "$cardWarn" == "null" ]]; then - errors="$errors\nmissing cardinality for field $name" + errors="$errors\nmissing cardinality for field $name; check cardinality.json" fi otel=$(printf "$otelMap" | jq -r ".$name") if [[ "$otel" == "null" ]]; then diff --git a/pkg/helper/cardinality/cardinality.json b/pkg/helper/cardinality/cardinality.json index 01512f227..eef215fc7 100644 --- a/pkg/helper/cardinality/cardinality.json +++ b/pkg/helper/cardinality/cardinality.json @@ -66,6 +66,7 @@ "XlatDstPort": "careful", "XlatDstAddr": "avoid", "Udns": "careful", + "IPSecStatus": "fine", "_RecordType": "fine", "_HashId": "avoid" } From 980f352e0282e5482e9d0552855e7072769cc8ef Mon Sep 17 00:00:00 2001 From: Joel Takvorian Date: Mon, 2 Jun 2025 11:14:34 +0200 Subject: [PATCH 2/2] s/IPSec/IPsec/ --- .../consoleplugin/config/static-frontend-config.yaml | 6 +++--- docs/flows-format.adoc | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/controllers/consoleplugin/config/static-frontend-config.yaml b/controllers/consoleplugin/config/static-frontend-config.yaml index 7829b3bf1..aa4c611af 100644 --- a/controllers/consoleplugin/config/static-frontend-config.yaml +++ b/controllers/consoleplugin/config/static-frontend-config.yaml @@ -662,7 +662,7 @@ columns: feature: packetTranslation - id: IPSecStatus name: IPSec Status - tooltip: Status of the IPSec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input). + tooltip: Status of the IPsec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input). field: IPSecStatus filter: ipsec_status default: true @@ -1072,7 +1072,7 @@ filters: name: IPSec Status component: text placeholder: 'E.g: success, error' - hint: Status of the IPSec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input). + hint: Status of the IPsec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input). 
scopes: - id: cluster name: Cluster @@ -1428,7 +1428,7 @@ fields: description: Cluster name or identifier - name: IPSecStatus type: string - description: Status of the IPSec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input) + description: Status of the IPsec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input) - name: _RecordType type: string description: "Type of record: `flowLog` for regular flow logs, or `newConnection`, `heartbeat`, `endConnection` for conversation tracking" diff --git a/docs/flows-format.adoc b/docs/flows-format.adoc index 5435fbb2f..a824d0035 100644 --- a/docs/flows-format.adoc +++ b/docs/flows-format.adoc @@ -177,7 +177,7 @@ The "Cardinality" column gives information about the implied metric cardinality | host.direction | `IPSecStatus` | string -| Status of the IPSec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input) +| Status of the IPsec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input) | `ipsec_status` | no | fine
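
For reference, the options touched by this regeneration can be combined in a single `FlowCollector` resource. The sketch below is illustrative only: the field names (`features`, `advanced.capOverride`, `flowFilter.rules[].sampling`, `processor.deduper`, `processor.filters[].query`/`outputTarget`/`sampling`) come from the generated reference above, while the concrete values, the `flowFilter.enable` flag, and the filter query syntax are assumptions to verify against the linked flowlogs-pipeline filtering documentation.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster               # illustrative name
spec:
  agent:
    ebpf:
      sampling: 50            # global ratio: 1 packet in 50 (0 or 1 = all packets)
      features:
        - IPSec               # track flows between nodes with IPsec encryption
      advanced:
        capOverride:          # override the default BPF, PERFMON, NET_ADMIN capabilities
          - BPF
          - PERFMON
          - NET_ADMIN
      flowFilter:
        enable: true          # assumed flag, not shown in the excerpt above
        rules:
          - action: Accept    # default-accept, then refine with more specific rules
            cidr: 0.0.0.0/0
          - action: Accept
            cidr: 10.128.0.0/14   # assumed pod CIDR, adjust to your cluster
            sampling: 200         # per-rule override of the global sampling
  processor:
    deduper:
      mode: Sample            # or Drop
      sampling: 50            # 1 duplicate flow in 50 is kept
    filters:
      - query: SrcK8S_Namespace="netobserv" or DstK8S_Namespace="netobserv"   # assumed syntax
        outputTarget: Loki    # only restricts what is sent to Loki
        sampling: 10
----

Per-rule and per-filter `sampling` overrides the global `spec.agent.ebpf.sampling`, so the global ratio can stay coarse while selected traffic is captured at a finer or coarser ratio.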