Regenerate API references #1586

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open. Wants to merge 2 commits into main.
6 changes: 3 additions & 3 deletions controllers/consoleplugin/config/static-frontend-config.yaml
@@ -662,7 +662,7 @@ columns:
feature: packetTranslation
- id: IPSecStatus
name: IPSec Status
tooltip: Status of the IPSec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input).
tooltip: Status of the IPsec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input).
field: IPSecStatus
filter: ipsec_status
default: true
@@ -1072,7 +1072,7 @@ filters:
name: IPSec Status
component: text
placeholder: 'E.g: success, error'
hint: Status of the IPSec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input).
hint: Status of the IPsec encryption (on egress, provided by the kernel function xfrm_output) or decryption (on ingress, via xfrm_input).
scopes:
- id: cluster
name: Cluster
@@ -1428,7 +1428,7 @@ fields:
description: Cluster name or identifier
- name: IPSecStatus
type: string
description: Status of the IPSec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input)
description: Status of the IPsec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input)
- name: _RecordType
type: string
description: "Type of record: `flowLog` for regular flow logs, or `newConnection`, `heartbeat`, `endConnection` for conversation tracking"
98 changes: 27 additions & 71 deletions docs/flowcollector-flows-netobserv-io-v1beta2.adoc
@@ -180,7 +180,8 @@ Type::
| `object`
| `advanced` allows setting some aspects of the internal configuration of the eBPF agent.
This section is aimed mostly for debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. You can also
override the default Linux capabilities from there.

| `cacheActiveTimeout`
| `string`
@@ -221,10 +222,13 @@ IMPORTANT: This feature is available as a Technology Preview. +

- `EbpfManager`: [Unsupported (*)]. Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. +

- `UDNMapping`: [Unsupported (*)]. Enable interfaces mapping to User Defined Networks (UDN). +
- `UDNMapping`: Enable interfaces mapping to User Defined Networks (UDN). +

This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged.
It requires using the OVN-Kubernetes network plugin with the Observability feature.
It requires using the OVN-Kubernetes network plugin with the Observability feature. +

- `IPSec`, to track flows between nodes with IPsec encryption. +


| `flowFilter`
| `object`
@@ -256,7 +260,7 @@ Otherwise it is matched as a case-sensitive string.
| `privileged`
| `boolean`
| Privileged mode for the eBPF Agent container. When ignored or set to `false`, the operator sets
granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container.
granular capabilities (BPF, PERFMON, NET_ADMIN) to the container.
If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF
is in use, then you can turn on this mode for more global privileges.
Some agent features require the privileged mode, such as packet drops tracking (see `features`) and SR-IOV support.
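As an illustrative sketch (the resource name and the `eBPF` agent type value are assumptions, not taken from this diff), the feature gates and agent settings described above combine in a FlowCollector like this:

```yaml
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster            # conventional singleton name, assumed
spec:
  agent:
    type: eBPF
    ebpf:
      # UDNMapping needs the kernel debug filesystem, so the
      # agent pods run privileged here
      privileged: true
      sampling: 50         # 1 packet out of 50
      features:
        - UDNMapping
        - IPSec            # track flows between nodes with IPsec encryption
```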
@@ -268,7 +272,7 @@ For more information, see https://kubernetes.io/docs/concepts/configuration/mana

| `sampling`
| `integer`
| Sampling rate of the flow reporter. 100 means one flow on 100 is sent. 0 or 1 means all flows are sampled.
| Sampling ratio of the eBPF probe. 100 means one packet out of 100 is sampled. 0 or 1 means all packets are sampled.

|===
== .spec.agent.ebpf.advanced
@@ -277,7 +281,8 @@ Description::
--
`advanced` allows setting some aspects of the internal configuration of the eBPF agent.
This section is aimed mostly for debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk. You can also
override the default Linux capabilities from there.
--

Type::
@@ -290,6 +295,10 @@ Type::
|===
| Property | Type | Description

| `capOverride`
| `array (string)`
| Linux capabilities override, when not running as privileged. Default capabilities are BPF, PERFMON and NET_ADMIN.

| `env`
| `object (string)`
| `env` allows passing custom environment variables to underlying components. Useful for passing
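A minimal sketch of the new `capOverride` field, assuming capability names are passed as plain strings; the extra `SYS_RESOURCE` entry is illustrative:

```yaml
spec:
  agent:
    ebpf:
      privileged: false    # capOverride applies when not running privileged
      advanced:
        # Replaces the default set (BPF, PERFMON, NET_ADMIN)
        capOverride:
          - BPF
          - PERFMON
          - NET_ADMIN
          - SYS_RESOURCE
```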
@@ -445,11 +454,10 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: 
| `rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.
[Unsupported (*)].

| `sampling`
| `integer`
| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`.
| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`.

| `sourcePorts`
| `integer-or-string`
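The accept-all-then-refine pattern mentioned for `rules` could be sketched as follows (the `enable` flag, CIDR values, and sampling ratio are illustrative assumptions):

```yaml
spec:
  agent:
    ebpf:
      flowFilter:
        enable: true
        rules:
          # Accept everything by default, sampled 1 packet out of 10...
          - action: Accept
            cidr: 0.0.0.0/0
            sampling: 10
          # ...then refine with rejecting rules
          - action: Reject
            cidr: 10.128.0.0/14
```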
@@ -471,7 +479,6 @@ Description::
`rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.
[Unsupported (*)].
--

Type::
@@ -552,7 +559,7 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports: 

| `sampling`
| `integer`
| `sampling` sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`.
| `sampling` is the sampling ratio for the matched packets, overriding the global sampling defined at `spec.agent.ebpf.sampling`.

| `sourcePorts`
| `integer-or-string`
@@ -2705,14 +2712,12 @@ such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
| `deduper`
| `object`
| `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage.
[Unsupported (*)].

| `filters`
| `array`
| `filters` lets you define custom filters to limit the amount of generated flows.
These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace,
but with a lesser improvement in performance.
[Unsupported (*)].

| `imagePullPolicy`
| `string`
@@ -2746,9 +2751,9 @@ This setting is ignored when Kafka is disabled.

- `Flows` to export regular network flows. This is the default. +

- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. +
- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. +

- `EndedConversations` to generate only ended conversations events. +
- `EndedConversations` to generate only ended conversations events. Note that in this mode, Prometheus metrics are not accurate on long-standing conversations. +

- `All` to generate both network flows and all conversations events. It is not recommended due to the impact on resources footprint. +

@@ -2959,7 +2964,6 @@ Description::
+
--
`deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage.
[Unsupported (*)].
--

Type::
@@ -2974,7 +2978,7 @@ Type::

| `mode`
| `string`
| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate same flows reported from different nodes. +
| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication, since the Agent cannot de-duplicate the same flows reported from different nodes. +

- Use `Drop` to drop every flow considered as duplicates, allowing saving more on resource usage but potentially losing some information such as the network interfaces used from peer, or network events. +

@@ -2985,7 +2989,7 @@ Type::

| `sampling`
| `integer`
| `sampling` is the sampling rate when deduper `mode` is `Sample`.
| `sampling` is the sampling ratio when deduper `mode` is `Sample`. For example, a value of `50` means that 1 flow in 50 is sampled.

|===
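A short sketch of the deduper settings documented above (the sampling value of 50 is illustrative):

```yaml
spec:
  processor:
    deduper:
      mode: Sample   # keep duplicates, but sampled
      sampling: 50   # 1 duplicate flow out of 50
```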
== .spec.processor.filters
@@ -2995,7 +2999,6 @@ Description::
`filters` lets you define custom filters to limit the amount of generated flows.
These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as allowing to filter by Kubernetes namespace,
but with a lesser improvement in performance.
[Unsupported (*)].
--

Type::
@@ -3021,64 +3024,17 @@ Type::
|===
| Property | Type | Description

| `allOf`
| `array`
| `filters` is a list of matches that must be all satisfied in order to remove a flow.

| `outputTarget`
| `string`
| If specified, these filters only target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted.

| `sampling`
| `integer`
| `sampling` is an optional sampling rate to apply to this filter.

|===
== .spec.processor.filters[].allOf
Description::
+
--
`filters` is a list of matches that must be all satisfied in order to remove a flow.
--

Type::
`array`


| If specified, these filters target a single output: `Loki`, `Metrics` or `Exporters`. By default, all outputs are targeted.


== .spec.processor.filters[].allOf[]
Description::
+
--
`FLPSingleFilter` defines the desired configuration for a single FLP-based filter.
--

Type::
`object`

Required::
- `field`
- `matchType`



[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `field`
| `query`
| `string`
| Name of the field to filter on.
Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc.
| A query that selects the network flows to keep. More information about this query language is available at https://github.com/netobserv/flowlogs-pipeline/blob/main/docs/filtering.md.

| `matchType`
| `string`
| Type of matching to apply.

| `value`
| `string`
| Value to filter on. When `matchType` is `Equal` or `NotEqual`, you can use field injection with `$(SomeField)` to refer to any other field of the flow.
| `sampling`
| `integer`
| `sampling` is an optional sampling ratio to apply to this filter. For example, a value of `50` means that 1 matching flow in 50 is sampled.

|===
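With the query-based schema above, a processor filter might be sketched like this; the query syntax and the namespace value are assumptions based on the linked flowlogs-pipeline filtering language:

```yaml
spec:
  processor:
    filters:
      # Keep only flows touching the "netobserv" namespace,
      # sampled 1-in-50, and only for the Loki output
      - query: SrcK8S_Namespace="netobserv" or DstK8S_Namespace="netobserv"
        outputTarget: Loki
        sampling: 50
```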
== .spec.processor.kafkaConsumerAutoscaler
9 changes: 4 additions & 5 deletions docs/flowmetric-flows-netobserv-io-v1alpha1.adoc
@@ -103,8 +103,7 @@ When set to `Egress`, it is equivalent to adding the regular expression filter o

| `filters`
| `array`
| `filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must
be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`.
| `filters` is a list of fields and values used to restrict which flows are taken into account.
Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html.

| `flatten`
@@ -131,9 +130,10 @@ Refer to the documentation for the list of available fields: https://docs.opensh

| `type`
| `string`
| Metric type: "Counter" or "Histogram".
| Metric type: "Counter", "Histogram" or "Gauge".
Use "Counter" for any value that increases over time and on which you can compute a rate, such as Bytes or Packets.
Use "Histogram" for any value that must be sampled independently, such as latencies.
Use "Gauge" for other values that don't require accuracy over time (gauges are only sampled every N seconds, when Prometheus scrapes the metric).

| `valueField`
| `string`
@@ -261,8 +261,7 @@ To learn more about `promQL`, refer to the Prometheus documentation: https://pro
Description::
+
--
`filters` is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must
be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`.
`filters` is a list of fields and values used to restrict which flows are taken into account.
Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html.
--

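Pulling the FlowMetric changes together, here is a sketch of a custom metric using the fields documented above (the `metricName` field and the label values are assumptions):

```yaml
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowMetric
metadata:
  name: namespace-egress-bytes
spec:
  metricName: namespace_egress_bytes_total
  type: Counter          # an increasing value, suitable for rates
  valueField: Bytes
  direction: Egress
  filters:
    - field: SrcK8S_Namespace
      matchType: Equal
      value: netobserv
```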
16 changes: 8 additions & 8 deletions docs/flows-format.adoc
@@ -155,13 +155,6 @@ The "Cardinality" column gives information about the implied metric cardinality
| no
| fine
| n/a
| `Duplicate`
| boolean
| Indicates if this flow was also captured from another interface on the same host
| n/a
| no
| fine
| n/a
| `Flags`
| string[]
| List of TCP flags comprised in the flow, as per RFC-9293, with additional custom flags to represent the following per-packet combinations: +
@@ -182,6 +175,13 @@ The "Cardinality" column gives information about the implied metric cardinality
| yes
| fine
| host.direction
| `IPSecStatus`
| string
| Status of the IPsec encryption (on egress, given by the kernel xfrm_output function) or decryption (on ingress, via xfrm_input)
| `ipsec_status`
| no
| fine
| n/a
| `IcmpCode`
| number
| ICMP code
@@ -242,7 +242,7 @@ The "Cardinality" column gives information about the implied metric cardinality
| `Packets`
| number
| Number of packets
| `pkt_drop_cause`
| n/a
| no
| avoid
| packets
2 changes: 1 addition & 1 deletion hack/asciidoc-flows-gen.sh
@@ -48,7 +48,7 @@ for i in $(seq 0 $(( $nbfields-1 )) ); do
fi
cardWarn=$(printf "$cardinalityMap" | jq -r ".$name")
if [[ "$cardWarn" == "null" ]]; then
errors="$errors\nmissing cardinality for field $name"
errors="$errors\nmissing cardinality for field $name; check cardinality.json"
fi
otel=$(printf "$otelMap" | jq -r ".$name")
if [[ "$otel" == "null" ]]; then
1 change: 1 addition & 0 deletions pkg/helper/cardinality/cardinality.json
@@ -66,6 +66,7 @@
"XlatDstPort": "careful",
"XlatDstAddr": "avoid",
"Udns": "careful",
"IPSecStatus": "fine",
"_RecordType": "fine",
"_HashId": "avoid"
}