K8s Monitoring Alloy deployment failing for undefined "break" #1370


Open
Maharshi-Mimo opened this issue Mar 26, 2025 · 1 comment · May be fixed by #1412

@Maharshi-Mimo

I am trying to deploy Alloy, but I am getting the following error:

$ ./alloy.sh 
"grafana" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cnpg" chart repository
...Successfully got an update from the "grafana" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "grafana-k8s-monitoring" does not exist. Installing it now.
Error: parse error at (k8s-monitoring/templates/_platform_validations.tpl:10): function "break" not defined
./alloy.sh: line 6: --version: command not found

The configuration I am trying to deploy:

helm repo add grafana https://grafana.github.io/helm-charts &&
helm repo update &&
helm upgrade --install --timeout 300s grafana-k8s-monitoring grafana/k8s-monitoring
--version 1.6.30 --namespace monitoring --create-namespace --values -<<EOF
cluster:
  name: LAIP-cluster
externalServices:
  prometheus:
      host: https://mimir-dev-laip.domain.ai
      authMode: none
      writeEndpoint: '/api/v1/push'
  loki:
      host: https://loki-dev-laip.domain.ai
      authMode: none
      processors:
        batch:
          size: 16384
          maxSize: 0
          timeout: 20s
        memory_limiter:
          check_interval: 1s
          limit_memory: 256MB
          spike_limit_memory: 128MB
  tempo:
      host: https://tempo-dev-laip.domain.ai/http
      protocol: "otlphttp"
      tls:
        insecure: true
      authMode: none
  pyroscope:
      host: https://pyroscope-dev-laip.domain.ai
      authMode: none
 
metrics:
  enabled: true
  autoDiscover:
    enabled: true
  cost:
    enabled: false
  node-exporter:
    enabled: true
  cadvisor:
    enabled: true
    metricsTuning:
      dropEmptyImageLabels: false
      dropEmptyContainerLabels: false
      keepPhysicalNetworkDevices: [".*"]
    extraRelabelingRules: |-
      rule {
        source_labels = ["__meta_kubernetes_node_label_name"]
        regex = "(.*)"
        target_label = "node_pool"
      }
      rule {
        source_labels = ["__meta_kubernetes_node_label_topology_kubernetes_io_zone"]
        regex = "(.*)"
        target_label = "node_zone"
      }
      rule {
        source_labels = ["__meta_kubernetes_node_label_hostname"]
        regex = "(.*)"
        target_label = "node_hostname"
      }
  kubelet:
    enabled: true
    extraRelabelingRules: |-
      rule {
        source_labels = ["__meta_kubernetes_node_label_name"]
        regex = "(.*)"
        target_label = "node_pool"
      }
      rule {
        source_labels = ["__meta_kubernetes_node_label_topology_kubernetes_io_zone"]
        regex = "(.*)"
        target_label = "node_zone"
      }
      rule {
        source_labels = ["__meta_kubernetes_node_label_hostname"]
        regex = "(.*)"
        target_label = "node_hostname"
      }
 
logs:
  enabled: true
  pod_logs:
    enabled: true
  cluster_events:
    enabled: true
 
traces:
  enabled: true
receivers:
  grpc:
    enabled: true
  http:
    enabled: true
  jaeger:
    grpc:
      enabled: true
    thriftBinary:
      enabled: true
    thriftCompact:
      enabled: true
    thriftHttp:
      enabled: true
  zipkin:
    enabled: false
  grafanaCloudMetrics:
    enabled: false
processors:
  batch:
    # Amount of data to buffer before flushing the batch.
    size: 16384
    # Upper limit of a batch size. When set to 0, there is no upper limit.
    maxSize: 0
    # How long to wait before flushing the batch.
    timeout: 20s
    queue_size: 4096 
    max_memory: "256MB"
opencost:
  enabled: false
  config:
    name: null
    create: true
profiles:
  enabled: false
kube-state-metrics:
  enabled: true
prometheus-node-exporter:
  enabled: true
prometheus-operator-crds:
  enabled: true
grafana-agent: {}
grafana-agent-logs: {}
alloy-events:
  logging:
    level: error
alloy-logs:
  logging:
    level: error
alloy:
  logging:
    level: error
  alloy:
    resources:
      requests:
        cpu: "500m"
        memory: "16Gi"
  controller:
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
      targetMemoryUtilizationPercentage: 70
 
EOF
@petewall
Collaborator

You're missing a \ on your command line after grafana/k8s-monitoring, which is why you're getting the --version: command not found error.

helm repo add grafana https://grafana.github.io/helm-charts &&
helm repo update &&
helm upgrade --install --timeout 300s grafana-k8s-monitoring grafana/k8s-monitoring \
--version 1.6.30 --namespace monitoring --create-namespace --values -<<EOF
...

That should fix your issue: without the backslash, the --version 1.6.30 flag was never applied, so Helm installed the latest chart version instead. There isn't a break in the v1 chart, so pinning the version avoids the "function break not defined" parse error as well.
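To see why the missing backslash produces exactly that error, here is a minimal sketch (using echo as a stand-in for helm) of how the shell parses the two variants:

```shell
# Without a trailing backslash, the shell ends the command at the newline
# and tries to execute the next line as its own command:
echo install mychart
# --version 1.6.30        # would fail with "--version: command not found"

# With a trailing backslash, both lines are parsed as a single command,
# so --version 1.6.30 reaches the program as arguments:
echo install mychart \
  --version 1.6.30
```

The second form prints "install mychart --version 1.6.30" on one line, which mirrors how the backslash lets the --version flag reach helm in the corrected command above.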
